A North Korea-backed hacking group known as Kimsuky has reportedly used ChatGPT to create a forged South Korean military identification card. The deepfake ID was then deployed in a phishing attack targeting defense-related institutions in South Korea.
How the Attack Worked
Cybersecurity researchers discovered that the attackers used ChatGPT to generate a realistic-looking draft of the military ID. The AI-generated image was incorporated into a spear-phishing email designed to trick recipients into downloading malware that could extract sensitive data.
Kimsuky’s History
The Kimsuky group has previously targeted South Korean organizations, including military and government entities. This incident highlights the growing use of generative AI tools like ChatGPT in cyberattacks, allowing hackers to create more convincing and sophisticated phishing schemes.
AI Misuse Concerns
Although AI platforms like ChatGPT generally block requests to create government IDs, researchers noted that the hackers bypassed these safeguards by framing their requests as mock-ups or sample designs for ostensibly legitimate purposes.
Call for Vigilance
Experts warn that the incident demonstrates the increasing risk of AI-driven cyberattacks and urge organizations to strengthen their cybersecurity measures to defend against such sophisticated threats.