North Korean hackers are increasingly weaponizing artificial intelligence, posing a significant challenge to cybersecurity efforts worldwide. Experts warn that AI platforms are being exploited for fraud and that the abuse may be nearly impossible to block.
OpenAI and Google have taken steps to shut down accounts linked to North Korean operatives, but cybersecurity experts say these measures are unlikely to halt the regime's activities.

Since late January, OpenAI, the creator of ChatGPT, and Google have announced measures to shut down accounts suspected of being tied to North Korean operatives. They have also revealed how their platforms have been manipulated for illicit purposes. However, the regime’s hackers can easily bypass restrictions using virtual private networks, shell companies, and brokers, industry insiders warn.
“Threat actors will use the cheapest and most efficient tool to get the job done,” said Rafe Pilling, director of threat intelligence at the US-based cybersecurity firm Secureworks. “Many cybercriminals prefer online services that are free to sign up for or can be paid for via cryptocurrency, and this would likely be true for North Korean IT workers as well.”
Analysts also point out that North Korean operatives need not rely solely on US-based AI tools like ChatGPT or Google Gemini. Cheaper, more accessible generative AI platforms are emerging worldwide, and some may offer fewer safeguards against misuse.