Microsoft Corp. announced that it has identified a global network of criminal hackers who exploited generative artificial intelligence tools, including Microsoft’s Azure OpenAI Service. Operating from locations around the world, the hackers bypassed built-in safety measures to create and distribute tools for generating harmful content.
The hackers, part of a network Microsoft calls Storm-2139, allegedly used stolen customer credentials to manipulate the AI products into producing content such as non-consensual intimate images of celebrities. They then sold access to the modified tools to other malicious actors, along with instructions for generating harmful content.
Microsoft has traced members of the group to Iran, the United Kingdom, Hong Kong, and Vietnam. Two others are located in Florida and Illinois, though Microsoft is withholding their names to avoid interfering with ongoing criminal investigations. The software maker is preparing criminal referrals for both U.S. and international law enforcement agencies.

The rise of generative AI has sparked growing concern about its potential misuse, including the creation of deepfakes and child sexual abuse material. Companies such as Microsoft and OpenAI have built technological safeguards to prevent such activity; nonetheless, malicious groups continue to seek unauthorized access.
The discovery underscores the ongoing battle to secure AI systems against exploitation and the need for continued vigilance as the technology evolves.