Microsoft has accused individuals from Iran, Hong Kong, Vietnam, and the United Kingdom of playing key roles in a global scheme to hijack and sell Microsoft accounts. These accounts, the company claims, were then used to bypass safety measures for generative AI tools and produce “harmful content.”
In December, Microsoft filed a petition in a Virginia court to seize infrastructure and software from 10 unnamed individuals, alleging they ran a hacking-as-a-service operation that used stolen Microsoft API keys to sell access to Azure OpenAI accounts to overseas entities.
These compromised accounts were then used to generate content that violated Microsoft’s and OpenAI’s safety guidelines, including thousands of harmful images. Initially, Microsoft did not disclose the names or identities of the individuals involved, citing only specific websites and tools they used. The company did indicate that at least three appeared to be service providers based outside the United States.
In an amended complaint made public on Thursday, Microsoft identified four key players at the center of the cybercrime network tracked as Storm-2139. These individuals are:
- Arian Yadegarnia (aka “Fiz”) of Iran
- Ricky Yuen (aka “cg-dot”) of Hong Kong
- Phát Phùng Tấn (aka “Asakuri”) of Vietnam
- Alan Krysiak (aka “Drago”) of the United Kingdom
Microsoft has also identified a suspect in Illinois and another in Florida as being part of the scheme, but the company is withholding their names “to avoid interfering with potential criminal investigations.” The company is preparing criminal referrals for U.S. and international law enforcement agencies.
While Microsoft did not specify the exact nature of the generated images that violated safety guidelines, Steven Masada, assistant general counsel at Microsoft’s Digital Crimes Unit, indicated in a blog post that some were attempts to create false images of celebrities and public figures. “We are not naming specific celebrities to keep their identities private and have excluded synthetic imagery and prompts from our filings to prevent the further circulation of harmful content,” Masada wrote.
The initial court action appears to have caused some panic within the group. Microsoft shared screenshots from chat forums where members speculated on the identities of others named in the lawsuit. Personal information and photos of the Microsoft lawyer handling the case were also posted.

Some of those named appear to have contacted Microsoft in an attempt to shift blame to other members of the group or to other parties. One message received by Microsoft lawyers identified a Discord server allegedly run by Krysiak, offering to sell Azure access for over $100, along with links to GitHub pages for their software and to other resources. The user pleaded with Microsoft to investigate and offered to provide more information.
“The old guys you are trying to sue don’t even sell anything. These guys do,” the individual wrote, later adding “this is the real enterprise unlike the other group you are looking for.”
Another email advised Microsoft lawyers to “look for a guy named drago.”

According to the original complaint, the individuals “exploited exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services.”
“They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content,” the complaint claims.

As companies like Microsoft and OpenAI develop and commercialize generative AI tools, they face pressure from governments and civil society groups to implement technical safeguards to prevent misuse. These tools can be used to create deepfakes, spread disinformation, or disseminate dangerous information, such as instructions for building a bomb or writing malware.

While some U.S. civil society groups have criticized AI companies for failing to meet safety commitments and for a lack of transparency, U.S. intelligence officials revealed last year that foreign actors intent on influencing American elections had difficulty obtaining top-tier commercial generative AI tools that could power sophisticated disinformation campaigns.