Microsoft has revealed the identities of several individuals allegedly involved in a cybercrime gang that created malicious tools to circumvent the safety protocols of generative AI, enabling the creation of celebrity deepfakes and other illicit content.
The company’s updated complaint names Arian Yadegarnia from Iran (aka ‘Fiz’), Alan Krysiak of the United Kingdom (aka ‘Drago’), Ricky Yuen from Hong Kong, China (aka ‘cg-dot’), and Phát Phùng Tấn of Vietnam (aka ‘Asakuri’) as key members of the group, which Microsoft tracks as Storm-2139.
Steven Masada, Assistant General Counsel at Microsoft’s Digital Crimes Unit, explained that the group exploited stolen customer credentials to unlawfully access AI services. “They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content,” Masada stated.
During its investigation, Microsoft found that Storm-2139 operates in three tiers: creators, providers, and users. Creators developed the tools used to abuse AI services; providers adapted and distributed those tools to end users; and end users generated content that violated Microsoft's Acceptable Use Policy and Code of Conduct, most often sexual imagery depicting celebrities.

This update builds upon a lawsuit filed in December 2024 in the Eastern District of Virginia, which aimed to gather information on Storm-2139’s activities. A temporary restraining order and preliminary injunction issued after the initial filing disrupted the group’s illegal use of Microsoft’s services by seizing a crucial website. This action led to infighting within the group, with members speculating about the identities of unnamed individuals in the legal filings.
Microsoft’s legal team also received communications, including emails, from suspected Storm-2139 members who sought to cast blame on others for the illegal activities.
Masada added, “We are pursuing this legal action now against identified defendants to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology.”
While Microsoft has identified two actors based in the United States, specifically in Illinois and Florida, their identities remain undisclosed to avoid jeopardizing potential criminal investigations. Microsoft is also preparing criminal referrals to both U.S. and international law enforcement agencies.