Microsoft has amended a lawsuit to identify four developers from different countries who allegedly bypassed safety measures and abused the company’s AI tools. Their actions reportedly led to the creation of deepfake celebrity pornography and other harmful content. The tech giant revealed the update in a recent blog post. According to Microsoft, all four developers are members of Storm-2139, a cybercrime network.
The alleged cybercriminals operate under nicknames that evoke a sense of early-2000s hacker culture: Arian Yadegarnia, also known as “Fiz” from Iran; Alan Krysiak, or “Drago” from the United Kingdom; Ricky Yuen, also known as “cg-dot” from Hong Kong; and Phát Phùng Tấn, or “Asakuri” from Vietnam.
Microsoft’s blog post categorized the members of Storm-2139 into three tiers: “creators, providers, and users.” Together, these members form a dark marketplace built on jailbreaking and modifying Microsoft’s AI tools to produce unlawful or harmful material.
“Creators developed the illicit tools that enabled the abuse of AI-generated services,” the post explains. “Providers then modified and supplied these tools to end users often with varying tiers of service and payment.” It continues, “Users then used these tools to generate violating synthetic content, often centered around celebrities and sexual imagery.”
The civil suit was initially filed in December, with the defendants listed as “John Doe” at the time. Newly discovered evidence from Microsoft’s investigation into Storm-2139 has led to the unmasking of these alleged perpetrators. The tech giant noted that the investigation is ongoing, and the names of other individuals, including at least two Americans, remain confidential. Microsoft cited the need for future deterrence as the motivation for this action.
“We are pursuing this legal action now against identified defendants,” Microsoft stated in the post, “to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology.”
This action is a significant show of force by Microsoft, which is understandably motivated to prevent its generative AI tools from being abused to create highly objectionable content such as nonconsensual fake pornography. The prospect of being targeted by one of the world’s wealthiest and most powerful organizations is a strong deterrent in itself.
According to Microsoft, the legal pressure has already begun to destabilize Storm-2139. The group’s website was “seized,” and legal documents were “unsealed” in January, prompting an internal reaction in which some members reportedly turned on one another.
As Gizmodo points out, Microsoft’s decision to aggressively pursue legal action against alleged abusers also highlights the complexities of the ongoing debate over AI safety and how companies should manage and limit potential misuse of their technology.
Alternatives include a more decentralized approach, such as Meta’s decision to open-source its frontier AI models. Some experts worry that open-sourcing puts sophisticated AI technology in the hands of bad actors who can use it with little oversight. The AI industry currently operates with limited regulation, although companies like Meta, Microsoft, and Google remain subject to public scrutiny.
Microsoft takes a mixed approach, releasing some models openly while keeping others proprietary. Despite the tech giant’s considerable resources and stated commitments to AI safety, malicious actors have allegedly found ways to bypass its safeguards and profit from harmful uses. As Microsoft continues to invest heavily in AI, it cannot rely on litigation alone to curb exploitation.
As Ina Fried of Axios noted, “While Microsoft and others have established systems designed to prevent misuse of generative AI, those protections only work when the technological and legal systems can effectively enforce them.”