Microsoft is intensifying its efforts to combat the misuse of its AI tools by amending a December 2024 lawsuit. The amendment aims to unmask four developers accused of circumventing the safety guardrails on its AI systems to generate celebrity deepfakes. A court order allowing Microsoft to seize a website linked to the operation played a key role in identifying the individuals involved.
The four developers are reportedly linked to a global cybercrime network known as Storm-2139. They are identified as: Arian Yadegarnia, also known as “Fiz,” from Iran; Alan Krysiak, also known as “Drago,” from the United Kingdom; Ricky Yuen, also known as “cg-dot,” from Hong Kong; and Phát Phùng Tấn, also known as “Asakuri,” from Vietnam. Microsoft has identified other individuals involved but is withholding their names to avoid disrupting the ongoing investigation.
According to Microsoft, the group compromised accounts with access to its generative AI tools, successfully “jailbreaking” them to create images at will. They then sold access to others who used the tools to generate deepfake nudes of celebrities and engage in other forms of abuse. Following the lawsuit and website seizure, Microsoft noted a significant reaction from the defendants. “The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another,” the company stated in a blog post.
Celebrities, including Taylor Swift, have frequently been targeted by deepfake pornography, which superimposes a real person’s face onto a nude body. The ease of generating such images with AI has significantly contributed to their spread. In January 2024, Microsoft updated its text-to-image models after fake images of Swift proliferated, highlighting how difficult it is to prevent users from bypassing safety measures.
Generative AI has lowered the technical barriers to creating deepfakes, contributing to a rise in scandals, even in high schools across the U.S. Recent stories from victims underscore that the creation of these images is not a victimless act. It causes real-world harm, leaving targets feeling anxious, afraid, and violated.
The AI community is actively debating the topic of safety. Some believe the concerns are genuine, while others argue they are being used by large companies like OpenAI to control the market and promote their products. One camp argues that closed-source AI models help prevent misuse by limiting users’ ability to bypass safety controls. Conversely, the open-source camp holds that making models free to modify and improve upon is necessary to accelerate innovation. Either way, the debate can serve as a distraction from AI’s ongoing misuse to spread inaccurate information and low-quality content.
Both open and closed AI models typically carry license terms prohibiting certain uses, though enforcement remains a challenge. Concerns about AI’s dangers can sometimes seem overblown, but the misuse of AI to create deepfakes represents a clear and present threat. Legal interventions are an important avenue for addressing such abuses, regardless of whether a model is open or closed. Authorities across the U.S. have arrested individuals for using AI to generate deepfakes of minors, and the NO FAKES Act introduced in Congress last year would criminalize the generation of images based on someone’s likeness. The United Kingdom and Australia have also strengthened their laws: the UK penalizes the distribution of deepfake porn and will soon penalize its production, while Australia has criminalized the creation and sharing of non-consensual deepfakes.