Microsoft’s Digital Crimes Unit (DCU) is actively pursuing legal action to safeguard the integrity and safety of its AI services. In a complaint filed in the Eastern District of Virginia, the company is targeting cybercriminals who are developing tools specifically designed to circumvent the safety measures of generative AI platforms, including Microsoft’s, to produce offensive and harmful content.
While Microsoft continuously invests in enhancing the resilience of its products and services against misuse, cybercriminals consistently refine their tactics, employing innovative tools and techniques to evade even the most advanced security protocols. This legal action sends a clear message: Microsoft will not tolerate the weaponization of its AI technology.
Microsoft AI services incorporate robust safety measures at the AI model, platform, and application levels. As alleged in the court filings, Microsoft has identified a foreign-based threat actor group developing sophisticated software to exploit compromised customer credentials scraped from public websites. This group sought to identify and unlawfully access accounts associated with certain generative AI services, with the intention of altering the capabilities of those services. The cybercriminals then used these services themselves and resold access to other malicious actors, providing them with detailed instructions on how to generate harmful and illicit content using these customized tools.
Upon discovering this activity, Microsoft revoked the cybercriminals’ access, implemented countermeasures, and strengthened its safeguards to prevent similar malicious activities in the future. This conduct violates U.S. law, as well as the Acceptable Use Policy and Code of Conduct for Microsoft’s services.
The recently unsealed court filings are part of an ongoing investigation into the creators of these illicit tools and services. Specifically, the court order has enabled Microsoft to seize a website that was integral to the criminal operation, allowing the company to collect vital evidence about the individuals behind it. This evidence will help Microsoft understand how these services are monetized and disrupt any additional technical infrastructure it discovers.
Simultaneously, Microsoft has enhanced its safety mitigations to specifically address the observed activity, and it continues to strengthen its safeguards based on findings from the investigation. Generative AI tools are used by people every day to improve their creativity and productivity. However, as with other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse them for their own malicious purposes. Microsoft recognizes the role it plays in protecting against the abuse and misuse of its tools as new capabilities are introduced across many sectors.
Last year, Microsoft committed to ongoing innovation in methods to keep users safe and outlined a comprehensive approach to combat abusive AI-generated content and protect individuals and communities. This new legal action supports that commitment.
In addition to legal action and the ongoing improvement of safety guardrails, Microsoft continues to pursue proactive measures and partnerships with other organizations to address online harms. Microsoft also advocates for new laws that would give governmental authorities the tools they need to combat the abuse of AI effectively, particularly to prevent harm to others.
Furthermore, Microsoft recently published a comprehensive report, “Protecting the Public from Abusive AI-Generated Content,” which provides industry and governmental recommendations to safeguard the public, particularly women and children, from actors with malicious intent.
For nearly two decades, Microsoft’s DCU has worked to disrupt and deter cybercriminals seeking to weaponize the tools that consumers and businesses rely on daily. Today, the DCU builds on this framework and leverages key learnings from past cybersecurity actions to prevent the abuse of generative AI. Microsoft is resolute in its commitment to protecting people online, transparently reporting its findings, taking legal action against those who attempt to weaponize AI technology, and working with others across public and private sectors globally to ensure that all AI platforms remain secure against harmful abuse.