Combating the Growing Threat of Abusive AI-Generated Content
AI-generated deepfakes are becoming increasingly sophisticated and alarmingly easy to create, and they are being used for malicious purposes such as fraud, abuse, and manipulation. Children and the elderly are particularly vulnerable. While the tech sector and non-profit organizations have begun to address this problem, new laws are urgently needed to combat the misuse of deepfakes.
We must act decisively to prevent criminals from exploiting AI-generated content to defraud seniors or abuse children. Although election interference has garnered significant attention, the broader role of deepfakes in other types of crime and abuse demands equal focus. Encouragingly, members of Congress have put forth various legislative proposals, the Administration is engaged on the issue, organizations like AARP and NCMEC are actively shaping the discussion, and the tech industry has built a strong foundation in related areas that can be leveraged.
One critical step the U.S. can take is to pass a comprehensive deepfake fraud statute to protect Americans from cybercriminals who are using this technology for financial gain. While solutions are not yet perfect, it’s critical to contribute to and accelerate action. This report from Microsoft provides detailed information on this challenge and a comprehensive set of ideas, including endorsements of work that’s already underway.
Below is the foreword written by Brad Smith for Microsoft’s report Protecting the Public from Abusive AI-Generated Content. The full report is accessible here: https://aka.ms/ProtectThePublic
“The greatest risk is not that the world will do too much to solve these problems. It’s that the world will do too little. And it’s not that governments will move too fast. It’s that they will be too slow.”
These words conclude Tools and Weapons, the book Carol Ann Browne and I co-authored in 2019. As the book’s title suggests, technological innovation can be both a tool for advancement and a weapon.
The rapid advancements in artificial intelligence (AI) present significant opportunities as well as challenges. AI is revolutionizing small businesses, education, and research; it’s also helping doctors make diagnoses and medical professionals discover new treatments for diseases. AI supercharges creators’ ability to express new ideas. However, this same technology is also producing a surge in abusive AI-generated content, or as we will discuss in this paper, abusive “synthetic” content.
In the five years since the book was published, the technology has advanced rapidly. Now, anyone with internet access can use AI tools to create realistic synthetic media intended to deceive: voice clones, deepfake images, and altered government documents. AI has made the manipulation of media faster, simpler, more accessible, and less dependent on skill. The technology has quickly become a weapon.
As this document went to print, the U.S. government announced the successful disruption of a nation-state-sponsored, AI-enhanced disinformation operation. FBI Director Christopher Wray stated that “Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government.” While we should recognize the success of U.S. law enforcement’s work with technology platforms, this effort is only just beginning.
The purpose of this white paper is to advocate for swifter action against abusive AI-generated content by policymakers, civil society leaders, and the technology industry. Public and private sectors must come together to address this issue head-on. Governments have a crucial role in establishing regulatory frameworks and policies. The private sector has a responsibility to innovate and implement safeguards. Technology companies must prioritize ethical considerations in their AI research and development processes. Civil society plays an important role in ensuring that both government regulation and industry action uphold fundamental human rights, including freedom of expression and privacy. By fostering transparency and accountability, we can build public trust and confidence in AI technologies.
The following pages do three things: 1) explain the harms arising from abusive AI-generated content, 2) describe Microsoft’s approach, and 3) offer policy recommendations to begin combating these problems.
Addressing the challenges arising from abusive AI-generated content demands a united front. By drawing on the strengths and expertise of every sector, we can create a safer and more trustworthy digital environment for all. Together, we can harness the power of AI for good while safeguarding against its potential dangers.
Microsoft’s Commitment to Addressing Abusive AI-Generated Content
Earlier this year, Microsoft outlined an extensive strategy to combat abusive AI-generated content, based on six key focus areas:
- A strong safety architecture.
- Durable media provenance and watermarking.
- Safeguarding Microsoft’s services from abusive content and conduct.
- Robust collaboration across industry and with governments and civil society.
- Modernized legislation to protect people from the abuse of technology.
- Public awareness and education.
Microsoft has taken several concrete steps, including:
- Implementing a safety architecture that includes red team analysis, preemptive classifiers, blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system (a simplified, hypothetical sketch of this flow appears after this list).
- Automatically attaching provenance metadata to images generated with OpenAI’s DALL-E 3 model in Azure OpenAI Service, Microsoft Designer, and Microsoft Paint.
- Developing standards for content provenance and authentication through the Coalition for Content Provenance and Authenticity (C2PA) and implementing the C2PA standard so that content carrying the technology is automatically labeled on LinkedIn.
- Taking ongoing steps to protect users from online harms, including by joining the Tech Coalition’s Lantern program and expanding PhotoDNA’s availability.
- Launching innovative detection tools like Azure Operator Call Protection to detect potential phone scams using AI.
- Executing commitments to the new Tech Accord to combat deceptive use of AI in elections.
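To make the first of these steps more concrete, here is a minimal sketch of how a preemptive prompt classifier, prompt blocking, and rapid user bans could fit together in a single flow. All names and thresholds (score_prompt, SafetyGate, BLOCK_THRESHOLD, MAX_STRIKES) are assumptions invented for illustration; this is not drawn from Microsoft’s actual safety architecture.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: score_prompt, SafetyGate, BLOCK_THRESHOLD, and MAX_STRIKES
# are invented names for illustration, not taken from Microsoft's report or services.

BLOCK_THRESHOLD = 0.85  # classifier score at or above which a prompt is refused
MAX_STRIKES = 3         # number of blocked prompts before an account is banned


def score_prompt(prompt: str) -> float:
    """Stand-in for a preemptive abuse classifier; returns a risk score in [0, 1]."""
    # A production system would call a trained model; this placeholder only
    # flags a few obviously risky phrases so the sketch is runnable.
    risky_phrases = ("voice clone of", "fake passport", "non-consensual")
    return 1.0 if any(p in prompt.lower() for p in risky_phrases) else 0.0


@dataclass
class SafetyGate:
    """Ties together the classifier, prompt blocking, and rapid user bans."""
    strikes: dict = field(default_factory=dict)
    banned: set = field(default_factory=set)

    def check(self, user_id: str, prompt: str) -> str:
        if user_id in self.banned:
            return "denied: account banned"
        if score_prompt(prompt) >= BLOCK_THRESHOLD:
            self.strikes[user_id] = self.strikes.get(user_id, 0) + 1
            if self.strikes[user_id] >= MAX_STRIKES:
                self.banned.add(user_id)
                return "blocked: repeated abuse, account banned"
            return "blocked: prompt violates content policy"
        return "allowed"


if __name__ == "__main__":
    gate = SafetyGate()
    print(gate.check("user-1", "Create a voice clone of my grandmother asking for money"))
    # -> blocked: prompt violates content policy
```

In practice, the classifier would be a trained model evaluated against red team prompts and automated tests, rather than the keyword placeholder shown here.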
Legislative and Policy Measures for Protecting Americans
Microsoft and LinkedIn, alongside many other tech companies, launched the Tech Accord to Combat Deceptive Use of AI in 2024 Elections at this year’s Munich Security Conference. The Accord requires action across three key pillars: addressing deepfake creation, detecting and responding to deepfakes, and promoting transparency and resilience.
In addition to combating AI deepfakes in elections, lawmakers and policymakers must take steps to bolster our capabilities to:
- Promote content authenticity.
- Detect and respond to abusive deepfakes.
- Equip the public with tools to understand synthetic AI harms.
Microsoft has identified policy recommendations for policymakers in the United States. While the underlying issues are complex, the recommendations can be framed in straightforward terms. They aim to:
- Protect our elections.
- Protect seniors and consumers from online fraud.
- Protect women and children from online exploitation.
Three ideas may have an outsized impact in the fight against deceptive and abusive AI-generated content. First, Congress should enact a new federal “deepfake fraud statute,” giving law enforcement officials, including state attorneys general, a standalone legal framework to prosecute AI-generated fraud and scams. Second, Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content; this is essential and will help the public better understand whether content is AI-generated or manipulated. Third, existing federal and state laws on child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) must be updated to cover synthetic content. Penalties for the creation and distribution of CSAM and NCII, whether synthetic or not, are common-sense and sorely needed to mitigate these dangers.

These are not new ideas, and some of them are already starting to take root in Congress and state legislatures. Microsoft offers these recommendations to contribute to the much-needed dialogue on the harms of AI synthetic media. Enacting any of these proposals will require a whole-of-society approach, and Microsoft welcomes ideas from stakeholders across the digital ecosystem for addressing synthetic content harms. Ultimately, the danger is not that we will move too fast, but that we will move too slowly or not at all.