OpenAI announced on Friday that it had found evidence of a Chinese security operation employing an artificial intelligence-powered surveillance tool designed to collect real-time reports on anti-Chinese posts on social media platforms in Western countries.
Researchers at OpenAI identified the campaign, which they are calling Peer Review, after discovering that someone working on the tool had used OpenAI’s technologies to debug some of the underlying computer code.
Ben Nimmo, a principal investigator for OpenAI, stated that this was the first time the company had uncovered an AI-powered surveillance tool of this nature. “Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our A.I. models,” Nimmo said.
Concerns about the potential misuse of AI are growing, particularly its application to surveillance, computer hacking, disinformation campaigns, and other malicious activities. While researchers like Nimmo acknowledge that the technology can be used for such purposes, they also emphasize that AI can play a crucial role in identifying and combating these very behaviors.
OpenAI also reported that the Chinese surveillance tool appears to be based on Llama, an AI technology developed by Meta and released as open source, meaning Meta shared it freely with software developers around the world.
In a comprehensive report on the use of AI for malicious and deceptive purposes, OpenAI disclosed a separate Chinese campaign, called Sponsored Discontent, that used OpenAI's technologies to create English-language posts criticizing Chinese dissidents. OpenAI also found that the same group had used its technologies to translate articles critical of US society and politics into Spanish for distribution in Latin America.
A third campaign, believed to be based in Cambodia, was found to be using OpenAI’s technologies to generate and translate social media comments that aided a scam known as “pig butchering.” These AI-generated comments were used to lure individuals into fraudulent investment schemes.
(The New York Times has filed lawsuits against OpenAI and Microsoft, alleging copyright infringement related to news content and AI systems. OpenAI and Microsoft have denied these claims.)