Europol announced Friday that a global operation targeting child sexual abuse content generated by artificial intelligence and distributed online has resulted in at least 25 arrests. The agency highlighted the challenges investigators face due to the novel nature of these crimes and the lack of specific legislation addressing them.
“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material, making it exceptionally challenging for investigators due to the lack of national legislation addressing these crimes,” Europol, based in The Hague, stated.
The majority of the arrests occurred Wednesday during the worldwide operation, spearheaded by the Danish police, and involved law enforcement agencies from the EU, Australia, Britain, Canada, and New Zealand. According to Europol, U.S. law enforcement agencies did not participate in this specific operation.
The operation followed the arrest in November of the main suspect, a Danish national who operated an online platform distributing AI-generated material. Europol reported that users worldwide could gain access to the platform and view the abuse after a “symbolic online payment.”
Europol also emphasized the continuing threat of online child sexual exploitation, describing it as a top priority for law enforcement agencies, which are contending with an escalating volume of illegal content. The agency anticipates further arrests as the investigation progresses.
In addition to the fully AI-generated content targeted in Operation Cumberland, Europol underscored the concerning proliferation of AI-manipulated “deepfake” imagery online. These images, often featuring real people, including children, can have devastating consequences.
CBS News reported in December that over 21,000 deepfake pornographic pictures or videos were online during 2023, a surge of more than 460% from the previous year. This manipulated content has spread rapidly, prompting legislative action in the U.S. and elsewhere.
Recently, the Senate approved the bipartisan “TAKE IT DOWN Act.” If enacted, this bill would criminalize the publication of non-consensual intimate imagery, including AI-generated content, and mandate that social media platforms remove such content within 48 hours of receiving a victim’s notification, as stated on the U.S. Senate website.
Some social media platforms have appeared unable or unwilling to control the spread of sexualized, AI-generated deepfakes, including images featuring celebrities. In mid-February, Meta, the parent company of Facebook and Instagram, removed over a dozen fraudulent sexualized images of prominent female actors and athletes.
“This is an industry-wide challenge, and we’re continually working to improve our detection and enforcement technology,” Meta spokesperson Erin Logan said in a statement to CBS News.