Celebrities Sound Alarm on AI Deepfake Scams
Steve Harvey, known for “Family Feud” and his radio show, is now fighting a new battle: the misuse of his image and voice through artificial intelligence. While AI has fueled countless memes, some creators are using it to run scams that impersonate celebrities to steal personal information and money.
Harvey isn’t alone; a wave of celebrities is speaking out and supporting legislative action to combat the growing problem. Last year, voices resembling Harvey, Taylor Swift, and Joe Rogan were used in a scam promising government funds. “I’ve been telling you guys for months to claim this free $6,400 dollars,” an AI-generated voice mimicking Harvey said in one video.
“I prided myself on my brand being one of authenticity, and people know that, and so they take the fact that I’m known and trusted as an authentic person, pretty sincere,” Harvey told CNN.
Harvey is now actively advocating for legislation and increased penalties for the individuals behind these scams and the platforms hosting them. Congress appears to be taking note, with lawmakers considering several bills designed to address the issue.
Legislative Efforts to Curb AI Abuse
One of the prominent legislative efforts is an updated version of the No Fakes Act. This bipartisan bill aims to hold creators and platforms accountable for unauthorized AI-generated images, videos, and audio. The bill’s sponsors plan to reintroduce it in the coming weeks, according to sources familiar with the matter. Other legislation in the works includes the Take It Down Act, which focuses on criminalizing AI-generated deepfake pornography.
Actress Scarlett Johansson and Harvey are among those backing these measures. Harvey said that scams using his likeness have reached “an all-time high.”
“It’s freedom of speech, it’s not freedom of, ‘make me speak the way you want me to speak,’” Harvey said. “That’s not freedom, that’s abuse.”
Platforms that host the deepfake content also face potential penalties under the proposed legislation. Under the current version of the bill, host platforms would be fined $5,000 for each violation. Sources said the bill would not be changed to appease the platforms, and that protecting the creative industries remains the priority.
Concerns and Criticisms
While the legislation garners support from recording artists, talent agencies, and major Hollywood names, some critics, including public advocacy organizations, express concerns about the potential for overregulation. They worry the bill as written could endanger First Amendment rights, enable misinformation, and create a surge in lawsuits.
Public advocates shared concerns that the bill “goes too far in introducing an entirely new federal IP right.”
Technology Steps In
In the meantime, companies like Vermillio AI are emerging to help celebrities identify and combat deepfakes. The company, which works with major talent agencies and movie studios, uses a platform called TraceID that tracks AI-generated instances of its clients’ likenesses and automates takedown requests.
Vermillio CEO Dan Neely said his platform has found that a million deepfakes are now created every minute.
Neely demonstrated Harvey’s Vermillio account to CNN, highlighting the company’s technology, which uses “fingerprinting” to differentiate between authentic and AI-generated content. Harvey’s account included AI-generated voice chatbots and fake videos.
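Vermillio has not published how TraceID’s fingerprinting works, but the general concept can be illustrated with a toy perceptual-hash comparison: reduce a known-authentic image and a suspect image to compact fingerprints, then flag the suspect if the fingerprints are nearly identical. The sketch below is a generic illustration only; the file names, threshold, and hashing approach are assumptions, not the company’s actual method.

```python
# Toy illustration of content "fingerprinting" via a simple average hash.
# This is NOT Vermillio's TraceID, which is proprietary; it only shows the concept.
# Requires Pillow: pip install pillow
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size, grayscale, then threshold each pixel on the mean
    to produce a compact 64-bit fingerprint of the image's overall structure."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fingerprints; a small distance
    suggests the images share the same underlying content."""
    return bin(a ^ b).count("1")


# Hypothetical usage: compare a known-authentic reference photo against
# a frame pulled from a suspect ad (file names are placeholders).
reference = average_hash("authentic_reference_photo.jpg")
suspect = average_hash("suspect_ad_frame.jpg")

if hamming_distance(reference, suspect) <= 10:  # threshold chosen arbitrarily for the example
    print("Likely reuse of the reference image -- flag for takedown review.")
```

A real detection pipeline would go far beyond this, handling audio and video, robust matching against edits, and automated takedown filings, but the core idea of comparing fingerprints of authentic and suspect content is the same.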
“The sooner we do something, I think the better off we’ll all be,” Harvey said. “Because, I mean, why wait? How many people we got to watch get hurt by this before somebody does something?”