AI Trends Raise Privacy Concerns as Experts Warn of Deepfake Threats
A recent social media trend that uses artificial intelligence to create personalized dolls and action figures has raised red flags among cybersecurity experts. The trend involves using generative AI tools like ChatGPT to create cartoon-like representations of individuals, complete with their favorite accessories. While it may seem like a fun way to share personality traits with friends, experts warn of potential privacy risks.

“When you upload a high-quality picture to a company helping you build these images or dolls, you’re giving away an irrevocable license to use your image,” said Rishabh Das, an assistant professor in emerging communication technologies at Ohio University in Athens. “You never know when that company would be sold to a third party, and then that license kind of gets transferred,” he added.
The concern extends beyond these AI-generated images to the more sinister technology of deepfakes. A deepfake can simulate a person's likeness from just a picture, a video clip, or a few seconds of audio. "Only a few seconds of video, images, or your voice clip is absolutely enough for a criminal to replicate your behavior in terms of a deepfake," Das warned.
Social media platforms are identified as a primary source where criminals can gather the necessary information to create these deepfakes. “We love to post and share our life experiences with friends, and unfortunately, criminals use that to learn more about us,” Das explained.
Currently, no federal law prevents scammers from using individuals' images or voices, or turning them into deepfakes, without consent. The Better Business Bureau of Central Ohio has reported cases where deepfake scams have revived old tactics like the "Grandma! Help!" scam, now made more convincing by an AI-generated voice of a loved one.
“It’s the same tactics: ‘I need money for bail, I’m in jail, I need to pay an attorney,'” said Lee Anne Lanigan, the investigative director for the BBB of Central Ohio. “You can always hang up. You can always call someone. There’s no reason to be polite to these people. Hang up. Call your person. I promise you your person is going to pick up and say they’re fine,” she advised.
Some families have started using code words or phrases to verify identities during such calls. Consumer Reports has found it nearly impossible to fully erase one's digital footprint online. However, experts suggest that being aware of these deepfake scams, enabling two-factor authentication on financial accounts, and trusting one's instincts can help protect against such fraud.
“If it sounds off, it probably is,” Lanigan cautioned, emphasizing the importance of vigilance in the face of evolving AI-based scams.