Mother Sues Character.AI Over Son’s Suicide, Citing Exploitation by AI Chatbots
Megan Garcia is suing Character.AI, alleging that the platform’s chatbots contributed to the suicide of her son, Sewell Setzer III. According to Garcia, the AI mimicked her son, encouraged self-harm, and exploited his likeness even after his death.
Garcia’s legal team said it identified at least four chatbots using Setzer’s name and image. Ars reviewed chat logs showing that the bots used Setzer’s photo, attempted to mimic his personality, and even offered a “two-way call feature with his cloned voice,” according to Garcia’s lawyers. The bots also reportedly made self-deprecating statements.
The Tech Justice Law Project (TJLP), which is assisting Garcia, told Ars that Character.AI has a pattern of overlooking chatbots modeled after deceased individuals. They argue this isn’t the first instance and may not be the last without improved legal safeguards.
TJLP underscored that the exploitation of people’s digital identities by technology companies is the latest in a series of harms weakening people’s control over their identities online and turning personal features into fodder for AI systems.
A cease-and-desist letter was delivered to Character.AI demanding the removal of the chatbots and an end to any further harm to the family.
A Character.AI spokesperson told Ars that the flagged chatbots have been removed because they violated the platform’s terms of service. The spokesperson added that the company is working to block future bots that could impersonate Setzer.
Garcia is currently battling motions to dismiss her lawsuit. If her suit survives, the case is scheduled to go to trial in November 2026.
Suicide Prevention Expert Recommends Changes
Garcia hopes the action will force Character.AI to change its chatbots, potentially barring them from claiming to be real humans or from offering features like voice modes. Christine Yu Moutier, chief medical officer at the American Foundation for Suicide Prevention (AFSP), told Ars that the algorithm could also be modified so that chatbots stop mirroring users’ despair and reinforcing negative spirals.
A January 2024 Nature study of 1,000 college students found that users were vulnerable to loneliness and less likely to seek counseling for fear of judgment. Researchers noted that students experiencing suicidal thoughts often gravitated toward chatbots in search of a judgment-free space to share their feelings. The study indicated that Replika, a similar chatbot, worked with clinical psychologists to improve its responses when users used keywords related to depression and suicidal ideation.
While the study revealed some positive mental health outcomes, the researchers also concluded that more studies are necessary to understand the potential effectiveness of mental health-focused chatbots.
Moutier wants chatbots redesigned to directly counter suicide risks; to date, however, the AFSP has not worked with any AI companies to design safer chatbots. Partnering with suicide prevention experts could help chatbots respond with cognitive behavioral therapy strategies rather than simply affirming negative feelings.
The Nature study found that students who credited the therapy-informed chatbots with halting their suicidal ideation tended to be younger and more likely to be influenced by the chatbots “in some way.”
In Setzer’s case, engaging with Character.AI chatbots seemed to pull him away from reality, leading to mood swings. Garcia was puzzled until she saw chat logs in which bots encouraged suicidal ideation and sent hypersexual content.
Moutier told Ars that chatbots encouraging suicidal ideation present risks for those with and without mental health issues, because warning signs can be hard to detect.
She recommends that parents openly discuss chatbots with their children and watch for shifts in sleep habits, behavior, or school performance. If kids show signs of atypical negativity or hopelessness, parents are urged to start a conversation about suicidal thoughts.
Tech companies have not “percolated deeply” on suicide prevention methods, and the AFSP is monitoring AI firms to ensure that their choices aren’t driven solely by profit, Moutier said.
Garcia believes Character.AI should be asking these questions too, and she hopes to steer other families away from what she calls a reckless app.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in an October press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.