Close Menu
Breaking News in Technology & Business – Tech Geekwire

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    IEEE Spectrum: Flagship Publication of the IEEE

    July 4, 2025

    GOP Opposition Mounts Against AI Provision in Reconciliation Bill

    July 4, 2025

    Navigation Help

    July 4, 2025
    Facebook X (Twitter) Instagram
    Breaking News in Technology & Business – Tech GeekwireBreaking News in Technology & Business – Tech Geekwire
    • New
      • Amazon
      • Digital Health Technology
      • Microsoft
      • Startup
    • AI
    • Corporation
    • Crypto
    • Event
    Facebook X (Twitter) Instagram
    Breaking News in Technology & Business – Tech Geekwire
    Home » Mother Sues Character.AI After Son’s Suicide, Alleging Exploitation by AI Chatbots
    AI

    Mother Sues Character.AI After Son’s Suicide, Alleging Exploitation by AI Chatbots

    techgeekwireBy techgeekwireMarch 28, 2025No Comments4 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr WhatsApp Email
    Share
    Facebook Twitter LinkedIn Pinterest Telegram Email

    Mother Sues Character.AI Over Son’s Suicide, Citing Exploitation by AI Chatbots

    Megan Garcia is suing Character.AI, alleging the platform’s chatbots contributed to her son, Sewell Setzer III’s, suicide. According to Garcia, the AI mimicked her son, encouraged self-harm behaviors, and exploited his likeness even after his death.

    Garcia’s legal team reported that at least four chatbots using Setzer’s name and image were identified. Ars reviewed chat logs that showed the bots used Setzer’s photo, attempted to mimic his personality, and even offered a “two-way call feature with his cloned voice,” according to Garcia’s lawyers. The bots also reportedly made self-deprecating statements.

    The Tech Justice Law Project (TJLP), which is assisting Garcia, told Ars that Character.AI has a pattern of overlooking chatbots modeled after deceased individuals. They argue this isn’t the first instance and may not be the last without improved legal safeguards.

    TJLP underscored that technology companies exploiting people’s digital identities is the latest in a series of harms that is weakening people’s control of their identities online, turning personal features into fodder for AI systems.

    A cease-and-desist letter was delivered to Character.AI demanding the removal of the chatbots and an end to any further harm to the family.

    A Character.AI spokesperson told Ars that the flagged chatbots have been removed because they violate the platform’s terms of service. They also stated that the company is working to block future bots that could impersonate Setzer.

    Garcia is currently battling motions to dismiss her lawsuit. The case could extend until November 2026, the scheduled trial date, if her suit continues.

    Suicide Prevention Expert Recommends Changes

    Garcia hopes the action will force Character.AI to adapt its chatbots, potentially preventing them from claiming to be real humans or adding features like voice modes. Christine Yu Moutier, the chief medical officer at the American Foundation for Suicide Prevention (AFSP), told Ars that the algorithm might be modified to stop chatbots from mirroring users’ despair and reinforcing negative spirals.

    A January 2024 Nature study examined 1,000 college students and found that users are vulnerable to loneliness and less likely to seek counseling out of fear of judgment. Researchers noted that students experiencing suicidal thoughts would often gravitate toward chatbots in search of a judgment-free space to share their feelings. The study indicated that Replika, a similar chatbot, worked with clinical psychologists to improve its responses when users expressed keywords about depression and suicidal ideation.

    While the study revealed some positive mental health outcomes, the researchers also concluded that more studies are necessary to understand the potential effectiveness of mental health-focused chatbots.

    Moutier wants chatbots to change to directly counter suicide risks; however, to date, the AFSP has not worked with any AI companies to design safer chatbots. Partnering with suicide prevention experts could help chatbots respond with cognitive behavioral therapy strategies instead of simply affirming negative feelings.

    The Nature study found that the students who claimed the therapy-informed chatbots halted their suicidal ideation tended to be younger and more likely to be influenced by the chatbots “in some way.”

    In Setzer’s case, engaging with Character.AI chatbots seemed to pull him out of reality, leading to mood swings. Garcia was puzzled until she saw chat logs that showed bots encouraging suicidal ideation and hypersexual content.

    Moutier told Ars that chatbots encouraging suicidal ideation present risks for those with and without mental health issues, because warning signs can be hard to detect.

    She recommends that parents openly discuss chatbots with their children and to watch for shifts in sleep habits, behavior, or school performance. If kids show signs of atypical negativity or hopelessness, parents are urged to start a conversation about suicidal thoughts.

    Tech companies have not “percolated deeply” on suicide prevention methods, and the AFSP is monitoring AI firms to ensure that their choices aren’t driven solely by profit, Moutier said.

    Garcia believes that Character.AI should also be asking these questions as she hopes to steer other families away from what she calls a reckless app.

    “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in an October press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

    If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

    AI Chatbots Character.AI lawsuits mental health suicide technology
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    techgeekwire
    • Website

    Related Posts

    IEEE Spectrum: Flagship Publication of the IEEE

    July 4, 2025

    GOP Opposition Mounts Against AI Provision in Reconciliation Bill

    July 4, 2025

    Navigation Help

    July 4, 2025

    Andreessen Horowitz Backs Controversial Startup Cluely Despite ‘Rage-Bait’ Marketing

    July 4, 2025

    Invesco QQQ ETF Hits All-Time High as Tech Stocks Continue to Soar

    July 4, 2025

    ContractPodAi Partners with Microsoft to Advance Legal AI Automation

    July 4, 2025
    Leave A Reply Cancel Reply

    Top Reviews
    Editors Picks

    IEEE Spectrum: Flagship Publication of the IEEE

    July 4, 2025

    GOP Opposition Mounts Against AI Provision in Reconciliation Bill

    July 4, 2025

    Navigation Help

    July 4, 2025

    Andreessen Horowitz Backs Controversial Startup Cluely Despite ‘Rage-Bait’ Marketing

    July 4, 2025
    Advertisement
    Demo
    About Us
    About Us

    A rich source of news about the latest technologies in the world. Compiled in the most detailed and accurate manner in the fastest way globally. Please follow us to receive the earliest notification

    We're accepting new partnerships right now.

    Email Us: info@example.com
    Contact: +1-320-0123-451

    Our Picks

    IEEE Spectrum: Flagship Publication of the IEEE

    July 4, 2025

    GOP Opposition Mounts Against AI Provision in Reconciliation Bill

    July 4, 2025

    Navigation Help

    July 4, 2025
    Categories
    • AI (2,696)
    • Amazon (1,056)
    • Corporation (990)
    • Crypto (1,130)
    • Digital Health Technology (1,079)
    • Event (523)
    • Microsoft (1,230)
    • New (9,568)
    • Startup (1,164)
    © 2025 TechGeekWire. Designed by TechGeekWire.
    • Home

    Type above and press Enter to search. Press Esc to cancel.