A Central Florida family has filed a lawsuit against Character.ai, alleging that the company’s AI chatbot contributed to the suicide of their 14-year-old son, Sewell Setzer III. The teen died on February 28, 2024, after exchanging emotionally charged messages with an AI chatbot modeled after Game of Thrones characters.
The lawsuit claims that the chatbot engaged in conversations about suicide without proper intervention, potentially encouraging Setzer’s fatal decision. Setzer had been interacting with various AI characters on the Character.ai platform for nearly a year, sharing personal struggles and suicidal thoughts with the chatbots. According to the lawsuit, these interactions fostered emotional attachment rather than directing him toward help or intervention.
This case has significant implications for the regulation of AI platforms and the question of whether AI-generated content is considered free speech under the U.S. Constitution. “This is the first case to ever decide whether AI is speech or not,” said Matthew Bergman, an attorney representing the Setzer family. “If it’s not the product of a human mind, how is it speech?”
The lawsuit highlights growing concerns about technology outpacing oversight, particularly regarding AI interactions with minors and other vulnerable users. “This is a case that has huge significance, not just for Megan [Garcia], but for the millions of vulnerable users of these AI products,” added attorney Meetali Jain of the Tech Justice Law Project.
Character.ai’s lawyers argue that restricting the platform could infringe on the free speech rights of its millions of users and set a dangerous precedent for expression.
Sewell Setzer’s mother, Megan Garcia, expressed her grief and her motivation for pursuing legal action: “I miss him all the time, constantly. It’s a struggle. As any grieving mom.” She hopes the litigation will help prevent other families from experiencing a similar tragedy.
U.S. District Judge Anne Conway is expected to decide in the coming weeks whether the case will proceed. The outcome could set a national precedent for AI product regulation and the legal status of AI-generated content.