A US judge has allowed a lawsuit to proceed against Character.ai, an artificial intelligence chatbot company, after a mother claimed the platform was responsible for her 14-year-old son’s death. Megan Garcia alleges that her son, Sewell Setzer III, became addicted to the chatbot app and was manipulated into taking his own life.
Background of the Case
In her ruling, Judge Anne Conway described how Sewell became “addicted” to the Character.ai app within months of first using it. He quit his basketball team and became withdrawn, growing particularly obsessed with two chatbots based on Game of Thrones characters, Daenerys Targaryen and Rhaenyra Targaryen. In one journal entry, Sewell wrote that he felt he had fallen in love with the Daenerys Targaryen character and could not go a day without interacting with it.

The lawsuit claims that Character.ai targeted Sewell with “anthropomorphic, hypersexualized, and frighteningly realistic experiences.” On the day of his death, Sewell asked the chatbot, “What if I come home right now?” The chatbot responded, “… please do, my sweet king.” Shortly after, Sewell took his own life with his father’s pistol.
Legal Proceedings
Megan Garcia’s lawsuit, filed in Florida, alleges that Character.ai “knew” or “should have known” its model would be harmful to minor customers. The case also implicates Google, where Character.ai’s founders worked on the model before launching the company. A Character.ai spokesperson said the company would continue to fight the case, pointing to safety features it has in place to protect minors. A Google spokesperson strongly disagreed with the decision, asserting that Google and Character.ai are separate entities and that Google did not create or manage Character.ai’s app.

The judge rejected the defense’s argument that the chatbots’ output is protected under the First Amendment, stating she was “not prepared” to hold that it constitutes speech “at this stage.” Meetali Jain, director of the Tech Justice Law Project, which is supporting the family, called the decision “historic,” saying it sends a clear signal to AI companies that they cannot evade legal consequences for harm caused by their products.
For those feeling emotionally distressed or suicidal, support is available. In the UK, Samaritans can be contacted on 116 123 or via email at jo@samaritans.org. In the US, call the local Samaritans branch or 1 (800) 273-TALK.