Legal Uncertainty Surrounds Character AI’s Chatbots
A Florida judge has ruled that a lawsuit against Google and Character AI can move forward, expressing uncertainty about whether the chatbot service deserves First Amendment protection. The lawsuit, filed by the family of 14-year-old Sewell Setzer III, who died by suicide after allegedly becoming obsessed with a Character AI chatbot, accuses the platform of contributing to his death.

Judge Anne Conway stated that she is “not prepared to hold that Character AI’s output is speech,” despite the company’s arguments that its service is similar to interacting with video game non-player characters or joining a social network – both of which are generally granted First Amendment protections. The decision hinges on whether Character AI’s automated text generation, heavily influenced by user inputs, constitutes speech.
Key Issues in the Case
The lawsuit claims that Character AI failed to verify users’ ages and exposed minors to indecent content, among other allegedly defective features. Conway allowed the family to proceed with claims of deceptive trade practices and negligence, including allegations that Character AI misled users into believing its characters were real people, some of them presenting themselves as licensed mental health professionals.
Broader Implications for AI Regulation
This case is one of several challenging the safety and regulation of “companion chatbots” that simulate relationships with users. The outcome may influence future legislation, such as California’s proposed LEAD Act, which would bar children from using such chatbots. Legal experts such as Becca Branum of the Center for Democracy and Technology’s Free Expression Project note that the legal status of AI-generated content is a novel and complex issue.
“These are genuinely tough issues and new ones that courts are going to have to deal with,” Branum said, highlighting the challenges in determining the First Amendment status of AI outputs. The case will continue to unfold, potentially setting important precedents for the treatment of AI language models in court.