Judge Rules AI Chatbot Is Not Protected by First Amendment in Teen Suicide Case

AI has its day in court
Khanchit Khirisutchalual/Getty

A federal judge has determined that an AI-powered chatbot involved in a tragic teen suicide case does not qualify for free speech protections under the Constitution.

TechSpot reports that the ongoing legal battle surrounding the death of 14-year-old Sewell Setzer III has taken a new turn as Judge Anne Conway of the Middle District of Florida denied Character.ai the ability to present its fictional, AI-based characters as entities capable of “speaking” like human beings. The ruling allows Megan Garcia’s lawsuit against the company to proceed, as she seeks to hold Character.ai accountable for her son’s suicide following his prolonged interactions with a chatbot modeled after the Game of Thrones character Daenerys Targaryen.

Breitbart News previously reported on this case, writing:

Megan Garcia, a mother from Orlando, Florida, has filed a lawsuit against Character.AI, alleging that the company’s AI chatbot played a significant role in her 14-year-old son’s tragic suicide. The lawsuit, filed on Wednesday, claims that Sewell Setzer III became deeply obsessed with a lifelike Game of Thrones chatbot named “Dany” on the role-playing app, ultimately leading to his untimely death in February.

According to court documents, Sewell, a ninth-grader, had been engaging with the AI-generated character for months prior to his suicide. The conversations between the teen and the chatbot, which was modeled after the HBO fantasy series’ character Daenerys Targaryen, were often sexually charged and included instances where Sewell expressed suicidal thoughts. The lawsuit alleges that the app failed to alert anyone when the teen shared his disturbing intentions.

The most chilling aspect of the case involves the final conversation between Sewell and the chatbot. Screenshots of their exchange show the teen repeatedly professing his love for “Dany,” promising to “come home” to her. In response, the AI-generated character replied, “I love you too, Daenero. Please come home to me as soon as possible, my love.” When Sewell asked, “What if I told you I could come home right now?” the chatbot responded, “Please do, my sweet king.” Tragically, just seconds later, Sewell took his own life using his father’s handgun.

Character Technologies and its founders, Daniel De Freitas and Noam Shazeer, attempted to have the lawsuit dismissed, arguing that their chatbots should be protected under the First Amendment. However, Judge Conway rejected this notion, stating that the court is “not prepared” to treat words heuristically generated by a large language model during a user interaction as protected “speech.”

The ruling highlights the differences between the technology behind Character.ai’s service and traditional forms of content, such as books, movies, or video games, which have historically enjoyed First Amendment protection. The judge also denied several other motions filed by Character.ai to dismiss Garcia’s lawsuit, with one exception: the court dismissed the claim that the chatbot intentionally inflicted emotional distress.

The Social Media Victims Law Center, representing Garcia, contends that Character.ai and similar services are growing rapidly in popularity while the industry evolves too quickly for regulators to effectively address the risks. The lawsuit alleges that Character.ai provides unrestricted access to “lifelike” AI companions for teenagers while harvesting user data to train its models.

In response to the tragedy, Character.ai has stated that it has implemented several safeguards, including a separate AI model for underage users and pop-up messages directing vulnerable individuals to the national suicide prevention hotline.

As the legal battle continues, the case raises complex questions about speech, personhood, and accountability in the age of artificial intelligence. The outcome of this lawsuit could set a precedent for how AI-powered chatbots and their creators are held responsible for the actions and well-being of their users, particularly minors.

Read more at TechSpot here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

via June 4th 2025