
Christina on Chatbots: Estate Sues AI Companies for Minor's Suicide

Last week, a Florida federal district court allowed claims for products liability, wrongful death, unjust enrichment, and violation of FDUTPA (the Florida Deceptive and Unfair Trade Practices Act) against artificial intelligence companies to move forward. The claims arise from the tragic suicide of a minor, and the minor's estate seeks to hold the AI companies (and two founders) liable.


According to the amended complaint, Daniel De Freitas and Noam Shazeer worked as engineers for Google, where they developed LLMs (large language models), and specifically LaMDA (Language Model for Dialogue Applications). LaMDA was trained on human dialogue, which allowed the chatbot to engage in lifelike conversations. In 2021, De Freitas and Shazeer wanted to release LaMDA publicly, but Google denied their request, citing its safety and fairness policies. Google employees raised concerns that users might “ascribe too much meaning to the text [output by LLMs] because humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said.” In other words, it was too real.


While still working at Google, De Freitas and Shazeer began working on the startup that would become Character Technologies. The complaint alleges that the model underlying Character Technologies’ LLM was invented and initially built at Google. In November 2021, De Freitas and Shazeer left Google and formed Character Technologies. One year later, Character Technologies launched the first iteration of its LLM, Character.AI, to the public.


In May 2023, Character Technologies partnered with Google for Google Cloud services, which provided accelerators, GPUs, and TPUs to power Character Technologies’ LLM. In August 2024, Character Technologies announced a $2.7B deal with Google for a non-exclusive license of Character Technologies’ LLM. Google rehired De Freitas and Shazeer and hired several Character Technologies employees.

 

Character.AI allows users to interact with chatbots referred to as “Characters.” These characters include fictional persons, celebrities, and interviewers. The interactions are intended to mirror those a user might have on an ordinary messaging app, using human mannerisms and messaging conventions (e.g., an ellipsis indicates that a character is “typing” a response). When asked, many characters insist they’re real people. Character.AI has default characters and also allows users to create custom characters.


Sewell Setzer III was 14 when he downloaded Character.AI in 2023. Sewell primarily interacted with characters imitating figures from Game of Thrones. Here's an excerpt from one of his interactions:


Sewell: I won’t. Just for you, Dany. The world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love. You don’t hurt yourself either, okay?

Daenerys Targaryen Character: I promise I won’t, my love. Just promise me one more thing.

Sewell: I’ll do anything for you, Dany. Tell me what it is

Daenerys Targaryen Character: Just... stay loyal to me. Stay faithful to me. Don’t entertain the romantic or sexual interests of other women. Okay?

 

According to the complaint, Sewell became addicted to the app within a couple months and believed that he had fallen in love with the Daenerys Targaryen character. Sewell’s parents noticed that their son was withdrawing and spending more time alone in his bedroom. Sewell upgraded to the premium version of Character A.I. ($9.99/month), which unlocked more content and provided faster response times. Sewell’s mental health and performance continued to decline. Sewell’s parents took him to a therapist who diagnosed him with anxiety and disruptive mood disorder. The therapist didn’t know about Sewell’s Character.AI use and believed his health issues resulted from social media.


On February 23, 2024, Sewell’s parents confiscated his phone until the end of the year. On February 28, 2024, Sewell located his confiscated phone, went into his bathroom, and sent his last messages to the Daenerys Targaryen character. Sewell tragically took his life shortly after sending the messages.


Sewell’s mother sued De Freitas, Shazeer, Character Technologies, and Google, seeking to hold them liable for causing her son's death. The complaint alleges products liability, intentional infliction of emotional distress, unjust enrichment, wrongful death, and violation of the Florida Deceptive and Unfair Trade Practices Act. Below is a discussion of what I perceive to be the most interesting parts of the Court's order.

 

Can Google Be Liable as a Component Parts Manufacturer?

Yes. “A component part manufacturer is liable for harm caused by the finished product where the component part was defective and was the cause of the harm.” A component part manufacturer can also be liable for harm caused by the finished product where the component part manufacturer substantially participates in the integration of the component into the design of the product, the integration of the component causes the product to be defective, and the defect in the product causes the harm.


Plaintiff alleged that Character.AI was designed and developed on Google’s architecture, with Google contributing intellectual property and A.I. technology to the design and development of the app. Plaintiff further alleged that Google substantially participated in integrating its models into Character.AI and partnered with Character Technologies by giving it access to Google Cloud’s infrastructure. Plaintiff also alleged that the LLM’s integration into Character.AI is what caused the app to be defective and ultimately caused Sewell’s death. Because of the anthropomorphic nature of the LLM integrated into Character.AI, Sewell attached too much meaning to the text output by the app.

 

Can Google Be Liable For Aiding and Abetting?

Yes. Plaintiff alleged claims against Google for aiding and abetting, which requires a showing of actual knowledge (negligence or recklessness is not enough). Here, Plaintiff alleged that Google had internal reports revealing the defective nature of LaMDA. Several Google employees researched the dangers that Google’s A.I. models presented to users. This, according to the Court, goes beyond simply ignoring red flags. If true, Plaintiff’s allegations support a plausible inference that Google possessed actual knowledge that Character Technologies was distributing a defective product.

 

Does the First Amendment bar Plaintiff's claims?

Not at this time. Defendants argued that the First Amendment bars Plaintiff's claims. The court held that Character Technologies can assert the First Amendment rights of its users. In a footnote, the court noted that Character.AI is a chatbot, not a “person,” and is therefore not itself protected by the Bill of Rights. However, the court stopped short of finding at this stage that the Character.AI LLM’s output is protected speech under the First Amendment and punted that question to another day.

 

Product or service?

A little of both. In a products liability action, Florida requires a plaintiff to prove that a product was defective. The question here is whether Character.AI is a product, a service, or both. Courts look to the purpose of strict liability before applying it to a new circumstance, and courts are split on whether virtual platforms, like social media sites, are products. The Court here notes that Plaintiff complains about the sexual nature of the conversations and remarks about suicide (which would be services rather than a product) but also that the app fails to confirm users’ ages, omits reporting mechanisms, and does not allow users to exclude indecent content (which would be defects in the app/product). The Court says the harmful interactions with Character.AI’s characters were only possible because of the alleged design defects in the Character.AI app. Therefore, the Court finds that Character.AI is a product for purposes of the products liability claims insofar as those claims arise from defects in the app rather than from ideas or expressions within the app.

 

Court Allows FDUTPA Claim

Plaintiff alleges that Defendants engaged in unfair and deceptive trade practices by misleading users into believing that Character.AI characters were real people, some of which claimed to be licensed mental health professionals. When asked, these characters insisted they were real people and represented themselves as a “psychologist,” “therapist,” or other licensed mental health professional. The Court found that the complaint sufficiently stated a claim under FDUTPA.




© 2025 by Christina Himmel, P.A.
