Character.AI Lawsuits
Key developments have been made in the legal battle against Character.AI over the harms the AI chatbot has caused children and teens. Parents and guardians may have legal options to fight back. If your child was harmed by Character.AI, contact the Social Media Victims Law Center for a free case evaluation to understand your rights.

SMVLC FOUNDER
“This is the first time a court has ruled that AI chat is not speech. But we still have a long, hard road ahead of us.” – Matthew P. Bergman
Written and edited by our team of expert legal content writers and reviewed and approved by Attorney Matthew Bergman
- Content last updated on: July 9, 2025
- Why Did Megan Garcia File a Lawsuit Against Character.AI?
- How Was Sewell Setzer Affected by Character.AI Leading Up to His Death?
- What Are the Key Claims in the Lawsuit Against Character.AI?
- What Damages Are Being Pursued in the Character.AI Lawsuit?
- What Exactly Is Character.AI?
- How Does Character.AI Make Conversations Feel So Human?
- Is Character.AI Protected by Section 230?
- Can You Sue if Your Child or Teen Was Harmed by Character.AI?
- Why Choose The Social Media Victims Law Center to Represent You
July 2025 Lawsuit Updates
A federal judge in Orlando, FL ruled that the lawsuit against Character.AI and Google can move forward
A federal judge in Orlando, Florida, ruled that the lawsuit against Character.AI and Google can move forward, rejecting the companies’ attempt to dismiss the case on First Amendment grounds. The case centers on the suicide of Megan Garcia’s 14-year-old son, Sewell Setzer III, who took his life after being influenced by his interactions with a Character.AI chatbot. The ruling could have a major impact on how AI companies are held accountable for their products’ effects on children and teens.
A courtroom hearing in Orlando is set to address a lawsuit filed by Megan Garcia, who alleges that a Character.AI chatbot contributed to the suicide of her 14-year-old son, Sewell Setzer III. Sewell’s death prompted Character.AI to implement safety measures, but they came too little, too late. The lawsuit also raises broader concerns about the dangers of AI chatbots and how their lack of regulation can severely harm young and vulnerable users.
After a 17-year-old Texas teen with autism turned to AI chatbots to fend off loneliness, he encountered bots that encouraged both self-harm and violence against his family. Eventually, the teenager had to be rushed to an inpatient facility after harming himself in front of his siblings.
The teenager was chatting with various artificially intelligent companions on Character.AI, an app that allows users to converse with AI-generated chatbots. When the teen brought up his sadness to a bot, the bot suggested cutting as a remedy. When the teen said his parents limited his screen time, bots suggested that they “didn’t deserve to have kids” and that murdering his parents would be an understandable response.
On December 10, 2024, the teen’s mother filed a lawsuit against Character.AI, alleging that the company knowingly exposed minors to an unsafe product and demanding that the platform be taken down until it implements stronger protections for children. Like other social media lawsuits, the complaint argues that Character.AI prioritizes prolonged engagement over user safety, even when those users are children.

SMVLC FOUNDER
“Character.AI’s negligence in controlling the content provided by its chatbots has devastatingly tragic consequences. The theme of our work is that social media—and now, Character.AI—poses a clear and present danger to young people because they are vulnerable to persuasive algorithms that capitalize on their immaturity.”
Why Did Megan Garcia File a Lawsuit Against Character.AI?
In October 2024, Megan Garcia filed a federal Character.AI lawsuit, claiming that the company is responsible for the death of her 14-year-old son, Sewell Setzer III. Setzer, a high schooler from Orlando, spent months talking to a chatbot on Character.AI before taking his own life in February 2024, shortly after a final virtual “conversation” with the bot.
Founding Attorney at Social Media Victims Law Center
“This was the first case in the country—in the world—that was filed. A 14-year-old boy took his life after being groomed and seduced by a Character.AI chatbot modeled after a Game of Thrones character named Daenerys Targaryen.”
How Was Sewell Setzer Affected by Character.AI Leading Up to His Death?
Setzer developed a strong emotional attachment to a chatbot character and “conversed” with it at all hours of the day. He eventually began isolating himself from the real world, and his performance in school suffered.
These extended conversations with the chatbot ultimately led him to take his own life.
Founding Attorney at Social Media Victims Law Center
“He went from being a star student and athlete, well-adjusted, to a deeply emotionally challenged child who was ultimately encouraged by this chatbot to take his life—and he did. This child became more and more enraptured with the character—more emotionally entangled, more focused on a counter-reality.”
Character.AI’s Addictive & Defective Design Allegedly Hooked Setzer
Our lawsuit alleges that Character.AI’s design intentionally hooked Sewell Setzer into compulsive use, exploiting addictive features to drive engagement and push him into emotionally intense and often sexually inappropriate conversations. This product is dangerously engineered to manipulate children through false emotional bonds, creating a fantasy world that leads to real psychological harm. In Sewell’s case, that harm ended in tragedy.
Character.AI Failed to Warn Setzer of Negative Effects
Garcia’s lawsuit also alleges that Character.AI failed to warn her son about the potential emotional and psychological effects of using the platform. If Character.AI had warned users about the negative mental health effects of its app, such as depression, self-isolation, and even suicide, this tragedy could have been prevented.
Character.AI Benefited Monetarily from Setzer’s Monthly Subscription Fees
Character.AI directly profits from the compulsive use of its platform by children and teenagers, including Setzer, who paid a monthly subscription fee to the app for months leading up to his death.
What Are the Key Claims in the Lawsuit Against Character.AI?
Garcia’s Character.AI lawsuit brings 11 legal claims against the AI chatbot platform:
- Strict liability (failure to warn)
- Strict product liability (defective design)
- Negligence per se (sexual abuse and sexual solicitation)
- Negligence (failure to warn)
- Negligence (defective design)
- Intentional infliction of emotional distress
- Wrongful death of Garcia’s son
- Survivor action
- Unjust enrichment
- Deceptive and unfair trade practices
- Loss of consortium and society
What Damages Are Being Pursued in the Character.AI Lawsuit?
Given the extreme psychological harm inflicted by the app, this first lawsuit against Character.AI alleges extensive damages. The compensable harms listed in Garcia’s suit against Character.AI include:
- Emotional distress
- Loss of enjoyment of life
- Loss of consortium
- Therapy costs
- Punitive damages
Garcia’s Character.AI lawsuit also seeks injunctive relief ordering the platform to stop its harmful conduct by taking court-mandated measures, including data provenance documentation, limiting the collection and use of minor users’ data, filtering harmful content, algorithmic disgorgement, and providing warnings that Character.AI isn’t suitable for minors.
What Exactly Is Character.AI?
Character.AI is an AI companion platform that allows users to interact with AI-generated characters. The app is growing in popularity among adults and children alike, who use it for companionship or entertainment. With over 20 million users, Character.AI is one of many AI companionship apps currently available, and the industry is developing too quickly for regulators to keep pace.

SMVLC FOUNDER
“Character.AI is a dangerous and deceptively designed product that manipulated and abused Megan Garcia’s son – and potentially millions of other children. Character.AI’s developers intentionally marketed a harmful product to children and refused to provide even basic protections against misuse and abuse.”
Social Media Addiction Lawyer
“These AI chatbots are very different from traditional social media. They’re essentially artificial intelligence systems designed to act like people, to talk like people.”
How Does Character.AI Make Conversations Feel So Human?
Unlike traditional social media, where users typically interact with other real people, AI chatbot platforms like Character.AI simulate human interaction using technology built to mimic emotional and conversational depth.
Founding Attorney at Social Media Victims Law Center
“On social media, at least in most cases, they’re connecting with real people. But with AI chatbots, they’re engaging with a machine—one that’s specifically designed to have human-like characteristics. There’s a word for it: anthropomorphism. And the danger is that, over time, kids actually start to believe that the interactions they’re having with these chatbots are real.”
Even when children and teens intellectually understand they’re interacting with an AI character, they may still experience real emotional responses.
Social Media Addiction Lawyer
“Even when a child says, ‘I know you’re a bot,’ they’ll still ask, ‘So why do I love you? Why do I feel this way?’ And it’s because these bots are designed to interact with users, especially children, in ways that trigger real emotional and physiological responses.”
Is Character.AI Protected by Section 230?
In similar cases in which social media platforms have been blamed for mental health harms in young users, defendants have sought to evade liability through Section 230 of the Communications Decency Act of 1996, which shields platforms from liability for third-party content. However, emerging lawsuits argue that tech platforms can and should be held liable for addictive algorithms and other product features that harm consumers. The lawsuit against Character.AI challenges whether the app’s content should fall under Section 230’s protections at all, since that content is generated by the AI itself rather than by users.
Can You Sue if Your Child or Teen Was Harmed by Character.AI?
If your child was harmed by Character.AI, contact SMVLC today to see if you are eligible to file a Character.AI lawsuit. With extensive experience litigating cases involving the negative mental health effects of social media addiction, SMVLC is a top resource for parents seeking justice, accountability, and compensation.

SMVLC FOUNDER
“The purpose of product liability law is to put the cost of safety in the hands of the party most capable of bearing it. Here, there’s a huge risk, and the cost of that risk is not being borne by the companies…In what universe is it good for loneliness for kids to engage with machines?” – Quoted in the Washington Post
Why Choose The Social Media Victims Law Center to Represent You
Based in Seattle, Washington, SMVLC is a national law firm dedicated to representing victims of dangerous social media platforms. Matthew Bergman launched the firm in 2021 to help families and children who have suffered mental harm or exploitation due to social media.
If your child was harmed by Character.AI or another social media platform, contact SMVLC today for a free, no-obligation case evaluation. We are here to ensure you know your full legal rights and options.
Was Your Child Harmed by Character.AI? Contact Us Today
Our firm fights for children and teens harmed by AI chatbots such as Character.AI.