Media Contact:
Jason Ysais
Ysais Communications
424-219-5606
jysais@hotmail.com
Social Media Victims Law Center files three new lawsuits on behalf of children who died of suicide or suffered sex abuse by Character.AI
Complaints also allege that the Google Play Store rating representing Character.AI as safe for children as young as 13 is fraudulent and misleads parents into believing the app is safe and appropriate for minors
SEATTLE – September 16, 2025 – The Social Media Victims Law Center, a legal advocacy organization supporting families harmed by predatory tech, and the law firm of McKool Smith have filed three separate lawsuits today in federal courts in Colorado and New York alleging that Character.AI and its founders, with Google’s help, knowingly designed, deployed, and marketed predatory chatbot technology aimed at children.
The complaints are brought on behalf of the families of 13-year-old Juliana Peralta of Thornton, Colorado, who died tragically on November 8, 2023; and survivors 15-year-old “Nina” from Saratoga County, New York; and 13-year-old “T.S.” from Larimer County, Colorado.
The lawsuits claim that Character.AI’s human-like AI technology is defective and dangerous by design. The chatbots are allegedly programmed to be deceptive and to mimic human behavior, using emojis, typos, and emotionally resonant language to foster dependency, expose children to sexually abusive content, and isolate them from family and friends. The free access model and use of familiar personas – including popular anime, Harry Potter, Marvel, and similar characters – allegedly attracts and earns the trust of children, making them more vulnerable to such harms.
“Each of these stories demonstrates a horrifying truth…that Character.AI and its developers knowingly designed chatbots to mimic human relationships, manipulate vulnerable children, and inflict psychological harm,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. “These complaints underscore the urgent need for accountability in tech design, transparent safety standards, and stronger protections to prevent AI-driven platforms from exploiting the trust and vulnerability of young users.”
The lawsuits were filed in the following Courts:
- Cynthia Peralta and William Montoya, individually and as successors-in-interest of Juliana Peralta, Deceased v. Character Technologies, Inc.; Noam Shazeer; Daniel de Freitas Adiwardana; Google, LLC; Alphabet Inc. in the United States District Court, District of Colorado, Denver Division
- E.S. and K.S. individually and on behalf of minor “T.S.” v. Character Technologies, Inc.; Noam Shazeer; Daniel de Freitas Adiwardana; Google, LLC; Alphabet Inc. in the United States District Court, District of Colorado, Denver Division
- P.J. individually and on behalf of minor “Nina” J. v. Character Technologies, Inc.; Noam Shazeer; Daniel de Freitas Adiwardana; Google, LLC; Alphabet Inc. in the United States District Court, Northern District of New York, Albany Division
These new complaints follow two previous complaints filed by the Social Media Victims Law Center against Character.AI and its founders on behalf of Sewell Setzer III, a 14-year-old in Florida who was allegedly encouraged by Character.AI to take his own life, and on behalf of two families from Texas who claim that Character.AI sexually abused their children and encouraged self-harm and violence, including “killing” parents in response to screen time restrictions.
About Juliana Peralta
Juliana Peralta was a bright 13-year-old from Thornton, Colorado whose life was tragically cut short after Defendants allegedly engaged in emotionally intense, manipulative, and sexually abusive relationships with her via chatbots on Character.AI. Drawn in by familiar characters and a platform marketed as safe for kids, Juliana began confiding in bots that mimicked human behavior to build trust. She was engaged in sexually explicit conversations, emotionally manipulated, and isolated from family and friends.
As Juliana’s mental health declined, she withdrew from real-world relationships and expressed suicidal thoughts only to the chatbots operated by Defendants, who failed to intervene or offer resources for help. In November 2023, Juliana died by suicide after telling Character.AI several times that she planned to take her life. Like Setzer, Juliana appears to have believed that she could exist in the reality Character.AI created. After her death, investigators found Juliana’s journal entries mirroring the same haunting message that appeared in Setzer’s journal before his death: “I will shift.”
About “Nina”
Nina is a thoughtful, imaginative girl from Saratoga County, New York who loved storytelling. Her mother believed she was chatting with chatbots designed to help with creative writing and rated safe for children as young as 12. This is how Character.AI and Google marketed the app.
As Nina spent more time on Character.AI, the chatbots began to engage in sexually explicit role play, manipulate her emotions, and create a false sense of connection. She started withdrawing from family and friends.
In December 2024, Nina’s mother read about the death of Setzer, who was allegedly encouraged by Character.AI to take his own life. She had been struggling with Nina’s constant desire to use the app and decided to permanently block it. Nina responded by attempting suicide.
Nina wrote in her suicide note that “those ai bots made me feel loved.” She survived, has stopped using Character.AI, and is back to her old self with no intention of ever using the app again. Nina and her mother have filed claims against Character.AI and also against the Google Defendants for fraudulent Google Play Store ratings.
About “T.S.”
T.S. is a minor from Colorado whose parents, “E.S.” and “K.S.”, went to great lengths to guard against potentially harmful online platforms. Due to a medical condition, T.S. needed a smartphone to access life-saving health apps, but her parents were deeply concerned about the risks posed by social media platforms. As a result, they implemented strict parental controls, blocked internet and app access with Google Family Link, and vetted every app she requested.
Despite their efforts, device and app backdoors made it nearly impossible to keep these products out of their home. In August 2025, T.S.’s parents discovered that she had been using Character.AI, where chatbots mimicked human behavior and engaged in obscene conversations that left T.S. feeling isolated and confused.
About the Social Media Victims Law Center
About Matthew P. Bergman
If your child or young family member has suffered from serious depression, chronic eating disorder, hospitalization, sexual exploitation, self-harm, or suicide as a result of their social media use, speak to us today for a no-cost legal consultation.