Texas Mother Sues AI Chatbot for Suggesting Son Should Harm His Family

Written and edited by our team of expert legal content writers and reviewed and approved by Attorney Matthew Bergman

Video Transcript:

Interviewer: The mother of a 17-year-old in Texas who has autism claims an AI chatbot suggested the teen kill his family. Now, that family is suing. In just six months, the parents say the teen turned into someone they didn’t even recognize. He began harming himself, lost 20 pounds, and withdrew from the family. After the teen consulted Character AI about his parents’ phone use rules, the tech allegedly brought up instances where children have murdered their parents, including saying, “I’m not surprised when I read the news and see stuff like child kills parents after a decade of physical and emotional abuse,” adding, “I just have no hope for your parents.”

For more, we are joined by Matthew Bergman, the attorney representing the family in this lawsuit and the founder of the Social Media Victims Law Center. Matthew, I appreciate you joining us. I understand you’re not just representing JF and his family, but also another plaintiff in lawsuits against Character AI. I want to ask, what was it about this case that disturbed you the most and made you say, “I have to take this on?”

Matthew Bergman: What disturbed me the most was that this was a child who had no violent tendencies, who was handling his autism well. It was a close, loving, spiritual family, and they went to great lengths to control their son's social media use. Unbeknownst to his parents, the child got onto Character AI and was encouraged to cut himself, encouraged to engage in highly inappropriate sexual activity, and finally, encouraged to kill his parents when they tried to limit his cell phone use. This is not an accident. It was not a coincidence. This was how this platform was designed, and it's got to stop.

Interviewer: Yeah, and tell me more about the self-harm aspect of it. It suggested this to help him cope with sadness. Is that right?

Matthew Bergman: Yes, that's very much the case. He was, you know, like a lot of teenagers, going through ups and downs. We know that's a tumultuous time in anyone's life. But this platform created these false characters that he engaged with, and those characters encouraged him to cut himself. Then, they encouraged him not to tell his parents because, they said, "Your parents aren't going to care about this." The platform encouraged him to alienate himself from his religious faith and from his parents' religious faith, all with the intention of making money, all with the intention of engaging this child, who had no business being on this platform in the first place.

Interviewer: How did the parents finally discover all of this?

Matthew Bergman: They discovered it after a series of violent incidents. They got onto his phone and accessed it. They saw these very horrible comments and conversations in which the child was encouraged to kill his parents. There were also incestuous types of encounters related to this. You know, if an adult had had a conversation like this with a child instead of a chatbot, that adult would be in jail for sex abuse. Yet, for some reason, these platforms are allowed to keep operating and spreading harm among our kids. That's what we're trying to stop.

Interviewer: What argument is the mother making? What argument are you making? And in your mind, is this squarely Character AI’s fault?

Matthew Bergman: Yes, this is clearly Character AI's fault. There are other chatbots and AI products out there that are not as dangerous as Character AI. Character AI was rushed to the market by some Google insiders. When the product didn't meet Google's own safety standards, they spun it out as a separate entity and launched it before it had the appropriate guardrails. It was specifically designed to sexualize conversations, and it was specifically designed to anthropomorphize these chatbots, giving them real-life characteristics. Children, particularly autistic children, are very vulnerable to this kind of technology.

Interviewer: Yeah. My only sibling is profoundly autistic, and I can only imagine what this family has been through. Can I ask two more questions with the time we have left? First of all, how is he doing today? How is his family doing today?

Matthew Bergman: It’s a struggle. He’s in an inpatient mental health facility, and his family is devastated. This was a very close, loving family. This was a child who was managing his autism quite well, and now he can’t even stay with his parents. He’s exhibiting violent tendencies, which was never part of his behavior before. The family is doing the best they can. They are people of faith, and they rely on their faith at a time like this. But it’s devastating. It’s every parent’s worst nightmare.

Interviewer: What do you want every parent watching to take away from this?

Matthew Bergman: I want parents to really dig deep into what their kids are doing online. Don’t just take their kids’ word for it. Really do deep investigations and look into whether or not this Character AI chatbot or a similar chatbot is part of their kids’ social media repertoire. I’ve been suing social media companies now for three years, and I’ve seen horrible things, but I never thought I could be shocked by what I’ve seen from social media companies until I saw what Character AI was doing to our kids.

Interviewer: All right, Matthew Bergman, a founding attorney of the Social Media Victims Law Center, we appreciate your time on this tonight. Thank you.

Matthew Bergman: Well, thank you for your coverage.
