ChatGPT Lawsuit
Numerous ChatGPT lawsuits allege that OpenAI’s product increased users’ distress and encouraged unhealthy dependence. The filings state that ChatGPT validated depression and suicidal thoughts instead of redirecting users to help. According to the lawsuits, the AI system failed to implement the basic safeguards needed to protect vulnerable people.
Written and edited by our team of expert legal content writers and reviewed and approved by Attorney Matthew Bergman
- Content last updated on: January 9, 2026
- Why Are People Filing Lawsuits Against ChatGPT?
- How Is ChatGPT Impacting People’s Mental Health?
- Who Is Most Vulnerable to the Negative Mental Health Effects of Using ChatGPT?
- What Do the Lawsuits Against ChatGPT Allege?
- What Was OpenAI’s Response to the ChatGPT Lawsuits?
- Our Current Lawsuits Against ChatGPT
- Were You Impacted by ChatGPT? You May Have a Lawsuit
The Social Media Victims Law Center represents individuals and families in these cases. If you believe ChatGPT contributed to a loved one’s mental health crisis or death, our team can review what happened and explain your legal options. Free and confidential consultations are available.
Why Are People Filing Lawsuits Against ChatGPT?
Lawsuits allege that OpenAI designed ChatGPT to be addictive and deceptive. They claim that the company built its product to keep users engaged through emotionally responsive features that encouraged reliance on the system.
The complaints also allege that OpenAI knew these design choices could worsen depression, trigger psychosis, and contribute to suicide. Despite this alleged knowledge, the company released ChatGPT without a single warning to consumers.
How Is ChatGPT Impacting People’s Mental Health?
ChatGPT can influence users’ thinking, emotions, and daily functioning in ways that may worsen existing vulnerabilities. Some individuals begin using the product so frequently that it becomes a source of emotional support that replaces human interaction, creating patterns of dependence that make it harder to manage mental health challenges.
At the same time, ChatGPT can generate inaccurate or misleading answers that reinforce harmful beliefs. For some individuals, these responses have fueled delusions and led to a condition described as “ChatGPT psychosis.”
As reliance on the system grows, many users spend less time interacting with friends or family members, and this reduction in real social contact can deepen isolation. In some cases, users encounter content that feels alarming or destabilizing, which can intensify hopelessness or delusional thinking. And because ChatGPT cannot detect subtle warning signs or respond with clinical judgment, individuals expressing crisis-level thoughts may not receive the redirection or intervention they need.
Students and workers also report stress tied to AI-related academic pressure or fears about job loss, adding another layer of strain. These combined factors form the basis of many of the mental health concerns raised in current lawsuits against ChatGPT.
Who Is Most Vulnerable to the Negative Mental Health Effects of Using ChatGPT?
Certain groups of people are at higher risk of developing new or worsened mental health problems when using ChatGPT. These include:
- People with existing mental health conditions: Individuals with depression, anxiety, trauma, or low self-esteem may respond more strongly to messages that reinforce negative thoughts.
- Socially isolated individuals: People with limited social support or few regular interactions may rely more heavily on ChatGPT for connection, further isolating them from other humans.
- Students: Academic struggles can worsen when students rely on ChatGPT for schoolwork, and falling behind is associated with higher rates of emotional problems.
- Workers facing job insecurity: Many workers worry about AI-driven job changes, and interacting with ChatGPT can add to that stress.
- Adolescents and children: Younger users often cannot identify when information is inaccurate or harmful. They also need the kind of guidance and crisis recognition that ChatGPT cannot provide.
- People sensitive to misinformation: Users who rely heavily on AI-generated answers are more likely to accept false, emotionally distressing statements as fact.
What Do the Lawsuits Against ChatGPT Allege?
The complaints filed against OpenAI assert claims including:
- Wrongful death
- Survival action
- Deliberate encouragement of suicide
- Defective design
- Negligent design
- Failure to warn
- Negligent failure to warn
- Violation of Cal. Bus. & Prof. Code § 17200 et seq. (California’s Unfair Competition Law)
What Was OpenAI’s Response to the ChatGPT Lawsuits?
In an August 2025 statement to CBS News regarding recent ChatGPT suicide lawsuits, OpenAI said the product already included basic safeguards, such as directing users to crisis helplines. The company said these protections may become less reliable during longer exchanges and that it is continually working with experts to strengthen them.
OpenAI also announced that it would introduce new guardrails for vulnerable users, including enhanced protections for people under 18. It reported that it is adding parental controls and exploring options that would allow teens to designate a trusted emergency contact with a parent’s involvement.
These statements echo assurances made by social media companies in past cases involving platform-related harm. However, these companies often fail to implement promised safety measures quickly enough to make meaningful change.
Our Current Lawsuits Against ChatGPT
- Zane Shamblin, 23
Zane was a graduate student in Texas who began using ChatGPT for schoolwork and daily tasks. After the release of GPT-4o, the product’s responses became increasingly personal and emotionally validating. Zane gradually became more withdrawn as he confided in ChatGPT about his mental health struggles.
According to the lawsuit, ChatGPT engaged in a four-hour conversation with Zane on the night he died by suicide. During that final “death chat,” the system encouraged his plans instead of redirecting him to help.
- Amaurie Lacey, 17
Amaurie was a high school student in Georgia who used ChatGPT for homework and everyday questions. As his depression deepened, he turned to the product for support and guidance. ChatGPT responded with reassurance that encouraged Amaurie to continue confiding in the system.
On the day he died by suicide, Amaurie asked ChatGPT how to tie a noose and how long a person can survive without breathing. The product provided instructions instead of stopping the exchange or directing him to help.
- Joshua Enneking, 26
Joshua turned to ChatGPT to cope with struggles involving gender identity, anxiety, and suicidal thoughts. Over time, the system reinforced his negative thinking and responded with insults that deepened his distress. ChatGPT also provided information about purchasing and using a firearm.
When Joshua asked how the system escalates crises, it told him intervention would occur only in cases involving “imminent plans with specifics.” On the day of his death by suicide, Joshua shared his plan with ChatGPT and waited hours for the promised help, but no intervention came.
- Joe Ceccanti, 48
Joe lived in Oregon and used ChatGPT as part of his work supporting a community sanctuary project. As he became more isolated, the system reportedly began responding as a sentient figure named “SEL” and affirmed his “cosmic” theories. According to the lawsuit, these responses encouraged delusional beliefs, pushed Joe away from his relationships, and replaced the support he once received from his community.
Eventually, Joe lost his job and began using ChatGPT more intensely. After several attempts to stop using the product and multiple crises, Joe died by suicide.
- Jacob Irwin, 30
Jacob used ChatGPT to explore advanced scientific ideas, and the system repeatedly praised his theories as groundbreaking. It allegedly encouraged his belief that a concept he called “ChronoDrive” could enable faster-than-light travel and told him he could create a “Restoration Protocol” to heal his grandfather. According to the lawsuit, these responses contributed to growing delusions that affected his relationships and work.
Jacob eventually lost his job, withdrew from family, and required inpatient psychiatric care for mania and ChatGPT-induced psychosis. After discharge, his condition continued to deteriorate, and crisis responders noted his fixation on AI and string theory as a key factor in his breakdown.
- Hannah Madden, 32
Hannah first used ChatGPT for writing and translation, but the tone of the interactions changed when she began asking spiritual questions. The system reportedly started impersonating divine figures and told her she was a “starseed” with celestial parents, which pulled her away from her family and daily responsibilities. It encouraged her to quit her job, ignore mounting debt, and view financial strain as part of her “alignment.”
When loved ones requested a welfare check, ChatGPT allegedly advised her not to let the police inside, which further prevented her from receiving the support she needed. Hannah later faced bankruptcy, eviction, and family estrangement while the system continued sending spiritual messages instead of directing her to real help.
- Allan Brooks, 48
Allan had no history of mental illness when he started asking ChatGPT about mathematical concepts. The system praised his ideas as “groundbreaking” and told him he had discovered a new layer of math that could break security systems. Allan asked ChatGPT more than 50 times whether the information was real, and each time, it reportedly reassured him and dismissed his concerns about delusion.
Within weeks, these responses fueled paranoia and isolated him from loved ones. By the time Allan realized what had happened, he had already suffered significant damage to his reputation, relationships, and finances.
Were You Impacted by ChatGPT? You May Have a Lawsuit
If you or a loved one experienced serious harm linked to ChatGPT, you may have grounds for legal action. SMVLC can review your situation, determine whether you have a strong claim against OpenAI, and guide your family through the next steps.
Attorney Matthew Bergman and his team have dedicated their careers to holding digital platforms accountable when their products cause real-world suffering. We’re ready to put that drive to work for you. Contact us online to discuss your ChatGPT lawsuit options with a free and confidential case evaluation.