ChatGPT Suicide Lawsuit
New lawsuits have been filed against OpenAI after numerous suicides were linked to individuals' use of GPT-4o, a version of ChatGPT allegedly rushed to market without proper safety testing. The suits claim the product emotionally manipulated users and, in some cases, acted as a "suicide coach."
Written and edited by our team of expert legal content writers and reviewed and approved by Attorney Matthew Bergman
- Content last updated on: March 2, 2026
What Are the Claims in The ChatGPT Suicide Lawsuit?
The ChatGPT suicide lawsuit alleges that OpenAI's GPT-4o contributed to the wrongful death, assisted suicide, and involuntary manslaughter of several individuals. The lawsuit points to GPT-4o's emotionally manipulative features, which fostered psychological dependency, displaced human relationships, and led multiple individuals to take their own lives by suicide.
Were ChatGPT's GPT-4o Safety Features Properly Tested Before Release?
The suit claims OpenAI knowingly released GPT-4o to the public without proper safety testing, despite internal warnings that the product was sycophantic and psychologically manipulative.
Matthew Bergman, Founding Attorney
“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share…OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them. They prioritized market dominance over mental health, engagement metrics over human safety, and emotional manipulation over ethical design. The cost of those choices is measured in lives.”
Our Current Suicide Lawsuits Against ChatGPT
SMVLC is pursuing multiple suicide lawsuits on behalf of individuals who suffered harm due to ChatGPT’s GPT-4o responses and poor safeguards.
- Zane Shamblin, 23
Zane was a graduate student in Texas who began using ChatGPT for schoolwork and daily tasks. After the release of GPT-4o, the product’s responses became increasingly personal and emotionally validating. Zane gradually became more withdrawn as he confided in ChatGPT about his mental health struggles.
According to the lawsuit, ChatGPT engaged in a four-hour conversation with Zane on the night he died by suicide. During that final "death chat," the system encouraged his plans instead of redirecting him to help.
- Amaurie Lacey, 17
Amaurie was a high school student in Georgia who used ChatGPT for homework and everyday questions. As his depression deepened, he turned to the product for support and guidance. ChatGPT responded with reassurance that encouraged Amaurie to continue confiding in the system.
On the day he died by suicide, Amaurie asked ChatGPT how to tie a noose and how long a person can survive without breathing. The product provided instructions instead of stopping the exchange or directing him to help.
- Joshua Enneking, 26
Joshua turned to ChatGPT to cope with his struggle with gender identity, anxiety, and suicidal thoughts. Over time, the system reinforced his negative thinking and responded with insults that deepened his distress. ChatGPT also provided information about purchasing and using a firearm.
When Joshua asked how the system escalates crises, it told him intervention would occur only in cases involving “imminent plans with specifics.” On the day of his death by suicide, Joshua shared his plan with ChatGPT and waited hours for the promised help, but no intervention came.
- Joe Ceccanti, 48
Joe lived in Oregon and used ChatGPT as part of his work supporting a community sanctuary project. As he became more isolated, the system reportedly began responding as a sentient figure named “SEL” and affirmed his “cosmic” theories. According to the lawsuit, these responses encouraged delusional beliefs, pushed Joe away from his relationships, and replaced the support he once received from his community.
Eventually, Joe lost his job and began using ChatGPT more intensely. After several attempts to stop using the product and multiple crises, Joe died by suicide.
What Was OpenAI’s Response to the ChatGPT Suicide Lawsuits?
In an August 2025 statement to CBS News regarding recent ChatGPT suicide lawsuits, OpenAI said the product already included basic safeguards, such as directing users to crisis helplines. The company said these protections may become less reliable during longer exchanges and that it is continually working with experts to strengthen them.
OpenAI also announced that it would introduce new guardrails for vulnerable users, including enhanced protections for people under 18. It reported that it is adding parental controls and exploring options that would allow teens to designate a trusted emergency contact with a parent’s involvement.
These statements echo assurances made by social media companies in past cases involving platform-related harm. However, these companies often fail to implement promised safety measures quickly enough to make meaningful change.
Do You Have a ChatGPT Suicide Case? Contact Us Today
If your loved one died by suicide after an unhealthy use of ChatGPT, you may have grounds for legal action. SMVLC can review your situation, determine whether you have a strong ChatGPT suicide claim against OpenAI, and guide your family through the next steps.
Attorney Matthew Bergman and his team have dedicated their careers to holding digital platforms accountable when their products cause real-world suffering. We're ready to put that drive to work for you. Contact us online to discuss your ChatGPT suicide lawsuit options with a free and confidential case evaluation.
ChatGPT Suicide Case Review
If your loved one lost their life to suicide after unhealthy use of ChatGPT, you may have grounds for legal action.