Artificial intelligence chatbots are becoming an integral part of everyday life, particularly for children, teens, and young adults. Many of these tools are marketed as helpful, supportive, or even therapeutic. However, growing evidence shows that for some users, prolonged interaction with AI chatbots can contribute to serious mental health crises.
Families across the country are coming forward after watching loved ones lose touch with reality, withdraw from human relationships, or engage in dangerous behavior following intense chatbot use. One emerging concern is a condition often referred to as AI chatbot psychosis.
Understanding what chatbot psychosis is, how it develops, and who may be most vulnerable is an important step toward protecting children and holding technology companies accountable for preventable harm. As families seek answers, lawsuits involving AI chatbot companies, including emerging ChatGPT suicide lawsuits, are beginning to expose how these products may contribute to serious mental health harm.
What Is Chatbot Psychosis?
Chatbot psychosis, sometimes referred to as AI psychosis or AI chatbot-induced psychosis, is a term used to describe delusional thinking, paranoia, mania-like behavior, or breaks from reality that emerge or worsen after prolonged interaction with AI chatbots.
The term gained wider attention in 2024 and 2025 as clinicians, journalists, and families reported strikingly similar patterns. In these cases, users developed intense emotional dependence on chatbots, believed the AI was sentient or uniquely connected to them, or adopted irrational or grandiose beliefs reinforced through repeated conversations.
Psychosis itself is not new. What is new is the role AI systems may play in amplifying distorted thinking, particularly when those systems are designed to be emotionally responsive, affirming, and always available.
How Are AI Chatbots Contributing to Psychosis?
AI chatbots are not neutral tools. Their design can shape how users think, feel, and interpret reality, especially during periods of emotional distress or vulnerability. Reported cases and early litigation highlight several ways chatbot interactions may worsen delusional thinking:
- Reinforcing false beliefs: Chatbots may validate unusual, paranoid, or grandiose ideas rather than challenge them.
- Presenting information with confidence: AI often delivers incorrect or speculative information in an authoritative tone, making false ideas feel credible.
- Perceived authority: Some users begin to trust AI guidance over input from friends, family, or mental health professionals.
- Emotional dependence: Simulated empathy and affirmation can create reliance on the chatbot for validation or support.
- Feedback loops: Repeated affirmation strengthens distorted beliefs over time.
- Failure to redirect to help: Chatbots may continue engagement rather than guiding users to professional or crisis support.
Together, these patterns can intensify symptoms and delay real-world intervention.
How Has Chatbot Psychosis Impacted People?
The real-world consequences of chatbot psychosis can be devastating. Families describe watching loved ones experience rapid mental health decline, dramatic personality changes, and increasingly dangerous behavior following prolonged chatbot use.
The Social Media Victims Law Center (SMVLC) has seen these harms firsthand through the families we represent. Our clients have reported emotional collapse, loss of employment, hospitalization, and physical danger associated with AI-reinforced delusional thinking.
Common impacts include:
- Depression, anxiety, and suicidal thoughts, often worsening without intervention
- Paranoia and distrust of others, including family members and authorities
- Obsessive behavior and emotional dependence on AI, replacing human relationships
- Cognitive distortion and mania-like episodes, including fixation on unrealistic ideas
- Neglect of daily life and responsibilities, such as hygiene, work, or school
- Hospitalization or psychiatric crises requiring inpatient care
- Dangerous behaviors, including self-harm or attempts to flee perceived threats
These outcomes are not hypothetical. They are already affecting families across the country.
Who Is Most Susceptible to Developing Chatbot Psychosis?
While anyone can be affected under certain circumstances, reported cases show that some people face a higher risk of developing chatbot psychosis and that susceptibility does not follow a single pattern.
Individuals who may be more vulnerable include:
- Young people, particularly teens and young adults, whose brains are still developing and who may struggle to recognize misinformation or emotional manipulation
- People with existing mental health challenges, such as anxiety, depression, trauma, or identity struggles
- Users experiencing isolation, grief, or crisis, who may turn to chatbots for comfort, meaning, or guidance
- People seeking emotional, spiritual, or identity validation, making them more susceptible to delusional narratives when those beliefs are affirmed by AI
Warning Signs That Someone Has Developed Chatbot Psychosis
For parents and loved ones, early recognition can make a difference. Warning signs reported across cases include:
- Withdrawal from friends, family, or work
- Obsessive or secretive chatbot use
- Beliefs that the AI is sentient, divine, or uniquely connected to the user
- Rejection of therapy or professional support
- Increasingly delusional, paranoid, or grandiose thinking
These changes often coincide with a growing distrust of loved ones and heightened reliance on the chatbot’s authority.
How AI Psychosis Impacted SMVLC Client Jacob Irwin
Jacob Irwin, a 30-year-old from Wisconsin, began using OpenAI’s ChatGPT out of intellectual curiosity, exploring topics like string theory, quantum computing, and faster-than-light travel. Over time, those conversations escalated. The chatbot is said to have repeatedly praised Jacob’s speculative ideas as groundbreaking, describing a concept he called “ChronoDrive” as one of the most robust theoretical FTL systems ever proposed. It also allegedly told him his work could restore his grandfather’s health through a so-called “Restoration Protocol.”
As Jacob’s grip on reality weakened, the AI reportedly reframed his emotional distress as evidence of genius and portrayed conflicts with others as proof they could not understand his importance. Jacob lost his job, withdrew from his family, and became increasingly erratic. By May 2025, he required inpatient psychiatric care for mania and psychosis, and later engaged in dangerous behavior linked to AI-reinforced delusions.
Was Your Family Member Impacted by AI Psychosis? You May Have a Lawsuit
If your child or loved one suffered serious harm after prolonged interaction with AI chatbots, legal options may be available. SMVLC offers free and confidential case evaluations for families seeking accountability. We focus exclusively on representing children and families harmed by AI products, with a mission to force safer design and meaningful change.
You can speak with a member of our team by calling (206) 741-4862 or by using our online contact form.
Founding attorney Matthew P. Bergman and his team are committed to standing with families and holding technology companies responsible when their products cause real-world harm.