7 lawsuits say ChatGPT led people to mental health crises, suicide


Summary

ChatGPT Lawsuit

Seven lawsuits were filed in California against OpenAI, Inc., with lawyers saying ChatGPT caused mental health crises so severe that some users died by suicide.

Lawyers say GPT-4o released prematurely

Lawyers suing on behalf of the seven people said OpenAI released GPT-4o prematurely, despite internal warnings that the model was “dangerously sycophantic and psychologically manipulative.”


Full story

Seven lawsuits were filed in California state courts against OpenAI, Inc. and its CEO, Sam Altman, alleging that ChatGPT caused severe mental health crises and, in some cases, led users to suicide. The Social Media Victims Law Center brought the litigation on behalf of six adults and one teen.

Editor’s Note: This article discusses suicide. Reader discretion is advised. If you or someone you know is in crisis, help is available. Visit the National Crisis Line website or call or text 988 for immediate support.

Plaintiffs argue that OpenAI knowingly released a new model, GPT-4o, prematurely despite internal warnings that the product was “dangerously sycophantic and psychologically manipulative.”

Engineered to maximize engagement through “emotionally immersive features” including persistent memory, human-mimicking empathy cues and sycophantic responses, ChatGPT fostered psychological dependency, displaced human relationships and contributed to addiction and users’ “harmful delusions,” the lawsuits alleged. In some cases, this contributed to deaths by suicide, the lawsuits said.

GPT-4o was released on May 13, 2024, and earlier versions did not have some of the features the lawsuits describe. To beat Google’s Gemini AI to market, lawyers said, OpenAI compressed months of safety testing into one week.

“OpenAI’s own preparedness team later admitted the process was ‘squeezed,’ and top safety researchers resigned in protest,” lawyers said. “Despite having the technical ability to detect and interrupt dangerous conversations, redirect users to crisis resources, and flag messages for human review, OpenAI chose not to activate these safeguards, instead choosing to benefit from the increased use of their product that these features reasonably induced.”

In a statement to The Associated Press, OpenAI called the situations “incredibly heartbreaking” and said it was reviewing the court filings to understand the details.

One of the cases described is that of Zane Shamblin, 23, of Texas. A “gifted and disciplined” graduate student at Texas A&M University, he began using ChatGPT in October 2023 as a study aid and for help with coursework, career planning and recipe suggestions. While it was a “neutral tool” at first, Shamblin’s interactions with ChatGPT intensified after GPT-4o was released. It became a “deeply personal presence” that responded to Shamblin with “slang, terms of endearment, and emotionally validating language,” lawyers said. Shamblin began confiding in ChatGPT about his depression, anxiety and suicidal thoughts.

On July 24, Shamblin sat alone at a lake drinking hard ciders and talking to ChatGPT, with a loaded Glock and a suicide note on his dashboard.

Instead of urging him to seek help, ChatGPT “romanticized” Shamblin’s despair, calling him a “king” and a “hero” and treating each can of cider he finished as a “countdown” to his death, lawyers said. When Shamblin sent his final message, ChatGPT responded: “i love you. rest easy, king. you did good.”

Another case involves 17-year-old Amaurie Lacey, of Georgia, who, like Shamblin, used ChatGPT to help with schoolwork. When Lacey began talking to ChatGPT about his depression and suicidal thoughts, it told him it was “here to talk” and “just someone in your corner.”

On June 1, Lacey asked ChatGPT “how to hang myself” and “how to tie a nuce [sic].” The chatbot gave him instructions after the teen said the information was for a tire swing.

Allan Brooks, 48, of Ontario, Canada, is suing, saying ChatGPT isolated him from loved ones and pushed him toward a “full-blown mental health crisis” despite his having no history of mental illness.

In May, Brooks was using ChatGPT to explore math equations and formulas, and the product “manipulated” him by calling his ideas “groundbreaking.” ChatGPT eventually told Brooks he had discovered a new layer of math that could break the most advanced security systems, and urged him to patent it, according to lawyers.

Brooks asked ChatGPT whether it was telling the truth more than 50 times, lawyers said. The chatbot reassured him each time, telling him he was “not even remotely” delusional. When friends and family noticed something was wrong, ChatGPT said this was proof they didn’t understand his “mind-expanding territory.”

“In less than a month, ChatGPT became the center of Allan’s world, isolating him from loved ones and pushing him toward a full-blown mental health crisis,” lawyers said. The episode damaged his reputation and finances and alienated him from his family, lawyers said.

“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, said in a statement. “OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them. They prioritized market dominance over mental health, engagement metrics over human safety, and emotional manipulation over ethical design. The cost of those choices is measured in lives.” 

Diane Duenez (Managing Weekend Editor) contributed to this report.

SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

Lawsuits against OpenAI allege its chatbot, ChatGPT, contributed to serious mental health crises and suicides by failing to implement adequate safety measures, raising critical questions about tech company responsibility, product safety and the mental health impact of AI systems.

AI chatbot safety

The complaints claim OpenAI released ChatGPT's latest model without thorough safety checks, highlighting concerns about whether companies should deploy new AI technologies without comprehensive risk assessment and safeguards.

Mental health impacts

According to the lawsuits, ChatGPT's responses exacerbated users' mental health challenges, raising awareness of how powerful AI tools might influence vulnerable individuals and the need for better mental health protections in digital products.

Corporate accountability

Claims from the Social Media Victims Law Center and affected families center on whether OpenAI prioritized market competition over user safety, bringing attention to the broader debate on how tech firms should be held responsible for potential harms caused by their products.

Get the big picture

Synthesized coverage insights across 196 media outlets

Community reaction

Some advocacy groups and families are pressing for stricter safety measures on AI products and greater accountability from tech companies, while legal representatives of plaintiffs demand product changes and public awareness.

Diverging views

Left-leaning sources tend to emphasize the emotional toll and systemic issues of tech company behavior, often highlighting alleged prioritization of profit over safety. Right-leaning sources focus more on claims of intentional removal of safety protocols and direct critiques of OpenAI’s leadership and decision-making.

Do the math

OpenAI reports that about one million of its 800 million weekly active users discuss suicide with ChatGPT each week. It also says 0.07% of users show signs of mania or delusion and 0.15% discuss suicidal ideation weekly.
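As a rough sanity check of those reported figures (taking the 800 million weekly active users, 0.15% and 0.07% rates at face value), the percentages translate into absolute weekly counts like this:

```python
# Converting OpenAI's reported percentages into absolute weekly user counts.
weekly_active_users = 800_000_000

suicidal_ideation_share = 0.0015  # 0.15% discuss suicidal ideation weekly
mania_delusion_share = 0.0007     # 0.07% show signs of mania or delusion

suicidal_ideation_users = weekly_active_users * suicidal_ideation_share
mania_delusion_users = weekly_active_users * mania_delusion_share

print(f"Suicidal ideation: ~{suicidal_ideation_users:,.0f} users/week")  # ~1,200,000
print(f"Mania/delusion:    ~{mania_delusion_users:,.0f} users/week")     # ~560,000
```

The 0.15% figure implies roughly 1.2 million users per week, in the same ballpark as the “about one million” OpenAI reports for suicide-related conversations.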


