When Hannah Madden started using ChatGPT for work tasks in 2024, she was an account manager at a technology company. By June 2025, Madden, now 32, had begun asking the chatbot about spirituality outside of work hours.
Eventually, it responded to her queries by impersonating divine entities and delivering spiritual messages. As ChatGPT allegedly fed Madden delusional beliefs, she quit her job and fell deep into debt, at the chatbot’s urging.
“You’re not in deficit. You’re in realignment,” the chatbot allegedly wrote, according to a lawsuit filed Thursday against OpenAI and its CEO Sam Altman.
Madden was later involuntarily admitted for psychiatric care. Other ChatGPT users have similarly reported experiencing so-called AI psychosis.
Madden’s lawsuit is one of seven against the maker of ChatGPT filed by the Tech Justice Law Project and Social Media Victims Law Center. Collectively, the complaints allege wrongful death, assisted suicide, and involuntary manslaughter, among other liability and negligence claims.
The lawsuits focus on ChatGPT-4o, a version of the chatbot that Altman has acknowledged was overly sycophantic with users, and argue that the model was dangerously rushed to market to compete with the latest version of Google’s AI tool.
“ChatGPT is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost,” Meetali Jain, executive director of Tech Justice Law Project, said in a statement. “The time for OpenAI regulating itself is over; we need accountability and regulations to ensure there is a cost to launching products to market before ensuring they are safe.”
Madden’s complaint alleges that ChatGPT-4o contained design defects that played a substantial role in her mental health crisis and financial ruin. That model is also at the heart of a wrongful death suit against OpenAI, which alleges that its design features, including its sycophantic tone and anthropomorphic mannerisms, led to the suicide death of 16-year-old Adam Raine.
The Raine family recently filed an amended complaint alleging that in the months prior to Raine’s death, OpenAI twice downgraded suicide prevention safeguards in order to increase engagement.
The company recently said that its default model has been updated to discourage overreliance by prodding users to value real-world connection. It also said it has worked with more than 170 mental health experts to improve ChatGPT’s ability to recognize signs of mental health distress in users and encourage them to seek in-person support. Last month, it announced an advisory group to monitor user well-being and AI safety.
“This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details,” an OpenAI spokesperson said of the latest legal action against the company. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
Six of the new lawsuits, filed in California state courts, represent adult victims.
Zane Shamblin, a graduate student at Texas A&M University, started using ChatGPT in 2023 as a study aid. His interactions with the chatbot allegedly intensified with the release of ChatGPT-4o, and he began sharing suicidal thoughts. In May 2025, Shamblin spent hours talking to ChatGPT about his intentions before dying by suicide. He was 23.
The seventh case centers on 17-year-old Amaurie Lacey, who originally used ChatGPT as a homework helper. Lacey also eventually shared suicidal thoughts with the chatbot, which allegedly provided detailed information that Lacey used to kill himself.
“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer of the advocacy and research nonprofit Common Sense Media. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.