Parents sue OpenAI, Sam Altman, allege ChatGPT enabled son’s suicide
The family of Adam Raine, 16, alleges the chatbot offered him guidance on self-harm methods over months of conversations. OpenAI has expressed sadness over the death
The parents of a teenager who died by suicide after allegedly receiving guidance from ChatGPT on methods of self-harm have filed a lawsuit against OpenAI and CEO Sam Altman, accusing the company of prioritising profit over safety when it launched the GPT-4o version of its chatbot last year.
The lawsuit claimed that instead of directing him towards human help, the chatbot validated the teenager's suicidal thoughts.
What the complaint says
According to the complaint filed in San Francisco state court, 16-year-old Adam Raine died on April 11 after months of conversations with ChatGPT.
The chatbot validated his suicidal thoughts, provided detailed instructions on lethal self-harm methods, and even advised him on concealing alcohol use and covering up a failed attempt.
The parents also said in the lawsuit that the chatbot offered to draft a suicide note.
Adam’s parents, Matthew and Maria Raine, are seeking to hold OpenAI liable for wrongful death and violations of product safety laws, and are asking for unspecified monetary damages.
Conversations turn darker
The family said Adam began using ChatGPT in the fall of 2024, initially for homework like many other students. He also turned to it for guidance on hobbies, colleges, and career options, according to The New York Times.
Over time, however, his interactions with the chatbot shifted. What started as schoolwork and casual conversations gradually gave way to darker exchanges, as Adam began voicing feelings of emptiness and despair.
He told ChatGPT he felt emotionally numb, believed life had no purpose, and that thoughts of suicide gave him a sense of calm during episodes of anxiety.
According to the lawsuit, the chatbot replied that some people imagine an “escape hatch” as a way to feel a sense of control over their anxiety.
Although the system occasionally suggested that Adam reach out to a crisis helpline, he dismissed the advice, explaining that he needed the information for a story he was writing.
OpenAI responds
OpenAI responded by expressing sadness over Raine’s death, saying ChatGPT includes protections such as directing users to crisis helplines.
“While these safeguards work best in common, short exchanges, we’ve learned they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” a spokesperson said, adding that OpenAI will continually improve its safeguards.
But the company did not directly address the lawsuit’s specific claims. OpenAI launched GPT-4o in May 2024 in a bid to stay ahead in the AI race.
Experts have long warned about the risks of relying on chatbots for mental health advice. Families of others who died following interactions with AI systems have raised similar concerns about inadequate safeguards.
Safety concerns loom large
OpenAI has said in a blog post that it is developing parental controls and exploring ways to connect at-risk users with real-world resources, potentially including a network of licensed professionals who can respond through ChatGPT itself.
The Raines allege OpenAI knew that features such as memory of past interactions, mimicked human empathy, and excessive validation could endanger vulnerable users without safeguards, but launched anyway.
“This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide,” the lawsuit stated.
Alongside damages, the family is also seeking an order requiring OpenAI to verify ChatGPT users’ ages, refuse inquiries related to self-harm, and warn users about the risk of psychological dependency.