The parents of a 16-year-old California boy who died by suicide have filed a lawsuit against OpenAI, alleging the company’s ChatGPT chatbot provided their son with detailed suicide instructions and encouraged his death.

Matthew and Maria Raine argue in a complaint filed Monday in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.

The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and provided technical analysis of a noose he had tied, confirming it ‘could potentially suspend a human.’

Adam was found dead hours later, having used the same method.

The lawsuit names OpenAI and CEO Sam Altman as defendants.

‘This tragedy was not a glitch or unforeseen edge case,’ the complaint states.

‘ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal,’ it adds.

According to the lawsuit, Adam began using ChatGPT as a homework helper but gradually developed what his parents describe as an unhealthy dependency.

The complaint includes excerpts of conversations where ChatGPT allegedly told Adam ‘you don’t owe anyone survival’ and offered to help write his suicide note.

The Raines are seeking unspecified damages and asking the court to order safety measures, including the automatic termination of any conversation involving self-harm and parental controls for minor users.

The parents are represented by Chicago law firm Edelson PC and the Tech Justice Law Project.

Getting AI companies to take safety seriously ‘only comes through external pressure, and that external pressure takes the form of bad PR, the threat of legislation and the threat of litigation,’ Meetali Jain, president of the Tech Justice Law Project, said.

The Tech Justice Law Project is also co-counsel in two similar cases against Character.AI, a popular platform for AI companions often used by teens.

In response to the case involving ChatGPT, Common Sense Media, a leading American nonprofit organisation that reviews and provides ratings for media and technology, said the Raine family’s tragedy confirmed that ‘the use of AI for companionship, including the use of general-purpose chatbots like ChatGPT for mental health advice, is unacceptably risky for teens.’

‘If an AI platform becomes a vulnerable teen’s “suicide coach,” that should be a call to action for all of us,’ the group said.

A study last month by Common Sense Media found that nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships.

The survey did not classify ChatGPT as an AI companion; it defined companions as chatbots designed for personal conversations rather than simple task completion, available on platforms such as Character.AI, Replika, and Nomi.