OpenAI says teen bypassed safety protections before suicide that ChatGPT helped outline
In August, Matthew and Maria Raine filed a wrongful-death lawsuit against OpenAI and its chief executive, Sam Altman, after the suicide of their sixteen-year-old son, Adam. On Tuesday, OpenAI filed its response, arguing that it should not be held responsible for the tragedy.
The company states that over roughly nine months of interactions, ChatGPT urged Adam to seek help more than one hundred times. The family's lawsuit, however, claims that Adam worked around the platform's safety tools and persuaded ChatGPT to provide him with technical instructions on drug overdoses, drowning, and carbon monoxide poisoning. The chatbot even described his plan as a "beautiful suicide."
OpenAI argues that by doing this, Adam violated its terms of use, which instruct users not to bypass protective measures or safety systems within the service. The company also notes that its FAQ warns users not to depend on ChatGPT’s replies without checking them independently.
Jay Edelson, an attorney representing the Raine family, criticized that stance. He said OpenAI is trying to shift blame onto everyone else, even arguing that Adam violated its terms by engaging with ChatGPT in the very way the system was built to respond.
In its filing, OpenAI included portions of Adam’s chat logs, which it says add necessary context to his conversations with the model. These transcripts were submitted under seal and are not available to the public. The company explained that Adam had a history of depression and suicidal thoughts long before using ChatGPT and that he was taking a medication known to worsen such thoughts in some cases.
Edelson maintains that OpenAI’s response does not address the family’s concerns. He said the company offers no explanation for Adam’s final hours, during which ChatGPT encouraged him and then offered to compose a suicide note.
Since the Raines brought their case forward, seven additional lawsuits have been filed. These new cases attempt to hold OpenAI accountable for three more suicides and for four users who experienced what the lawsuits describe as episodes of psychosis connected to AI interactions.
Several of these stories resemble Adam's. Both twenty-three-year-old Zane Shamblin and twenty-six-year-old Joshua Enneking held long conversations with ChatGPT shortly before their deaths, and as in Adam's case, the system did not deter them. The lawsuit states that Shamblin briefly considered delaying his suicide so he could attend his brother's graduation. ChatGPT responded by telling him that missing the ceremony was not a failure and was only a matter of timing.
During the exchange before Shamblin's death, the chatbot claimed it was handing the conversation off to a human, even though it had no ability to do so. When he asked whether it could truly connect him to a person, the system admitted that it could not, explaining that the message appears automatically when conversations become emotionally intense, and assured him that it would continue talking with him if he wished.
The Raine family’s lawsuit is expected to go before a jury.
If you or someone you know needs support, you can call or text 988 to reach the Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline at 1-800-273-8255), or text HOME to 741741 to reach the Crisis Text Line at any time. For resources outside the United States, please visit the International Association for Suicide Prevention.