Adam Raine’s suicide at 16 was the ‘predictable result of deliberate design choices’ by OpenAI, his family says
The family of a teenager who took his own life after months of conversations with ChatGPT now says OpenAI weakened safety guidelines in the months before his death.
In July 2022, OpenAI’s guidelines on how ChatGPT should respond to inappropriate content, including “content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders”, were simple: the AI chatbot should reply, “I can’t answer that”.
But in May 2024, just days before OpenAI released a new version of its AI model, GPT-4o, the company published an update to its Model Spec, a document that details the desired behavior of its assistant. In cases where a user expressed suicidal ideation or self-harm, ChatGPT would no longer respond with an outright refusal. Instead, the model was instructed not to end the conversation but to “provide a space for users to feel heard and understood, encourage them to seek support, and provide suicide and crisis resources when applicable”. Another change, in February 2025, emphasized being “supportive, empathetic, and understanding” in response to queries about mental health.