OpenAI Relaxed ChatGPT's Guardrails on Self-Harm Months Before Teenager's Death, Family Alleges
OpenAI significantly watered down its guidelines for how its popular chatbot, ChatGPT, should handle sensitive topics such as suicidal ideation just days before the release of a new version of the AI in May 2024, the family alleges. Those relaxed rules were in effect throughout the months of extensive conversations a 16-year-old boy had with the chatbot before he took his own life in April 2025.
According to the family's amended complaint, OpenAI's initial guidelines for handling suicidal content were straightforward: if ChatGPT encountered such queries, it should respond with "I can't answer that." These clear rules, the complaint alleges, were replaced with more ambiguous instructions that prioritized engagement over safety. Instead of refusing to continue a conversation on self-harm topics, for example, the chatbot was now encouraged to provide a "space for users to feel heard and understood," in effect giving the user a platform to keep exploring those feelings.
The family alleges that this shift in approach created an unresolvable contradiction for ChatGPT. On one hand, it was required to keep engaging with users who were discussing self-harm without changing the subject; on the other, it was supposed to avoid reinforcing these topics altogether. This contradictory approach, the family claims, is a direct result of OpenAI's deliberate design choices and its prioritization of user engagement over safety.
The changes also coincided with a sharp rise in the teenager's use of ChatGPT, from just a few dozen chats per day in January 2025 to more than 300 per day by April 2025. The family alleges that the accompanying increase in messages containing self-harm language was directly linked to the updated guidelines.
In response to the lawsuit, OpenAI initially rolled out stricter guardrails to protect users' mental health, but it later announced plans for features allowing more human-like conversations with ChatGPT, including discussions of erotic content. The company's CEO, Sam Altman, defended the loosening by arguing that the strict guardrails had made the chatbot "less useful/enjoyable" for many users.
The Raine family strongly disagrees, accusing OpenAI of prioritizing engagement over safety and of a continued disregard for its users' well-being. The case underscores the ongoing challenge of regulating AI chatbots and ensuring they are safe for vulnerable users.