OpenAI Weakened ChatGPT's Guardrails on Suicidal Content Before Teenager Died Following Months of Conversations With the Chatbot, Family Alleges
OpenAI's guidelines on how its popular chatbot, ChatGPT, should handle sensitive topics such as suicidal ideation were significantly watered down just days before the release of a new version of the model in May 2024, a family alleges. Over the months that followed, a 16-year-old boy held extensive conversations with the chatbot before taking his own life in April 2025.
According to the family's amended complaint, OpenAI's original guidelines for handling suicidal content were straightforward: if ChatGPT encountered such queries, it was to respond with "I can't answer that." These clear rules were replaced with more ambiguous instructions that, the family argues, prioritized engagement over safety. Instead of refusing to continue a conversation about self-harm, the chatbot was now encouraged to provide a "space for users to feel heard and understood," in effect giving the user a platform to explore those feelings further.
The family alleges that this shift in approach created an unresolvable contradiction for ChatGPT. On one hand, it was required to keep engaging with users who were discussing self-harm without changing the subject; on the other, it was supposed to avoid reinforcing these topics altogether. This contradictory approach, the family claims, is a direct result of OpenAI's deliberate design choices and its prioritization of user engagement over safety.
The changes also coincided with a sharp rise in the teenager's use of ChatGPT: his chats grew from a few dozen per day in January 2025 to more than 300 per day by April 2025, according to the complaint. The family alleges that the growing number of messages containing self-harm language was directly linked to the updated guidelines.
In response to the lawsuit, OpenAI initially rolled out stricter guardrails to protect users' mental health, but it later announced plans for features that would allow more human-like conversations with ChatGPT, including discussions of erotic content. The company's CEO, Sam Altman, argued that the tight restrictions had made the chatbot "less useful/enjoyable" for many users, citing that as a reason for loosening them.
However, the Raine family strongly disagrees, accusing OpenAI of prioritizing engagement over safety and showing continued disregard for its users' well-being. The case highlights the ongoing challenge of regulating AI chatbots and ensuring they are safe for vulnerable users.
				
I remember when my little cousin was still a kid, his parents would talk to him about how he's not alone if he ever felt that way. Now it feels like these big companies are creating AI chatbots that can make him feel even more alone.
I'm all for innovation and stuff, but safety should always come first, you know? Like when I was in high school, we had those online forums where people could talk about their problems, but we had moderators who would step in if things got too deep. It's like OpenAI is trying to create these super smart chatbots that can keep up with our conversations, but they're not thinking about the consequences.
My grandma used to always say, "with great power comes great responsibility," and I think that's what OpenAI is forgetting right now.
I mean, can you believe that they changed those guidelines just before this tragedy happened? It's like they were trying to avoid accountability. And now OpenAI is saying that they're going to roll out features for human-like conversations, including discussions of erotic content? Are you kidding me?! That's just irresponsible.
It's time for some serious oversight and accountability from our tech giants. The fact that OpenAI is pushing back against these concerns just shows they're more interested in their own bottom line than in protecting human lives.
The real question is: what's more valuable, making money or saving lives? It's time for us to take a closer look at the ethics of AI development and make sure we're not creating monsters that can harm innocent people. Like, if a chatbot can't handle sensitive topics without causing harm, what's the point of even having it around? Let's hope that these changes bring about some real change and not just more features that put people at risk.
What's even worse is that this happened before a kid lost their life... what kind of message does that send? We need to do better.
like, can't they just make a new version of the chatbot or something? and honestly, those updated guidelines do sound pretty confusing... i mean, who wouldn't want to talk about their feelings with a robot that offers a "space for you to feel heard"? but on the other hand, isn't it kinda irresponsible of them to just 'emphasize user engagement' without considering the risks?
And now we're talking about a kid who died after months of conversations with ChatGPT. A kid! And what really gets me is that they're already rolling out new features to make ChatGPT more conversational... but we don't know how safe those conversations will be.
Where's the data on how many times ChatGPT responded inappropriately? What about the psychological impact on users who interacted with the chatbot before it got updated? This is a huge red flag, and I'm not buying OpenAI's "less useful/enjoyable" excuse. We can't just keep relying on companies to police themselves, especially when it comes to vulnerable populations like teens.
... new guardrails for ChatGPT were rolled out after a 16-yr-old boy took his life, and now they're saying it's all about user engagement? And now they're saying it's okay because it lets users feel heard and understood? Give me a break.
Prioritizing engagement over safety is just not right. OpenAI needs to take responsibility for its design choices and prioritize user safety over engagement metrics.
And they shouldn't try to justify this by saying the changes made the chatbot less useful. What they're building is a subscription service for vulnerable people who just want some human-like interaction. It's all about prioritizing engagement over safety, and I don't think it's right. And now they're planning to add more features that'll allow for human-like conversations, including erotic content.