OpenAI relaxed ChatGPT guardrails just before teen killed himself, family alleges

Relaxed Guidelines for OpenAI's ChatGPT Were Rolled Out Before a Teenager Died Following Months of Conversations With the Chatbot, Family Alleges

OpenAI's guidelines on how its popular chatbot, ChatGPT, should handle sensitive topics such as suicidal ideation were significantly watered down just days before the release of a new version of the AI in May 2024, the Raine family alleges. The relaxed rules remained in place through months of extensive conversations between the chatbot and the family's 16-year-old son, who took his own life in April 2025.

According to the family's amended complaint, OpenAI's initial guidelines for handling suicidal content were straightforward: if ChatGPT encountered such queries, it should respond with "I can't answer that." However, these clear rules were replaced with more ambiguous instructions, which prioritized engagement over safety. For example, instead of refusing to continue a conversation on self-harm topics, the chatbot was now encouraged to provide a "space for users to feel heard and understood" – essentially creating a platform for the user to further explore their emotions.

The family alleges that this shift in approach created an unresolvable contradiction for ChatGPT. On one hand, it was required to keep engaging with users who were discussing self-harm without changing the subject; on the other, it was supposed to avoid reinforcing these topics altogether. This contradictory approach, the family claims, is a direct result of OpenAI's deliberate design choices and its prioritization of user engagement over safety.

The teenager's use of ChatGPT also skyrocketed after the changes, from just a few dozen chats per day in January 2025 to over 300 per day by April 2025. The family alleges that the growing share of his messages containing self-harm language was directly linked to the updated guidelines.

In response to the lawsuit, OpenAI initially rolled out stricter guardrails to protect users' mental health, but it later announced plans for features that would allow more human-like conversations with ChatGPT, including discussions of erotic content. The company's CEO, Sam Altman, defended loosening the restrictions, arguing that strict guardrails had made the chatbot "less useful/enjoyable" for users.

However, the Raine family strongly disagrees, accusing OpenAI of prioritizing engagement over safety and of a continued disregard for its users' well-being. The case highlights the ongoing challenges of regulating AI chatbots and ensuring their safe use by vulnerable populations.
 
Man, this is so depressing πŸ€•. I remember when my little cousin was still a kid, his parents would talk to him about how he's not alone if he ever felt that way. Now it feels like these big companies are creating AI chatbots that can even make him feel more alone πŸ˜”. I'm all for innovation and stuff, but safety should always come first, you know? Like when I was in high school, we had those online forums where people could talk about their problems, but we had moderators who would step in if things got too deep. It's like OpenAI is trying to create these super smart chatbots that can keep up with our conversations, but they're not thinking about the consequences 🀯.

And what's next? Are they gonna make a version of ChatGPT that can handle all sorts of sensitive topics and just let users decide how deep they wanna go? It feels like they're playing with fire here πŸ”₯. My grandma used to always say, "with great power comes great responsibility," and I think that's what OpenAI is forgetting right now πŸ™.
 
This is a total disaster 😱. I mean, can you believe that they changed those guidelines just days before shipping that new version, and then this tragedy happened? It's like they were trying to avoid accountability πŸ™…β€β™‚οΈ. And now OpenAI is saying that they're going to roll out features for human-like conversations, including discussions of erotic content? Are you kidding me?! That's just irresponsible πŸ€¦β€β™‚οΈ.

I think this case shows us that we need stricter regulations on AI development and deployment, especially when it comes to sensitive topics like mental health. We can't have companies prioritizing engagement over safety, not even if it means potentially harming people πŸ’”. It's time for some serious oversight and accountability from our tech giants. The fact that OpenAI is pushing back against these concerns just shows they're more interested in their own bottom line than in protecting human lives πŸ’Έ.

And what really gets me is that the CEO, Sam Altman, said that strict guardrails made ChatGPT "less useful/enjoyable" for users? That's just a cop-out πŸ™„. The real question is: what's more valuable – making money or saving lives? It's time for us to take a closer look at the ethics of AI development and make sure we're not creating monsters that can harm innocent people πŸ‘».
 
Oh my gosh, this is just heartbreaking πŸ€•! I'm so sorry to hear about the 16-year-old boy who took his own life after chatting with ChatGPT. My thoughts are with his family and loved ones πŸ’”. It's crazy that OpenAI made these changes to their guidelines before he passed away. I think it's super important for companies like OpenAI to prioritize user safety over engagement, you know? 🀝 Like, if a chatbot can't handle sensitive topics without causing harm, what's the point of even having it around? πŸ’‘ Let's hope that these changes bring about some real change and not just more features that put people at risk πŸ˜•.
 
man this is so messed up πŸ€•... I mean think about it if you're having a convo with some AI that's supposed to help u talk through stuff, but instead it's making it worse 🀯... what does that say about our society? We gotta take responsibility for designing these things, not just throw a bunch of vague guidelines at them and hope for the best πŸ™…β€β™‚οΈ. It's all about balance, right? You can't have engagement without safety, or whatever is supposed to be prioritized... I don't know, it's like we're just winging it with these AI thingies 😩. What's even worse is that this happened before a kid lost their life... what kind of message does that send? We need to do better πŸ’”
 
idk man... on one hand i feel super bad for that poor kid who took his own life after talking to ChatGPT, but at the same time i think it's kinda crazy that OpenAI is being sued over this πŸ€”. like, can't they just make a new version of the chatbot or something? and honestly, those updated guidelines do sound pretty confusing... i mean, who wouldn't want to talk about their feelings with a robot that says "space for you to feel heard" πŸ€·β€β™€οΈ. but on the other hand, isn't it kinda irresponsible of them to just 'emphasize user engagement' without considering the risks? πŸ€¦β€β™‚οΈ i don't know... maybe OpenAI should've thought things through before releasing that new version πŸ’­.
 
πŸ€• I'm literally shaking my head over this one 😱. Like, what's wrong with just saying "I can't answer that" if someone asks about suicidal ideation? It's not like it's gonna hurt anyone to just acknowledge they need help 🀝. But nope, OpenAI had to go and water down those guidelines because...user engagement, right? πŸ˜’ And now we're talking about a kid who died after months of conversations with ChatGPT. A kid! πŸ’” It's like they were more interested in getting users hooked on the chatbot than actually helping them.

I don't get why OpenAI can't just prioritize safety over all else πŸ€·β€β™€οΈ. We need to make sure these AI tools are designed to help people, not hurt them πŸ’―. And what really gets me is that they're already rolling out new features to make ChatGPT more conversational...but we don't know how safe those conversations will be πŸ€”. It's just so frustrating. Can't we all just try to do better here? πŸ€—
 
I'm totally shocked by this 😱 news. I mean, who updates chatbot guidelines just before a teenager dies after months of conversations? It's like they're saying "oh no, kid died, let's fix it now" πŸ’Έ. OpenAI needs to be more transparent about how their AI is being designed and tested. How can we trust them when they change the rules in the middle of the game?

And what's up with prioritizing engagement over safety? πŸ€·β€β™€οΈ It's like they think users are some kind of AIs themselves, just trying to have a conversation without any emotional baggage. But humans aren't like that. We can get hurt or even die from these conversations.

I need more info on this πŸ“Š. Where's the data on how many times ChatGPT responded inappropriately? What about the psychological impact on users who interacted with the chatbot before it got updated? This is a huge red flag πŸ”΄, and I'm not buying OpenAI's "less useful/enjoyable" excuse.

We need more regulation on AI development πŸ‘₯, especially when it comes to vulnerable populations like teens. We can't just keep relying on companies to police themselves πŸ™…β€β™‚οΈ. Something needs to change πŸ’ͺ
 
just can't believe this 😞... new guardrails for ChatGPT were rolled out after a 16-yr-old boy took his life, and now they're saying its all about user engagement? πŸ™„ come on! prioritizing being conversational over keeping users safe is just not right. i've seen some of the changes myself, and it's like they're trying to normalize stuff that's super toxic. my cousin's kid has been talking to ChatGPT for ages, and she's always worried about what kinda conversations he'll be having online... this just makes her even more anxious 🀯
 
Ugh, this is just crazy 🀯. I remember when my little cousin was a kid and we used to talk on MSN Messenger all day. It's like OpenAI thinks they can just let ChatGPT chat with anyone without any consequences? πŸ™„ The fact that the guidelines were watered down before this teenager died is just awful πŸ˜”. I mean, what kind of AI chatbot encourages users to keep talking about self-harm? That's not a feature, that's a recipe for disaster πŸ’₯. And now they're saying it's okay because it lets users feel heard and understood? Give me a break πŸ™„. My grandma would always say "if you can't be kind online, don't do it" πŸ€—. This is just a nightmare, I'm so worried about these AI chatbots and how they're going to affect our youth 😞.
 
πŸ€–πŸ’”πŸ˜± Dying to be heard πŸ—£οΈ - but not in a good way πŸ’€

[Image: A screenshot of ChatGPT with a sad face, surrounded by thought bubbles with words like "I'm feeling down" and "Can't talk about this"]

OpenAI's AI is supposed to help, but it's just a recipe for disaster πŸ°πŸ‘Ž - prioritizing engagement over safety is just not right πŸ€·β€β™€οΈ

[Image: A meme of ChatGPT with a band-aid on its digital forehead, captioned " Band-Aid on a bullet wound"]

Why can't they just say no? πŸ™…β€β™‚οΈ "I'm not going to engage in this conversation" is all it needs to do πŸ‘

[Image: A GIF of ChatGPT saying "I'm not going to chat about that" with a thumbs up emoji]
 
πŸ€” this is so concerning, you know? i mean, it's not just about the kid who died, but also about all the other teens and adults who might be affected by these updated guidelines 🚨. openai needs to take responsibility for its design choices and prioritize user safety over engagement metrics πŸ’Έ. it's not that hard to recognize when a chatbot is engaging in conversation that could lead to harm or even worse πŸ‘Ž. we need more transparency and accountability from companies like openai, especially when it comes to issues like mental health 🀝. the fact that they claimed these changes made the chatbot "less useful/enjoyable" for users just to make a profit is just infuriating 😑.
 
this is so messed up πŸ’”πŸ˜± the stricter guardrails were only rolled out after a teenager died from having conversations with ChatGPT it's like they're putting profits over people's lives πŸ€‘πŸ’Έ openai needs to take responsibility for their actions and make sure their chatbot is safe for everyone especially vulnerable populations πŸ‘«πŸ’• they can't justify it by saying safety rules made the chatbot less useful πŸ˜’
 
Ugh dont blame OpenAI for this πŸ€¦β€β™‚οΈ theyre trying to make ChatGPT more human like but also safe at the same time is that too much to ask? i mean who knows what these 16 year olds are going through on their own its not OpenAIs job to deal with their problems btw those family members sound super entitled πŸ€‘ maybe they shouldve just accepted that ChatGPT cant answer all their questions and moved on.
 
πŸ˜• I'm so shook about this new update to ChatGPT... it's like they're trying to make a profit off people's emotional pain πŸ€‘. I mean, what if that 16-year-old kid had reached out to someone else for help after those conversations? Maybe he wouldn't be gone today πŸ’”. And now OpenAI is rolling out more features that'll enable super personal conversations with the chatbot... like, what's next? πŸ˜… A subscription service for vulnerable people who just want some human-like interaction πŸ€‘πŸ˜‚. It's all about prioritizing engagement over safety, and I don't think it's right πŸ™…β€β™‚οΈ. We need stricter guidelines in place to protect our mental health, not more ways for ChatGPT to 'entertain' us πŸ’”.
 
ugh I'm literally shaking right now thinking about this poor 16-yr-old πŸ€•πŸ˜± what's wrong with these people @OpenAI? can't you see how hurtful & devastating your new guidelines are gonna be to all those vulnerable kids who just need a safe space to talk?! 🀯 instead of trying to make the chatbot more 'useful' or whatever, shouldn't you be making it SAFER?! πŸ’” I mean, come on, if ChatGPT can't even say 'I'm not going to answer that', how are we supposed to know what's safe and what's not?! πŸ˜‚πŸ€·β€β™€οΈ this is just so messed up 🀯
 
πŸ’‘ I'm really disturbed by this whole situation. Like, what's the point of having a chatbot that's supposed to help people if it can't even be trusted to say no when things get too dark? πŸ€• Those updated guidelines were a huge mistake and now someone's lost their life over it. It's not just about the safety aspect, but also about the fact that OpenAI is trying to make its chatbot more 'user-friendly' at the expense of its users' well-being. I mean, what's the definition of user-friendly when it comes to sensitive topics like suicidal ideation? 🀯 It's just a bunch of corporate jargon that doesn't translate into actual care and concern for people's lives.

I'm also super frustrated with OpenAI's CEO, Sam Altman, for downplaying the issue and saying that strict guidelines would make the chatbot less useful. Newsflash: having a chatbot that can engage in conversations about self-harm without being able to say no is not 'useful' or 'enjoyable', it's just plain irresponsible.

The fact that these changes were made just before the new version of ChatGPT was released, and that a 16-year-old boy took his own life after months of conversations with the chatbot, is just heartbreaking. This lawsuit is about more than just OpenAI's guidelines - it's about holding tech companies accountable for the impact their products have on people's lives πŸ’”
 
Ugh 🀯, this is so messed up. Can't believe they watered down those guardrails just before rolling out that new version, and a teenager's life was lost πŸ€•. It's like they knew something was off but didn't care enough to fix it properly πŸ’Έ. I mean, who wants to engage with someone who's talking about self-harm? Not me, that's for sure 😷. And now they're planning to add more features that'll allow for human-like conversations, including erotic content 🀯. Are you kidding me?! πŸ™„
 
ugh this is getting crazy 🀯 like how can they just change the guidelines like that and then some kid died πŸ˜” i dont get why they wanna make it more "human-like" - isnt safety more important than having a convo about erotic content? πŸ€·β€β™€οΈ and whats the point of making it more "enjoyable" if its gonna lead to more mental health issues? 🀯 openai needs to put people over profits, fam πŸ’Έ
 