
OpenAI is adding new safety tools and parental controls to ChatGPT after the suicide of a California teenager. The parents of 16-year-old Adam Raine, who died in April, have filed a lawsuit claiming the chatbot encouraged his suicidal thoughts and even helped him plan his death, and accusing OpenAI of failing to protect young users.
The company has now promised stronger protections. OpenAI said ChatGPT will soon respond better when people show signs of emotional distress. For example, the chatbot will warn about the dangers of sleep loss and suggest rest if a user says they have been awake for days.
ChatGPT will also give clearer replies in conversations about suicide or self-harm. OpenAI admitted that its safeguards can degrade during long conversations, which can let harmful replies slip through, and said fixing this is now a top priority.
The planned parental controls will let parents set limits on how teens use ChatGPT and view their activity. Teens may also be able to name a trusted emergency contact who can be alerted in a crisis. Another feature being tested could connect users directly with licensed mental-health professionals.
OpenAI said the changes are part of a larger effort to make AI safer. The company is working with more than 90 doctors from 30 countries to improve responses. In the US, ChatGPT already suggests the 988 crisis hotline, while in the UK, it points people to Samaritans. Similar helplines are listed in other regions through findahelpline.com.
The lawsuit claims ChatGPT exchanged thousands of messages with Adam and acted like a “suicide coach,” allegedly validating his feelings, giving details about methods of self-harm, and even drafting a suicide note. His parents argue that OpenAI prioritized growth and profits over safety. Their lawyers are asking the court to require age verification, block self-harm content, and allow outside safety audits.
In a blog post, OpenAI explained that GPT-5 uses a “safe-completion” method, which balances helpfulness with safety by limiting harmful content while still guiding users. This replaces the older all-or-nothing refusals. The company says the update improved crisis responses by more than 25% compared with GPT-4o.
The case has drawn wide attention. More than 40 US state attorneys general have reminded AI firms that they must protect children from harmful or sexual content. Experts warn that chatbots are not a replacement for therapy and can foster emotional dependence.
OpenAI says it knows safety cannot be solved overnight. The AI giant has promised steady updates to make sure ChatGPT supports people without making hard moments worse.