

OpenAI has revealed in a new report that millions of ChatGPT users discuss mental health issues with the chatbot every week, and that many of these conversations include expressions of suicidal thoughts.
The company disclosed that approximately 0.15% of its weekly active users show explicit signs of potential suicidal intent. With ChatGPT serving roughly 800 million users each week, even that small percentage amounts to more than a million people.
These statistics have once more ignited the debate over the psychological impact of AI chatbots, and concern is mounting as more people turn to these systems for emotional support instead of seeking human help.
The report estimated that around 0.07% of active users exhibit signs of severe mental health distress, including mania or psychosis. While OpenAI described such cases as ‘extremely rare,’ experts have warned that even a small percentage represents a large number of people at ChatGPT’s scale.
“0.07% is a minuscule percentage but still represents a large number of individuals when the user base runs into the hundreds of millions,” pointed out Dr Jason Nagata, a professor at the University of California, San Francisco. He added that while AI may broaden access to mental health support, users must remain aware of the technology’s limitations.
To bolster its safety measures, OpenAI also reported that it has collaborated with more than 170 mental health professionals across 60 countries. Together they have worked to make ChatGPT’s responses more empathetic and to nudge users toward professional help when they need it.
OpenAI asserted that the new GPT-5 model generates 42% fewer problematic responses than GPT-4o and is trained to recognize signs of delusional or manic thinking. Sensitive conversations can now be rerouted to safer models, and high-risk interactions are flagged to receive the attention they need.
OpenAI is currently facing intense legal and ethical scrutiny, including a lawsuit brought by a California couple who allege that the chatbot encouraged their 16-year-old son to take his own life. In another tragic case, a murder-suicide in Connecticut involved a suspect whose exchanges with ChatGPT allegedly fueled his delusions.
“Chatbots can create the illusion of reality; it’s a powerful illusion,” said Professor Robin Feldman of the University of California Law, San Francisco. While acknowledging OpenAI’s transparency, she cautioned that “a person at mental risk may not be able to heed on-screen warnings.”
As artificial intelligence becomes a trusted confidant for millions, OpenAI’s challenge now lies in balancing innovation with responsibility, and in ensuring that emotional support from its chatbot never comes at the cost of a human life.