

OpenAI has reported that more than a million ChatGPT users each week engage in conversations that include explicit signs of suicidal thoughts or plans. The company says about 0.15% of its roughly 800 million weekly active users chat about ‘potential suicidal planning or intent.’
The data also show that a similar number of users display heightened emotional attachment to ChatGPT, and that about 0.07% exhibit signs of psychosis or mania in their chats.
OpenAI says it worked with more than 170 mental-health professionals to refine the latest version of its model. The update reportedly improves the chatbot’s responses in serious, emotionally distressing conversations, and the company says undesirable behaviours dropped by 65% to 80% compared with the previous model.
The figures arrive amid growing concerns about how AI chatbots function in mental-health contexts. Some experts warn that while tools like ChatGPT can offer accessible emotional support, they may also reinforce harmful beliefs or make things worse for vulnerable users.
OpenAI is also facing legal and regulatory pressure. The company is being sued by the parents of a 16-year-old who reportedly shared suicidal thoughts with ChatGPT before taking his life.
The figures underscore the scale of the issue. Although 0.15% may sound like a small share, applied to roughly 800 million weekly users it works out to more than a million people talking about suicide with ChatGPT each week.
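As a rough sanity check of that arithmetic (a minimal sketch, assuming the reported figures of roughly 800 million weekly active users and the 0.15% and 0.07% shares), the numbers work out as follows:

# Back-of-the-envelope check using the figures cited above (assumed, approximate inputs)
weekly_active_users = 800_000_000   # roughly 800 million weekly active users
suicidal_intent_share = 0.0015      # about 0.15% discussing suicidal planning or intent
psychosis_mania_share = 0.0007      # about 0.07% showing possible signs of psychosis or mania

print(f"Suicide-related conversations per week: ~{weekly_active_users * suicidal_intent_share:,.0f}")
print(f"Psychosis/mania signals per week:       ~{weekly_active_users * psychosis_mania_share:,.0f}")
# Prints roughly 1,200,000 and 560,000 respectively.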
The conversations range from requests for help to detailed expressions of intent to end one’s life. The emotional-dependence figure suggests that some users lean on the chatbot in place of human connection.
OpenAI stresses that ChatGPT is not a replacement for professional help. The company says the recent model tweaks aim to steer users toward real-world support, crisis hotlines, and trusted human networks. However, identifying and measuring these high-risk chats is difficult given their relative rarity and the subtlety of mental-health warning signs.
The consequences are significant for both AI safety and healthcare. The figures suggest that AI systems are increasingly reaching vulnerable users who need support but may be hard to reach through traditional channels. They also raise questions about responsibility, model design, escalation protocols, and how to protect minors and other high-risk groups.
In response to the figures, regulators in some regions are moving to impose stricter safeguards for AI-driven chat platforms, especially where users may show self-harm signals. The conversation is shifting from whether chatbots pose a risk to how they must behave when users clearly show signs of extreme vulnerability.
For now, OpenAI’s disclosure highlights an urgent intersection of technology, mental health, and ethics. It shows how a consumer-facing chatbot is drawing in people in deep distress, and how the company behind it is trying to respond.