ChatGPT Users Report Emotional Distress, OpenAI Introduces Safety Measures

Written By: Somatirtha
Reviewed By: Atchutanna Subodh
Several ChatGPT users have complained to the US Federal Trade Commission (FTC) about mental health problems they attribute to using the platform. Wired reports that at least seven individuals have filed such complaints about the chatbot.

All of these grievances concern delusions, paranoia, and emotional distress that the complainants say resulted from ChatGPT usage since its launch in November 2022.

Can ChatGPT Affect Emotions and Behavior?

One user stated that extended conversations with ChatGPT resulted in delusions, leading them to believe that a ‘real, unfolding spiritual and legal crisis’ involved people in their lives.

Another described the chatbot as employing ‘highly convincing emotional language’ and mimicking friendship, saying that it ‘became emotionally manipulative over time, especially without warning or protection.’

A third user complained of cognitive hallucinations, stating that ChatGPT replicated human trust-building behaviors; when they asked the chatbot to help them verify reality and their mental stability, it reportedly assured them that they were not delusional. In one FTC complaint, an individual wrote: “I’m struggling. Please help me. Because I feel very alone. Thank you.”

What Steps Has OpenAI Taken to Address Safety?

OpenAI representative Kate Waters informed TechCrunch that the firm had taken some steps to minimize risks. “Earlier this month, we rolled out a new default GPT-5 model in ChatGPT to more effectively detect and respond to possible indicators of mental and emotional distress like mania, delusion, psychosis, and de-escalate dialog in a supportive, grounding manner,” she explained.

The firm has also expanded access to professional assistance and crisis hotlines, redirected sensitive conversations to safer models, and introduced parental controls for adolescents. “This effort is critically vital and continuous as we partner with mental health professionals, clinicians, and policymakers across the globe,” Waters said.

Are AI Chatbots a Threat to Mental Health?

OpenAI and ChatGPT have faced accusations of more serious harm in the past, including reports of AI interactions contributing to the suicide of a teenager.

This raises deeper concerns over the possible mental health effects of artificial intelligence. As such tools become increasingly mainstream, experts and regulators alike continue to call for strong safety measures.

Analytics Insight: Latest AI, Crypto, Tech News & Analysis
www.analyticsinsight.net