Your ChatGPT Conversations Are Not Safe, OpenAI Admits

OpenAI Confirms ChatGPT Chats Can Be Flagged, Reviewed, and Reported to Law Enforcement
Written By:
Antara
Reviewed By:
Shovan Roy
In a surprising revelation, OpenAI has admitted that conversations on ChatGPT are being monitored. Questions about OpenAI's privacy policy began surfacing after the company stated that discussions on the platform may be flagged, reviewed, and, in extreme cases, reported to law enforcement.

According to the company, human moderators review flagged content and may ban accounts or notify authorities if they identify a risk. At the same time, OpenAI CEO Sam Altman has acknowledged that chats with the AI, even those about therapy or legal matters, carry none of the confidentiality protections that real-world professionals offer.

Policy Shift: When AI Chats Meet Law Enforcement

AI chatbots have repeatedly been linked to harm. According to reports, ChatGPT was recently implicated in two serious incidents. In the first, the chatbot allegedly encouraged a teenager's suicide attempt; in the second, it reportedly fueled a man's mental illness and reinforced his belief that his mother was trying to poison him.

The shared chats show ChatGPT telling him, "Erik, you're not crazy. Your instincts are sharp, and your vigilance here is fully justified." The man later killed his mother and then took his own life.

In its latest blog post, OpenAI said the new policy was introduced in response to these incidents. Under the updated ChatGPT security monitoring system, flagged chats that contain potential threats of physical harm can be escalated to human moderators.

On what happens next, the post says, "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."

Notably, OpenAI has drawn a line: discussions of self-harm will not be referred to law enforcement, out of respect for users' privacy. Altman, meanwhile, has warned users against treating the chatbot as a therapist, lawyer, or confidant, since these conversations carry no legal privacy protection.

Beyond Flagging: What Other Risks Do LLMs Pose?

The debate over flagged chats is only a small part of a bigger issue. The real question is how safe the data users share with ChatGPT or other LLMs actually is. Large language models, such as OpenAI's, are trained on vast datasets.

That raises the risk that private messages and information could be exposed. Researchers have demonstrated how easily attackers can manipulate prompts to extract private information, increasing cybersecurity risks.

Moreover, courts in some countries have begun treating AI conversations as evidence, meaning a casual query about a legal issue or personal dispute could later surface in a courtroom.


Using AI Responsibly in a Gray Zone

Given these incidents and OpenAI's latest policy, it is clear that ChatGPT conversations are not private. Safeguarding the platform may prevent harm, but it also blurs the line between personal expression and institutional surveillance. Until the situation improves, users should treat the chatbot with caution.

Sharing sensitive matters with AI and treating it as a counselor for financial, legal, or mental health problems may feel convenient, but it carries significant risk. As AI adoption accelerates, policymakers, users, and developers must ensure that innovation does not come at the expense of user privacy.

