OpenAI Warns About ChatGPT Account Breach

OpenAI has issued a stern warning to ChatGPT users about compromised accounts and the malware behind the leak

OpenAI has responded after more than 100,000 user accounts were exposed on the dark web, with more expected to follow.

According to a spokeswoman for the company behind the popular AI chatbot ChatGPT, OpenAI uses industry-standard security practices, and the leak is "the result of commodity malware on people's devices, not an OpenAI breach."

"We are currently investigating the accounts that have been exposed," they continued. OpenAI follows industry best practices for authenticating and authorizing users to services such as ChatGPT, and we advise our users to use strong passwords and install only verified and trustworthy software on their PCs."

RedLine, Raccoon, Vidar

Group-IB highlighted the leak in its Threat Intelligence report, finding that the stolen credentials belonged to users who logged in to ChatGPT between June 2022 and May 2023, with more expected to surface in the coming months.

Group-IB further stated that the largest number of compromised ChatGPT accounts appeared in logs from May 2023, and that the Asia-Pacific region had the highest concentration of credentials offered for sale.

The logs containing the compromised ChatGPT accounts also include lists of visited URLs and the users' IP addresses.

Most of the exposed credentials were found in logs harvested by several well-known information stealers, chief among them the infamous Raccoon, which was used to compromise 79,348 accounts.

Raccoon is especially dangerous because of its popularity and ease of use. Threat actors can pay a monthly fee to use it, and no technical skills are required. Like other information stealers, Raccoon includes additional capabilities that allow cybercriminals to carry out follow-on attacks automatically.

Vidar malware was also used to hijack ChatGPT accounts, although it was significantly less prolific than Raccoon, gaining access to only 12,984 accounts. The RedLine stealer was next, accounting for 6,773 compromised accounts.

Access to the logs also gives threat actors access to a victim's chat history with the chatbot, which can be extremely dangerous for anyone who uses ChatGPT at work and shares trade secrets with it.
