

OpenAI has notified its API customers about a data exposure incident resulting from a security breach at its third-party analytics provider, Mixpanel. The company clarified that its own systems, including ChatGPT, were not compromised and that no sensitive information was exposed.
According to OpenAI, earlier this month, an attacker gained unauthorized access to Mixpanel’s environment and exported a dataset. Fortunately, the breach did not extend to OpenAI’s infrastructure.
OpenAI was made aware of the incident and received the affected dataset on November 25, 2025. The company says it immediately began assessing the incident and identifying which API users were affected.
The company said the exposed information was “limited, non-sensitive analytics data” involving API product users only. The dataset did not include chat logs, account passwords, API keys, billing information, or government-issued IDs.
Data that may have been exposed includes:
Name on the API account
Email address associated with the account
Approximate location (city, state, country)
Operating system and browser information
Referring websites
Organisation or User IDs
The company reiterated that the dataset was purely for analytics and contained no high-risk customer information.
Following the incident, OpenAI announced that it has removed Mixpanel from all production systems. The company is also conducting expanded security reviews across its entire vendor ecosystem, strengthening security requirements, and demanding stricter assurances from partners.
In a blog post, OpenAI wrote, “Trust, security, and privacy are foundational to our products, our organization, and our mission...After reviewing this incident, OpenAI has terminated its use of Mixpanel.”
Meanwhile, the company is directly contacting all the affected organisations, administrators, and users.
While the exposed data is limited, OpenAI has recommended that affected users remain vigilant against phishing and social engineering. Because the dataset contained names, email addresses, and API identifiers, users have been advised to:
Beware of unsolicited emails or messages that contain links or attachments.
Verify that any email or message claiming to come from OpenAI was sent from an official OpenAI domain (a minimal sketch of such a check follows this list).
Keep in mind that OpenAI will never, under any circumstances, ask for passwords, API keys, or verification codes through email, text, or chat.
Enable multi-factor authentication on your accounts for an additional layer of security.
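For API customers who want to automate the domain check mentioned above, here is a minimal Python sketch, not an official OpenAI tool. The allowlisted domains are assumptions used for illustration, and the From header alone can be spoofed, so a check like this should only complement the SPF/DKIM/DMARC verification your mail provider already performs.

```python
# Minimal sketch (not an official OpenAI tool): flag messages whose visible
# From address does not use an allowlisted OpenAI domain. The From header can
# be spoofed, so this is a first-pass filter, not proof of authenticity.
from email.utils import parseaddr

# Assumed allowlist for illustration only; confirm official sending domains with OpenAI.
OFFICIAL_DOMAINS = {"openai.com", "email.openai.com"}

def sender_looks_official(from_header: str) -> bool:
    """Return True if the From address ends in an allowlisted domain."""
    _, address = parseaddr(from_header)          # "OpenAI <no-reply@email.openai.com>" -> address
    domain = address.rpartition("@")[2].lower()  # text after the last '@'
    return any(domain == d or domain.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(sender_looks_official("OpenAI <no-reply@email.openai.com>"))    # True
print(sender_looks_official("Support <billing@openai-support.xyz>"))  # False
```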
As OpenAI tightens its vendor security and urges users to stay vigilant, the larger question now is: how well-equipped are tech companies to safeguard customer trust in an increasingly vulnerable digital ecosystem?