
OpenAI has lost its access to the Claude API just before the GPT-5 launch. Anthropic says the move was prompted by OpenAI's violation of its terms of service: the rival's engineers had reportedly been using Claude Code to help develop and evaluate their own models.
The sudden cutoff highlights the tension in an increasingly competitive AI market and raises questions about openness and collaboration among the industry's dominant players.
Friction between AI companies has been building for some time, but the decision is notable for its timing: it landed while Sam Altman's company was preparing to launch its next model, GPT-5.
In a statement, Anthropic spokesperson Christopher Nulty said, “Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5.” He added, “Unfortunately, this is a direct violation of our terms of service.”
Nulty pointed to Anthropic's commercial terms, which prohibit customers from using the service to build a competing product or to train rival models. According to reports, OpenAI was accessing Claude through the API rather than the regular chat interface, which allowed it to evaluate Claude's capabilities in areas such as coding and creative writing against its own GPT models.
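For context, this kind of programmatic evaluation is typically done against the Messages API with an official SDK rather than through the chat UI. The sketch below is a minimal illustration using Anthropic's Python client; the prompt set, model name, and scoring step are illustrative assumptions, not anything attributed to OpenAI's actual setup.

```python
# Minimal sketch of API-based evaluation: send the same prompts to Claude
# programmatically so the outputs can be compared against another model.
# Prompts and model name are placeholder assumptions, not a real benchmark suite.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompts = [
    "Write a Python function that merges two sorted lists.",
    "Draft the opening paragraph of a short mystery story.",
]

for prompt in prompts:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; substitute any available Claude model
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # Each completion would then be scored or compared side by side
    # with a competing model's output on the same prompt.
    print(response.content[0].text[:200])
```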
OpenAI has pushed back on the characterization, arguing that such evaluations are standard practice and serve AI safety. Its spokesperson said, “It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”
Anthropic and OpenAI share a complicated history. In 2021, siblings Dario and Daniela Amodei, both former OpenAI executives, founded Anthropic after internal disagreements over the company's approach to safety.
Since then, the two companies have been racing to ship ever more capable models. ChatGPT's success has been built on mass-market commercialization and strategic partnerships, while the Claude series has positioned itself around user safety.
Despite their differences, the two companies had maintained mutual API access to each other's models for testing and benchmarking, a common practice in the AI sector. Anthropic's latest move appears to have ended that arrangement.
This is not the first time Anthropic has revoked a company's API access. Earlier this year it did the same to Windsurf, a coding startup that OpenAI was reportedly in the process of acquiring. With this latest move, the San Francisco-based company appears intent on severing its remaining ties to its larger rival.
The dispute marks a broader shift in the AI industry: companies are moving away from collaborative research and toward harder competition. Major players that once operated in a relatively open ecosystem are now drawing stricter boundaries around their models, and Anthropic's cutoff underscores the growing mistrust and strategic defensiveness among them.
This fragmentation may have lasting consequences. Restricted access could slow the pace of AI innovation, hamper safety research, and reduce opportunities for collaborative breakthroughs. The industry will have to decide whether progress is better served by going it alone or by working together, and that choice will shape the next phase of the AI race.