OpenAI has revised its Pentagon deal following controversy, adding stronger anti-surveillance protections. Sam Altman acknowledged on X that the agreement with the government had been rushed, a move that signals the company's increased focus on AI governance.
The deal came after US Defence Secretary Pete Hegseth called Anthropic a ‘supply-chain risk’, prompting OpenAI to sign an agreement with the United States Department of Defence. The episode has raised new questions about how AI companies handle sensitive government contracts.
Anthropic's Claude has drawn strong user support in its standoff with the US Department of Defence (DoD). Claude has climbed to the No. 1 position on the App Store amid reactions to OpenAI’s Pentagon deal, a shift that followed significant user backlash. As reported earlier, many users also protested the deal between OpenAI and the Pentagon by deleting the AI tool during the ‘Cancel ChatGPT’ trend.
A user on Reddit, clearly not impressed by the OpenAI deal, wrote, “I think it's time to burn any bridges we had with ChatGPT, cancel your subscription, and delete it too, obviously. Also, start leaving bad reviews on the Play Store and App Store. And if you have to, use an open-weights model!”
Another user wrote, “This was a calculated business decision to chase government money at the expense of everything they promised when they asked for your trust and your subscription. You can be done with them in 15 minutes. And you can make the last month hurt a little on your way out.”
Sam Altman posted a message on X addressing the controversy. ‘We shouldn’t have rushed to get this out on Friday. The issues are super complex and demand clear communication,’ he wrote. He added, ‘We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.’
Altman also said it was ‘critical to protect the civil liberties of Americans’ and noted that the Pentagon had assured OpenAI its tools would not be used for domestic surveillance. He further said Anthropic should not be designated a supply-chain risk and that he hoped the Defence Department would offer it the same terms.
What began as a disagreement over how government agencies could use advanced AI tools has quickly become a larger debate about civil liberties, military power, and corporate responsibility. As scrutiny of corporate ethics and defence partnerships intensifies, trust and transparency may become defining factors in platform adoption, shaping the next phase of rivalry between leading AI developers.
The stronger oversight mechanisms in the revised deal may also set a precedent for future defence contracts, signalling that transparency, compliance, and clear surveillance limits will increasingly shape how advanced AI systems are deployed in sensitive national security contexts.