The Pentagon has warned it could cut ties with Anthropic over disagreements on AI safeguards for military use of Claude. The dispute turns on how broadly the Defense Department can apply Claude inside government systems, including classified environments.
Anthropic says it supports national security work but wants firm limits on certain uses. Those limits target high-risk areas such as fully autonomous weapons and mass domestic surveillance.
Defense officials have reviewed whether to continue working with Anthropic under the current terms. The Pentagon wants language that permits Claude’s use for “all lawful purposes,” a scope that would cover weapons development, intelligence collection, and battlefield operations.
The Pentagon has also discussed labeling Anthropic a “supply chain risk.” Such a designation can force defense suppliers to certify that they do not use Claude in their workflows, and it could complicate future contracting across the defense ecosystem.
Claude has already been integrated into sensitive US government systems. That footprint raises switching costs if the Pentagon ends the relationship abruptly. Officials have also weighed how quickly other frontier models could meet specialized government needs.
The Pentagon contract at issue totals up to $200 million. The figure represents a small share of Anthropic’s stated revenue, but the policy outcome could shape future AI procurement rules.
Anthropic has framed the talks as a policy question, not an operational one. The company has said it has not discussed Claude's use for specific military operations. Instead, discussions have focused on usage-policy boundaries.
Those boundaries include “hard limits” on fully autonomous weapons and mass domestic surveillance. Anthropic has pushed for clear guardrails that keep humans accountable for lethal decisions and large-scale monitoring.
Defense officials have argued that strict vendor rules can create uncertainty during missions. They want consistent permissions that follow US law without additional contractual carve-outs. This gap has driven months of friction over contract terms and acceptable safeguards.
The Pentagon’s stance also signals how it plans to approach other major AI labs. The same “all lawful purposes” standard has become a central reference point in government negotiations over advanced model access.
Anthropic announced a $30 billion Series G funding round led by GIC and Coatue Management. The round valued the company at $380 billion post-money. Anthropic also listed a wide investor group, including NVIDIA and Microsoft.
Anthropic said the round included a portion of previously announced investments from Microsoft and NVIDIA, and it named co-leads including D. E. Shaw Ventures, Dragoneer Investment Group, Founders Fund, ICONIQ, and MGX.
In the same announcement, Anthropic said its run-rate revenue reached $14 billion and has grown more than 10x annually over the past three years. CFO Krishna Rao said customers increasingly rely on Claude for critical work across organizations.
Anthropic also highlighted Claude Code as a major growth driver, saying its run-rate revenue exceeded $2.5 billion and has more than doubled since the beginning of 2026.