A report claims the US military deployed Anthropic’s Claude AI in Iran strikes hours after a policy ban, raising questions about AI governance and defense usage.
The US government used AI tools from Anthropic during the air attack launched on Iran just hours after declaring that it would stop using technology from the AI startup.
On Friday, Trump had directed all government agencies to stop using Anthropic's AI assistant, Claude. Defence Secretary Pete Hegseth went a step further, designating the AI startup a national security threat.
The military command used Anthropic's AI for intelligence assessments, target identification, and battle-scenario simulations. Prior to the Iran attack, the Pentagon had also used Claude during the capture of Venezuelan President Nicolás Maduro.
In a Truth Social post announcing the end of the deal with Anthropic, US President Donald Trump called the company 'leftwing nut jobs' and 'woke', claiming that 'their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.'
Trump had directed all federal agencies in the US to ‘immediately cease’ using Anthropic technology.
“We don’t need it, we don’t want it, and will not do business with them again! There will be a six-month phase-out period for agencies like the Department of War who are using Anthropic’s products at various levels,” he wrote.
Anthropic has challenged its designation as a 'supply chain risk' and said it will contest the move in court.
“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so,” the company wrote in a blog post.
Anthropic has set itself apart from other AI firms with its safety-conscious business model. CEO Dario Amodei has consistently opposed the Trump administration using the company's AI to spy on American citizens or to power autonomous weapons that can kill without human input.
The report noted that Claude's use in high-profile missions is among the reasons the US administration said it would take six months to phase out the startup's technology.
The Pentagon and Anthropic had been arguing for months over how the company's AI models may be used in national defence. The AI startup said it had allowed the US Department of Defense to use its technology for virtually all purposes, with two exceptions: mass domestic surveillance of Americans and fully autonomous weapons.
The episode underscores the growing complexity of regulating rapidly advancing AI technologies.