

Anthropic’s Claude reached the top spot among free apps on Apple’s App Store and Google Play this week, as a public dispute with the US Department of Defense (DoD) drew attention to the company’s AI safety stance. Anthropic said Claude became the most downloaded free app on both platforms on Tuesday, and that Monday marked its largest single day of sign-ups.
The surge followed a high-profile clash over military use of AI systems. Anthropic and the DoD disagreed over whether Claude models could be used in ways that Anthropic says may enable mass surveillance of Americans or fully autonomous weapons. The issue drew wider public interest after the US administration ordered federal agencies to phase out Anthropic’s technology over the next six months.
Claude’s rapid rise in the app rankings came after the government conflict became public. Anthropic said sign-ups climbed sharply as more people learned of its position on military AI safeguards, suggesting the debate reached beyond policy circles into mainstream consumer behavior.
Anthropic’s public statements stressed that the company would not weaken its restrictions on the use of its models for mass domestic surveillance or fully autonomous weapons. The company also challenged the idea that pressure from federal agencies should force a change in those guardrails. This position became a key factor in public discussion around the company and its products.
The DoD dispute also introduced a new commercial risk: defense officials said Anthropic could be designated a supply chain threat. Such a label could affect firms that work with the US government and use Anthropic’s AI tools in contract-related operations, though Anthropic said the designation would apply to government work, not automatically to those firms’ private business activities.
At the same time, OpenAI moved ahead with an agreement that allows the DoD to use its models under stated guardrails. This decision increased scrutiny of how major AI companies define acceptable military use. It also fueled comparisons between Anthropic and OpenAI on safety language, contract structure, and public communication.
Critics argued that early wording in OpenAI’s DoD arrangement left room for misuse, including potential domestic surveillance concerns. OpenAI later said it updated the language to make its limits clearer. The company stated that its tools would not be used for domestic surveillance of US persons and said any use by certain intelligence agencies would require a separate agreement.
These developments placed both companies at the center of a broader policy debate about AI, civil liberties, and national security. Public reaction showed that product adoption can shift quickly when users connect a company’s technology decisions with its stated values.
Anthropic moved quickly to build on the attention by adding memory features to Claude’s free tier, allowing the app to retain context across a user’s interactions. The expansion may help the company keep new users who downloaded the app during the recent surge, and it strengthens Claude’s position in the consumer AI app market, where feature parity matters.
Even so, the longer-term impact remains unclear. Anthropic still faces possible fallout in government-linked business segments if contractors limit use of its models in federal projects. At the same time, stronger consumer visibility may improve adoption in non-government markets, especially among users who prioritize AI safety guardrails.