Caitlin Kalinowski, OpenAI’s head of robotics and consumer hardware, has resigned after raising concerns about the company’s agreement with the US Department of Defense. In a public statement on March 7, 2026, she said OpenAI moved too quickly in allowing its AI systems onto classified military cloud networks. She said the issue involved governance and oversight, not personal disagreements with leadership.
Kalinowski said artificial intelligence can support national security, but she argued that some uses need stronger limits and more review before deployment. In her statement, she said surveillance of Americans without judicial oversight and lethal autonomy without human authorization crossed lines that required more deliberation. She also said the Pentagon agreement was announced before clear guardrails were fully defined.
She added that her decision came down to principle and governance. At the same time, she expressed respect for Sam Altman and the team at OpenAI. Her comments show that her objection focused on the process behind the agreement rather than the people involved. Kalinowski joined OpenAI in 2024 after previously working at Meta on augmented reality hardware.
Her departure draws new attention to how AI companies handle sensitive national security partnerships. It also highlights a broader debate inside the industry over how firms should define limits for military use, surveillance, and automated decision-making. In this case, the central question is not only what the technology can do, but also how quickly companies should approve its use in high-risk environments.
OpenAI has defended the agreement and said it includes specific protections. In its public explanation of the Pentagon contract, the company said its rules block domestic mass surveillance, fully autonomous weapons, and high-stakes automated decisions without meaningful human involvement. OpenAI also said these restrictions were written into the agreement to guide how the technology can be used in classified settings.
The company said the deployment model matters as well. It explained that the systems under this agreement would run in controlled cloud environments, which it says would not support fully autonomous weapons that depend on edge deployment. OpenAI also said it plans to stay involved in policy discussions with defense officials, cloud providers, and other AI labs as government use of advanced models expands.
In a separate response after criticism of the deal, OpenAI said people hold strong views on these questions and that it will continue engaging with employees, governments, civil society groups, and communities. That response suggests the company wants to keep the contract in place while easing concerns about how its tools may be used in national security operations.
Kalinowski’s resignation arrives at a time when military use of artificial intelligence is moving from discussion to deployment. As governments seek AI tools for classified networks and operational planning, pressure on private technology firms has increased. Companies now face closer scrutiny over internal governance, contract language, and enforcement of red lines.