Google has officially removed its longstanding pledge against developing AI for military weapons and surveillance, marking a major policy shift with global ethical and security implications. In February 2025, Google executives Demis Hassabis and James Manyika announced the update, signaling a move toward "bold innovation" and "national security collaboration."
The decision aligns Google with other tech giants expanding AI applications in defense, but it has also drawn criticism from human rights groups and ethics experts who warn of increased risks of autonomous weapons and mass surveillance.
The executives announced the repeal in a company blog post. The revised AI principles now emphasize "bold innovation," "responsible development," and "collaborative progress."
These updated guidelines suggest that AI should be developed to benefit humanity and support national security. Google’s new direction reflects the growing global competition for AI leadership, with a focus on aligning AI development with the values of democratic nations.
Google's policy shift has triggered widespread criticism from academic experts and human rights advocacy groups. Experts warn that the company's decision sets a troubling precedent for how the development of artificial intelligence for military purposes is governed.
Both Amnesty International and Human Rights Watch have criticized AI-powered technologies, warning that such tools could enable wide-scale surveillance and automated weaponry systems that heighten the risk of human rights violations.
AI use in military applications raises three main ethical challenges: operators become emotionally disconnected during missions, algorithmic errors lead to misjudgments, and the systems may discriminate in targeting decisions. How to maintain effective control over autonomous weapon systems, and how to limit civilian casualties during conflicts, remain substantial open questions.
Reviews of Google's revised policy point to an industry-wide shift, as tech giants such as OpenAI, Anthropic, and Meta have also updated their AI system usage guidelines, expanding access to their AI systems for military and intelligence agencies. Advocates of this change believe artificial intelligence strengthens national security defenses and helps democratic states remain technologically competitive.
However, the debate over the military use of AI is far from settled. There are continuing demands for measures and legislation to regulate the use of AI in warfare. Some critics argue that, absent clear guidelines and enforceable policies, AI development risks accelerating militarization rather than serving as an instrument of security and ethics.