
Google has revised its AI policies. In an era of rapid AI advancement, companies can hardly avoid revisiting such policies, but Google's move caught tech observers off guard: it has walked back its earlier commitments and will now allow its AI to be used for weapons and surveillance, applications it previously prohibited outright.
In 2018, Google CEO Sundar Pichai promised users that the tech behemoth would not design or deploy AI technologies that violate global norms on weapons and surveillance. That pledge came in response to employee protests against Project Maven.
Project Maven was a Pentagon program in which Google supplied AI for analyzing drone footage. Google employees protested the company's involvement, and the backlash grew strong enough that the tech giant announced it would not renew the contract. In the aftermath, the company published a blog post spelling out the purposes it would not allow its AI to serve.
That list included “Technologies that cause or are likely to cause overall harm,” “Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “Technologies that gather or use information for surveillance violating internationally accepted norms,” and “Technologies whose purpose contravenes widely accepted principles of international law and human rights.”
On February 4, 2025, however, those norms received a significant revision. Walking back its previous commitments, Google now speaks of working with governments and organizations to protect people and strengthen national security using AI.
Google argues that amid accelerating AI progress, it must step forward to support defense and security. To bolster its case, the company points to its Frontier Safety Framework, in which it lays out strict rules intended to prevent the misuse of AI.
The policy revision followed President Trump's inauguration. Last month, U.S. President Donald Trump returned to office, and shortly afterward his administration issued new rules and rescinded an executive order by former President Joe Biden that had mandated safety practices for AI development. With that rescission, Trump has opened up more room for tech companies already fighting to climb to the top of the AI race in the U.S. market.
After the revised policy came to light, Google’s senior vice president for research and technology, James Manyika, shared his view of the change: “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”
Demis Hassabis, CEO of Google DeepMind, also framed the change positively, saying, “We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Globally, however, tech enthusiasts and critics are concerned about the consequences of Google dropping its commitment to the ethical use of AI. Many worry that the technology will now be used for surveillance, and even for building autonomous weapons.