Google drops rule banning AI use for weapons and surveillance

Google has quietly removed its long-standing pledge not to use artificial intelligence for weapons or surveillance, a commitment it had upheld since 2018.

As first reported by *Bloomberg*, the tech giant recently revised its [AI Principles](https://ai.google/responsibility/principles/), removing an entire section that listed applications it pledged not to pursue. Previously, [Google’s policy](https://web.archive.org/web/20250130170410/https://ai.google/responsibility/principles/) explicitly stated that it would not design or deploy AI for weapons, or for surveillance that violated “internationally accepted norms.” With this update, those applications are no longer explicitly off-limits.

### Google’s Rationale for the Update

In a [blog post](https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work/) published Tuesday, Google framed the change as a response to the evolving AI landscape.

*”A global race for AI supremacy is happening amid a more complex geopolitical backdrop,”* the company stated. *”We hold that democracies should take the lead in AI development, anchored in essential values such as freedom, equality, and respect for human rights. Furthermore, we believe that entities—companies, governments, and organizations—that share these values ought to collaborate to create AI that safeguards individuals, fosters global growth, and bolsters national security.”*

Nonetheless, the post did not directly address the removal of its earlier prohibition on AI for weapons or surveillance. When reached for comment, a Google spokesperson pointed *Mashable* back to the blog post.

*”The principles are being updated for various reasons, including the significant advancements in AI technology over the years, the widespread nature of the technology, the development of AI principles and frameworks by international governing bodies, and the shifting geopolitical scenario,”* the representative elaborated.

### A Departure from Google’s 2018 AI Pledges

Google first unveiled its AI Principles in 2018 following employee protests over its involvement in [Project Maven](https://mashable.com/article/google-defense-department-project-maven-contract-not-renewed), a U.S. Department of Defense program that used AI to improve military drone targeting. Numerous employees signed an open letter urging Google to pledge never to build warfare technology.

In response, the company announced it would not renew its Pentagon contract and published its AI Principles, which explicitly ruled out AI applications for weapons and unethical surveillance. The latest policy shift suggests Google may be reconsidering that stance.

### Keeping Pace with Competitors?

Google’s change follows similar policy shifts at other AI companies. In early 2024, [OpenAI revised its guidelines](https://mashable.com/article/open-ai-no-longer-bans-military-uses-chatgpt), lifting a prohibition on AI uses carrying a *”high risk of physical harm,”* including *”weapons development”* and *”military and warfare.”* OpenAI later clarified that the change was intended to permit *”national security use cases.”*

In addition to lifting its prohibition on AI for weapons and surveillance, Google has also streamlined its AI Principles. Until as recently as Jan. 30, the policy listed seven core objectives for AI applications, including commitments to social benefit, fairness, safety, accountability, privacy, scientific excellence, and ethical use.

Now, [Google’s updated policy](https://ai.google/responsibility/principles/) consolidates these into three overarching principles: *”bold innovation,”* *”responsible development and deployment,”* and *”collaborative progress, together.”* While the company says it adheres to *”widely accepted principles of international law and human rights,”* any explicit commitment to refrain from using AI for weapons or surveillance is gone.