
Google reverses pledge to not use AI for weapons or surveillance


  • Google updated its public AI ethics policy, reversing the company’s promise not to use the technology for weapons or surveillance applications.
  • Google is defending the change, saying it now has a better “understanding of AI’s potential and risks.”
  • Some are criticizing the decision, with one Google employee saying “the company should not be in the business of war.”

Full Story

In a sharp reversal from its original principles, Google updated its artificial intelligence ethics policy during the week of Feb. 3, lifting a longstanding ban on the technology being used to create weapons and conduct surveillance. A previous version of the policy stated the company would not use AI to develop weapons, other technology intended to injure people, or surveillance tools that go beyond international norms.

That language is gone from the policy page, with this disclosure at the top: “We’ve made updates to our AI principles. Visit AI.Google for the latest.”

The company first published its AI principles in 2018, years before the technology became widespread.

Why did Google make the change?

Google defended the change in a blog post, saying businesses and democratic governments need to work together on AI that “supports national security.”

The company added it now has a deeper “understanding of AI’s potential and risks.”

The move comes just weeks into President Donald Trump’s second term, but a Google spokesperson told Wired the changes were in the works for much longer. 

Who is against the change?

Multiple Google employees expressed concern in interviews with Wired.

“It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war,” said Google software engineer Paul Koul.

Human Rights Watch also criticized Google’s decision, telling the BBC that AI can “complicate accountability” for battlefield decisions that “may have life or death consequences.”

Does the U.S. military plan to use AI?

In January 2025, the Pentagon announced a new office focused on integrating AI into military systems like autonomous drones, command and control systems and intelligence systems.

This initiative is part of the United States’ broader efforts to deploy autonomous weapons and counter threats from near-peer adversaries like China and Russia.

The Pentagon said these advancements will enhance the efficiency of U.S. forces.

Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives.


[Ryan]

IN A SHARP REVERSAL FROM ITS ORIGINAL PRINCIPLES … GOOGLE UPDATED ITS A-I ETHICS POLICY THIS WEEK – LIFTING A LONGSTANDING BAN ON THE TECHNOLOGY BEING USED TO CREATE WEAPONS AND CONDUCT SURVEILLANCE.

A PREVIOUS VERSION OF THE POLICY STATED THE COMPANY WOULD NOT USE A-I FOR DEVELOPING WEAPONS OR OTHER TECHNOLOGY INTENDED TO INJURE PEOPLE– OR TECHNOLOGY USED TO SURVEIL BEYOND INTERNATIONAL NORMS.

NOW, THAT LANGUAGE IS GONE FROM THE POLICY PAGE … WITH THIS DISCLOSURE AT THE TOP …

“WE’VE MADE UPDATES TO OUR A-I PRINCIPLES. VISIT AI-DOT-GOOGLE FOR THE LATEST.”

THE COMPANY FIRST PUBLISHED ITS A-I PRINCIPLES IN 20-18 – YEARS BEFORE THE TECHNOLOGY BECAME SO COMMON.

GOOGLE DEFENDED THE CHANGE IN A BLOG POST, SAYING BUSINESSES AND DEMOCRATIC GOVERNMENTS NEED TO WORK TOGETHER ON A-I THAT “SUPPORTS NATIONAL SECURITY.”

THE COMPANY ADDED IT NOW HAS A DEEPER “UNDERSTANDING OF A-I’S POTENTIAL AND RISKS.”

THE MOVE COMES JUST WEEKS INTO PRESIDENT DONALD TRUMP’S SECOND TERM, BUT A GOOGLE SPOKESPERSON TOLD “WIRED” THE CHANGES WERE IN THE WORKS FOR MUCH LONGER. 

MULTIPLE GOOGLE EMPLOYEES EXPRESSED CONCERN IN INTERVIEWS WITH “WIRED” …

“IT’S DEEPLY CONCERNING TO SEE GOOGLE DROP ITS COMMITMENT TO THE ETHICAL USE OF A-I TECHNOLOGY WITHOUT INPUT FROM ITS EMPLOYEES OR THE BROADER PUBLIC, DESPITE LONG-STANDING EMPLOYEE SENTIMENT THAT THE COMPANY SHOULD NOT BE IN THE BUSINESS OF WAR,” A SOFTWARE ENGINEER FOR THE COMPANY SAID.

HUMAN RIGHTS WATCH ALSO CRITICIZED GOOGLE’S DECISION, TELLING THE B-B-C … A-I CAN “COMPLICATE ACCOUNTABILITY” FOR BATTLEFIELD DECISIONS THAT “MAY HAVE LIFE OR DEATH CONSEQUENCES.”

LAST MONTH, THE PENTAGON ANNOUNCED A NEW OFFICE FOCUSED ON INTEGRATING A-I INTO MILITARY SYSTEMS LIKE AUTONOMOUS DRONES, COMMAND AND CONTROL SYSTEMS AND INTELLIGENCE SYSTEMS.

THIS INITIATIVE IS PART OF THE UNITED STATES’ BROADER EFFORTS TO DEPLOY AUTONOMOUS WEAPONS AND COUNTER THREATS FROM NEAR-PEER ADVERSARIES LIKE CHINA AND RUSSIA.

THE PENTAGON SAYS THESE ADVANCEMENTS WILL ENHANCE THE EFFICIENCY OF U-S FORCES.

GOOGLE LISTS ITS NEW GOALS AS PURSUING BOLD, RESPONSIBLE, AND COLLABORATIVE A-I INITIATIVES.

I’M RYAN ROBERTSON. FOR MORE UNBIASED, STRAIGHT-FACT REPORTING LIKE THIS, DOWNLOAD THE STRAIGHT ARROW NEWS APP TODAY.