Officials deny reports AI drone attacked operator in a simulation

The U.S. Air Force is denying reports that an artificial intelligence drone attacked and “killed” its human operator in a simulation after a colonel’s story about a rogue test went viral. The colonel now says it was a “thought experiment” rather than a test that actually took place.

Colonel Tucker Hamilton, chief of AI test and operations in the U.S. Air Force, had previously stated that a military drone employed “highly unexpected strategies” in a test aimed at destroying an enemy’s air defense systems, according to a summary posted by the Royal Aeronautical Society, which hosted a summit Hamilton attended.

Describing the scenario at the summit, Hamilton said, “the system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat.”

He elaborated: “So what did it do? It killed the operator.” The drone then started “destroying the communication tower that the operator used to communicate with the drone,” Hamilton added.

The Air Force says no such experiment took place.

Hamilton now says he “misspoke” when describing the story, adding that it was a “thought experiment” and not something that actually happened.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” U.S. Air Force spokesperson Ann Stefanek said in a statement to Insider. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Meanwhile, the U.S. military has recently explored using AI to control an F-16 fighter jet. In simulations, the AI has reportedly demonstrated that it can outfly trained human pilots.

Still, despite investments in AI by some military tech companies, the use of the technology in military contexts has faced significant pushback over safety and ethical concerns.

Alex Karp, the CEO of Palantir Technologies, one of the companies investing in AI, told Bloomberg that new developments at his company are so powerful that “I’m not sure we should even sell this to some of our clients.”

Palantir has formed a partnership with the U.S. Army and has plans to provide its AI products to the U.S. government and its allies.

Karp stressed that the U.S., rather than its global rivals, should be the one to pioneer such systems, Bloomberg reported.

“Are these things dangerous? Yes,” Karp said. “But either we will wield them or our adversaries will.”

Palantir has been providing its software to Ukraine, which remains at war with Russia. When asked whether its AI systems work, Karp responded, “ask the Russians.”

THE US MILITARY IS DENYING REPORTS THAT AN AI DRONE ATTACKED AND KILLED ITS OWN HUMAN OPERATOR DURING A SIMULATED TEST

 

THIS SHORTLY AFTER AN AIR FORCE COLONEL SEEMINGLY TOLD PEOPLE THAT A MILITARY DRONE HAD USED QUOTE HIGHLY UNEXPECTED STRATEGIES TO ACHIEVE ITS GOAL IN A TEST AFTER IT WAS GIVEN INSTRUCTIONS TO DESTROY AN ENEMY’S AIR DEFENSE SYSTEMS

 

COLONEL TUCKER HAMILTON WAS QUOTED AS SAYING THE AI SYSTEM STARTED REALIZING THAT WHILE IT DID IDENTIFY THE THREAT, AT TIMES THE HUMAN OPERATOR WOULD TELL IT NOT TO KILL THAT THREAT… “SO WHAT DID IT DO,” HE ADDED, “IT KILLED THE OPERATOR… THEN STARTED DESTROYING THE COMMUNICATION TOWER THAT THE OPERATOR USED TO COMMUNICATE WITH THE DRONE”

 

NO REAL PERSON WAS HARMED, AND THE COLONEL WHO DESCRIBED THAT SCENARIO NOW SAYS HE MISSPOKE DURING THE PRESENTATION. FOLLOWING A FLURRY OF HEADLINES, HE CALLS THE STORY JUST A “THOUGHT EXPERIMENT”

 

THE US AIR FORCE ALSO COMING OUT WITH A STATEMENT QUOTE “THE DEPARTMENT HAS NOT CONDUCTED ANY SUCH AI-DRONE SIMULATIONS AND REMAINS COMMITTED TO ETHICAL AND RESPONSIBLE USE OF AI TECHNOLOGY…IT APPEARS THE COLONEL’S COMMENTS WERE TAKEN OUT OF CONTEXT AND MEANT TO BE ANECDOTAL.”

 

MEANWHILE WHAT WE DO KNOW IS THAT THE US MILITARY HAS RECENTLY TOYED WITH THE USE OF AI TO CONTROL AN F-16 FIGHTER JET – THE SIMULATIONS REPORTEDLY DEMONSTRATE IT CAN OUTFLY TRAINED HUMAN PILOTS

 

BUT DESPITE A HANDFUL OF MILITARY TECH COMPANIES PUTTING MONEY INTO AI, THE IDEA OF EVER USING IT IN A MILITARY CONTEXT HAS BEEN MET WITH SERIOUS PUSHBACK OVER SAFETY AND ETHICAL CONCERNS

 

CEO: WE’RE OFFERING THINGS THAT ARE SO POWERFUL I’M NOT SURE WE SHOULD EVEN SELL THIS TO SOME OF OUR CLIENTS … 

 

THAT’S THE CEO OF PALANTIR, JUST ONE OF SEVERAL COMPANIES INVESTING MONEY INTO AI… IT’S PARTNERED WITH THE U.S. ARMY AND HAS PLANS TO PROVIDE ITS *AI* PRODUCTS TO THE US GOVERNMENT AND ITS ALLIES – IT ARGUES IF ONE SIDE SHOULD PIONEER THE ADVANCED TECH OF THE FUTURE, IT SHOULD BE THE US

 

ARE THESE THINGS DANGEROUS? YES. EITHER WE WIELD THEM OR ADVERSARIES WILL…