Autonomous kill: AI drones in Ukraine strike Russian targets

Ryan Robertson Anchor, Investigative Reporter

Ukrainian drones dealing deathblows to Russian armor and equipment is nothing new. AI-piloted drones making the decision to strike targets on their own? That’s new. Not just for the war in Ukraine, but for all humanity.

In September, Ukraine started using Saker’s Scout quadcopter drone. A month later, Ukrainian developers confirmed to Forbes the drones are now carrying out autonomous strikes on Russian forces. It’s the first confirmed use of lethal force by artificial intelligence in history.


The Saker Scout started as a reconnaissance drone helping Ukraine’s armed forces identify Russian artillery and armor, even when heavily camouflaged.

Saker said its Scout can reconnoiter a field, mark hundreds of enemy targets and relay that info to other assets in a fraction of the time it would take humans to perform the same task.

The Scout can currently identify 64 different types of Russian military equipment, including trucks, tanks, APCs and launchers. Teaching the Scout to target new types of equipment is as simple as a software patch.
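
Saker has not published how that update process works, but conceptually it amounts to extending the recognizer's catalog of known equipment types. The Python sketch below is a hypothetical illustration of that idea only; the class names, labels and structure are invented for the example and are not Saker's actual software.

```python
# Hypothetical illustration: how an onboard recognizer's target catalog might be
# extended by a software/data update, as the article describes. Invented names only.
from dataclasses import dataclass


@dataclass(frozen=True)
class EquipmentType:
    label: str      # e.g. "T-72 tank"
    category: str   # e.g. "armor", "artillery", "truck", "launcher"


class TargetCatalog:
    """Registry of equipment types the detector is allowed to report."""

    def __init__(self, types):
        self._types = {t.label: t for t in types}

    def recognize(self, detected_label: str):
        """Map a raw detector label to a known equipment type, or None."""
        return self._types.get(detected_label)

    def apply_patch(self, new_types):
        """A 'software patch' here is simply new entries in the catalog."""
        for t in new_types:
            self._types[t.label] = t


# Baseline catalog shipped with the drone (illustrative labels).
catalog = TargetCatalog([
    EquipmentType("T-72 tank", "armor"),
    EquipmentType("BTR-82 APC", "armor"),
    EquipmentType("Ural truck", "truck"),
])

# A later update teaches it a new launcher type without touching flight code.
catalog.apply_patch([EquipmentType("TOS-1A launcher", "launcher")])
print(catalog.recognize("TOS-1A launcher"))  # recognized after the patch
```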

The Scout can carry about six and a half pounds of explosives and has a range of more than seven miles. Once the Scout identifies a target, it can autonomously drop its ordnance or act as a spotter, relaying the information to other Ukrainian attack drones which may, or may not, be controlled by a human operator.
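
Saker has not detailed its control logic, but the behavior described above comes down to a simple fork: strike with the onboard payload or hand the target off to other assets. The sketch below is a hypothetical rendering of that decision, with invented names and fields; it is illustrative only, not the drone's real software.

```python
# Hypothetical sketch of the two modes the article describes (strike vs. spot).
# Names, fields and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class Target:
    label: str
    lat: float
    lon: float


def handle_target(target: Target, payload_remaining_lb: float, strike_authorized: bool):
    """Either engage directly or relay the target fix to other attack drones."""
    if strike_authorized and payload_remaining_lb > 0:
        return ("engage", target)   # drop its own ordnance (up to ~6.5 lb)
    # Otherwise act as a spotter: pass the coordinates to other assets.
    return ("relay", target)


mode, tgt = handle_target(
    Target("BTR-82 APC", 47.1, 37.6),
    payload_remaining_lb=6.5,
    strike_authorized=False,
)
print(mode, tgt.label)  # "relay BTR-82 APC" -- spotting for another drone
```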

The Scout doesn’t need GPS to navigate and can operate in environments where radio jamming blocks communication signals. It is likely in these kinds of environments that the Scout is reportedly being used, sparingly, to carry out autonomous strikes in Ukraine.

Artificial intelligence agents flying drones is nothing new. Straight Arrow News has reported on several AI pilots before, like Shield AI’s Hivemind. Coincidentally, Hivemind recently completed tests in which it successfully flew swarms of V-Bats in formation. But again, those tests were focused on maneuvering. Taking humans out of the kill chain is a new development.

There are no international laws governing artificial intelligence’s control of lethally armed robotic weapons systems. The U.S. military mandates an “appropriate level of human judgment” before an AI agent can use force, but that is an admittedly flexible term.

Critics say allowing so-called “Slaughterbots” onto the battlefield sets a dangerous precedent for mankind.

In an article from The Hill, the Future of Life Institute and the Arms Control Association agreed AI algorithms shouldn’t be in a position to take human life because they cannot comprehend its value. Those organizations also argued that over-reliance on machines and AI agents to conduct warfare would make it easier to declare war, could lead to the proliferation of AI weaponry among bad actors and would increase the risk of escalation between nuclear powers.

The United Nations is scheduled to address the issue of AI warfare more directly at its next General Assembly in late October. U.N. Secretary-General António Guterres said he wants a legally binding agreement by 2026 prohibiting lethal autonomous weapons from being used without human oversight.

