A Blog by Jonathan Low

 

Mar 20, 2022

Russia's Killer 'Suicide' Drone Use In Ukraine Sparks Fear of AI Warfare

AI weapons are programmed to decide who to kill. The Russian Army has reportedly used one of its AI-enabled drones in Ukraine. The US 'Switchblade' drone, being supplied to Ukraine, is also 'suicidal' in that it crashes into its target, but it requires a human to make the final decision.

The issue is, ultimately, about how much more impersonal humans can make warfare. JL

Will Knight reports in Wired:

A Russian “suicide drone” boasts the ability to identify targets using artificial intelligence. With a wingspan of 1.2 meters, the sleek white drone resembles a small pilotless fighter jet. It is fired from a portable launcher, can travel up to 130 kilometers per hour for 30 minutes, and deliberately crashes into a target, detonating a 3-kilo explosive. Advances in AI have made it easier to incorporate autonomy into weapons and have raised the prospect that more capable systems could eventually decide for themselves who to kill.

A Russian “suicide drone” that boasts the ability to identify targets using artificial intelligence has been spotted in images of the ongoing invasion of Ukraine.

Photographs showing what appears to be the KUB-BLA, a type of lethal drone known as a “loitering munition” sold by ZALA Aero, a subsidiary of the Russian arms company Kalashnikov, have appeared on Telegram and Twitter in recent days. The pictures show damaged drones that appear to have either crashed or been shot down.

With a wingspan of 1.2 meters, the sleek white drone resembles a small pilotless fighter jet. It is fired from a portable launcher, can travel up to 130 kilometers per hour for 30 minutes, and deliberately crashes into a target, detonating a 3-kilo explosive.

ZALA Aero, which first demoed the KUB-BLA at a Russian air show in 2019, claims in promotional material that it features “intelligent detection and recognition of objects by class and type in real time.”

The drone itself may do little to alter the course of the war in Ukraine, as there is no evidence that Russia is using it widely so far. But its appearance has sparked concern about the potential for AI to take a greater role in making lethal decisions.


“The notion of a killer robot—where you have artificial intelligence fused with weapons—that technology is here, and it's being used,” says Zachary Kallenborn, a research affiliate with the National Consortium for the Study of Terrorism and Responses to Terrorism (START).

Advances in AI have made it easier to incorporate autonomy into weapons systems, and have raised the prospect that more capable systems could eventually decide for themselves who to kill. A UN report published last year concluded that a lethal drone with this capability may have been used in the Libyan civil war.

It is unclear whether the drone has been operated in this way in Ukraine. One of the challenges with autonomous weapons may prove to be the difficulty of determining when full autonomy has been used in a lethal context, Kallenborn says.

The KUB-BLA images have yet to be verified by official sources, but the drone is known to be a relatively new part of Russia’s military arsenal. Its use would also be consistent with Russia’s shifting strategy in the face of the unexpectedly strong Ukrainian resistance, says Samuel Bendett, an expert on Russia’s military with the defense think tank CNA.

Bendett says Russia has built up its drone capabilities in recent years, using them in Syria and acquiring more after Azerbaijani forces demonstrated their effectiveness against Armenian ground forces in the 2020 Nagorno-Karabakh war. “They are an extraordinarily cheap alternative to flying manned missions,” he says. “They are very effective both militarily and of course psychologically.”

The fact that Russia seems to have used few drones in Ukraine early on may be due to its misjudging the resistance or to effective Ukrainian countermeasures.

But drones have also highlighted a key vulnerability in Russia’s invasion, which is now entering its third week. Ukrainian forces have used a remotely operated Turkish-made drone called the TB2 to great effect against Russian forces, firing guided missiles at Russian missile launchers and vehicles. The paraglider-sized drone, which relies on a small crew on the ground, is slow and cannot defend itself, but it has proven effective against a surprisingly weak Russian air campaign.

This week, the Biden administration also said it would supply Ukraine with a small US-made loitering munition called Switchblade. This single-use drone, which comes equipped with explosives, cameras, and guidance systems, has some autonomous capabilities but relies on a person to make decisions about which targets to engage.

But Bendett questions whether Russia would unleash an AI-powered drone with advanced autonomy in such a chaotic environment, especially given how poorly coordinated the country’s overall air strategy seems to be. “The Russian military and its capabilities are now being severely tested in Ukraine,” he says. “If the [human] ground forces with all their sophisticated information gathering can't really make sense of what's happening on the ground, then how could a drone?”

Several other military experts question the purported capabilities of the KUB-BLA.

“The companies that produce these loitering drones talk up their autonomous features, but often the autonomy involves flight corrections and maneuvering to hit a target identified by a human operator, not autonomy in the way the international community would define an autonomous weapon,” says Michael Horowitz, a professor at the University of Pennsylvania, who keeps track of military technology.

 

Despite such uncertainties, the issue of AI in weapons systems has become contentious of late because the technology is rapidly finding its way into many military systems, for example to help interpret input from sensors. The US military maintains that a person should always make lethal decisions, but the US also opposes a ban on the development of such systems.

To some, the appearance of the KUB-BLA shows that we are on a slippery slope toward increasing use of AI in weapons that will eventually remove humans from the equation.

“We'll see even more proliferation of such lethal autonomous weapons unless more Western nations start supporting a ban on them,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, an organization that campaigns against such weapons.

Others, though, believe that the situation unfolding in Ukraine shows how difficult it will really be to use advanced AI and autonomy.

William Alberque, Director of Strategy, Technology, and Arms Control at the International Institute for Strategic Studies, says that given the success Ukraine has had with the TB2, the Russians are not ready to deploy more sophisticated technology. “We’re seeing Russian morons getting owned by a system that they should not be vulnerable to.”


