A Blog by Jonathan Low


Apr 26, 2024

How Ukraine Has Applied AI For Targeting, Threat Assessing And Sentiment Analysis

AI is on the battlefield - and all around it. Ukraine is using it to improve targeting sophistication, to identify and prioritize potential threats, and for sentiment analysis that estimates the cognitive impact of attacks like those on Crimea. 

This is a literal force multiplier for an under-resourced military. JL 

Callum Fraser reports for the International Institute for Strategic Studies:

The application of AI in war is now grounded in reality. AI focuses on decision-support, intelligence analysis and targeting. AI has allowed planners to sift through huge swathes of data faster than humans. Kyiv is using AI to understand how targeted military activities have a cognitive effect on the adversary, such as sentiment analysis of how a rocket attack on the Antonovsky Bridge at Kherson impacted Russian morale. The ability to measure physical and cognitive impact is a huge advantage for an army with limited resources. With increased capacity to exploit the information environment for military effect, Ukraine has used AI to increase targeting sophistication. It has also applied AI to identify those who might perpetrate espionage and leverages AI in its cyber defences to scan for threats.

The strategic and ethical questions surrounding the military application of artificial intelligence (AI) in war, once merely hypothetical, are now grounded in reality, with the fighting in Ukraine and Gaza emerging as the technology’s first trials by fire.

The application of AI is comparable in both conflicts and has principally focused on decision-support, intelligence analysis and targeting. In each case, AI has allowed planners to sift through huge swathes of data faster than humanly possible.

Yet the context in which Ukraine and Israel are using AI differs. Ukraine is fighting a numerically superior opponent; Israel is seeking to use AI to press home its military advantage in its effort to eradicate Hamas. There are lessons and warnings from both for other armed forces on the application of AI in tactical, operational and strategic efforts.

Battle lab Ukraine

Ukraine’s use of a diverse set of AI applications is a bright spot for the embattled country at a time when its armed forces battle personnel and equipment shortages. Ukraine is deploying AI to achieve an asymmetric advantage over a materially and numerically more powerful adversary.

Kyiv, for instance, is using AI to understand how targeted military activities have a cognitive effect on the adversary. According to a recent article in The Economist, Ukraine has been using such tools for sentiment analysis. In one case, it tried to understand how a rocket attack on the Antonovsky Bridge leading out of occupied Kherson impacted the morale of Russian soldiers and citizens. The ability to measure and cohere activity to achieve maximum physical and cognitive effect is a huge advantage for an army with limited resources.

Beyond its increased capacity to understand and exploit the information environment for military effect, Ukraine has used AI to increase the sophistication of its targeting. It has also applied such systems to bolster counter-intelligence efforts, for example, by trawling large pools of data on individuals and their circumstances to identify those who may be at risk of perpetrating espionage. Ukraine has also leveraged AI in its cyber defences since 2022, using such tools to scan for potential threats.

The Russians are equally looking to exploit AI for military benefit. Russia has upgraded its Lancet-3 loitering munition to make it more resistant to jamming, and its Marker unmanned ground vehicles now use AI software to, for example, scan and interpret their environment and reportedly recognise friendly camouflage.
The identification of targets is a common thread that is also playing out in the Israel Defense Forces’ (IDF) campaign in Gaza. The IDF has been using ‘The Gospel’, a decision-support system that helps planners identify buildings to strike. Israel may be using such target-identification technology even more widely. 

A report, which Israel denies, claims that the IDF has been using a system called ‘Lavender’ to target individuals with a 90% accuracy rate, drawing on a database of 37,000 Palestinians. The report alleges that the IDF considered 15 to 20 civilians acceptable collateral damage for each junior Hamas commander targeted, offering a possible explanation for the extremely high numbers of civilian casualties in Gaza, particularly early in the conflict when Israeli forces aimed to hit as many targets as possible.

The use of such tools represents a watershed moment for AI, demonstrating the technology is having a significant effect at the operational level of war. The United States has also embraced the technology and used systems developed under its flagship AI effort, called ‘Project Maven’, to identify targets.

The volume and intensity of IDF strikes and offensive activities enabled by AI, while achieving tempo on the battlefield, have also produced a less rigorous target-approval process. With less human due diligence, this has led to errors that have caused widespread public backlash against Israel and criticism from some of its most prominent allies, as in the case of a strike on an aid convoy.

Implications for decision-makers

The current use of AI brings both lessons and warnings for Western armed forces. Ukraine and Gaza validate the theory that AI-enabled synthesis and decision support can help generate more targets and a greater operational tempo. But they have also highlighted that AI is not a cure-all. Generating more targets drives up the requirement for humans in the loop to prosecute the greater volume, sapping resources.

There are other key takeaways that hedge against the techno-optimism that characterises the Western way of war. Despite the sophisticated use of AI, technological advantage rarely converts into an enduring battlefield advantage given the dialectical nature of warfare, with both sides adapting and counter-adapting in real time. Ukraine has exemplified this with the employment of direct-attack munitions, efforts to defeat them, and their evolution to overcome those defences, a testament to the Darwinian evolutionary dynamic in warfare. Additionally, despite the increasing employment of AI in both Gaza and Ukraine at the tactical and operational levels, technology in isolation rarely leads to an enduring strategic advantage. Other factors, such as industrial capacity, alliances, organisational structure and leadership, become increasingly significant at the strategic level of war.

There are other issues to ponder. The moral and legal implications of AI use, and the extent to which humans should be ‘in the loop’ or ‘on the loop’, have been debated for some time. But the current conflicts, in which a reported 20 seconds of human input can precede a strike, bring this thought experiment into the cold light of reality.

What is already clear is that while effective human–machine teaming will grow in importance, it is a mistake to believe that AI represents a strategic silver bullet to achieve victory. Evangelists of the US-dominant Third Offset philosophy, which proposes technological superiority to mitigate numerical disadvantage, may take heart from the Ukrainians’ innovative use of AI to fend off Russia. But the dynamic between Israel and Hamas may evolve to resemble the ill-fated American ‘war on terror’, where technological overmatch spluttered in the face of a resolute enemy united by an ideology immune to bombs and bullets.
