A Blog by Jonathan Low

 

Jan 21, 2020

AI Will Make Important Decisions In Future Wars. With Autonomy

Once you start, how do you stop an autonomous AI? And what if you can't? JL

Simon Chandler reports in Forbes:

Adaptation is a necessary response to the ever-changing nature of inter-state conflict. Instead of open armed warfare between states and their armies, geopolitical rivalry is increasingly being fought in cyber-warfare, micro-aggressive standoffs, and trade wars. Multiple smaller events require defence forces to be more aware of what's happening in the world around them. "Crews are already facing information overload with thousands of sources of data, intelligence, and information." The most interesting–and worrying–element is the focus on introducing AI-enabled "autonomy" to defense capabilities.
Artificial intelligence isn't only a consumer and business-centric technology. Yes, companies use AI to automate various tasks, while consumers use AI to make their daily routines easier. But governments–and in particular militaries–also have a massive interest in the speed and scale offered by AI. Nation states are already using artificial intelligence to monitor their own citizens, and as the U.K.'s Ministry of Defence (MoD) revealed last week, they'll also be using AI to make decisions related to national security and warfare.
The MoD's Defence and Security Accelerator (DASA) has announced the initial injection of £4 million in funding for new projects and startups exploring how to use AI in the context of the British Navy. In particular, the DASA is looking to support AI- and machine learning-based technology that will "revolutionise the way warships make decisions and process thousands of strands of intelligence and data."
In this first wave of funding, the MoD will share £1 million between nine projects as part of DASA’s Intelligent Ship–The Next Generation competition. However, while the first developmental forays will be made in the context of the navy, the U.K. government intends any breakthroughs to form the basis of technology that will be used across the entire spectrum of British defensive and offensive capabilities.
"The astonishing pace at which global threats are evolving requires new approaches and fresh-thinking to the way we develop our ideas and technology," said U.K. Defence Minister James Heappey. "The funding will research pioneering projects into how AI and automation can support our armed forces in their essential day-to-day work."
More specifically, the project will be looking at how four concepts–automation, autonomy, machine learning, and AI–can be integrated into U.K. military systems and how they can be exploited to increase British responsiveness to potential and actual threats.
"This DASA competition has the potential to lead the transformation of our defence platforms, leading to a sea change in the relationships between AI and human teams," explains Julia Tagg, the technical lead at the MoD's Defence Science and Technology Laborator (Dstl). "This will ensure U.K. defense remains an effective, capable force for good in a rapidly changing technological landscape."
On the one hand, such an adaptation is a necessary response to the ever-changing nature of inter-state conflict. Instead of open armed warfare between states and their manned armies, geopolitical rivalry is increasingly being fought out in terms of such phenomena as cyber-warfare, micro-aggressive standoffs, and trade wars. As Julia Tagg explains, this explosion of multiple smaller events requires defence forces to be much more aware of what's happening in the world around them.
"Crews are already facing information overload with thousands of sources of data, intelligence, and information," she says. "By harnessing automation, autonomy, machine learning and artificial intelligence with the real-life skill and experience of our men and women, we can revolutionise the way future fleets are put together and operate to keep the U.K. safe."
That said, the most interesting–and worrying–element of the Intelligent Ship project is the focus on introducing AI-enabled "autonomy" to the U.K.'s defense capabilities. As a number of reports from the likes of the Economist, MIT Technology Review and Foreign Affairs have argued, AI-powered systems potentially come with a number of serious weaknesses. Like any code-based system, they're likely to contain bugs that can be attacked by enemies, while the existence of biases in data (as seen in the context of law and employment) indicates that algorithms may simply perpetuate the prejudices and mistakes of past human decision-making.
It's for such reasons that the increasing fondness of militaries for AI is concerning. Not only is the British government stepping up its investment in military AI, but the United States government earmarked $927 million for "Artificial Intelligence/Machine Learning investments to expand military advantage" in last year's budget. As for China, its government has reportedly invested "tens of billions of dollars" in AI capabilities, while Russia has recently outlined an ambitious general AI strategy for 2030. It's even developing “robot soldiers,” according to some reports.
So besides being the future of everything else, AI is likely to be the future of warfare. It will increasingly process defense-related information, filter such data for the greatest threats, make defence decisions based on its programmed algorithms, and perhaps even direct combat robots. This will most likely make national militaries "stronger" and more "capable," but it could come at the cost of innocent lives, and perhaps even the cost of escalation into open warfare. As the example of Stanislav Petrov in 1983 shows, automated defense systems can't always be trusted.
