A Blog by Jonathan Low

 

May 11, 2021

The Pentagon is Moving Towards Letting AI Control Weapons

Sure. What could go wrong?    JL

Will Knight reports in Wired:

Military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings. The drill was one of several conducted last summer to test how artificial intelligence could help expand the use of automation in military systems, including in scenarios that are too complex and fast-moving for humans to make every critical decision. The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed. AI could take on more of the work of identifying and distinguishing potential targets while humans make high-level decisions. “I think that's where we're going.”

LAST AUGUST, SEVERAL dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.


The drill was one of several conducted last summer to test how artificial intelligence could help expand the use of automation in military systems, including in scenarios that are too complex and fast-moving for humans to make every critical decision. The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed.

General John Murray of the US Army Futures Command told an audience at the US Military Academy last month that swarms of robots will force military planners, policymakers, and society to think about whether a person should make every decision about using lethal force in new autonomous systems. Murray asked: “Is it within a human's ability to pick out which ones have to be engaged” and then make 100 individual decisions? “Is it even necessary to have a human in the loop?” he added.

Other comments from military commanders suggest interest in giving autonomous weapons systems more agency. At a conference on AI in the Air Force last week, Michael Kanaan, director of operations for the Air Force Artificial Intelligence Accelerator at MIT and a leading voice on AI within the US military, says thinking is evolving. He says AI should take on more of the work of identifying and distinguishing potential targets, while humans make high-level decisions. “I think that's where we're going,” Kanaan says.

At the same event, Lieutenant General Clinton Hinote, deputy chief of staff for strategy, integration, and requirements at the Pentagon, says that whether a person can be removed from the loop of a lethal autonomous system is “one of the most interesting debates that is coming, [and] has not been settled yet.”

A report this month from the National Security Commission on Artificial Intelligence (NSCAI), an advisory group created by Congress, recommended, among other things, that the US resist calls for an international ban on the development of autonomous weapons.

Timothy Chung, the Darpa program manager in charge of the swarming project, says last summer’s exercises were designed to explore when a human drone operator should, and should not, make decisions for the autonomous systems. For example, when faced with attacks on several fronts, human control can sometimes get in the way of a mission, because people are unable to react quickly enough. “Actually, the systems can do better from not having someone intervene,” Chung says.

The drones and the wheeled robots, each about the size of a large backpack, were given an overall objective, then tapped AI algorithms to devise a plan to achieve it. Some of them surrounded buildings while others carried out surveillance sweeps. A few were destroyed by simulated explosives; some identified beacons representing enemy combatants and chose to attack.
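Darpa has not published the planning software behind the exercise, but the basic idea the article describes, a swarm receiving one overall objective and dividing it into roles like surrounding a building or sweeping an area, can be illustrated with a minimal sketch. Everything below (the task names, the coordinates, and the greedy nearest-robot assignment) is an illustrative assumption, not the actual swarm system:

    # Illustrative sketch only: a toy planner that splits an overall objective
    # into roles and assigns each role to the nearest still-unassigned robot.
    # All names, positions, and the greedy logic are assumptions for the sake
    # of illustration, not a description of Darpa's software.
    import math

    TASKS = [
        ("surround_building_A", (120.0, 40.0)),
        ("surround_building_B", (200.0, 95.0)),
        ("surveillance_sweep_north", (60.0, 180.0)),
        ("surveillance_sweep_south", (60.0, -150.0)),
    ]

    ROBOTS = {
        "drone_1": (0.0, 0.0),
        "drone_2": (150.0, 60.0),
        "ugv_1": (50.0, 170.0),
        "ugv_2": (40.0, -120.0),
    }

    def assign_tasks(robots, tasks):
        """Greedily pair each task with the closest robot not yet assigned."""
        assignments = {}
        free = dict(robots)
        for task_name, task_pos in tasks:
            if not free:
                break
            nearest = min(free, key=lambda r: math.dist(free[r], task_pos))
            assignments[nearest] = task_name
            del free[nearest]
        return assignments

    if __name__ == "__main__":
        for robot, task in assign_tasks(ROBOTS, TASKS).items():
            print(f"{robot} -> {task}")

A real swarm would re-plan continuously as robots are destroyed or new targets appear, which is part of why hand-approving every individual action becomes impractical at swarm scale, the situation the exercise was designed to probe.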

The US and other nations have used autonomy in weapons systems for decades. Some missiles can, for instance, autonomously identify and attack enemies within a given area. But rapid advances in AI algorithms will change how the military uses such systems. Off-the-shelf AI code capable of controlling robots and identifying landmarks and targets, often with high reliability, will make it possible to deploy more systems in a wider range of situations.

But as the drone demonstrations highlight, more widespread use of AI will sometimes make it more difficult to keep a human in the loop. This might prove problematic, because AI technology can harbor biases or behave unpredictably. A vision algorithm trained to recognize a particular uniform might mistakenly target someone wearing similar clothing. Chung says the swarm project presumes that AI algorithms will improve to a point where they can identify enemies with enough reliability to be trusted.
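One common way to frame the reliability question is as a confidence gate: software acts on identifications it scores as highly certain and defers uncertain ones to a human operator. The sketch below is a hypothetical illustration of that routing logic; the threshold value, labels, and interface are assumptions, not a description of any fielded system:

    # Illustrative sketch only: a hypothetical confidence gate that routes
    # high-confidence identifications to automated action and defers
    # low-confidence ones to a human operator. Threshold and labels are
    # assumed values for illustration.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        track_id: str
        label: str         # e.g. "simulated_combatant" or "noncombatant"
        confidence: float   # classifier score in [0.0, 1.0]

    AUTO_THRESHOLD = 0.97   # assumed cutoff: only very confident calls bypass review

    def route_detection(det: Detection) -> str:
        """Return how a detection should be handled: ignored, deferred, or acted on."""
        if det.label != "simulated_combatant":
            return "ignore"
        if det.confidence >= AUTO_THRESHOLD:
            return "act_autonomously"    # machine proceeds without waiting
        return "defer_to_operator"       # uncertain: hold for a human decision

    if __name__ == "__main__":
        for d in [
            Detection("t1", "simulated_combatant", 0.99),
            Detection("t2", "simulated_combatant", 0.80),  # similar clothing, low confidence
            Detection("t3", "noncombatant", 0.95),
        ]:
            print(d.track_id, route_detection(d))

Where such a threshold should sit, and whether deferring to a human is even feasible at swarm speed, are exactly the questions Chung and the commanders quoted above are wrestling with.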

Use of AI in weapons systems has become controversial in recent years. Google faced employee protest and public outcry in 2018 after supplying AI technology to the Air Force through a project known as Maven.

To some degree, the project is part of a long history of autonomy in weapons systems, with some missiles already capable of carrying out limited missions independent of human control. But it also shows how recent advances in AI will make autonomy more attractive and inevitable in certain situations. What's more, it highlights the trust that will be placed in technology that can still behave unpredictably.

Paul Scharre, an expert at the Center for New American Security and author of Army of None: Autonomous Weapons and the Future of War, says it is time to have a more sophisticated discussion about autonomous weapons technology. “The discussion surrounding ‘humans in the loop’ ought to be more sophisticated than simply a binary ‘are they or aren't they?’” Scharre says. “If a human makes a decision to engage a swarm of enemy drones, does the human need to individually select each target?”

The Defense Department issued a policy on autonomous weapons in November 2012, stating that autonomous weapons systems need to have human oversight—but this need not mean soldiers making every decision.

Those who believe that militaries could use AI to cross a Rubicon when it comes to human responsibility for lethal force see things differently.

“Lethal autonomous weapons cheap enough that every terrorist can afford them are not in America's national security interest,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, a nonprofit that opposes autonomous weapons.

Tegmark says AI weapons should be “stigmatized and banned like biological weapons.” The NSCAI report's opposition to a global ban is a strategic mistake, he says: “I think we'll one day regret it even more than we regret having armed the Taliban.”
