A Blog by Jonathan Low

 

Apr 15, 2016

Should Humans Have Final Say on Firing Robot Weapons? And Why Is This Even a Question?

To most people, the answer seems obvious: humans should have the final say.

But some weapons experts think algorithmically driven weapons systems might make better decisions than humans. Assuming, of course, that there are no technological or mechanical glitches. And that someone responsible is overseeing those designing the algorithms. JL

John Markoff reports in the New York Times:

While some have argued that in the future, autonomous weapons might be able to better adhere to the laws of war than humans, an international debate is now emerging over whether it is possible to limit the evolution of weapons that make killing decisions without human involvement.
Two international arms control groups issued a report that called for maintaining human control over a new generation of weapons that are increasingly capable of targeting and attacking without the involvement of people.
The report, from Human Rights Watch and the Harvard Law School International Human Rights Clinic, was released at the opening of a weeklong United Nations meeting on autonomous weapons in Geneva. It potentially challenges an emerging United States military strategy that will count on technological advantages and increasingly depend on weapons systems that blend humans and machines.
That strategy, known as the Third Offset, seeks to exploit new technologies to maintain American military superiority. Pentagon officials have recently said that these technologies, particularly artificial intelligence software, will help rather than replace human soldiers who must make killing decisions.
“Machines have long served as instruments of war, but historically humans have always dictated how they are used,” the report, titled “Killer Robots and the Concept of Meaningful Human Control,” said.
Current United States military guidelines, published in 2012, call for commanders and weapons operators to exercise “appropriate levels of human judgment” over the use of force. The guidelines do not completely prohibit autonomous weapons, but require that high-level Pentagon officials authorize them. They draw a line between semiautonomous weapons, whose targets are chosen by a person, and fully autonomous weapons that can hunt and engage targets without intervention.
New weapons that will enter the United States arsenal as early as 2018 may make the distinction a vital one. One example is a missile, known as the Long Range Anti-Ship Missile, or L.R.A.S.M., which was initially designed by the Defense Advanced Research Projects Agency and will be manufactured by Lockheed Martin. This year, the Pentagon asked Congress to authorize $927 million over the next five years for the system.
The missile is being developed in large part because of concerns that American carriers will be required to operate farther from China because of its growing military power.
Yet the missile has raised concerns among critics because it is designed to be launched by a human operator, then fly to a targeted ship beyond human contact and make its final targeting decisions autonomously.
“I would argue that L.R.A.S.M. is intended primarily to threaten China and Russia and is only likely to be used in the opening shots of a nuclear war that would quite likely destroy our civilization and kill a large fraction, or most, or nearly all human beings,” said Mark A. Gubrud, a physicist and member of the International Committee for Robot Arms Control, a group working for the prohibition of autonomous weapons.
The ability to recall a weapon may be a crucial point in any ban on autonomous weapons, said Bonnie Docherty, the author of the report and a lecturer on law and senior clinical instructor at the International Human Rights Clinic at Harvard Law School.
Weapons specialists said the exact capabilities of systems like L.R.A.S.M. are often protected as classified information.
“We urge states to provide more information on specific technology so the international community can better judge what type and level of control should be required,” Ms. Docherty said.
The United States is not the only nation pursuing automated weapons. Britain, Israel and Norway have deployed missiles and drones that carry out attacks against enemy radar or tanks without direct human control.
The most recent United States military budget for the 2017 fiscal year calls for spending $3 billion on what it describes as “human machine combat teaming.” As machines become more capable and the pace of warfare quickens because of automation, many weapons specialists think that it will be challenging to keep humans in control.
Some nations are now calling for an international agreement limiting such weapons.
“There seems to be a broad consensus that, at some level, humans should be involved in lethal force,” said Paul Scharre, a senior fellow at the Center for a New American Security in Washington.

