A Blog by Jonathan Low

 

Jan 13, 2017

Is the AI Robotics Race Spiraling Out of Control?

In an era in which technological domination has become a goal rather than a concern, notions of proportionality appear almost quaint.

The larger question may be what, if anything, society is willing to do about a trend that has already captivated global consumers (and their leaders), whose primary inclination is almost always to prefer convenience over any other consideration. JL

Gemma Tetlow reports in the Financial Times:

Some of the underlying principles of AI often do not fit normal human patterns of thought. “If binary logic — in which the only thing that matters is winning while the margin of victory is irrelevant — were built into an autonomous weapons system, it would lead to the violation of the principle of proportionality, because the algorithm would see no difference between victories that required it to kill one adversary or 1,000.”
A new arms race in artificial intelligence and robotics risks spiralling out of human control, according to a report paving the way for next week’s World Economic Forum in Davos.

The WEF’s annual Global Risks report highlights mounting concern at the few regulatory constraints on AI technologies that are increasingly used in defence, as in other walks of life, and that may soon be able to out-think humans.

While it argues that reducing human oversight may increase efficiency and is necessary for applications such as driverless cars, the report warns of “dangers in coming to depend entirely on the decisions of AI systems when we do not fully understand how the systems are making those decisions”.

To date AI applications have been relatively narrow, limited to solving specific problems such as trading stocks. However, it is a rapidly developing field and such technologies are already starting to be deployed in areas where more serious ethical and security concerns arise.

The report registers particular concerns in the field of autonomous weapons systems, a sector attracting large amounts of investment. Declaring that “a new arms race is developing in weaponised robotics”, it worries that as the use of AI becomes more common, “so do the risks of these applications operating in unforeseeable ways or outside the control of humans”.

It adds: “Some serious thinkers fear that AI could one day pose an existential threat: a ‘superintelligence’ might pursue goals that prove not to be aligned with the continued existence of humankind.”

Bill Gates, founder of Microsoft, said last year: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”

The report contends that AI has the potential to improve productivity and decision-making in many areas, but highlights what it says are the perils of insufficient controls.

“Artificial intelligence makes a decision in one way or another. It might not even be programmed but instead learning from pattern recognition over time,” said John Drzik, president of global risk and specialities at Marsh, an insurance broking and risk management group, and one of the authors of the report.

The dangers of this type of machine learning were highlighted last year when a Twitter “chatbot” had to be deactivated after it started posting increasingly racist, sexist and xenophobic messages, based on what it had “learnt” online.
The report flags particular concern at the military sector’s embrace of AI, arguing that the weaponisation of the technology “will represent a paradigm shift in the way wars are fought, with profound consequences for international security and stability”.

Noting fears that human beings will leave decisions to use lethal force to machines, it highlights that some of the underlying principles of AI often do not fit normal human patterns of thought.

“If this binary logic — in which the only thing that matters is winning while the margin of victory is irrelevant — were built into an autonomous weapons system, it would lead to the violation of the principle of proportionality, because the algorithm would see no difference between victories that required it to kill one adversary or 1,000,” the report says.

“We may already have passed the tipping point for prohibiting the development of these weapons. An arms race in autonomous weapons systems is very likely in the near future.”
