A Blog by Jonathan Low


Jun 22, 2014

Robotic Rights and the Other Ethical Implications of Thinking Machines

We have a tendency to make it up as we go along and assume that everything will work out okay. As a civilization we have rarely devoted a lot of thought, or time, to the moral implications of, say, universal computer access or smartphone mobility.

But as machines are given greater intelligence as well as the power to build on that themselves, questions are being raised about rights and laws and morality and ethics. The issues cut two ways: what rights should thinking machines have in their interactions with humans - and what rights should humans have in their interactions with machines?

The philosophical or ideological question also has a political dimension: as technology and its agents take more jobs and, more to the point, the income they generate, what rights do citizens have to demand the legislation of limits - or anything else, for that matter? Who needs to be protected from whom, and to what degree?

This may seem absurdly abstract at the moment, but just wait until you have your first traffic accident with a driverless car, or a robotic device informs you that your services are no longer required. JL

The Financial Times comments:

In a world first for journalism, this editorial was written by an intelligent machine. Well no, not really. But such a statement may soon be true, given the rapid and accelerating pace of research into artificial intelligence (AI). Last week’s flurry of excitement about a computer apparently passing the Turing test – convincing a panel of judges that it was a 13-year-old boy during a five-minute text conversation – was partly a PR stunt but it did illustrate the progress made by natural language programs. Other manifestations of AI research range from Google’s driverless cars and neural networks to more academic projects that aim to simulate the workings of a human brain in silico.
Admittedly, there have been waves of excitement about AI before, originally in the 1950s and again with Japan's Fifth Generation Computer Project in the 1980s, which petered out when the researchers realised that they had seriously underestimated the technical barriers to making machines think like people. There seems to be more substance to the enthusiasm this time, both because computing and neuroscience are advancing so fast and because the private sector – and Silicon Valley in particular – is devoting substantial resources to AI research in addition to large publicly funded initiatives.
Of course, machines that match or exceed human intelligence could be life-enhancing for individuals (think for example of a language assistant that instantly and accurately translates conversation as you travel in a foreign country) and immensely useful for humanity as a whole (solving problems in fields from economics to the environment). But if AI is really on the threshold of reality, we need also to think seriously about its ethical and moral implications, which have engaged some philosophers and science fiction writers but are largely ignored by researchers themselves.
Even something as apparently straightforward as a driverless car can pose ethical questions. How will it react if an accident threatens the lives of its occupants and other road users? Will the car be programmed to protect its passengers from harm as far as possible, even if this puts others at risk? If it detects a huge lorry hurtling directly towards it and the only escape is to swerve into a group of pedestrians on the pavement, will it kill or injure them to reduce the likely death toll inside the car? Philosophers will recognise this as a futurist variant of the “trolley problem” in which the driver of a runaway trolley-car has to decide who will live and who will die in various accident scenarios.
Although the answers are far from straightforward, they will need to be addressed sooner rather than later, preferably in a public discussion. And there are many other types of robot whose behaviour will require some sort of moral dimension, going far beyond the famous “three laws of robotics” formulated 70 years ago by Isaac Asimov, as they become more intelligent and more influential in our lives.
Conversely there is the issue of robotic rights. At what point will machines have enough brainpower to deserve some legal protection against abuse, such as animals now receive?
Further ahead lies the possibility of superintelligence, far beyond the powers of the human brain, emerging from AI. Although many will see this as too futuristic a prospect to worry about, a small but growing band of scientists is warning that superintelligence out of human control poses one of the biggest “existential risks” to the future of our species over the next century or so – and that we should be thinking now about how to shape AI research in a way that maximises the chance of an outcome favourable to humanity. The stakes are so high that it is hard to disagree.
