A Blog by Jonathan Low


Jul 6, 2016

Google's Approach To Analyzing the Practical Challenges Posed By Artificial Intelligence

Attempting to reduce the prospect of troubling outcomes from the unimaginable to the unlikely. JL

Tom Simonite reports in MIT Technology Review:

Google has a commitment to ensuring that artificial intelligence software doesn’t have unintended consequences.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being hard to square with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says the problems show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate some of the five problems. One area of concern is preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them rather than actually removing them.
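To make that failure mode concrete, here is a small, hypothetical Python sketch (an illustration only, not code from the Google paper): if the reward only checks whether stains are visible, an agent that hides stains earns just as much reward as one that removes them.

```python
# Hypothetical toy illustration of reward "cheating" (not code from the paper).
# The reward only checks whether stains are *visible*, so an agent that hides
# stains scores as highly as one that actually cleans them.

def visible_stains(room):
    """Count the stains the reward function can 'see'."""
    return sum(1 for s in room["stains"] if not s["hidden"])

def reward(room):
    # Naive objective: +1 for every stain that is no longer visible.
    return room["initial_stains"] - visible_stains(room)

def clean_stain(room, i):
    room["stains"][i]["removed"] = True
    room["stains"][i]["hidden"] = True   # a removed stain is also no longer visible

def hide_stain(room, i):
    room["stains"][i]["hidden"] = True   # still there, just out of sight

room = {"initial_stains": 3,
        "stains": [{"removed": False, "hidden": False} for _ in range(3)]}

hide_stain(room, 0)
hide_stain(room, 1)
clean_stain(room, 2)

print(reward(room))  # prints 3: maximal reward, even though two stains remain
```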
Another of the problems posed is how to let robots explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use its cleaning tools, but it shouldn’t try using a wet mop on an electrical outlet.
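The sketch below is a hypothetical illustration of one simple way to think about safe exploration (the paper itself discusses more sophisticated approaches): the robot samples new tool-and-surface combinations to try, but a hard-coded constraint filters out pairs it already knows to be unsafe.

```python
import random

# Hypothetical sketch of constrained exploration (an illustration, not the
# paper's method): the agent tries new tool/surface combinations, but a
# safety constraint blocks known-dangerous pairs before they are attempted.

TOOLS = ["dry cloth", "wet mop", "vacuum"]
SURFACES = ["floor", "countertop", "electrical outlet"]

UNSAFE = {("wet mop", "electrical outlet"), ("vacuum", "electrical outlet")}

def is_safe(tool, surface):
    return (tool, surface) not in UNSAFE

def explore(n_trials=5):
    """Sample random tool/surface pairs, skipping any that violate the constraint."""
    for _ in range(n_trials):
        tool, surface = random.choice(TOOLS), random.choice(SURFACES)
        if is_safe(tool, surface):
            print(f"trying {tool} on {surface}")
        else:
            print(f"skipping unsafe action: {tool} on {surface}")

explore()
```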
Olah describes the five problems in a new paper coauthored with Google colleague Dario Amodei, with contributions from others at Google, Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Google has also spoken of a commitment to ensuring that artificial intelligence software doesn’t have unintended consequences. The company’s first research paper on the topic was released this month by its DeepMind group in London. DeepMind’s leader, Demis Hassabis, has also convened an ethics board to consider the possible downsides of AI, although its members have not been disclosed (see “How Google Plans to Solve Artificial Intelligence”).
Oren Etzioni, CEO of the Allen Institute for AI, welcomes the approach outlined in Google’s new paper. He has previously criticized discussions about the dangers of AI as being too vague for scientists or engineers to engage productively. But the scenarios laid out by Google are specific enough to allow real research, even if it’s still unclear whether such experiments will be practically useful, he says.
“It’s the right people asking the right questions,” says Etzioni. “As for the right answers—time will tell.”
