A Blog by Jonathan Low

 

Jul 11, 2017

The Reason Intelligent Machines Are Being Asked To Explain How Their Minds Work

So we can get them to think more like humans? Is that even a benefit?

And what if AI systems are like teens and don't want to explain themselves to mere humans?

Or, even worse, what if to do so they have to dumb themselves down, rendering AI less useful? JL

Richard Waters reports in the Financial Times:

Deep learning systems have shown they can match humans in recognising images or driving a car. But even experts cannot tell why they come up with the answers they do. “You’re talking to an alien. It’s a different kind of a mind.” The difficulty in understanding precisely how artificial neural networks used in deep learning make their judgments threatens to delay use of AI. Shared understanding of the world between teacher and machine would provide common knowledge needed to communicate.
Researchers at Parc, a laboratory with links to some of Silicon Valley’s biggest breakthroughs, have just taken on a particularly thorny challenge: teaching intelligent machines to explain, in human terms, how their minds work. The project, one of several sponsored by the US Defense Advanced Research Projects Agency (Darpa), is part of the search for an answer to one of the hardest problems in artificial intelligence. Deep learning systems, the most advanced form of machine learning that is at the heart of recent breakthroughs in AI, have shown they can match humans in recognising images or driving a car. But even the experts cannot tell exactly why they come up with the answers they do.

“You’re in effect talking to an alien,” said Mark Stefik, the researcher heading the project. “It’s a different kind of a mind.”

The difficulty in understanding precisely how the artificial neural networks used in deep learning make their judgments threatens to delay use of AI. Darpa’s pursuit of so-called “explainable AI” reflects the need for the US military to be able to have full trust in robotic battlefield systems of the future.

Businesses and others looking to apply advanced AI face a similar problem. Doctors would already be using AI systems more widely if they could understand how they come up with their recommendations, said David Gunning, the program manager at Darpa leading its work in the area.

“Right now, I think this AI technology is eating the world, and [people] are going to need this,” said Mr Gunning.

AI systems are trained using large data sets, helping them to develop an “understanding” that can later be applied to the real world. But unforeseen situations can expose flaws that did not emerge during training. That was the case when a Tesla driver was killed last year after his car’s “autopilot” software failed to identify a white truck on a sunny day.

“How could you even know that was a hole in the training?” said Mr Stefik. “There isn’t any process to build trust. It’s a big black box — it can’t engage in a conversation with you.”

The attempt to teach AI a human way to express itself is one of more than 10 research projects that Darpa has already funded in the US to make AI more explainable. The research is due to run until the middle of 2021, by which time it could be incorporated in systems such as the robotic ships being tested by the navy. Like Darpa’s early funding of the internet, however, the project is also likely to have a much broader impact on how the technology is used.

The Parc project is using a human approach to interrogate the thought processes of AI. The researchers aim to use teachers to train AI systems in the same way that human students are taught, starting with simple concepts before building a deeper knowledge. This shared understanding of the world, or ontology, between teacher and machine would provide the common knowledge needed to communicate, said Mr Stefik.

This approach to finding better ways for machines and people to communicate could be well-suited to the “mixed workforces” of the future, when many people work alongside AI, Mr Stefik added.
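The article does not describe how any particular explainable-AI technique works. One common illustration of the black-box problem (separate from Parc's teaching approach and not part of the Darpa projects) is a "surrogate model": fitting a simple, readable model to mimic a complex one's predictions. The sketch below is purely illustrative; the data set, model choices and parameters are assumptions made for the example, using scikit-learn.

# Hypothetical sketch (not Parc's or Darpa's method): approximate a "black box"
# classifier with a readable surrogate decision tree, a common explainability technique.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# The "black box": a small neural network whose internal reasoning is opaque.
black_box = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
black_box.fit(X, y)

# The surrogate: a shallow tree trained to imitate the network's predictions,
# trading some fidelity for rules a person can actually read.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate matches the black box on {fidelity:.0%} of inputs")
print(export_text(surrogate, feature_names=list(data.feature_names)))

The printed tree is only an approximation of what the network does, which is exactly the trade-off Mr Gunning describes below: the most explainable model is rarely the best-performing one.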
Trust in intelligent machines would come from knowing they have been put through the same rigorous teaching that human students are subjected to, he said.

While representing one way for artificial intelligence to be made more understandable by humans, the approach might still prove limited in explaining the most advanced processes in deep learning, said Mr Gunning. The system would be limited by the amount of training that human teachers are able to impart, he added.

Other research projects funded by Darpa are using deep-learning systems to interpret other deep-learning systems. “We should use AI methods to make other AI methods more explainable,” said Oren Etzioni, head of the Allen Institute for Artificial Intelligence in Seattle.

Experts warn, however, that despite some promising recent research projects that have started to shine a light into the inner workings of deep learning, the most advanced AI may well never be fully understandable by humans.

“I don’t expect we will have complete explainability for the most complex deep-learning systems,” said Mr Gunning. “It will probably always be the case that the highest performing algorithm will not be as explainable as the lowest performing algorithm.”

According to researchers, this is likely to lead to mixed AI systems that apply several different algorithms, rather than only one, as well as techniques for AI systems to recognise when they are out of their depth and either hand control to humans or shut down.

“These systems can’t be 100 per cent — that’s the way they are designed,” said Sebastian Scherer, a robotics expert at Carnegie Mellon University. “Whenever you put these systems out in the real world, you have to make trade-offs. That’s very tricky.”
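The article names no specific mechanism for an AI system "recognising when it is out of its depth," but a minimal sketch of that idea is a confidence threshold: act on a prediction only when the model's own probability estimate clears a cut-off, and otherwise hand the decision to a person. The threshold value, model and helper name below are illustrative assumptions, not anything described by the researchers quoted here.

# Hypothetical sketch of "hand control to a human when out of depth":
# act on a prediction only when the model's own confidence clears a threshold.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

CONFIDENCE_THRESHOLD = 0.90  # illustrative cut-off, not a value from the article

def predict_or_defer(sample):
    """Return a class label, or None to signal that a human should take over."""
    probs = model.predict_proba([sample])[0]
    if probs.max() < CONFIDENCE_THRESHOLD:
        return None  # the system recognises it is out of its depth
    return int(probs.argmax())

for sample in X[:5]:
    label = predict_or_defer(sample)
    print("defer to human" if label is None else f"predicted class {label}")

Real deployments would calibrate the threshold against the cost of a wrong decision, which is the kind of trade-off Mr Scherer describes above.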
