A Blog by Jonathan Low

 

Dec 26, 2018

How Come Interpretable AI Became Such A Hot Topic?

Because executives are frustrated that it doesn't automatically solve all their problems. JL

George Seif reports in Medium:

Companies have been pushing hard to adopt AI into every industry you can think of, from healthcare to retail to finance to self-driving cars. AI gives these companies the incredible ability to create powerful prediction systems that can automate almost any repetitive human task that they can get the data for. The idea that AI needed to be fully interpretable came about when people saw their AI systems making mistakes. They wanted to understand why this happened so that they could improve their systems.
Why did interpretable AI become such a hot topic?
AI has been booming everywhere since 2012, when massive breakthroughs in Computer Vision and Natural Language Processing began to emerge. Since then, companies have been pushing hard to adopt AI into every industry you can think of, from healthcare to retail to finance to self-driving cars. AI gives these companies the incredible ability to create powerful prediction systems that can automate almost any repetitive human task that they can get the data for.
The idea that AI needed to be fully interpretable came about when people saw their AI systems making mistakes. They wanted to understand why this happened so that they could improve their systems.
In some fields, the lack of interpretability brings many legal issues into play. What happens if AI makes a mistake? Who’s responsible for the damage? In finance, a mistake could mean billions of dollars in lost revenue. In healthcare, mistakes cost human lives.
But interpretability isn't the answer to those challenges. It might provide us with better ways to improve our AI systems, or perhaps give us better peace of mind knowing where the system's mistakes came from. But it doesn't get us much closer to solving AI fundamentally, nor does it solve the ethical and legal issues.
We could have an AI system that overlooks a dangerous cancer, causing a patient's death. If that AI were more interpretable, perhaps we could trace where its mistake came from and use that to improve the system. But we already knew that the mistake came from our AI. The same ethical and legal issues still apply; nothing has been solved on that front.
What would happen if a doctor made that mistake?
Several months ago I attended a Machine Learning event in Toronto. An intriguing question was posed to the Q&A panel:
If a self-driving car had to choose between running someone over and crashing into a tree, killing all its passengers, which one would it choose?
The answer given by one of the panel members was intriguing: well, what would a human do? There is no definitive answer. Humans have their own biases and unique internal decision-making systems. There's no right way to answer this question.
We don't fully understand human thinking, but we still accept its mistakes. If you make a mistake on a math test, you might be able to work your way back and figure out where the mistake occurred and how you can fix it for next time. But no one has any clue how the brain itself internally arrived at its conclusions! Neuroscience really isn't there yet, and we have no major problems getting by without that knowledge.
If we want to tackle some of the ethical and legal issues of AI within industries that are sensitive to mistakes, such as healthcare and finance, then it doesn’t really make sense to work on interpretability. It’s more about applying AI in the right way.
There are such strong moral considerations in health care decision making that it's not even appropriate to have AI make those decisions. Those decisions should be made by a human expert. AI should be used as a tool to give that expert information that can aid the speed and accuracy of the decision, but not to make the decision itself.
If it has to do with morality, emotions, or anything that is inherently human and not machine, it should remain fully under human control. AI is a tool that can lend a helping hand, but it's not the captain of the ship.
AI doesn't need to be fully interpretable. It is important that we have some interpretability at a high level: how the system works, what its different parts are, and which part made the mistake.
But we don't need to know the nitty-gritty details. Knowing them won't help us solve intelligence, nor will it resolve the ethical and legal issues that AI inherently brings to the table.
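To make that idea of "high-level" interpretability concrete, here is a minimal sketch (not from the original article) of one black-box approach: permutation importance, as implemented in scikit-learn, reports which inputs a model's predictions lean on most without dissecting the model's internals. The dataset and model below are illustrative assumptions only; any trained estimator would do.

# A minimal sketch of high-level interpretability: treat the model as a black
# box and ask which inputs its predictions depend on most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model, chosen only for the sake of a runnable example.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model relies on most.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

Output like this points to where to look when the system errs, which is roughly the level of insight the author argues is useful, without requiring a neuron-by-neuron account of the model.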
What will help us solve and replicate human intelligence is an understanding of how the brain's different parts function to form the whole, how it works as a system.
