A Blog by Jonathan Low

 

Nov 2, 2017

If Artificial Intelligence's Decisions Can't Be Explained, Should They Be Used?

The issue is that scientists don't yet entirely understand the implications and consequences of the technological developments they are promoting. Yet these 'solutions' are being deployed widely with little or no oversight. Is that the right way to proceed?

Dave Gershgorn reports in Quartz:

Elementary school textbooks will tell you that the American experiment started with the phrase “No taxation without representation.” If the British monarchy couldn’t show the receipts for why taxes were levied, the colonies wouldn’t pay them.
Nearly 250 years in, there’s a new force at work on the North American continent whose decisions can’t be explained: Artificial intelligence. A group of researchers called AI Now from New York University, Google Open Research, and Microsoft Research caution in an Oct. 18 report that “black box” algorithms now used in criminal justice, healthcare, and education should be phased out until they’re better understood.
The authors are largely referring to deep learning, a subfield of AI research made popular by Google, Facebook, Microsoft, and Amazon, which uses millions of tiny computations to make a single decision, such as recognizing a face in an image, in a way meant to imitate the human brain. But just as our brains' signals are too complex to interpret easily, so are the inner workings of a deep learning algorithm. Why did Facebook identify your face to tag it in a photo, but not your friend's? Facebook's engineers don't know with 100% certainty; they only know that the system works with high accuracy.
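To make that opacity concrete, here is a minimal sketch in Python (random, untrained weights and hypothetical labels, not any company's real system). Even in this toy two-layer network, the "decision" emerges from thousands of multiply-adds, and no single parameter carries a human-readable reason:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: a flattened 16x16 grayscale image.
x = rng.random(256)

# Two layers of weights. They are random here for illustration; a trained
# network's weights would be tuned by gradient descent but just as opaque.
W1 = rng.normal(size=(64, 256))
W2 = rng.normal(size=(2, 64))

hidden = np.maximum(0, W1 @ x)     # ReLU activation
logits = W2 @ hidden
decision = int(np.argmax(logits))  # hypothetical labels: 0 = "no face", 1 = "face"

print("decision:", decision)
print("parameters behind it:", W1.size + W2.size)  # 16,512 numbers, no rationale

Scale that toy up to the millions of parameters in a production face-recognition model, and the interpretability problem the AI Now report describes follows directly.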
While researchers are now backtracking to try to understand why this kind of AI makes the decisions it does, some government institutions are already blindly following the direction the algorithms give. A ProPublica investigation found that an algorithmic system used in criminal sentencing was biased against black people, not by reading the color of their skin but by relying on flawed data correlated with race. Teachers in Texas recently won a case challenging the use of an algorithm to evaluate their job performance; a federal court found that the unexplainable software violated the teachers' 14th Amendment right to due process.
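The mechanism behind the ProPublica finding is easier to see with a small synthetic example (hypothetical feature names and made-up data, not the actual sentencing system). A model that never sees race can still score one group as riskier if it leans on a feature, such as neighborhood, that is correlated with race:

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Protected attribute, never shown to the model: group 0 or group 1.
group = rng.integers(0, 2, size=n)

# Both groups have identical true risk by construction.
true_risk = rng.random(n)
reoffended = (rng.random(n) < true_risk).astype(float)

# Proxy feature: strongly correlated with group, weakly with true risk.
neighborhood = 0.8 * group + 0.2 * true_risk + rng.normal(0.0, 0.1, size=n)

# Fit a least-squares risk score on the proxy alone.
X = np.column_stack([np.ones(n), neighborhood])
w, *_ = np.linalg.lstsq(X, reoffended, rcond=None)
scores = X @ w

print("mean score, group 0:", round(scores[group == 0].mean(), 3))
print("mean score, group 1:", round(scores[group == 1].mean(), 3))
# Group 1 is scored as riskier despite identical true risk, because the
# model's only signal is a race-correlated proxy.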
“The use of such systems by public agencies raises serious due process concerns, and at a minimum such systems should be available for public auditing, testing, and review, and subject to accountability standards,” the AI Now report says.
The report also details how automated decision-making can be distorted by skewed training data and by the homogeneity of the people who build these systems, effects that are then masked by the algorithms' inscrutability.
Tech companies that helped usher in this technology aren't safe from similar criticism: Facebook and Google, whose algorithms are given autonomy over what users see on their respective sites, are both entrenched in public battles over misinformation and propaganda gaming their systems.
