A Blog by Jonathan Low


Oct 17, 2016

Artificial Intelligence's Blind Spot: Garbage In, Garbage Out

What if the real problem is not robots smarter than humans with diabolical plans for universal dominance, but dumb machines to whom people have willingly ceded authority so those self-same humans can have more time for texting, taking selfies and writing snarky restaurant reviews? JL

Cory Doctorow reports in BoingBoing:

"People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world."
Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation and what to do about it.
Now, writing in Nature, the two argue that AI thinking is plagued by a "blind spot": "there are no agreed methods to assess the sustained effects of such applications on human populations." In other words, to quote Pedro Domingos from The Master Algorithm: "People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world."
The authors note that the AI industry is already taking steps to avoid the worst pitfalls of trusting too much in machine learning, like having humans step in and make exceptions when the AI gets it wrong, and embedding "values in design" to make sure that we're using AI in a way that reflects our wider goals. But they advocate a more basic "social-systems analysis" approach to containing AI's failures: "investigating how differences in communities’ access to information, wealth and basic services shape the data that AI systems train on."
In other words, garbage in, garbage out. When you train an AI on data from biased activities -- like police stop-and-frisks -- you get biased advice from the AI. This is absolutely fundamental, the major pitfall of all statistical analysis: sampling bias.
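To make the sampling-bias point concrete, here is a minimal, purely hypothetical Python sketch (not from the article, and with invented numbers): two neighbourhoods with identical underlying rates look very different in the recorded data once one of them is stopped five times as often, so any model trained on those records inherits the skew.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical illustration: two neighbourhoods with the SAME underlying rate
# of the behaviour being policed, but very different stop rates.
TRUE_RATE = {"A": 0.05, "B": 0.05}          # identical ground truth
STOPS_PER_YEAR = {"A": 10_000, "B": 2_000}  # A is patrolled five times as heavily

recorded_incidents = Counter()
for hood, stops in STOPS_PER_YEAR.items():
    for _ in range(stops):
        if random.random() < TRUE_RATE[hood]:
            recorded_incidents[hood] += 1   # only stops ever generate records

# A naive "predictive" score built on the records alone ranks A far above B,
# even though the true rates are identical -- garbage in, garbage out.
for hood in ("A", "B"):
    print(hood, "recorded incidents:", recorded_incidents[hood])
```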
A social-systems approach would consider the social and political history of the data on which the heat maps are based. This might require consulting members of the community and weighing police data against this feedback, both positive and negative, about neighbourhood policing. It could also mean factoring in findings by oversight committees and legal institutions. A social-systems analysis would also ask whether the risks and rewards of the system are being applied evenly — so in this case, whether the police are using similar techniques to identify which officers are likely to engage in misconduct, say, or violence.
As another example, a 2015 study showed that a machine-learning technique used to predict which hospital patients would develop pneumonia complications worked well in most situations. But it made one serious error: it instructed doctors to send patients with asthma home even though such people are in a high-risk category. Because the hospital automatically sent patients with asthma to intensive care, these people were rarely on the ‘required further care’ records on which the system was trained. A social-systems analysis would look at the underlying hospital guidelines, and other factors such as insurance policies, that shape patient records.
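The confounder here is the hospital's own triage policy, which shapes what ever gets recorded. A small hypothetical Python sketch (all probabilities invented for illustration) shows how routing asthma patients straight to intensive care can make them look low-risk in the very records a model is trained on.

```python
import random

random.seed(1)

# Hypothetical illustration of the pneumonia finding described above.
# True risk: asthma patients are MORE likely to develop complications.
def true_complication_prob(has_asthma: bool) -> float:
    return 0.30 if has_asthma else 0.10

records = []  # (has_asthma, complication) pairs as a model would see them
for _ in range(50_000):
    has_asthma = random.random() < 0.15
    complication = random.random() < true_complication_prob(has_asthma)

    # Hospital policy (the confounder): asthma patients go straight to intensive
    # care, which prevents most complications from ever appearing in the record.
    if has_asthma and complication and random.random() < 0.9:
        complication = False  # averted by ICU care, so never logged

    records.append((has_asthma, complication))

def observed_rate(asthma_flag: bool) -> float:
    group = [c for a, c in records if a == asthma_flag]
    return sum(group) / len(group)

# The recorded data now says asthma patients are LOWER risk, so a model
# trained on it would recommend sending them home.
print("observed complication rate, asthma   :", round(observed_rate(True), 3))
print("observed complication rate, no asthma:", round(observed_rate(False), 3))
```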
There is a blind spot in AI research [Kate Crawford and Ryan Calo/Nature]
