A Blog by Jonathan Low

 

Nov 23, 2014

Algorithmic Arrogance

We have ceded a lot of authority to a belief system. We had better hope that faith is not misplaced.

Curiously, that belief system is based on information. On data, actually. Which is probably what makes it so dangerous, as the following article explains.

That belief is in the infallibility of numbers. Of algorithms, to be precise, but however you define it, it is a canon based on the superiority of statistical analysis over human judgment.

There are many reasons for this evolution, and sobering experience with our own species should not be entirely discounted as a cause of the change. But the fact remains that we have handed so much power to machines and the processes that run them mostly out of fear. Because we are afraid of mistakes and their financial or professional cost, despite the 'let's embrace the lessons of failure' ethos that supposedly prevails in some quarters. Because we are afraid of consequences generally in a technologically driven economy which can quickly turn such knowledge to our disadvantage. And because we are afraid of the complexity that has come to rule our world.

Why we think that algorithms designed by other humans with all their faults and biases will make better decisions than mere humans with the same characteristics is a reflection of our technological myopia. It is, quite possibly, a transitional state which will change as we more fully integrate technology into our lives and learn to dominate it more successfully than we currently do. But until then it would be advisable to remember that numbers and the processes that manipulate them are no more accurate than anything else humans create. JL

Luke Dormehl reports in Wired:

A single human showing explicit bias can only ever affect a finite number of people. An algorithm, on the other hand, has the potential to impact the lives of exponentially more.
On April 5, 2011, 41-year-old John Gass received a letter from the Massachusetts Registry of Motor Vehicles. The letter informed Gass that his driver’s license had been revoked and that he should stop driving, effective immediately. The only problem was that, as a conscientious driver who had not received so much as a traffic violation in years, Gass had no idea why it had been sent.
After several frantic phone calls, followed up by a hearing with Registry officials, he learned the reason: his image had been automatically flagged by a facial-recognition algorithm designed to scan through a database of millions of state driver’s licenses looking for potential criminal false identities. The algorithm had determined that Gass looked sufficiently like another Massachusetts driver that foul play was likely involved—and the automated letter from the Registry of Motor Vehicles was the end result.
The RMV itself was unsympathetic, claiming that it was the accused individual’s “burden” to clear his or her name in the event of any mistakes, and arguing that the pros of protecting the public far outweighed the inconvenience to the wrongly targeted few.
John Gass is hardly alone in being a victim of algorithms gone awry. In 2007, a glitch in the California Department of Health Services’ new automated computer system terminated the benefits of thousands of low-income seniors and people with disabilities. Without their premiums paid, Medicare canceled those citizens’ health care coverage.
Where the previous system had notified people considered no longer eligible for benefits by sending them a letter through the mail, the replacement CalWIN software was designed to cut them off without notice, unless they manually logged in and prevented this from happening. As a result, a large number of those whose premiums were discontinued did not realize what had happened until they started receiving expensive medical bills through the mail. Even then, many lacked the necessary English skills to be able to navigate the online health care system to find out what had gone wrong.
Similar faults have seen voters expunged from electoral rolls without notice, small businesses labeled as ineligible for government contracts, and individuals mistakenly identified as “deadbeat” parents. In a notable example of the latter, 56-year-old mechanic Walter Vollmer was incorrectly targeted by the Federal Parent Locator Service and issued a child-support bill for the sum of $206,000. Vollmer’s wife of 32 years became suicidal in the aftermath, believing that her husband had been leading a secret life for much of their marriage.

Equally alarming is the possibility that an algorithm may falsely profile an individual as a terrorist: a fate that befalls roughly 1,500 unlucky airline travelers each week. Those fingered in the past as the result of data-matching errors include former Army majors, a four-year-old boy, and an American Airlines pilot—who was detained 80 times over the course of a single year.
Many of these problems are the result of the new roles algorithms play in law enforcement. As slashed budgets lead to increased staff cuts, automated systems have moved from simple administrative tools to become primary decision-makers.
In a number of cases, the problem is about more than simply finding the right algorithm for the job, but about the problematic nature of believing that any and all tasks can be automated to begin with. Take the subject of using data-mining to uncover terrorist plots, for instance. With such attacks statistically rare and not conforming to well-defined profiles in the way that, for example, Amazon purchases do, individual travelers end up surrendering large amounts of personal privacy to data-mining algorithms, with little but false alarms to show for it. As renowned computer security expert Bruce Schneier has noted:
Finding terrorism plots . . . is a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.
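Schneier's point is, at bottom, a base-rate problem, and a few lines of arithmetic make it concrete. The sketch below is hypothetical: the screening volume, number of genuine plotters, and error rates are invented for illustration, not drawn from the book or from any real system.

```python
# Hypothetical base-rate sketch: why screening for a very rare event yields
# mostly false alarms, even when the detector looks highly accurate.
# Every figure below is an invented assumption for illustration only.

travelers_per_week = 10_000_000   # assumed weekly screening volume
true_plotters = 10                # assumed genuine threats in that pool
true_positive_rate = 0.99         # assumed chance a real threat is flagged
false_positive_rate = 0.0001      # assumed chance an innocent traveler is flagged

flagged_real = true_plotters * true_positive_rate
flagged_innocent = (travelers_per_week - true_plotters) * false_positive_rate
total_flagged = flagged_real + flagged_innocent

print(f"Flagged travelers per week: {total_flagged:,.0f}")
print(f"Of those, genuine threats:  {flagged_real:.0f}")
print(f"Chance a flagged traveler is a real threat: {flagged_real / total_flagged:.2%}")
# With these assumptions, roughly 1,000 innocent travelers are flagged each week
# and about 99% of all alerts are false alarms -- more hay, not more needles.
```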
While it is clear why such emotive subjects would be considered ripe for The Formula, the central problem once again comes down to the spectral promise of algorithmic objectivity. “We are all so scared of human bias and inconsistency,” says Danielle Citron, professor of law at the University of Maryland. “At the same time, we are overconfident about what it is that computers can do.”
The mistake, Citron suggests, is that we “trust algorithms, because we think of them as objective, whereas the reality is that humans craft those algorithms and can embed in them all sorts of biases and perspectives.” To put it another way, a computer algorithm might be unbiased in its execution, but, as noted, this does not mean that there is not bias encoded within it.
Implicit or explicit biases might be the work of one or two human programmers, or else come down to technological difficulties. For example, algorithms used in facial recognition technology have in the past shown higher identification rates for men than for women, and for individuals of non-white origin than for whites.
An algorithm might not target an African-American male for reasons of overt prejudice, but the fact that it is more likely to do this than it is to target a white female means that the end result is no different. Biases can also come in the abstract patterns hidden within a dataset’s chaos of correlations.
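One way to see how neutral execution can still produce unequal outcomes is to simulate it. The sketch below is purely illustrative: the group labels, population sizes, and false-match rates are invented assumptions, not figures from the facial-recognition studies mentioned above.

```python
import random

random.seed(0)

# Invented false-match rates for two hypothetical demographic groups.
# The matcher applies the same rule to everyone; only its error rate differs.
false_match_rate = {"group_A": 0.002, "group_B": 0.008}
population_size = {"group_A": 50_000, "group_B": 50_000}

for group, size in population_size.items():
    wrongly_flagged = sum(random.random() < false_match_rate[group] for _ in range(size))
    print(f"{group}: {wrongly_flagged} innocent people wrongly flagged out of {size:,}")

# No group is targeted "on purpose", yet group_B absorbs roughly four times as
# many wrongful flags as group_A -- bias in outcome without bias in intent.
```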
Consider the story of African-American Harvard University professor Latanya Sweeney, for instance. Searching on Google one day, Sweeney was shocked to notice that her search results were accompanied by ads asking, “Have you ever been arrested?” These ads did not appear for her white colleagues. Sweeney began a study that ultimately demonstrated that the machine-learning tools behind Google’s search were being inadvertently racist, by linking names more commonly given to black people to ads relating to arrest records.
A similar revelation is the fact that Google Play’s recommender system suggests that users who download Grindr, a location-based social-networking tool for gay men, also download a sex-offender location-tracking app. In both of these cases, are we to assume that the algorithms have made an error, or that they are revealing inbuilt prejudice on the part of their makers? Or, as is more likely, are they revealing distasteful large-scale cultural associations between—in the former case—black people and criminal behavior and—in the latter—homosexuality and predatory behavior?
Regardless of the reason, no matter how reprehensible these codified links might be, they demonstrate another part of algorithmic culture. A single human showing explicit bias can only ever affect a finite number of people. An algorithm, on the other hand, has the potential to impact the lives of exponentially more.
Excerpted from The Formula: How Algorithms Solve All Our Problems—and Create More
