A Blog by Jonathan Low

 

Nov 25, 2018

Follow the Data: Searching For Truth In the Age of Algorithms

As algorithms become more important in running human lives, so does figuring out how they work - and whether that makes socio-economic sense. JL

Thomas Hornigold reports in Singularity Hub:

In theory, the algorithms should make decisions based purely on the data, in a transparent way. (In) reality, algorithms are designed by people and draw their datasets from a biased world. Hidden prejudices may lead to unintended consequences. Overconfidence in algorithms’ performance, misinterpretation of statistics, and automated decision-making processes can make appealing these decisions extremely difficult. Algorithms are usually incapable of explaining “why” they made a decision: careful, statistical analysis is needed to disentangle the effects of all the variables considered.
You probably have a picture of a typical investigative journalist in your head. Dogged, persistent, he digs through paper trails by day and talks to secret sources in abandoned parking lots by night. After years of painstaking investigation, the journalist uncovers convincing evidence and releases the bombshell report. Cover-ups are exposed, scandals are surfaced, and sometimes the guilty parties are brought to justice.
This is a formula we all know and love. But what happens when, instead of investigating a corrupt politician or a fraudulent business practice, journalists are looking into the behavior of an algorithm?
In an ideal world, algorithmic decision-making would be better than that made by humans. If you don’t program your code to discriminate on the basis of age, gender, race, or sexuality, you might assume those factors won’t be taken into account. In theory, the algorithms should make decisions based purely on the data, in a transparent way.
Reality, however, is not ideal; algorithms are designed by people and draw their datasets from a biased world. Hidden prejudices may lead to unintended consequences. Furthermore, overconfidence in algorithms’ performance, misinterpretation of statistics, and automated decision-making processes can make it extremely difficult to appeal these decisions.
Even when decisions are appealed, algorithms are usually incapable of explaining “why” they made a decision: careful statistical analysis is needed to disentangle the effects of all the variables considered, and to determine whether or not the decision was unfair. This can make explaining the case to the general public, or to lawyers, very difficult.
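To make that last point concrete, here is a minimal sketch of the kind of statistical check an auditor might run: fit a simple model to a system’s logged decisions and ask whether a protected attribute still predicts the outcome once the other inputs are controlled for. The loan-approval setting, the column names, and the data below are all invented for illustration.

# Hypothetical audit sketch: does a protected attribute still predict the
# algorithm's decisions after controlling for its other inputs?
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Synthetic records standing in for the logged decisions of some system.
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "debt": rng.normal(20, 8, n),
    "protected_group": rng.integers(0, 2, n),  # 1 = member of a protected group
})

# Simulate a decision rule that quietly penalizes the protected group.
logit = 0.08 * df["income"] - 0.10 * df["debt"] - 0.6 * df["protected_group"] - 2.0
df["approved"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic regression on the observed decisions: a significant negative
# coefficient on protected_group is evidence of disparate treatment.
X = sm.add_constant(df[["income", "debt", "protected_group"]])
result = sm.Logit(df["approved"], X).fit(disp=0)
print(result.summary())

In a real audit, of course, the records would come from the system’s actual decisions, and obtaining them is usually the hard part.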

AI Behaving Badly

A classic example of recent investigative journalism about algorithms is ProPublica’s study of Broward County’s recidivism algorithm. The algorithm, which delivers “risk scores” assessing an accused person’s likelihood of committing more crimes, helps judges determine an appropriate sentence.
ProPublica found the algorithm to have a racial bias: it incorrectly assigned high risk scores to black defendants more often than to white defendants. Yet Northpointe, the company that made the software, argued it was unbiased, and that the higher rate of false positives for black defendants could be due to the fact that they are arrested more often by the police.
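Much of the disagreement comes down to which error rate you measure. The toy calculation below, using made-up counts rather than the real COMPAS data, shows how the same risk scores can look even-handed on one metric (how often a “high risk” label is correct) while producing very different false positive rates across groups.

# Toy illustration (made-up counts, not the real COMPAS data) of how one set
# of risk scores can look fair on one metric and biased on another.
import pandas as pd

rows = [
    # group, labeled high risk, actually reoffended, number of defendants
    ("black", 1, 1, 300), ("black", 1, 0, 200),
    ("black", 0, 1, 100), ("black", 0, 0, 400),
    ("white", 1, 1, 150), ("white", 1, 0, 100),
    ("white", 0, 1, 150), ("white", 0, 0, 600),
]
df = pd.DataFrame(rows, columns=["group", "high_risk", "reoffended", "count"])

for group, g in df.groupby("group"):
    fp = g.query("high_risk == 1 and reoffended == 0")["count"].sum()
    tn = g.query("high_risk == 0 and reoffended == 0")["count"].sum()
    tp = g.query("high_risk == 1 and reoffended == 1")["count"].sum()
    # False positive rate: non-reoffenders who were labeled high risk anyway.
    fpr = fp / (fp + tn)
    # Precision: of those labeled high risk, how many actually reoffended?
    precision = tp / (tp + fp)
    print(f"{group}: false positive rate = {fpr:.2f}, precision = {precision:.2f}")

With these numbers the “high risk” label is right 60 percent of the time for both groups, which is roughly the sense in which Northpointe called the tool unbiased, yet black defendants who never reoffend are flagged more than twice as often.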
The ProPublica case is illustrative of how algorithms fed on historical data can perpetuate historical biases. HireVue’s algorithm records job applicants, analyzes their verbal and non-verbal reactions to a series of questions, and assigns each candidate a score. It then compares candidates against the highest-performing employees currently at the company, as a substitute for a personality test. Critics of the system argue that this just ensures your future employees look and sound like those you’ve hired in the past.
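The pattern critics describe can be sketched in a few lines. This is a guess at the general “score candidates by similarity to current top performers” approach, not HireVue’s actual, proprietary model, and the feature values are invented.

# Hypothetical sketch of scoring candidates by similarity to current top
# performers -- the general pattern critics object to, not HireVue's real model.
import numpy as np

# Feature vectors extracted from interviews (speech pace, word choice,
# facial-expression statistics, and so on) for current high performers.
top_performers = np.array([
    [0.8, 0.6, 0.7],
    [0.9, 0.5, 0.6],
    [0.7, 0.7, 0.8],
])
benchmark = top_performers.mean(axis=0)  # the "ideal employee" profile

def score(candidate: np.ndarray) -> float:
    """Cosine similarity to the benchmark: higher means more like past hires."""
    return float(candidate @ benchmark /
                 (np.linalg.norm(candidate) * np.linalg.norm(benchmark)))

print(score(np.array([0.85, 0.6, 0.7])))  # resembles past hires: high score
print(score(np.array([0.2, 0.9, 0.3])))   # a different profile: lower score

Whatever the real model looks like, the benchmark is built from the people already hired, which is exactly the feedback loop critics worry about.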
Even when algorithms don’t appear to be making obvious decisions, they can wield an outsized influence on the world. Part of the Trump-Russia scandal involves the political ads bought on Facebook, whose micro-targeting was enabled by Facebook’s algorithm. Facebook’s own experiments in 2012 demonstrated that altering what users saw in the newsfeed could nudge them to go to the polls. According to Facebook, this experiment pushed between 60,000 and 280,000 additional voters to go to the polls; that number could easily exceed the margin of victory in a close election.
Just as we worry that legislators will struggle to keep up with rapid developments in technology, and that tech companies will get away with inadequate oversight of bad actors armed with new tools, journalism must also adapt to cover and explain “the algorithms beat.”

The Algorithms Beat

Nick Diakopoulos, Director of the Computational Journalism Lab at Northwestern University, is one of the researchers hoping to prevent a world where mysterious, black-box algorithms are empowered to make ever more important decisions, with no way of explaining them and no one held accountable when they go wrong.
In characterizing “the algorithms beat,” he identifies four main types of newsworthy stories.
The first type is where the algorithm is behaving unfairly, as in the Broward County case. The second category of algorithmic public-interest stories arises from errors or mistakes. Algorithms can be poorly designed; they can work from incorrect datasets; or they can fail to work in specific cases. And because the algorithm is perceived as infallible, such errors can persist, as when graphic or disturbing videos slip through YouTube’s content filter.
The third type of story arises when the algorithm breaks social norms or even laws. Google has been sued for defamation by an Australian man after its predictive search algorithm suggested the phrase “is a former hitman” as an autocomplete option following his name. If an advertising company hired people to stand outside closing factories advertising payday loans and hard liquor, there might be a scandal, but an algorithm might view this behavior as optimal. In what might be considered a parallel case, Facebook allowed advertisers to target white supremacists.
Finally, the algorithms may not be entirely to blame: humans can use or abuse algorithms in ways that weren’t intended. Take the case detailed in Cathy O’Neil’s wonderful book, Weapons of Math Destruction. A Washington, DC teacher was fired for having a low “teacher assessment score.” The score was calculated based on whether students’ standardized test scores improved under a specific teacher. But this created a perverse incentive: some teachers cheated and inflated their students’ scores, while teachers who didn’t cheat, and who inherited students with already-inflated scores, could show no improvement and were fired. The algorithm was being abused by the teachers, but, arguably, it should never have been used as the main factor in deciding who got bonuses and who got fired.
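A stripped-down sketch of this kind of value-added scoring (the formula and numbers are illustrative, not the district’s actual model) shows why inflated prior-year scores doom whoever teaches those students next.

# Illustrative value-added calculation, not the district's actual model:
# a teacher's score is the average change in their students' test scores.
def teacher_score(prior_scores, current_scores):
    gains = [cur - prev for prev, cur in zip(prior_scores, current_scores)]
    return sum(gains) / len(gains)

# Honest prior-year results: scores stay roughly flat, so the gain is small but positive.
true_prior = [62, 70, 55, 68]
current = [64, 71, 57, 69]
print(teacher_score(true_prior, current))      # +1.5

# If last year's teacher inflated the results, the same honest teaching
# now looks like a collapse, and the current teacher takes the blame.
inflated_prior = [78, 85, 72, 83]
print(teacher_score(inflated_prior, current))  # -14.25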

Finding the Story

So how can journalists hope to find stories in this new era? One way is to obtain raw code for an audit. If the code is used by the government, such as in the 250+ algorithms tracked by the website Algorithm Tips, freedom of information requests may allow journalists to access the code.
If the bad behavior arises from a simple coding error, an expert may be able to reveal it, but issues with algorithms tend to be far more complicated. If even the people who coded the system can’t predict or interpret its behavior, it will be difficult for outsiders to infer a personality from a page of Python.
“Reverse-engineering” the algorithm—monitoring how it behaves, and occasionally prodding it with a well-chosen input—might be more successful.
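One common version of this is a paired-input test: feed the system inputs that are identical except for a single attribute and compare the outputs. In the sketch below, score_application() is just a stand-in for whatever black-box system is being probed; it and the attributes it uses are invented, and here it secretly penalizes one group so the probe has something to find.

# Paired-input probe: vary one attribute at a time and watch the output.
import random

def score_application(profile: dict) -> float:
    # Stand-in for the opaque system under test (invented for this sketch).
    base = 0.5 + 0.01 * (profile["income"] - 50)
    penalty = 0.1 if profile["gender"] == "female" else 0.0
    return max(0.0, min(1.0, base - penalty + random.gauss(0, 0.02)))

random.seed(0)
diffs = []
for _ in range(1000):
    profile = {"income": random.uniform(20, 100)}
    a = score_application({**profile, "gender": "male"})
    b = score_application({**profile, "gender": "female"})
    diffs.append(a - b)

# A consistent gap across otherwise-identical pairs is evidence worth reporting.
print(f"average male-female score gap: {sum(diffs) / len(diffs):.3f}")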
AlgorithmWatch in Germany gathers data from volunteering users to see how they are affected by advertising and newsfeed algorithms; WhoTargetsMe is a browser plugin that collects information about political advertising and tells users who’s trying to influence their vote. By crowdsourcing data from a wide range of people, these projects can analyze how an algorithm behaves in the field.
Investigative journalists, posing as various people, can also use the algorithms themselves to expose how they behave, along with their vulnerabilities. VICE News recently used this approach to demonstrate that anyone could pose as a US Senator for the purposes of Facebook’s “Paid for by…” feature, which was intended to make political ads transparent.

Who’s Responsible?

Big tech companies derive much of their market value from the algorithms they’ve designed and the data they’ve gathered—they are unlikely to share them with prying journalists or regulators.
Yet without access to the data and the teams of analysts these companies can deploy, it’s hard to get a handle on what’s happening and who’s responsible. Algorithms are not static: Google’s algorithms change 600 times a year. They are dynamic systems that respond to changing conditions in the environment, and therefore their behavior might not be consistent.
Finally, linking the story back to a responsible person can be tough, especially when the organizational structure is as opaque as the algorithms themselves.
As difficult as these stories may be to discover and relate accurately, journalists, politicians, and citizens must start adapting to a world where algorithms increasingly call the shots. There’s no turning back. Humans cannot possibly analyze the sheer volume of data that companies and governments will hope to leverage to their advantage.
As algorithms become ever more pervasive and influential—shaping whole nations and societies—holding them accountable will be just as important as holding politicians responsible. The institutions and tools to do this must be developed now—or we will all have to live with the consequences.
