A Blog by Jonathan Low

 

Sep 1, 2017

The Effort To Make Algorithmic 'Black Boxes' More Accountable

Our lives are increasingly influenced - if not ruled - by algorithms. But their content, construction, inputs and intent are frequently opaque, when they are explained at all.

There are growing concerns about the degree to which this information should be made publicly available. The obstacle is that the elements which would best relieve - or exacerbate - those concerns are considered valuable intellectual property and a source of competitive advantage. The challenge is how to address those concerns without undermining that intangible value. JL


Cathy O'Neil reports in Bloomberg:

Researchers and practitioners are working to assess algorithms and define fairness. This runs into a bigger problem: secrecy. Algorithms are considered legally protected “secret sauce” of the companies that build them, and hence immune to scrutiny. We almost never have sufficient information about them. How can we test them if we have no access in the first place?
Computer algorithms play an increasingly important role in running the world -- filtering news, assessing prospective employees, even deciding when to set prisoners free. All too often, though, their creators don’t make them adequately accountable to the people whose lives they affect.
It’s thus good to see some researchers and politicians starting to do something about it.
Objective as they may seem, artificial intelligence and big-data algorithms can be as biased as any human. Examples pop up all the time. A Google AI designed to police online comments rated “I am a gay black woman” 87 percent toxic but “I am a man” only 20 percent. A machine-learning algorithm developed by Microsoft came to perceive people in kitchens as women. Left unchecked, the list will only grow.
Help may be on the way. Consider Themis, a new, open-source bias detection tool developed by computer scientists at the University of Massachusetts Amherst. It tests “black box” algorithms by feeding them inputs with slight differences and seeing what comes out -- much as sociologists have tested companies’ hiring practices by sending them resumes with white-sounding and black-sounding names. This can be valuable in understanding whether an algorithm is fundamentally flawed.
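For readers curious what such a test looks like in practice, here is a minimal sketch of a single-attribute probe in Python. The model, function names and features are hypothetical stand-ins; this is not the actual Themis code or API.

```python
# A minimal sketch of the kind of single-attribute probe Themis performs,
# written against a toy stand-in model. Names and features are hypothetical.
import random

def discrimination_rate(predict, protected_attr, values, sample_inputs):
    """Fraction of inputs whose decision changes when only the protected
    attribute is varied and everything else is held fixed."""
    flips = 0
    for inp in sample_inputs:
        outcomes = {predict({**inp, protected_attr: v}) for v in values}
        if len(outcomes) > 1:  # the decision depended on the protected attribute
            flips += 1
    return flips / len(sample_inputs)

def toy_hiring_model(applicant):
    # Stand-in for the black box under audit, with a deliberate bias baked in.
    score = 2 * applicant["years_experience"]
    if applicant["name_sounds"] == "white":
        score += 3
    return "interview" if score >= 10 else "reject"

applicants = [{"years_experience": random.randint(0, 10), "name_sounds": "white"}
              for _ in range(1_000)]
print(discrimination_rate(toy_hiring_model, "name_sounds",
                          ["white", "black"], applicants))
```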
The software, however, has a key limitation: It changes just one attribute at a time. To quantify the difference between white and black candidates, it must assume that they are identical in every other way. But in real life, whites and blacks, or men and women, tend to differ systematically in many ways -- data points that algorithms can lock onto even with no information on race or gender. How many white engineers graduated from Howard University? What are the chances that a woman attended high-school math camp?
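To see why that matters, here is a second toy sketch (synthetic data, hypothetical feature names): a model that never looks at gender at all, yet still produces very different outcomes for men and women because it keys on a correlated proxy. A one-attribute probe of the kind described above would report no discrimination for it.

```python
import random

def proxy_model(person):
    # The model uses only the proxy feature, never the protected attribute.
    return "hire" if person["math_camp"] else "reject"

people = []
for _ in range(10_000):
    gender = random.choice(["man", "woman"])
    # Assumed correlation in the synthetic data: math-camp attendance is far
    # more common among the men than among the women.
    math_camp = random.random() < (0.6 if gender == "man" else 0.1)
    people.append({"gender": gender, "math_camp": math_camp})

# Flipping only `gender` never changes the output, so a single-attribute
# probe sees no discrimination at all...
assert all(proxy_model({**p, "gender": g}) == proxy_model(p)
           for p in people for g in ("man", "woman"))

# ...yet hiring rates still differ sharply between the two groups.
for g in ("man", "woman"):
    group = [p for p in people if p["gender"] == g]
    print(g, round(sum(proxy_model(p) == "hire" for p in group) / len(group), 2))
```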
Untangling cultural bias from actual differences in qualifications isn’t easy. Still, some are trying. Notably, Julius Adebayo at Fast Forward Labs -- using a method to “decorrelate” historical data -- found that race was the second biggest contributor to a person’s score on COMPAS, a crime-prediction algorithm that authorities use in making decisions on bail, sentencing and parole. His work was possible thanks to Florida sentencing data unearthed by Julia Angwin at ProPublica for her own COMPAS audit -- an effort that sparked a battle with COMPAS maker Northpointe, in large part because there’s no shared definition of what makes an algorithm racist.
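The article doesn't spell out Adebayo's method, but one standard way to "decorrelate" historical data looks roughly like the sketch below (synthetic data and invented coefficients, not the COMPAS dataset): strip each feature of the part that is linearly explained by the protected attribute, then compare how much each cleaned-up feature -- and the protected attribute itself -- contributes to the score.

```python
# Rough sketch of one common decorrelation approach (not necessarily the
# method Adebayo used): residualize each feature against the protected
# attribute, then rank standardized contributions to the score.
# All data and coefficients below are synthetic and chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

race = rng.integers(0, 2, n).astype(float)   # protected attribute (0/1)
priors = 2.0 * race + rng.normal(0, 1, n)    # proxy feature correlated with race
age = rng.normal(35, 10, n)                  # roughly independent feature
score = 1.5 * priors + 0.8 * race - 0.05 * age + rng.normal(0, 1, n)

def residualize(x, z):
    """Remove the part of x that is linearly explained by z."""
    design = np.column_stack([np.ones(len(z)), z])
    beta, *_ = np.linalg.lstsq(design, x, rcond=None)
    return x - design @ beta

X = np.column_stack([
    race,
    residualize(priors, race),   # priors with the race-correlated part removed
    residualize(age, race),
])

# Standardize so the coefficient sizes are directly comparable, then fit.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), score, rcond=None)
for name, c in zip(["race", "priors (decorrelated)", "age (decorrelated)"], coef[1:]):
    # After decorrelation, the race coefficient absorbs both its direct effect
    # and whatever used to flow through the correlated proxy.
    print(f"{name:>22}: {c:+.2f}")
```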


Many researchers and practitioners are working on how to assess algorithms and how to define fairness. This is great, but it inevitably runs into a bigger problem: secrecy. Algorithms are considered the legally protected “secret sauce” of the companies that build them, and hence largely immune to scrutiny. We almost never have sufficient information about them. How can we test them if we have no access in the first place?
There’s a bit of good news on this front, too. Last week, James Vacca, a Democratic New York City council member, introduced legislation that would require the city to make public the inner workings of the algorithms it uses to do such things as rate teachers and decide which schools children will attend.
It’s a great idea, and I hope it’s just the first step toward making these fallible mechanisms more transparent and accountable -- and toward a larger, more inclusive discussion about what fairness should mean.
