A Blog by Jonathan Low

 

Nov 29, 2017

Data Bias Is Becoming A Massive Problem For Companies

In a data-dependent economy, bad data causes resource misallocation, inefficiency and significant financial costs. JL

Greg Satell reports in Digital Tonto:

Overfitting means that because there is an element of bias in every data set, the more specifically we tailor a predictive model to the past, the less likely it is to reflect the future. In other words, the more detailed you make your model to fit the data, the worse the predictions are likely to get.
Nobody sets out to be biased, but it’s harder to avoid than you would think. Wikipedia lists over 100 documented biases, from authority bias and confirmation bias to the Semmelweis effect; we have an enormous tendency to let things other than the facts affect our judgments. We all, as much as we hate to admit it, are vulnerable.
Machines, even virtual ones, have biases too. They are designed, necessarily, to favor some kinds of data over others. Unfortunately, we rarely question the judgments of mathematical models and, in many cases, their biases can pervade and distort operational reality, creating unintended consequences that are hard to undo.
What makes data bias so damaging is that we are mostly unaware of it. We assume that data and analytics are objective, but that’s almost never the case. Our machines are, for better or worse, extensions of ourselves and inherit our subjective judgments. As data and analytics become a core component of our decision making, we need to be far more careful.

Overfitting The Past

Imagine you’re running a business that hires 100 people a year and you want to build a predictive model that would tell you what colleges you should focus your recruiting efforts on. A seemingly reasonable approach would be to examine where you’ve recruited people in the past and how they performed. Then you could focus your efforts on the best performing schools.
On the surface, that would seem to make sense, but if you take a closer look, it is inherently flawed. First of all, 100 students spread across perhaps a dozen colleges is far from a statistically significant sample. Second, it’s not hard to see how one or two standouts or dullards from a particular school could skew the results massively.
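To make that concrete, here is a toy simulation (the schools, hire counts and performance scores are all invented) in which every school produces equally good hires, yet a single standout is enough to push one school toward the top of the ranking:
```python
import random
import statistics

random.seed(42)

# Invented data: roughly 100 hires spread across a dozen schools, with
# every hire's performance drawn from the same distribution -- by
# construction, no school is actually better than any other.
schools = [f"School {chr(65 + i)}" for i in range(12)]
hires = {s: [random.gauss(70, 10) for _ in range(random.randint(6, 10))]
         for s in schools}

# A single standout hire from one school skews its average upward.
hires["School C"].append(120)

# School C will typically rise near the top purely on the strength of one outlier.
ranking = sorted(schools, key=lambda s: statistics.mean(hires[s]), reverse=True)
for school in ranking[:3]:
    avg = statistics.mean(hires[school])
    print(f"{school}: average {avg:.1f} (n={len(hires[school])})")
```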
A related problem is what statisticians call overfitting, which basically means that because there is an element of bias in every data set, the more specifically we tailor a predictive model to the past, the less likely it is to reflect the future. In other words, the more detailed you make your model to fit the data, the worse the predictions are likely to get.
That may seem counterintuitive, and it is, which is why overfitting is so common. People who sell predictive software love to be able to say things like, “our model has been proven to be 99.8% accurate,” even though that is often an indication that their product is actually less reliable than one that is, say, 80% accurate but far simpler and more robust.
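Here is a minimal sketch of the effect, using made-up noisy data rather than anyone’s real figures: the flexible model hugs the data it was fitted to more tightly, yet predicts fresh data from the same process worse than a simple straight line.
```python
import numpy as np

rng = np.random.default_rng(0)

def experiment(degree, trials=200):
    """Fit a polynomial of the given degree to noisy 'past' data and
    measure how well it predicts fresh data from the same process."""
    past_err, future_err = [], []
    x = np.linspace(0, 1, 20)
    for _ in range(trials):
        y_past = 3 * x + rng.normal(0, 0.5, x.size)     # the data we have
        y_future = 3 * x + rng.normal(0, 0.5, x.size)   # the data we will see
        coeffs = np.polyfit(x, y_past, degree)
        fit = np.polyval(coeffs, x)
        past_err.append(np.mean((fit - y_past) ** 2))
        future_err.append(np.mean((fit - y_future) ** 2))
    return np.mean(past_err), np.mean(future_err)

# The flexible model fits the past more closely but predicts the future worse.
for degree in (1, 9):
    past, future = experiment(degree)
    print(f"degree {degree}: error on the past {past:.2f}, on the future {future:.2f}")
```
The degree-9 model’s lower error on the past is exactly the “99.8% accurate” boast; its higher error on the future is the price of chasing noise.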

Bias In The Learning Corpus

With humans, we construct learning environments thoughtfully. We design curriculums, carefully selecting materials, instructors and students to try to get the right mix of information and social dynamics. We go to all this trouble because we understand that the environment we create greatly influences the learning experience.
Machines also have a learning environment called a “corpus.” If, for example, you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats. In time, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.
However, the process can go horribly awry, as in the case of Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform. In under a day, Tay went from being friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.
Bias in the learning corpus is far more common than we often realize. Do an image search for the word “Grandma” and you will get almost exclusively white faces. The same goes for prestigious titles like doctor, lawyer and scientist. When we query machines, all too often we find our own biases baked in.
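As a toy illustration (the group labels and proportions below are invented, not real search data), a system that serves results in proportion to what it learned from simply reproduces the skew of its corpus:
```python
import random
from collections import Counter

random.seed(7)

# Invented learning corpus for an image-tagging model: the photos
# labelled "grandma" come overwhelmingly from a single demographic group.
corpus = ["group_a"] * 950 + ["group_b"] * 50

# A system that returns results in proportion to what it was trained on
# reproduces the imbalance of its corpus, not the makeup of the world.
results = Counter(random.choice(corpus) for _ in range(100))
print(results)   # roughly 95 group_a results for every 5 group_b
```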

Perpetuating Bias

For over a century, the intelligence quotient (IQ) has been the standard method to test intelligence and has been shown to be strongly correlated with educational, professional and economic outcomes. However, a strong correlation is not a perfect correlation and researchers have consistently found a number of sources of bias in the testing that can affect scores.
The flaws of IQ tests are well known and educators are generally aware of them, so they are well placed to mitigate the problems that bias creates. Still, test results help shape the educational experience. Students who test well are placed in different classrooms, get different curriculums and are treated differently by teachers.
As Cathy O’Neil explains in Weapons of Math Destruction, today algorithms often determine what college we attend, whether we get hired for a job and even who goes to prison and for how long. Unlike IQ tests, these mathematical models are rarely questioned. They just show up on somebody’s computer screen and fates are determined.
Once you get on the wrong side of an algorithm, your life immediately becomes more difficult. Unable to get into a good school or to get a job, you earn less money and live in a worse neighborhood. Those facts get fed into new algorithms and your situation degrades even further. Each step of your humiliating descent is documented, measured and evaluated.

Correcting For Bias

In Thinking, Fast and Slow, Daniel Kahneman explains how humans can overcome their biases. He describes our brains as two systems: the first is quick to judgment, while the second is slower and weighs data more carefully. With training and experience, we can learn to disengage our System 1 and replace it with System 2.
Yet we rarely do the same with machines. We don’t ask our algorithms to “sleep on it” or to get a second opinion. Often, we don’t even stop to question their judgments. If a human told us to make a decision in a certain way, we would want to know why, but when a mathematical model does it, we usually just accept it and move on.
We shouldn’t. Our data systems are designed by people and inherit many of our human flaws. We need to hold them to higher standards. Good systems, like good people, need to be transparent and accountable. We should know what information is being used, how factors are weighted and how conclusions are arrived at.
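As a minimal sketch of what that could look like in practice, with entirely hypothetical factors, weights and scores, even a simple scoring model can report how each factor contributed to its conclusion:
```python
# Hypothetical linear scoring model with made-up factors and weights,
# used only to illustrate what a transparent decision record might contain.
weights = {"test_score": 0.5, "years_experience": 0.3, "referral": 0.2}
applicant = {"test_score": 0.9, "years_experience": 0.4, "referral": 0.0}

contributions = {k: weights[k] * applicant[k] for k in weights}
total = sum(contributions.values())

# A transparent system reports not just the score but how each factor
# contributed to it, so the decision can be questioned.
for factor, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{factor:>16}: {value:.2f}")
print(f"{'total score':>16}: {total:.2f}")
```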
It’s been a long time since we simply accepted “the will of the gods” as an explanation for our fates. Now that those gods have been replaced by algorithms in black boxes, we need to continue to question their objectivity. Anything less is not only bad practice, it’s immoral.
