A Blog by Jonathan Low


Aug 21, 2013

The Problem With Algorithms: Magnifying Misbehavior

R-E-S-P-E-C-T.

Aretha Franklin laid it all out when she sang about the way she expected to be treated. And the song has become an iconic anthem to the importance of human interaction.

Which makes organizations' relentless efforts to jettison humans in favor of machines all the more curious. We understand that there are costs involved, though the 'savings' touted in favor of device purchases have been shown - repeatedly - to be incomplete, biased and just plain wrong. We also understand the theory behind increasing productivity, though research, again, has convincingly demonstrated that without proper training and an often disruptive reorganization of the workplace, as well as of the people who inhabit it, technology may fail to deliver the enhancements advertised as a benefit of its purchase.

Time and again, enterprises choose to do almost anything to replace humans with devices whose judgment, we are led to believe, is unbiased, unemotional, objective and otherwise superior to anything of which man or woman is capable. This has become particularly true as the era of big data dawns. Algorithms and their offspring are touted as a preferable alternative to human experience and knowledge. In some cases this is probably true, but as the following article explains, algorithms are designed by humans and so reflect all the imperfections their inventors bring to the creative process. And when mistakes are made, they can more than offset any putative savings their implementation was thought to provide. JL

John Burn-Murdoch reports in The Guardian:

Computers that learn from and repeat human behavior save time and money, but what happens when they repeat flawed traits or errors thousands of times per second?
By the time you read these words, much of what has appeared on the screen of whatever device you are using has been dictated by a series of conditional instructions laid down in lines of code, whose weightings and outputs are dependent on your behaviour or characteristics.
We live in the Age of the Algorithm, where computer models save time, money and lives. Gone are the days when labyrinthine formulae were the exclusive domain of finance and the sciences - nonprofit organisations, sports teams and the emergency services are now among their beneficiaries. Even romance is no longer a statistics-free zone.
But the very feature that makes algorithms so valuable - their ability to replicate human decision-making in a fraction of the time - can be a double-edged sword. If the observed human behaviours that dictate how an algorithm transforms input into output are flawed, we risk setting in motion a vicious circle when we hand over responsibility to The Machine.

The prejudiced computer

For one British university, what began as a time-saving exercise ended in disgrace when a computer model set up to streamline its admissions process exposed - and then exacerbated - gender and racial discrimination.
As detailed here in the British Medical Journal, staff at St George's Hospital Medical School decided to write an algorithm that would automate the first round of its admissions process. The formulae used historical patterns in the characteristics of candidates whose applications were traditionally rejected to filter out new candidates whose profiles matched those of the least successful applicants.
By 1979 the list of candidates selected by the algorithms was a 90-95% match for those chosen by the selection panel, and in 1982 it was decided that the whole initial stage of the admissions process would be handled by the model. Candidates were assigned a score without their applications having been seen by a single pair of human eyes, and this score was used to determine whether or not they would be interviewed.
Quite aside from the obvious concerns that a student would have upon finding out a computer was rejecting their application, a more disturbing discovery was made. The admissions data that was used to define the model's outputs showed bias against females and people with non-European-looking names.
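What happened is easy to reproduce in miniature. The Python sketch below is a hypothetical illustration, not St George's actual formula: the "model" is fitted purely to past human decisions, so if those decisions favoured certain genders or name origins, the resulting scores do too. All applicants, attributes and weights are invented for the example.

# A hypothetical sketch, not St George's actual formula: a screening model
# "fitted" to past human decisions simply encodes their patterns, including
# any prejudice. All applicants, fields and weights below are invented.

from collections import defaultdict

# Toy historical decisions: (gender, name_origin, exam_score, interviewed)
history = [
    ("male",   "european",     0.90, True),
    ("male",   "european",     0.70, True),
    ("female", "european",     0.90, True),
    ("female", "european",     0.80, False),
    ("male",   "non_european", 0.90, False),
    ("female", "non_european", 0.95, False),
]

def interview_rate(records, index):
    """Fraction of past applicants sharing a given attribute who were interviewed."""
    totals, passed = defaultdict(int), defaultdict(int)
    for record in records:
        value, interviewed = record[index], record[3]
        totals[value] += 1
        passed[value] += interviewed
    return {value: passed[value] / totals[value] for value in totals}

# "Fitting" the model: the learned weights are just historical interview rates,
# so the selection panel's past bias becomes part of the formula.
gender_rate = interview_rate(history, 0)
origin_rate = interview_rate(history, 1)

def screening_score(gender, origin, exam_score):
    # Exam results matter, but so do attributes the historical panel was biased on.
    return 0.5 * exam_score + 0.25 * gender_rate[gender] + 0.25 * origin_rate[origin]

# Two applicants with identical exam scores now receive different scores.
print(screening_score("male", "european", 0.90))        # higher
print(screening_score("female", "non_european", 0.90))  # lower

Nothing in this toy model ever asks for gender or ethnicity directly as a matter of policy; it simply learns whatever regularities the historical decisions contain.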
The truth was discovered by two professors at St George's, and the university co-operated fully with an inquiry by the Commission for Racial Equality, both taking steps to ensure the same would not happen again and contacting applicants who had been unfairly screened out, in some cases even offering them a place.
Nevertheless, the story is just one well documented case of what could be thousands. At the time, St George's actually admitted a higher proportion of ethnic minority students than the average across London, although whether the bias shown by other medical schools was the result of human or machine prejudice is not clear.

Selection bias: could recruitment be next?

Recent developments in the recruitment industry open it up to similar risks. Earlier this year LinkedIn launched a new recommendation service for recruiters, which runs off algorithms similar in their basic purpose to those used at St George's.
'People You May Want To Hire' uses a recruiter or HR professional's existing and ongoing candidate selection patterns to suggest to them other individuals they might like to consider hiring.
"The People You May Want to Hire feature within LinkedIn Recruiter looks at a wide range of members' public professional data - like work experience, seniority, skills, location and education - and suggests relevant candidates that may not otherwise show up in a recruiter's searches on LinkedIn. Gender and ethnicity are not elements we ask for or track anywhere on Recruiter", said Richard George, corporate communications manager at LinkedIn.
Although gender and race play no part in the process per se, a LinkedIn user's country of residence could be one criterion used by the model to filter in or out certain candidates. An individual's high school, LinkedIn connections and - to an extent - the university they attended are just three more examples of essentially arbitrary characteristics that could become more and more significant in candidate selection as a result of the algorithm's iterative nature.
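The self-reinforcing part is easy to see in a toy model. The Python sketch below is purely illustrative - the candidates, traits and scoring rule are invented, and this is not LinkedIn's algorithm - but it shows how a recommender that scores people by similarity to previously selected candidates keeps promoting whatever traits those selections happened to share.

from collections import Counter

# Toy candidate pool; the names, schools and skills are invented for illustration.
candidates = [
    {"name": "P1", "school": "Elmwood High",   "skill": "python"},
    {"name": "P2", "school": "Elmwood High",   "skill": "python"},
    {"name": "P3", "school": "Riverside High", "skill": "python"},
    {"name": "P4", "school": "Riverside High", "skill": "python"},
]

selected_traits = Counter()  # how often each trait appears among past hires

def score(candidate):
    """Score a candidate by overlap with the traits of previously selected candidates."""
    return sum(selected_traits[value] for key, value in candidate.items() if key != "name")

def select(candidate):
    """Record a hire so that its traits count for more in future recommendations."""
    for key, value in candidate.items():
        if key != "name":
            selected_traits[value] += 1

# Round 1: a recruiter happens to choose a candidate from Elmwood High.
select(candidates[0])

# Round 2: among identically skilled candidates, the shared school now decides
# who surfaces first - and every such selection reinforces the pattern further.
ranked = sorted(candidates[1:], key=score, reverse=True)
print([c["name"] for c in ranked])  # ['P2', 'P3', 'P4']

In this sketch the school is an essentially arbitrary trait, yet each selection it influences feeds back into the next round of recommendations, which is exactly the iterative effect described above.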

The glitch that sank a thousand shares

Another magnification problem is one that arises purely out of the pace at which algorithms work. In just three years there have been four entirely distinct stock market disruptions brought about by technological 'glitches'.
The 'Flash Crash' of 2010 saw the Dow Jones Industrial Average plunge by some 600 points in a matter of minutes. Some shares saw 99% wiped off their value while others rocketed upwards, neither movement the result of anything more than computing chaos.
In 2012 we had both the Knight Capital incident - details of which remain scarce - and Nasdaq's technological mishandling of the Facebook IPO. The crash of April 2013 brought about by the hacking of the Associated Press Twitter account is another example of algorithms being unable to deal with context that a human observer would likely have accounted for.
Once again these are examples of the very same quality that makes algorithms attractive - in this case their speed - being their downfall. The US Securities and Exchange Commission has acted to minimise the risk of repeats, and EU legislators are clamping down on high frequency trading in Europe, but both stock market malfunctions and the St George's case provide evidence that it is only after algorithms are seen to have gone rogue that action is taken.
As a result, we are left with some unnerving questions: when and where will the next 'glitch' strike? And if it's as subtle as that of St George's, how much damage might be done before it is even identified?
