A Blog by Jonathan Low

 

Nov 20, 2016

Coevolution and the Limitations of Algorithms

Coevolution describes the process by which competitors - in the natural, physical and digital worlds - adapt to each other's successes in order to avoid extinction.

When it comes to algorithmic decision-making, the first lessons in contemporary socio-economic terms came in finance when investors in the 1980s learned, to their misfortune, that smart people with the same educational backgrounds, technology and software would, inevitably, reach the same conclusions, pursue the same strategies and thus cancel out each other's perceived advantages.

We are now seeing the same phenomenon in marketing, politics and technology development. The only question is why the absolute certainty of this evolutionary imperative has not been more obvious to more ostensible thought leaders. JL

Ben Carlson comments in A Wealth Of Common Sense:

With greater use of technology in many facets of life going forward, the biggest beneficiaries will typically be those who get there first, not the second or third adopters. This is how competition works. Early adopters reap the biggest gains, which attracts competitors who come in search of that same edge. Eventually this levels the playing field and competitive advantages slowly subside.
It would be an understatement to say that the election result caught some people off-guard (just not Dave Chappelle or Chris Rock).
When these things happen, rational people try to learn from their mistakes by showing some humility. Political discussions are rarely rational, though, so the past week or so has been filled with hindsight bias, hubris and denial. Everyone is trying to figure out that single variable that will explain why millions of people did what they did even though they all have different goals, agendas, personal opinions and reasons for their actions.
People have spent a lot of time criticizing the polling models that turned out to be partially or wildly inaccurate at predicting the future (go figure), but the politicians themselves have also turned to technology to help on the campaign trail.
For the 2008 election, President Obama hired a product manager from Google to head up his media analytics team. They put to work technology and strategies that hadn’t been used before and it gave them a huge edge in a number of different areas, including increased donations and understanding different regions of voters around the country.
As outlined in the book Algorithms to Live By, the majority of these first-mover advantages were gone by Obama's reelection efforts in 2012:
We know what happened to Obama in the 2008 election, of course. But what happened to his director of analytics, Dan Siroker? After the inauguration, Siroker returned west, to California, and with fellow Googler Pete Koomen co-founded the website optimization firm Optimizely. By the 2012 presidential election cycle, their company counted among its clients both the Obama re-election campaign and the campaign of Republican challenger Mitt Romney.
Within a decade or so after its first tentative use, A/B testing was no longer a secret weapon. It has become such a deeply embedded part of how business and politics are conducted online as to be effectively taken for granted. The next time you open your browser, you can be sure that the colors, images, text, perhaps even the prices you see—and certainly the ads—have come from an explore/exploit algorithm, tuning itself to your clicks. In this particular multi-armed bandit problem, you're not the gambler; you're the jackpot.
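The explore/exploit loop the book describes can be sketched as a simple epsilon-greedy bandit. The ad variants and click-through rates below are invented for illustration; real systems use more sophisticated policies, but the trade-off between showing the apparent winner and testing the alternatives is the same:

```python
import random

# Hypothetical "true" click-through rates for three ad variants.
# In a real test these are unknown and must be estimated from clicks.
TRUE_RATES = {"red": 0.04, "blue": 0.05, "green": 0.06}

def epsilon_greedy(variants, true_rates, rounds=50_000, epsilon=0.1, seed=42):
    """Mostly show the best-performing variant so far (exploit), but
    with probability epsilon show a random one instead (explore)."""
    rng = random.Random(seed)
    shows = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}
    for _ in range(rounds):
        if rng.random() < epsilon:
            choice = rng.choice(variants)  # explore a random variant
        else:
            # exploit: pick the variant with the best observed rate
            choice = max(
                variants,
                key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0,
            )
        shows[choice] += 1
        if rng.random() < true_rates[choice]:  # did this visitor click?
            clicks[choice] += 1
    return shows, clicks

shows, clicks = epsilon_greedy(list(TRUE_RATES), TRUE_RATES)
print(shows)  # the leading variant ends up with most of the impressions
```

Once every campaign runs a loop like this, the edge comes not from having the algorithm but from having it first — which is exactly Carlson's point.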
This is how competition works. Early adopters reap the biggest gains, which attracts competitors who come in search of that same edge. Eventually this levels the playing field and competitive advantages slowly subside.
The Clinton team tried to take things a step further in their efforts to use technology to their advantage this time around (as described by the Washington Post):
What Ada did, based on all that data, aides said, was run 400,000 simulations a day of what the race against Trump might look like. A report that was spit out would give campaign manager Robby Mook and others a detailed picture of which battleground states were most likely to tip the race in one direction or another — and guide decisions about where to spend time and deploy resources.
The use of analytics by campaigns was hardly unprecedented. But Clinton aides were convinced their work, which was far more sophisticated than anything employed by President Obama or GOP nominee Mitt Romney in 2012, gave them a big strategic advantage over Trump.
To state the obvious, this model didn’t help all that much:
About some things, she was apparently right. Aides say Pennsylvania was pegged as an extremely important state early on, which explains why Clinton was such a frequent visitor and chose to hold her penultimate rally in Philadelphia on Monday night.
But it appears that the importance of other states Clinton would lose — including Michigan and Wisconsin — never became fully apparent or that it was too late once it did.
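A battleground-state model of the kind the Post describes can be sketched as a Monte Carlo loop: assign each state a win probability, simulate the election many times, and count how often the candidate clears 270 electoral votes. The probabilities and vote counts below are invented for illustration and are not Ada's actual inputs:

```python
import random

# Illustrative battleground states: (win probability, electoral votes).
# These numbers are made up for the sketch, not taken from any campaign.
BATTLEGROUNDS = {
    "Pennsylvania": (0.70, 20),
    "Michigan":     (0.75, 16),
    "Wisconsin":    (0.80, 10),
    "Florida":      (0.50, 29),
}
SAFE_VOTES = 232  # electoral votes assumed already locked up

def simulate(n_runs=400_000, seed=1):
    """Simulate n_runs elections; return the fraction won (>= 270 EV)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_runs):
        total = SAFE_VOTES
        for prob, votes in BATTLEGROUNDS.values():
            if rng.random() < prob:
                total += votes
        if total >= 270:
            wins += 1
    return wins / n_runs

print(f"simulated win probability: {simulate():.3f}")
```

The sketch also shows where such models break: the output is only as credible as the per-state probabilities fed in, so a bad estimate for Michigan or Wisconsin quietly corrupts every one of the 400,000 runs.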
Again, there is no single variable that can explain the results of something as complex as a presidential election. This does, however, offer a good lesson in the limitations of the use of technology in our efforts to predict the future.
A number of studies have shown that algorithms are typically better at making decisions than humans because they are disciplined and rules-based. They don't allow emotions to cloud their judgement like we do.
Yet there are always going to be variables you can't map out in a model. You can't teach it common sense or human emotion. You can't really model the future or random, unexpected events. The outputs are only as good as the inputs, so it's always going to be garbage-in, garbage-out with these things.
And with greater use of technology in many facets of life going forward, the biggest beneficiaries will typically be those who get there first, not the second or third adopters.
As algorithms become more prevalent in our lives and the decision-making process it’s worth remembering both their strengths and limitations. These things are not infallible. They can help make our lives more efficient, but they are not yet the be-all, end-all. Our interpretations of the outputs will still play a large role in the success or failure of these models.
And as the adoption rate increases and more and more people put them to use, it will still be up to the humans operating the algorithms to differentiate between huge mistakes and successful outcomes.
An overreliance on technology may be one of the biggest mistakes people make in the future as overconfidence may shift from our own abilities to those of an algorithm.
