A Blog by Jonathan Low

 

Jun 17, 2013

How Consistency Leads to Overconfidence

There was a time when one of the most popular nuggets of business wisdom was that no one ever got fired for buying IBM. Until it was no longer true.

The implication was that IBM made a consistent, reliable, if sometimes inflexible and higher-priced, product. But those reviewing such decisions probably didn't know enough about computing themselves to credibly second-guess them. And they bought into the notion that safe was better than brilliant.

These days we are increasingly reliant on algorithms and computerized programs to make our decisions for us. Instead of relying on judgment or experience, we say 'do the math.' We believe that numbers don't lie, even though those calculating the numbers might. And even though we may not know enough or care enough to challenge the assumptions that went into developing those numbers.

We assume that really smart people created those programs and models. That they share our assumptions about the future and that they weigh risk and reward the same way we do. Even though we can never be sure about any of that. As a society we are somewhat more literate than numerate. Math makes us nervous because it is complicated and difficult. The teaching of it is frequently boring. We are raised to speak but have to be taught to add, subtract and all the rest. Which feeds our personal insecurity and encourages our outsourcing of decision-making to those quant jocks who can translate for us.

The notion that markets dislike surprises, abhor vacuums and reward consistency has been drummed into a generation of managers. Although we profess to honor entrepreneurial initiative, few of us choose that path. It is lonely and risky and could lead to failure. Our sedentary nature leads us to self-select for comfort and security.

The problem is that we live in a world where consistency, when it can be found, holds for shorter and shorter time frames. And in an economy where the only constant is change, we miss the perverse message that consistency sends: just keep on keepin' on, even as the world around you is imploding.

The lesson is not that consistency is bad. But that it is no more reliable than inconsistency. It just is. Which means judgment must be applied. And sometimes that judgment must be based on gut, feel, experience, muscle memory and the accumulated wisdom that comes from a lifetime - however short - of education, training, observation, knowledge, practice and familiarity. In sum, a bunch of factors that may be difficult to quantify or build into a statistical model. Numbers are tools, not gods. The sooner we recognize that their power and weakness as indicators are intertwined, the more likely we are to apply them effectively. JL

Alex Mayyasi reports in Priceonomics:

The less ad-hoc, and the more standardized their approach to solving a problem, the higher participants rated their performance, regardless of whether they solved the problem correctly. The study concludes that using formalized approaches to solving problems seems to lead people to be overconfident in their solution and performance.
Imagine two groups of investors. The first makes their investments on an ad-hoc basis. Whenever someone has an idea for a new investment, they hold a meeting and decide whether to act on it. The second makes decisions by using formal models to regularly assess the value of their current and potential investments.
Who would you trust with your money?
You would probably go with the second group, trusting their disciplined approach. But while the second group undoubtedly would have a better day-to-day grasp of their investments, their consistent approach to investing may lead them to be overconfident.
This is the implication of a forthcoming article in the Journal of Personality and Social Psychology, “The hobgoblin of consistency: Algorithmic judgment strategies underlie inflated self-assessments of performance.” The paper investigates how taking a consistent, algorithmic approach to problem solving can lead to overconfidence (abstract). 
The authors studied participants solving problems in areas like financial investing and logical reasoning. Afterward, the participants rated their satisfaction with their performance and confidence in their solution.
How does this play out in the real world?
A major, disastrous illustration of the principle is the Vietnam War. Secretary of Defense Robert McNamara, who believed in the value of statistical analysis, used metrics like body counts to make judgments about America’s ability to win the war. While data-driven analysis can be very valuable, McNamara’s formal analysis (of flawed metrics) made him overconfident in his predictions of progress and victory.
A more recent example comes from the use of formulas and models by Wall Street. One famous case, depicted in an image published by Wired, is a Gaussian copula function created by David Li, a quantitative analyst who worked in finance until 2008. Described as “the formula that killed Wall Street” by financial journalist Felix Salmon, it is credited as a significant cause of the financial crisis.
While Salmon breaks down the problems with the formula at great length, the basic premise is that the formula oversimplified the process of calculating the risk of pools of bonds. It ignored the nearly impossible work of analyzing the risk of each bond and the correlations between bonds' risks (in effect, the likelihood that if one bond fails, another will as well - due to, say, a massive financial crisis). Instead, it looked at the prices of each bond's corresponding credit default swap (essentially a bet on whether the bond would default or not), and took price changes on the credit default swap market as a sign of whether risk was increasing or decreasing.
Among other simplifications, this rested on the assumption that markets could price the risk of default correctly. As Salmon notes in his article, although people recognized the limitations of Li's formula, Wall Street embraced it to create a massive new market in pools of bonds, seduced by the simplicity of Li's formula. Just like the participants in the professors' study, they were overconfident because of the standardized and formulaic nature of their analysis of risk. As a result, they amassed massive risks in the bond markets, completely unprepared for the day when their model would fail.
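To make the mechanics a little more concrete, here is a minimal sketch of the idea behind a Gaussian copula in this setting. It is an illustration, not Li's actual production model: the function name, the example probabilities, and the single correlation parameter are assumptions chosen for demonstration, and in Li's approach the marginal default probabilities were themselves backed out of credit default swap prices.

```python
# A minimal sketch (not Li's production model): joint default probability
# of two bonds under a Gaussian copula, given each bond's standalone
# default probability and a single assumed correlation parameter.
from scipy.stats import norm, multivariate_normal

def joint_default_prob(p_a, p_b, gamma):
    """Pr[both bonds default within the horizon] under a Gaussian copula.

    p_a, p_b : marginal default probabilities (in Li's approach these were
               inferred from credit default swap prices).
    gamma    : assumed default correlation -- the single number the formula
               asks the market to supply.
    """
    # Map each marginal probability onto the standard normal distribution.
    z_a, z_b = norm.ppf(p_a), norm.ppf(p_b)
    # Evaluate the bivariate normal CDF at that point with correlation gamma.
    cov = [[1.0, gamma], [gamma, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([z_a, z_b])

# Example: two bonds, each with a 5% chance of defaulting this year.
print(joint_default_prob(0.05, 0.05, 0.0))   # ~0.0025 if independent
print(joint_default_prob(0.05, 0.05, 0.9))   # far higher if they move together
```

The sketch shows why the single correlation input matters so much: the entire difference between "rare coincidence" and "systemic wipeout" hinges on one parameter that the formula takes as given rather than analyzes.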
Standard processes for problem solving like formal models and A/B testing are useful tools, but they always have limitations. Solving hard problems always has an element of uncertainty. It is easy to escape the worry of whether you made the right decision by trusting a model or SOP. But if you outsource your decision making to SOPs, formulas, and models, your tools' limitations will become your own.
