A Blog by Jonathan Low

 

Sep 16, 2017

We Remember Predictions That Come True But Forget Ones That Don't

Selective reasoning - and memory. JL

Dan Gardner reports in Slate:

When it comes to predictions and forecasting, the challenge is to separate the lucky from the skilled. To calculate performance stats for forecasters, you need very large numbers of forecasts. The forecasts must use precise terms and numbers. And, to ensure comparability, you need forecasters to forecast about the same thing.
The election of 2016 made Donald Trump president. And it made Michael Moore an oracle.
“Not many people can claim they predicted Donald Trump would win the presidency,” wrote McClatchy’s Brian Murphy on Nov. 9. “Fewer still can show they laid out exactly how Trump would do it. Michael Moore nailed it.” There were dozens of stories like it. Most were breathless and celebratory. Journalists flocked to Moore and begged him to say what a Trump presidency would bring. He obliged with a string of dramatic prophecies.
The reports of Moore’s successful prognostication weren’t entirely wrong. In July 2016, Moore published a short essay in which he said a Trump victory was certain. But what the laudatory stories didn’t mention was that a Trump win was not Moore’s only prediction. In August—when Trump’s campaign appeared to be collapsing—Moore wrote that Trump was “self-sabotaging” because he was terrified of losing and he didn’t actually want to win. Trump would drop out of the race long before the election, he foretold.
And on Oct. 9, when the Clinton campaign was riding high, Moore tweeted this: “Some note it was my post in July, the 5 reasons Trump could win, that lit a fire under millions 2 take this seriously & get busy. Ur welcome.”
Trump wins, Trump drops out, Clinton wins: No matter what happened on Nov. 8, Moore could claim he saw it coming.
The problem here isn’t Michael Moore. It’s the media and how they report on forecasts and forecasters. Whenever a shocking event occurs, journalists rush to find the wise few who saw it coming, anoint them oracles, and beg them to reveal what will come next. It’s an understandable reaction to surprise and uncertainty. It’s also an embarrassing failure of the elementary skepticism that should be journalism’s foundation.
For big events like presidential elections, terrorist attacks, and stock market crashes, the number of observers making forecasts is always large, and their forecasts vary widely. As a result, every possible outcome will usually have been predicted by someone. In those circumstances, the mere fact that someone correctly predicted something means little. To take it as proof that the forecaster possesses deep insight and knows what’s coming next makes as much sense as asking today’s lottery winner to reveal next week’s winning numbers.
When it comes to predictions and forecasting, the challenge is to separate the lucky from the skilled. As any baseball fan knows, that requires statistics. One home run or one strikeout says very little. To judge a batter, you need to know his batting average—a performance statistic based on the careful observation and scoring of a large number of at-bats.
This might seem a simple thing to do with forecasts. It’s not. A big problem is vague language. When some pundit says something “could” or “may” happen, or there’s “a distinct possibility” that it will, she is quite literally saying it may or may not happen. Good luck scoring that. Even something like “Trump will win” has hidden ambiguity. If Trump loses the popular vote but wins the Electoral College, was the forecast right? What about the reverse? It’s impossible to say beyond dispute. Unfortunately, vague language is far more common in media reports about forecasting than precise, scorable terms.

Another huge barrier to creating reliable track records is our tendency—journalists and public alike—to remember hits and forget misses.
In the first half of 2008, when the prices of oil and other commodities were soaring and food shortages prompted riots in various places around the world, there were countless stories about “peak oil” and the coming “age of scarcity.” In the second half of 2008, things did go to hell, but not that way. And within several years—certainly by 2015, when the price of oil collapsed—the frightening forecasts of early 2008 had clearly failed. But few journalists looked back. They seldom do. So there was no wave of “What happened to our Mad Max future?” stories, and forecasters were not grilled about what they got wrong. It all just faded slowly out of memory.
This happens routinely. Remember when the eurozone would collapse with catastrophic consequences? The China asset bubble would burst? Quantitative easing would cause hyperinflation? All these forecasts got huge play at the time and were allowed to slip quietly out of memory when they proved wrong.
But a forecast that hits? That’s proof the forecaster is an oracle—and journalists love to look into the future with the help of a soothsayer.
 
We see this on business TV every day: A talking head is introduced as the person who successfully called X or Y and is then asked what will happen next. If he ever made a bad forecast in his life, we don’t hear about it. I saw a particularly extreme example on CNBC in 2010, when a financial forecaster was introduced as the person who successfully called the crash of 1987. Yes, 1987. But not a word was said about the fact that this person struggled in the years after 1987 and was actually let go by her firm in 1994.
Michael Moore also illustrates the point. “I think people should start to practice the words ‘President Romney,’ ” he said in 2012. I don’t believe I’ve ever said the words President Romney, but no matter: After that election, there were precisely zero stories about Moore’s forecast.
Heads, I win. Tails, you forget we had a bet. Those are the rules governing expert forecasts in the media.
But even if we did recall hits and misses equally, that still wouldn’t be enough to produce the track records needed to meaningfully judge forecasters.
Moore was right (let’s be generous) in 2016 but wrong in 2012. I could find no other presidential forecasts from him. That gives us a grand total of two forecasts—and two at-bats isn’t nearly enough to produce an insightful batting average. So we really have no way of saying whether Moore is better or worse than the average observer.

To calculate performance stats for forecasters, you need very large numbers of forecasts. The forecasts must use precise terms and numbers (“a 30 percent probability North Korea will develop an ICBM capable of carrying a nuclear warhead,” not “a good chance North Korea’s weapons will get stronger”). And, to ensure comparability, you need forecasters to forecast about the same thing. Do that and you can know how much stock you should put in someone’s forecast.
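To see how precise, probabilistic forecasts can actually be scored, here is a minimal sketch in Python of a squared-error (Brier-style) accuracy measure, the kind used in forecasting tournaments such as Tetlock’s. The forecasters, probabilities, and outcomes below are invented for illustration.

# Score precise probabilistic forecasts: mean squared error between the
# stated probability and the outcome (1 = event happened, 0 = it didn't).
# Lower is better; 0 is perfect.
def brier_score(forecasts):
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track records: (probability assigned to the event, actual outcome)
forecaster_a = [(0.30, 1), (0.80, 1), (0.10, 0), (0.60, 0)]  # hedged but calibrated
forecaster_b = [(1.00, 1), (1.00, 1), (0.00, 0), (1.00, 0)]  # bold calls, one big miss

print(f"Forecaster A: {brier_score(forecaster_a):.3f}")  # 0.225
print(f"Forecaster B: {brier_score(forecaster_b):.3f}")  # 0.250

Note that forecaster B called three of the four events exactly and still scores worse than A, which is why a handful of dramatic hits tells you little: you need many scorable forecasts to separate the lucky from the skilled.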
Philip Tetlock, an eminent psychologist at the University of Pennsylvania, developed this methodology and used it to make the most comprehensive investigation of expert political forecasting ever undertaken. The intelligence community was so impressed that it funded his subsequent research—research that found a small number of otherwise ordinary people were extraordinary forecasters capable of beating even intelligence professionals with access to classified information. Figuring out what makes these “superforecasters” so good is the subject of Tetlock’s book Superforecasting (which, full disclosure, I co-wrote).

The intelligence community is changing how it operates in light of Tetlock’s research. Finance has taken note, too—Goldman Sachs specifically revised how it does one of its key forecasts to bring it in line with Tetlock’s recommendations.
But the media isn’t interested. Its reporting on forecasts and forecasters is as bad as ever. Of course, that’s only a problem if you think forecasts in the media are serious stories to help people understand events and where things are headed. Some journalists clearly don’t think so. For them, forecasts are for fun. Like horoscopes. And as with horoscopes, their stories about forecasts should run with “for entertainment purposes only” disclaimers.
