A Blog by Jonathan Low

 

Jun 7, 2017

Where Are All the Space Hotels? Why Smart People Make Dumb Predictions

It may be comforting to think that computers aided by algorithmically-enhanced artificial intelligence will make more accurate forecasts. Until it is remembered that those machines and their software are programmed by humans. JL

Charlie Wood reports in the Christian Science Monitor:

Known to psychology for decades, the Planning Fallacy describes the tendency to overstate chances of finishing tasks on time, despite memories of past projects having rarely gone as planned. The problem is the tendency to take the optimism generated by a grand idea and overlay that on execution without applying critical rigor. Cognitive biases, development challenges, and financing conventions make accurate predictions next to impossible.
If all had gone according to plan, the James Webb Space Telescope (JWST) would be celebrating its 10th anniversary of capturing stunning portraits of distant galaxies, NASA would be hard at work on its dark energy detector, Virgin Galactic would be running two daily tourist flights to the edge of space for just $50,000 a head, and a Russian company would be doing brisk business with its orbiting luxury space hotel.
Of course, that's not how it worked out in reality. Last month, Virgin Galactic made its tenth annual prediction that “next year” it would finally shuttle tourists to space, joining the JWST on the horizon of 2018, while the inaugural mission of NASA's new Space Launch System slipped to 2019. As for that space hotel, don't ask.
Planning for the future is part of what it means to be human, but cognitive biases, development challenges, and financing conventions conspire to make accurate predictions next to impossible. With forecasting occupying a central role in our greatest ambitions from space to construction, economists and engineers alike are harnessing new tools that let the past inform the future, aiming to make prediction more science and less art.
At the root of the problem is the tendency to take the optimism generated by a grand idea and overlay that on the execution without applying the critical rigor required to foresee the inevitable hurdles that are likely to complicate the process. Known to psychology for decades, the Planning Fallacy describes the tendency to overstate chances of finishing tasks on time, despite memories of past projects having rarely gone as planned.
“Apparently, people can know the past and still be doomed to repeat it,” wrote psychologist Roger Buehler in his 1995 paper on the topic. Even the most conservative forecasts can fall prey to evidence-dismissing confidence. When students were asked to lay out a timeline specifying the dates by which they felt 50 percent, 75 percent, and 99 percent sure they would have finished an academic project, fewer than half actually completed the assignment by their 99-percent date.
“Kind of counterintuitively, it’s because people base their predictions too much on imagining and planning how the task is going to unfold,” explains Dr. Buehler, a professor of social psychology at Wilfrid Laurier University. Most folks think through a mental simulation of completing various stages of the task and budget time accordingly.
“The problem with that approach is that those kinds of scenarios tend to be oversimplified, idealized, and don’t take enough obstacles into account,” he continues. Indeed, another study found that only 3 percent of participants considered potential roadblocks during the planning process.
Even when reminded of past failed predictions, people tend to assume those were isolated mishaps, forgetting that there are countless ways for a plan to go wrong but only one way for it to go right.
When individuals gather into groups, the situation gets even worse. “If the team is all invested in the project, getting together and planning it out as a group in fact exacerbated the bias,” says Buehler. Apparently, no one wants to be a Negative Nancy.
It’s basically impossible for an individual to overcome his or her optimism bias, according to megaproject management scholar Bent Flyvbjerg. “These are very basic human mechanisms that apply in all walks of life,” he says. “Experts, including myself, are just as optimistic as laypeople. Just because you know you’re optimistic doesn’t mean that you’re not optimistic.”
Such optimism often leads to underestimating the chance of unknown unknowns derailing your project, so planning experts suggest a technique called reference class forecasting, where project planners learn from past risk by predicting overruns based on how similarly complex projects fared before.
“Independent researchers have found that reference class forecasting is the most accurate forecasting method that you can get,” says Dr. Flyvbjerg. “You cannot be optimistic because you’re taking out human judgement.”
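The idea behind reference class forecasting can be sketched in a few lines of code: instead of reasoning about your own project from the inside, you look at the empirical distribution of overruns in a reference class of comparable past projects and pick a budget uplift that caps your risk of exceeding it. The overrun figures below are invented for illustration; they do not come from any real project database.

```python
# A minimal sketch of reference class forecasting, using made-up
# overrun data standing in for a real reference class of projects.

def rcf_uplift(past_overruns, acceptable_risk):
    """Return the cost uplift (final/initial ratio) needed so that the
    chance of exceeding the adjusted budget is at most `acceptable_risk`,
    based on the empirical distribution of past overruns."""
    ranked = sorted(past_overruns)
    # Index of the (1 - risk) empirical quantile in the sorted overruns.
    idx = min(len(ranked) - 1, int((1 - acceptable_risk) * len(ranked)))
    return ranked[idx]

# Hypothetical overruns (final cost / initial estimate) for
# ten comparable past projects.
past = [1.05, 1.10, 1.20, 1.25, 1.30, 1.40, 1.55, 1.70, 1.90, 2.40]

base_estimate = 100.0  # $M, the planner's inside-view estimate
uplift = rcf_uplift(past, acceptable_risk=0.2)  # accept a 20% overrun risk
budget = base_estimate * uplift
```

Note that the planner's own judgment enters only in choosing the reference class and the acceptable risk level, which is exactly the point of Flyvbjerg's "taking out human judgement" remark: the uplift itself comes from the historical record, not from optimism about this particular project.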
The aerospace industry in particular could use some new tricks. Virgin Galactic doesn't release public estimates, but according to the US Government Accountability Office’s annual review, NASA’s large project costs have overrun budget by between 10 percent and 50 percent in each of the last nine years, a figure dominated by the ballooning costs of the JWST.
The 2015 NASA Cost Estimating Handbook outlines three prediction models: drawing holistic analogies to previous projects (if the Curiosity Mars rover cost this much, then the next lander might cost this much), calculating relationships between key characteristics and cost (perhaps doubling the mass doubles price), and building up itemized costs (this much for labor, this much for parts).
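The three approaches from the handbook can be illustrated side by side. All of the figures below are invented placeholders, not actual NASA numbers:

```python
# Toy illustrations of the three estimating approaches described above.
# Every number here is hypothetical.

# 1. Analogy: scale a comparable past project's cost by a judged factor.
curiosity_cost = 2500.0          # $M, hypothetical cost of the analog project
complexity_factor = 1.2          # next lander judged 20% more complex
analogy_estimate = curiosity_cost * complexity_factor

# 2. Parametric: a cost-estimating relationship between a key
# characteristic and cost (the "doubling mass doubles price" idea).
cost_per_kg = 2.8                # $M per kg, hypothetical
mass_kg = 900.0
parametric_estimate = cost_per_kg * mass_kg

# 3. Build-up: sum itemized costs from the bottom up.
line_items = {"labor": 1200.0, "parts": 800.0, "testing": 400.0}
build_up_estimate = sum(line_items.values())
```

In practice an estimator would run more than one of these and compare, since divergence between the analogy and build-up figures is itself a warning sign.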
But when you’re attempting the unprecedented, be it space tourism or a Pluto probe, how do you build data-based models?
“That is more difficult, but not impossible,” says Flyvbjerg. “People often think of their projects as more unique than they actually are.”
Not one to back down from a challenge, NASA in 2013 and 2014 developed the Technology Cost and Schedule Estimating (TCASE) software, which uses reference class forecasting tenets to predict the most uncertain of undertakings: creating new technology. Mining a database of over 3,000 past technology development projects, the program's creators isolated a handful of characteristics that showed predictive power, such as technology area and a classification scheme known as Technology Readiness Levels, or TRLs, which go from 1 (physically conceivable on paper) to 9 (mission proven).
Such tools formalize techniques that have proved effective, according to former ESA program manager Alan Thirkettle, who oversaw the development of the European ISS components. He describes a budgeting process that has long incorporated reference-class-forecasting-style thinking, but one that may have varied between managers and facilities.
TCASE is just one of a suite of models, and how successful they’ll prove remains to be seen. The 2016 GAO report congratulated NASA for slowing budget creep in recent years, but attributed it only in part to new project management tools, with the rest of the credit going to rising baseline estimates that hide percentage-wise growth.
What’s more, TCASE applies only to technologies in the early development stage. This period may be the most uncertain, but it counterintuitively accounts for less than a fifth of total development resources. A meta-study of a dozen NASA missions found that costs tend to explode toward the end, with nearly half of the total dedicated to moving a technology from TRL 7 (space-ready prototype) to launch.
Mr. Thirkettle says this trend is standard, likening spacecraft development to building an electric car. Even if inventing long-range battery tech is tough, actually building the whole car costs much more.  “The system is far, far more expensive than the technology development part,” he says.
Flyvbjerg suggests another reason, one to which newer organizations may succumb more easily. “Often people get surprised at the end,” he explains. “If people have been hiding the uncertainties, which is very common human behavior, then you will actually have a blowout… where all of a sudden reality hits the project and all the unpleasant, hidden things appear.”
These unavoidable features of the development cycle combine with cognitive bias to make even the best-laid plans go awry. But to make matters worse, even if reference class forecasting could roughly estimate the chance of unknown unknowns cropping up, it's hard to get advance funding for what-if scenarios.
"Historically you can say on the average project you might get somewhere between 5 and 10 percent of surprise that you just couldn’t foresee," says Thirkettle, but "you can't say give me an extra so many million because I may get a launch failure." Available money tends to get spent, so any rainy-day funds have to be carefully earmarked. This paradox means that even though planners may expect delays, initially authorized plans often don't reflect that wisdom, making them closer to a best case scenario than a firm promise.
And space just exaggerates that challenge. "You tend to be fairly close to the state of the art so you can get surprises no matter how much discipline you try to put into things," Thirkettle says.
