A Blog by Jonathan Low

 

Nov 6, 2012

Why We Can't Seem to Solve Big Problems

A photo of an astronaut's footprint on the moon's surface came to symbolize both the power of collective human action and mankind's fragility when placed against the vastness of the universe.

That was more than four decades ago (43 years, to be exact). As America contemplates its future and ponders the choice of who will lead it there - wherever and whatever 'there' is - the notion of the nation coming together in common cause about anything seems so remote as to be unimaginable.

In American history there have been similar periods where fearfulness and uncertainty about the future have led to a turtle-like withdrawal inside the shell of the oceanically protected national borders. But this feels different, because most of the fiercest disagreements concern internal, strictly national issues - to the extent there is such a thing in a technologically driven, globalized economy. And, of course, that realization lies at the base of this insecurity. The simple fact of having to share. Of having to cede authority - however illusory it may have been for the past generation. Of having not to embrace change, but to acknowledge that change has already happened. And that not everyone who thought themselves a winner beforehand can honestly believe they still are.

The result, as the following article articulates, is that the big problems have become harder to solve because it is so difficult to get agreement on addressing the smaller challenges whose solution is required before the larger ones can be successfully tackled.

Big challenges like overcoming poverty, securing a robust legal system, feeding the world's hungry - or colonizing Mars - demand cooperation and collaboration amongst myriad individuals and organizations. It is not necessary that they agree on every detail or that they like each other, but they must be able to envision a common future and set goals to realize it.

That tolerance of diverse opinions, respect for the contributions of others, and ability to work in tandem with those whose methods may seem strange are necessary if the US and the rest of the world are to secure that common future.

The throwaway line 'don't sweat the small stuff' has it backwards. It may well be more accurate to say that one shouldn't sweat the big stuff: it will take care of itself once the small stuff gets sorted out. JL

Jason Pontin comments in MIT Technology Review:
On July 21, 1969, Buzz Aldrin climbed gingerly out of Eagle, Apollo 11's lunar module, and joined Neil Armstrong on the Sea of Tranquility. Looking up, he said, "Beautiful, beautiful, magnificent desolation." They were alone; but their presence on the moon's silent, gray surface was the culmination of a convulsive collective effort.

Eight years before, President John F. Kennedy had asked the United States Congress to "commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth." His challenge disturbed the National Aeronautics and Space Administration's original plan for a stepped, multi-generational strategy: Wernher von Braun, NASA's chief of rocketry, had thought the agency would first send men into Earth's orbit, then build a space station, then fly to the moon, then build a lunar colony. A century hence, perhaps, humans would travel to Mars. Kennedy's goal was also absurdly ambitious. A few weeks before his speech, NASA had strapped an astronaut into a tiny capsule atop a converted military rocket and shot him into space on a ballistic trajectory, as if he were a circus clown; but no American had orbited the planet. The agency didn't really know if what the president asked could be done in the time he allowed, but it accepted the call.

This required the greatest peacetime mobilization in the nation's history. Although NASA was and remains a civilian agency, the Apollo program was possible only because it was a lavishly funded, semi-militarized project: all the astronauts (with one exception) had been Air Force pilots or naval aviators; many of the agency's middle-aged administrators had served in the Second World War in some capacity; and the director of the program itself, Samuel Phillips, was an Air Force general officer, drafted into service because of his effective management of the Minuteman missile program. In all, NASA spent $24 billion, or about $180 billion in today's dollars, on Apollo; at its peak in the mid-1960s, the agency enjoyed more than 4 percent of the federal budget. The program employed around 400,000 people and demanded the collaboration of about 20,000 companies, universities, and government agencies.

If Apollo commanded a significant portion of the treasure of the world's richest nation and the coöperation of all its estates, that was because Kennedy's challenge required NASA to solve a bewildering number of smaller problems decades ahead of technology's evolutionary schedule. The agency's solutions were often inelegant. To escape from orbit, NASA constructed 13 giant, single-use multistage rockets, capable of lifting 50 tons of payload and generating 7.6 million pounds of thrust. Only an ungainly modular spacecraft could be flown by the deadline; but docking the command and lunar modules midflight, sending the lunar module to the moon's surface, and then reuniting the modules in lunar orbit demanded a kind of spastic space dance and forced the agency's engineers to develop and test a long series of astronautical innovations. Men died, including the crew of Apollo 1, who burned in the cabin of their command module. But before the program ended in 1972, 24 men flew to the moon. Twelve walked on its surface, of whom Aldrin, following the death of Armstrong last August, is now the most senior.

Why did they go? They brought back little—841 pounds of old rocks, Aldrin's smuggled aesthetic bliss, and something most of the 24 emphasized: a new sense of the smallness and fragility of our home. (Jim Lovell, not untypically, remembered, "Everything that I ever knew—my life, my loved ones, the Navy—everything, the whole world, was behind my thumb.") The cynical, mostly correct answer is that Kennedy wanted to demonstrate the superiority of American rocketry over Soviet engineering: the president's challenge was made in May of 1961, little more than a month after Yuri Gagarin became the first human in space. But it does not adequately explain why the United States made the great effort it did, nor does it convey how the lunar landings were understood at the time.

Kennedy's words, spoken at Rice University in 1962, provide a better clue:

"But why, some say, the moon? Why choose this as our goal? . . . Why climb the highest mountain? Why, 35 years ago, fly the Atlantic? . . . We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills . . ."

Apollo was not seen only as a victory for one of two antagonistic ideologies. Rather, the strongest emotion at the time of the moon landings was of wonder at the transcendent power of technology. From his perch in Lausanne, Switzerland, the writer Vladimir Nabokov cabled the New York Times, "Treading the soil of the moon, palpating its pebbles, tasting the panic and splendor of the event, feeling in the pit of one's stomach the separation from terra—these form the most romantic sensation an explorer has ever known."

To contemporaries, the Apollo program occurred in the context of a long series of technological triumphs. The first half of the century produced the assembly line and the airplane, penicillin and a vaccine for tuberculosis; in the middle years of the century, polio was on its way to being eradicated; and by 1979 smallpox would be eliminated. More, the progress seemed to possess what Alvin Toffler dubbed an "accelerative thrust" in Future Shock, published in 1970. The adjectival swagger is pardonable: for decades, technology had been increasing the maximum speed of human travel. During most of history, we could go no faster than a horse or a boat with a sail; by the First World War, automobiles and trains could propel us at more than 100 miles an hour. Every decade thereafter, cars and planes sped humans faster. By 1961, a rocket-powered X-15 had been piloted to more than 4,000 miles per hour; in 1969, the crew of Apollo 10 flew at 25,000. Wasn't it the very time to explore the galaxy—"to blow this great blue, white, green planet or to be blown from it," as Saul Bellow wrote in Mr. Sammler's Planet (also 1970)?

Perhaps the most influential photograph from the Apollo lunar landings: Buzz Aldrin's footprint in the moon's gray, powdery surface.

Since Apollo 17's flight in 1972, no humans have been back to the moon, or gone anywhere beyond low Earth orbit. No one has traveled faster than the crew of Apollo 10. (Since the last flight of the supersonic Concorde in 2003, civilian travel has become slower.) Blithe optimism about technology's powers has evaporated, too, as big problems that people had imagined technology would solve, such as hunger, poverty, malaria, climate change, cancer, and the diseases of old age, have come to seem intractably hard.

I remember sitting in my family's living room in Berkeley, California, watching the liftoff of Apollo 17. I was five; my mother admonished me not to stare at the fiery exhaust of the Saturn 5 rocket. I vaguely knew that this was the last of the moon missions—but I was absolutely certain that there would be Mars colonies in my lifetime. What happened?

Parochial Explanations

That something happened to humanity's capacity to solve big problems is a commonplace. Recently, however, the complaint has developed a new stridency among Silicon Valley's investors and entrepreneurs, although it is usually expressed a little differently: people say there is a paucity of real innovations. Instead, they worry, technologists have diverted us and enriched themselves with trivial toys.

The motto of Founders Fund, a venture capital firm started by Peter Thiel, a cofounder of PayPal, is "We wanted flying cars—instead we got 140 characters." Founders Fund matters, because it is the investment arm of what is known locally as the "PayPal Mafia," currently the dominant faction in Silicon Valley, which remains the most important area on the planet for technological innovation. (Other members include Elon Musk, the founder of SpaceX and Tesla Motors; Reid Hoffman, executive chairman of LinkedIn; and Keith Rabois, chief operating officer of the mobile payments company Square.) Thiel is caustic: last year he told the New Yorker that he didn't consider the iPhone a technological breakthrough. "Compare [it] with the Apollo program," he said. The Internet is "a net plus—but not a big one." Twitter gives 500 people "job security for the next decade," but "what value does it create for the entire economy?" And so on. Max Levchin, another cofounder of PayPal, says, "I feel like we should be aiming higher. The founders of a number of startups I encounter have no real intent of getting anywhere huge ... There's an awful lot of effort being expended that is just never going to result in meaningful, disruptive innovation."

But Silicon Valley's explanation of why there are no disruptive innovations is parochial and reductive: the markets—in particular, the incentives that venture capital provides entrepreneurs—are to blame. According to Founders Fund's manifesto, "What Happened to the Future?," written by Bruce Gibney, a partner at the firm: "In the late 1990s, venture portfolios began to reflect a different sort of future ... Venture investing shifted away from funding transformational companies and toward companies that solved incremental problems or even fake problems ... VC has ceased to be the funder of the future, and instead become a funder of features, widgets, irrelevances." Computers and communications technologies advanced because they were well and properly funded, Gibney argues. But what seemed futuristic at the time of Apollo 11 "remains futuristic, in part because these technologies never received the sustained funding lavished on the electronics industries."

The argument, of course, is wildly hypocritical. PayPal's capos made their fortunes in public stock offerings and acquisitions of companies that did more or less trivial things. Levchin's last startup, Slide, was a Founders Fund investment: it was acquired by Google in 2010 for about $200 million and shuttered earlier this year. It developed Facebook widgets such as SuperPoke and FunWall.

But the real difficulty with Silicon Valley's explanation is that it is insufficient to the case. The argument that venture capitalists lost their appetite for risky but potentially important technologies clarifies what's wrong with venture capital and tells us why half of all funds have provided flat or negative returns for the last decade. It also usefully explains how a collapse in nerve reduced the scope of the companies that got funded: with the exception of Google (which wants to "organize the world's information and make it universally accessible and useful"), the ambitions of startups founded in the last 15 years do seem derisory compared with those of companies like Intel, Apple, and Microsoft, founded from the 1960s to the late 1970s. (Bill Gates, Microsoft's founder, promised to "put a computer in every home and on every desktop," and Apple's Steve Jobs said he wanted to make the "best computers in the world.") But the Valley's explanation conflates all of technology with the technologies that venture capitalists like: traditionally, as Gibney concedes, digital technologies. Even during the years when VCs were most risk-happy, they preferred investments that required little capital and offered an exit within eight to 10 years. The venture capital business has always struggled to invest profitably in technologies, such as biotechnology and energy, whose capital requirements are large and whose development is uncertain and lengthy; and VCs have never funded the development of technologies that are meant to solve big problems and possess no obvious, immediate economic value. The account is a partial explanation that forces us to ask: putting aside the personal-computer revolution, if we once did big things but do so no longer, then what changed?

Silicon Valley's explanation has this fault, too: it doesn't tell us what should be done to encourage technologists to solve big problems, beyond asking venture capitalists to make better investments. (Founders Fund promises to "run the experiment" and "invest in smart people solving difficult problems, often difficult scientific or engineering problems.") Levchin, Thiel, and Garry Kasparov, the former world chess champion, had planned a book, to be titled The Blueprint, that would "explain where the world's innovation has gone." Originally intended to be released in March of this year, it has been indefinitely postponed, according to Levchin, because the authors could not agree on a set of prescriptions.

Let's stipulate that venture-backed entrepreneurialism is essential to the development and commercialization of technological innovations. But it is not sufficient by itself to solve big problems, nor could its relative sickliness by itself undo our capacity for collective action through technology.

Irreducible Complexities

The answer is that these things are complex, and that there is no one simple explanation.

Sometimes we choose not to solve big technological problems. We could travel to Mars if we wished. NASA has the outline of a plan—or, in its bureaucratic jargon, a "design reference architecture." To a surprising degree, the agency knows how it might send humans to Mars and bring them home. "We know what the challenges are," says Bret Drake, the deputy chief architect for NASA's human spaceflight architecture team. "We know what technologies, what systems we need" (see "The Deferred Dreams of Mars"). As Drake explains, the mission would last about two years; the astronauts would spend 12 months in transit and 500 days on the surface, studying the geology of the planet and trying to understand whether it ever harbored life. Needless to say, there's much that NASA doesn't know: whether it could adequately protect the crew from cosmic rays, or how to land them safely, feed them, and house them. But if the agency received more money or reallocated its current spending and began working to solve those problems now, humans could walk on the Red Planet sometime in the 2030s.

We won't, because there are, everyone feels, more useful things to do on Earth. Going to Mars, like going to the moon, would follow upon a political decision that inspired or was inspired by public support. But almost no one feels Buzz Aldrin's "imperative to explore" (see the astronaut's sidebar).

Sometimes we fail to solve big problems because our institutions have failed. In 2010, less than 2 percent of the world's energy consumption was derived from advanced renewable sources such as wind, solar, and biofuels. (The most common renewable sources of energy are still hydroelectric power and the burning of biomass, which means wood and cow dung.) The reason is economic: coal and natural gas are cheaper than solar and wind, and petroleum is cheaper than biofuels. Because climate change is a real and urgent problem, and because the main cause of global warming is carbon dioxide released as a by-product of burning fossil fuels, we need renewable energy technologies that can compete on price with coal, natural gas, and petroleum. At the moment, they don't exist.

Happily, economists, technologists, and business leaders agree on what national policies and international treaties would spur the development and broad use of such alternatives. There should be a significant increase in public investment for energy research and development, which has fallen in the United States from a height of 10 percent in 1979 to 2 percent of total R&D spending, or just $5 billion a year. (Two years ago, Bill Gates, Xerox chief executive Ursula Burns, GE chief executive Jeff Immelt, and John Doerr, the Silicon Valley venture capitalist, called for a threefold increase in public investments in energy research.) There should be some kind of price on carbon, now a negative externality, whether it is a transparent tax or some more opaque market mechanism. There should be a regulatory framework that treats carbon dioxide emissions as pollution, setting upper limits on how much pollution companies and nations can release. Finally, and least concretely, energy experts agree that even if there were more investment in research, a price on carbon, and some kind of regulatory framework, we would still lack one vital thing: sufficient facilities to demonstrate and test new energy technologies. Such facilities are typically too expensive for private companies to build. But without a practical way to collectively test and optimize innovative energy technologies, and without some means to share the risks of development, alternative energy sources will continue to have little impact on energy use, given that any new technology will be more expensive at first than fossil fuels.
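As a rough back-of-the-envelope sketch (using only the figures quoted above, not official budget data), the arithmetic behind those proposals looks something like this:

    # Illustrative arithmetic based solely on the figures cited in this article;
    # these are not authoritative budget numbers.
    energy_rd_now = 5e9      # U.S. energy R&D spending, roughly $5 billion a year
    share_now = 0.02         # about 2 percent of total R&D spending
    share_1979 = 0.10        # about 10 percent of total R&D spending in 1979

    implied_total_rd = energy_rd_now / share_now               # ~$250 billion a year
    energy_rd_at_1979_share = implied_total_rd * share_1979    # ~$25 billion a year
    energy_rd_tripled = 3 * energy_rd_now                      # ~$15 billion a year

    print(f"Implied total R&D spending:   ${implied_total_rd / 1e9:.0f}B per year")
    print(f"Energy R&D at the 1979 share: ${energy_rd_at_1979_share / 1e9:.0f}B per year")
    print(f"Energy R&D tripled:           ${energy_rd_tripled / 1e9:.0f}B per year")

On these numbers, even the threefold increase that Gates, Burns, Immelt, and Doerr called for (roughly $15 billion a year) would still fall well short of what the 1979 share of total R&D spending would imply today (roughly $25 billion a year).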

Less happily, there is no hope of any U.S. energy policy or international treaties that reflect this intellectual consensus, because one political party in the United States is reflexively opposed to industrial regulations and affects to doubt that human beings are causing climate change, and because the emerging markets of China and India will not reduce their emissions without offset benefits that the industrialized nations cannot provide. Without international treaties or U.S. policy, there will probably be no competitive alternative sources of energy in the near future, barring what is sometimes called an "energy miracle."

Sometimes big problems that had seemed technological turn out not to be so, or could more plausibly be solved through other means. Until recently, famines were understood to be caused by failures in food supply (and therefore seemed addressable by increasing the size and reliability of the supply, potentially through new agricultural or industrial technologies). But Amartya Sen, a Nobel laureate economist, has shown that famines are political crises that catastrophically affect food distribution. (Sen was influenced by his own experiences. As a child he witnessed the Bengal famine of 1943: three million displaced farmers and poor urban dwellers died unnecessarily when wartime hoarding, price gouging, and the colonial government's price-controlled acquisitions for the British army made food too expensive. Sen demonstrated that food production was actually higher in the famine years.) Technology can improve crop yields or systems for storing and transporting food; better responses by nations and nongovernmental organizations to emerging famines have reduced their number and severity. But famines will still occur because there will always be bad governments.

Yet the hope that an entrenched problem with social costs should have a technological solution is very seductive—so much so that disappointment with technology is inevitable. Malaria, which the World Health Organization estimates affected 216 million people in 2010, mostly in the poor world, has resisted technological solutions: infectious mosquitoes are everywhere in the tropics, treatments are expensive, and the poor are a terrible market for drugs. The most efficient solutions to the problem of malaria turn out to be simple: eliminating standing water, draining swamps, providing mosquito nets, and, most of all, increasing prosperity. Combined, they have reduced malarial infections. But that hasn't stopped technologists such as Bill Gates and Nathan Myhrvold, the former chief technology officer of Microsoft (who writes about the role of private investors in spurring innovation), from funding research into recombinant vaccines, genetically modified mosquitoes, and even mosquito-zapping lasers. Such ideas can be ingenious, but they all suffer from the vanity of trying to impose a technological solution on what is a problem of poverty.

Finally, sometimes big problems elude any solution because we don't really understand the problem. The first successes of biotechnology in the late 1970s were straightforward: breakthroughs in manufacturing, in which recombinant E. coli bacteria were coaxed into producing synthetic versions of insulin or human growth hormone, proteins whose functions we thoroughly understood. Further breakthroughs in biomedicine have been more difficult to achieve, however, because we have struggled to understand the fundamental biology of many diseases. President Richard Nixon declared war on cancer in 1971; but we soon discovered there were many kinds of cancer, most of them fiendishly resistant to treatment, and it is only in the last decade, as we have begun to sequence the genomes of different cancers and to understand how their mutations express themselves in different patients, that effective, targeted therapies have come to seem viable. (To learn more, see "Cancer Genomics.") Or consider the "dementia plague," as Stephen S. Hall has. As the populations of the industrialized nations age, it is emerging as the world's most pressing health problem: by 2050, palliative care in the United States alone will cost $1 trillion a year. Yet we understand almost nothing about dementia and have no effective treatments. Hard problems are hard.

What to Do

It's not true that we can't solve big problems through technology; we can. We must. But all these elements must be present: political leaders and the public must care to solve a problem, our institutions must support its solution, it must really be a technological problem, and we must understand it.

The Apollo program, which has become a metaphor for technology's capacity to solve big problems, met these criteria, but it is an irreproducible model for the future. This is not 1961: there is no galvanizing historical context akin to the Cold War, no likely politician who can heroize the difficult and dangerous, no body of engineers who yearn for the productive regimentation they had enjoyed in the military, and no popular faith in a science-fictional mythology such as exploring the solar system. Most of all, going to the moon was easy. It was only three days away. Arguably, it wasn't even solving much of a problem. We are left alone with our day, and the solutions of the future will be harder won.

We don't lack for challenges. A billion people want electricity, millions are without clean water, the climate is changing, manufacturing is inefficient, traffic snarls cities, education is a luxury, and dementia or cancer will strike almost all of us if we live long enough. In this special package of stories, we examine these problems and introduce you to the indefatigable technologists who refuse to give up trying to solve them.
