A Blog by Jonathan Low

 

Feb 4, 2021

The Next Major Cyberattack Is Already Under Way

Data is money. The attacks will never stop as long as that is the case. Which is to say, they will never stop. JL

Jill Lepore reports in The New Yorker:

The recklessness of the people who have been buying and selling the vulnerability of the rest of us is not just part of an intelligence-agency game; it has been the ethos of Wall Street and Silicon Valley for decades. Move fast and break things; the money will trickle down; click, click, click, click, buy, buy, buy, like, like, like, like, expose, expose, expose. This raises the question of the horse’s whereabouts relative to the barn. If you listen, you can hear the thunder of hooves.

In the nightmare, sirens caterwaul as ambulances career down ice-slicked, car-crashed streets whose traffic lights flash all three colors at once (they’ve been hacked by North Korea) during a climate-catastrophic blizzard, bringing pandemic patients to hospitals without water or electricity—pitch-black, all vaccinations and medications spoiled (the power grid has been hacked by Iran)—racing past apartment buildings where people are freezing to death in their beds, families huddled together under quilts, while, outside the darkened, besieged halls of government, men wearing fur hats and Kevlar vests (social media has been hacked by Russia), flashlights strapped to their rifles, chant, “Q is true! Q is true!”

“someone should do something,” reads the T-shirt worn by one of Nicole Perlroth’s sources, a hacker from New Zealand, in “This Is How They Tell Me the World Ends: The Cyberweapons Arms Race” (Bloomsbury). Someone should. But who? And do what? And about which of the Biblical plagues facing humankind? Perlroth is a longtime cybersecurity reporter for the Times, and her book makes a kind of Hollywood entrance, arriving when the end of the world is nigh, at least in the nightmare that, every night, gains on the day.

Perlroth is interested in one particular plague—governments using hacking as a weapon of war—but her book raises the question of whether that’s the root of a lot of other evils. For seven years, Perlroth investigated the market in “zero-days” (pronounced “oh-days”); her book is the story of that chase, and telling that story, which gets pretty technical, requires a good bit of decoding. “A zero-day is a software or hardware flaw for which there is no existing patch,” she explains. Zero-days “got their name because, as with Patient Zero in an epidemic, when a zero-day flaw is discovered, software and hardware companies have had zero days to come up with a defense.” A flaw can be harmless, but zero-days represent vulnerabilities that can be turned into weapons. And, as Perlroth demonstrates, governments have been buying them and storing them in vaults, like so many vials of the bubonic plague.

It’s tempting to say either I can’t worry about this right now or Didn’t we already know this? For all the sensationalism of “This Is How They Tell Me the World Ends”—not least the title—much here fails to surprise: all code has bugs; it’s virtually impossible and prohibitively expensive to write perfect code; and bad actors can exploit those bugs to break into everything from your iPad to the Hoover Dam. Companies and governments therefore pay hackers to find bugs, so that they can be fixed, or exploited. What other choice do they have? you ask. Perlroth’s reply is It’s a lot worse than you think and If there aren’t other choices, it’s time to invent some.

Perlroth’s storytelling is part John le Carré and more parts Michael Crichton—“Tinker, Tailor, Soldier, Spy” meets “The Andromeda Strain.” Because she’s writing about a boys’ club, there’s also a lot of “Fight Club” in this book. (“The first rule of the zero-day market was: Nobody talks about the zero-day market. The second rule of the zero-day market was: Nobody talks about the zero-day market.”) And, because she tells the story of the zero-day market through the story of her investigation, it’s got a Frances McDormand “Fargo” quality, too; in one sequence, Perlroth, pregnant, questions Italian hackers in Miami bars. (They tell her that they live by a samurai code of honor. “Bushido, I thought. More like Bullshit,” she writes.) Reading how Perlroth found out about what’s going on is spellbinding, but it can obscure what happened when. Here, as I read it, is that sequence of events, the spell, unbound.

In the nineteen-sixties, computers, which had been used to store and process information, became communications devices. “Life will be happier for the on-line individual,” J. C. R. Licklider, the visionary behind ARPANET, predicted in 1968. But, for all the benefits this development would bring, it struck many people as having unknowable effects—“What all this will do to the world I cannot guess,” the head of Bell Labs wrote that year—and it struck other observers as potentially quite dangerous. Also in 1968, the Pentagon’s Defense Science Board Task Force on Computer Security concluded that “contemporary technology cannot provide a secure system in an open environment.” In a follow-up report from 1972—the year ARPANET was publicly demonstrated, at the D.C. Hilton, during the first-ever meeting of the International Conference on Computer Communication—the lead author, James P. Anderson, argued that communication by computers offered a “unique opportunity” for espionage and sabotage; virtually undefended and “totally inadequate to withstand attack,” computers were “a uniquely attractive target for malicious (hostile) action,” and, because of the growing connections among computers, a single attack could take down an entire network.

American intelligence agencies had long preferred offense to defense. As Perlroth writes, “Unimaginable volumes of nation-state secrets—previously relegated to locked file cabinets—were suddenly being transmitted in ones and zeroes and freely available to anyone with the creativity and skill to find them.” In the nineteen-seventies, in a project run jointly by the U.S. Navy, the National Security Agency, and the C.I.A., divers placed a tap on a Soviet cable on the ocean floor north of Japan; they leeched information out of it until the breach was discovered, in 1981. Two years later, the French Embassy in Moscow discovered that the Soviets had bugged its teleprinters. Then, in 1984, an N.S.A. project that involved taking apart and replacing every single piece of electrical equipment in the American Embassy in Moscow discovered an almost undetectable bug in the Embassy’s I.B.M. Selectric typewriters: a single extra coil on the power switch, containing a miniature magnetometer. Every tap of every key was being collected and communicated by radio.

Meanwhile, computer programs got longer and longer, from tens of lines of code to tens of millions, controlling ships and airplanes and missiles. American intelligence agencies began to consider the possibility of catastrophic breaches. In the nineteen-eighties, Jim Gosler, working for the Adversarial Analysis Group at Sandia National Laboratories, pioneered research in detecting vulnerabilities in computer code (in this case, in the code that controlled the nuclear arsenal). As Perlroth argues, Gosler demonstrated that the code was “at once a hacker’s paradise and a national security nightmare.” In 1989, the N.S.A. brought Gosler on board as a “visiting scientist.” In 1996, he took over the C.I.A.’s Clandestine Information Technology Office. His role seems to have been to explain to people at Fort Meade and, later, at Langley that no computer and no computer program can ever be faultless, an argument with implications for both defensive and offensive operations. Between his two appointments, the Internet opened to commercial traffic, and people throughout the world started uploading and downloading. Perlroth, interviewing Gosler about how dangerous all this is, looks down at her iPhone: “And yet here we were, entrusting our entire digital lives—passwords, texts, love letters, banking records, health records, credit cards, sources, and deepest thoughts—to this mystery box, whose inner circuitry most of us would never vet, run by code written in a language most of us will never fully understand.”

In the dot-com nineties, cybersecurity firms sold antivirus software; penetration-testing companies sold the service of breaking through your firewall, to show you how they got in. (“We Protect People Like You from People Like Us” is the motto of one pen-tester.) They all peddled an amalgam of fear, uncertainty, and doubt that, in the tech world, had come to be abbreviated as FUD. Some of those private companies realized that it wasn’t efficient to maintain a big staff of analysts when they could just pay bounties to hackers all over the world to figure out how to break into a system. Governments and intelligence agencies, too, started offering bounties for bugs, paying hackers, brokers, and, above all, defense contractors. Some of these companies, like the Miami-based “100% offensive” Immunity, Inc., and the Maryland-based Vulnerability Research Labs (which was acquired in 2010 by a giant defense contractor), are staffed with ex-intelligence agents, selling zero-days that are worth millions of dollars. After 9/11, the price for bugs went through the roof. With the launch of Google, and especially of Facebook, the amount of data to be found online mushroomed, and so did the ease of government surveillance. Perlroth writes, “It was often hard to see where the NSA’s efforts ended and Facebook’s platform began.” Only the arrival of the iPhone, in 2007, proved a greater boon to government surveillance.


Cyberattacks made headlines, and then vanished. In 2008, Russia got into a network at the Pentagon; hackers broke into the campaigns of both Barack Obama and John McCain; the next year, North Korea compromised the Web sites of everything from the Treasury Department to the New York Stock Exchange. In 2010, a computer worm called Stuxnet, created by the U.S. and Israel in an operation approved by George W. Bush and continued by Obama, was discovered to have devastated Iran’s nuclear program. Perlroth, who started covering cybersecurity for the Times a year later, is arguing that, if you build a worm like that, it’s eventually going to come back and eat you. When the worm escaped, Joe Biden, then the Vice-President, suspected Israel of hastening the program, and breaking it. “Sonofabitch,” he allegedly said. “It’s got to be the Israelis.” It infected a hundred countries and tens of thousands of machines before it was stopped. “Somebody just used a new weapon, and this weapon will not be put back in the box,” Michael Hayden, a former N.S.A. director, said. That somebody was the United States. It had built a boomerang.

The market for zero-days became a global gold rush. You could buy zero-days from anyone, anywhere; no rules obtained. “When it came to zero-days, governments weren’t regulators,” Perlroth writes. “They were clients.” After Chinese hackers attacked Google in 2010, the company started paying bounty hunters a maximum of $1,337 a pop (in hacker code, the numerals spell out “leet,” short for “élite”); soon, that got bumped up to $31,337 (“eleet”). Microsoft and other major players offered encryption services, which had the effect of raising the price of zero-day exploits. In 2013, the Times called Perlroth into a windowless closet in the office of Arthur Sulzberger, Jr., the publisher, to pore over the documents leaked by Edward Snowden. She was supposed to study attempts by the world’s top intelligence agencies to crack digital encryption but saw that “the NSA didn’t need to crack those encryption algorithms when it had acquired so many ways to hack around them”—that is, by zero-days. “The agency appeared to have acquired a vast library of invisible backdoors into almost every major app, social media platform, server, router, firewall, antivirus software, iPhone, Android phone, BlackBerry phone, laptop, desktop, and operating system.”

Then there are all the mercenaries. Perlroth reports that, in 2015, a company named Zerodium offered a million dollars for a chain of zero-days that could break into an iPhone remotely; in 2019, Google offered $1.5 million for a way to gain remote access to an Android device. Some of those mercenaries are Americans, who sell zero-days to foreign governments. In 2015, a former N.S.A. hacker, David Evenden, was part of a team that observed e-mail correspondence from Michelle Obama on behalf of the United Arab Emirates while he was working for a contractor called CyberPoint. Evenden got in touch with Perlroth to share his story, and to warn other former N.S.A. employees to be careful if they worked for foreign companies.

If it was hard to get people in the know to talk on the record about the zero-day archive, it was harder to get people in power to understand its danger. Perlroth points out that the practice of paying hackers to figure out ways to break into other countries’ power grids, weapons systems, transportation infrastructure, and the like by way of holes in Adobe Reader or Firefox or a fitness app was an extension of pre-digital modes of warfare—the way you’d, say, bomb a bridge or take out a munitions factory—that simply no longer apply. During the Cold War, Perlroth writes, “Americans spied on Russian technology, while Russians backdoored American typewriters.” No more. Instead, people across the world use Microsoft and Google and iPhones. “Increasingly, NSA’s work was riddled with conflicts of interest and moral hazards,” Perlroth argues:

Nobody seemed to be asking what all this breaking and entering and digital exploitation might mean for the NSA’s sponsors—American taxpayers—who now relied on NSA-compromised technology not only for communication but for banking, commerce, transportation, and health care. And nobody apparently stopped to ask whether in their zeal to poke a hole and implant themselves in the world’s digital systems, they were rendering America’s critical infrastructure—hospitals, cities, transportation, agriculture, manufacturing, oil and gas, defense; in short, everything that undergirds our modern lives—vulnerable to foreign attacks.

In 2012, Iranian hackers destroyed the data of thirty thousand computers used by a Saudi oil company. That year, Republicans in the Senate filibustered a bill that would have required American companies to meet minimum cybersecurity standards. Two years later, North Korean hackers attacked Sony. (As Perlroth observes, the press coverage mainly concerned gossip that was found in Sony executives’ e-mails, not North Korea’s ability to hack into American companies.) Russia, in the same period, was “implanting itself into the American grid,” hacking into systems that controlled basic infrastructure, from pipelines to power switches. By 2015, Russians were inside the State Department, the White House, and the Pentagon. The hackers didn’t turn things off; they just sat there, waiting. Beginning in 2014, in anticipation of the 2016 election, they fomented civil unrest through fake Twitter and Facebook accounts, sowing disinformation. They broke into the computers of the Democratic National Committee. As with the Sony attack, the press mostly reported the gossip found in the e-mails of people like John Podesta. All the while, as Perlroth emphasizes, Russian hackers were also invading election and voter-registration systems in every state in the country. Donald Trump’s response, once he was in office, was to deny that the Russians had done anything at all, and to get rid of the White House cybersecurity coördinator.

In the spring of 2017, still-unknown hackers calling themselves the Shadow Brokers infiltrated the N.S.A.’s zero-day archive, a box of digital picklocks. They walked into the cyber equivalent of Fort Knox, and cleaned the place out. But it was worse than that, because what they stole was not gold but cyberweapons, the keys to the kingdom. By the next month, hackers from North Korea were using some of those picklocks to break into the computer systems of, among other places, British hospitals, German railways, Russian banks, a French automaker, Indian airlines, Chinese universities, the Japanese police, FedEx, and electrical-utility companies all over the United States. The attack, which was accompanied by ransom demands, came to be called WannaCry. The cost to tech companies, Perlroth reports, was in the tens of billions of dollars.

One month later, Russia tried out its kill-the-grid attack on Ukraine. It could have been much worse were it not for the fact that most of Ukraine’s systems are not online. “What had saved Ukraine is precisely what made the United States the most vulnerable nation on earth,” Perlroth observes. Every second, Americans plug into the Internet a hundred and twenty-seven devices, from refrigerators and thermostats to library catalogues and bicycles. During the pandemic, the infrastructures of testing, care, and vaccine development and distribution have all been attacked, in what amounts to a cyber pandemic. In March, 2020, as the federal government first began to frame a response to COVID-19, hackers attacked the Department of Health and Human Services. That spring, hackers started attacking hospitals around the world that were treating coronavirus patients, shutting down thousands of computers with ransomware. In October, the Cybersecurity and Infrastructure Security Agency (CISA), a new division within the Department of Homeland Security, tweeted, “There is an imminent and increased cybercrime threat to U.S. hospitals and health care providers.” In November, Microsoft reported that state-sponsored hackers in Russia and North Korea had repeatedly attacked at least seven companies involved in the research and production of COVID-19 vaccines.

Perlroth reports (and it’s hard to tell if this is hyperbole) that the N.S.A. has a hundred analysts working on cyber offense for every analyst working on cyber defense. In the fall, CISA dedicated itself to protecting the election. On Election Day, the agency issued updates every three hours. The goal, as CISA’s head, Chris Krebs, said, was for November 3rd to be “just another Tuesday on the Internet.” On November 17th, after Krebs again publicly declared the election to have been free and fair—he tweeted, “59 election security experts all agree, ‘in every case of which we are aware, these claims (of fraud) either have been unsubstantiated or are technically incoherent’ ”—Trump fired him. The feared Election Day attacks never came, not only because CISA worked well but also, Perlroth suggests, because they were no longer necessary. “Our candidate is chaos,” a Kremlin operative told a reporter in 2016. That candidate stalked the nation in 2016 and again in 2020.

In December, when CISA had no appointed director or deputy director, it was reported that, for months, hackers, likely employed by the Russian government, had broken into Microsoft Office 365 systems at the Departments of Treasury and Commerce, partly by way of holes in software updates from a company that supplied network-monitoring cybersecurity software. It has since become clear that the breach reached into the Centers for Disease Control, the Departments of Justice, Labor, Energy, Homeland Security, and State, and classified research centers including Los Alamos National Laboratory, in addition to hundreds of private companies. The scale of the breach, and its consequences, is not yet clear; so far, it’s too big to measure. Trump said he did not believe that Russia could have been involved; the federal government has not retaliated, at least publicly. Biden, in the days before he took office, spoke of actions that would include, and go beyond, sanctions. Meanwhile, the federal government is effectively insecure. So are most of the rest of us. While writing this essay, I got an “important security alert” from my employer: “Microsoft has informed us of an intrusion into Harvard’s Office 365 email service.”

The arrogant recklessness of the people who have been buying and selling the vulnerability of the rest of us is not just part of an intelligence-agency game; it has been the ethos of Wall Street and Silicon Valley for decades. Move fast and break things; the money will trickle down; click, click, click, click, buy, buy, buy, like, like, like, like, expose, expose, expose. Perlroth likes a piece of graffiti she once saw: “Move slowly and fix your shit.” Lock down the code, she’s saying. Bar the door. This raises the question of the horse’s whereabouts relative to the barn. If you listen, you can hear the thunder of hooves. ♦

An earlier version of this article misstated the level of David Evenden’s access to Michelle Obama’s e-mails and the kind of malware that Iranian hackers used against a Saudi oil company.
