A Blog by Jonathan Low

 

Jan 6, 2019

What If Tech Leaders Have Already Lost Control Of The Technology They Created?

The machines are learning, as intended, but the result is that individual human influence over their decisions is waning as the millions of people providing inputs and the sheer mass of data take charge.

The question is whether executives at Google, Facebook et al could retake control even if they wanted to. JL


Nick Bilton comments in Vanity Fair:

Back in the early 1970s, the notion of machines thinking for themselves seemed as far-fetched and futuristic as landing a rover on Mars. (But) we’ve reached an inflection point. “Once, programmers wrote the instructions to the machines. Since the machines were controlled by instructions, those who wrote the instructions controlled the machines.” Code has come alive: algorithms sift through our search histories, purchases, and location to anticipate our desires. Serendipity has been hacked. The Internet has become a living organism. Companies no longer control the machines they’ve built, or the algorithms that have been touched by hundreds of thousands of engineers, none of whom control a single input or output on the platform anymore.
Back in the early 1970s, America was on the precipice of a new and unprecedented technological epoch. A small, square piece of plastic called the floppy disc was about to hit the market. Intel was perfecting the first microprocessors. Engineers were developing a prototype of a cell phone, the DynaTAC. Notably, a once fantastical subdivision of computer science, artificial intelligence, was becoming all the rage.
In those pivotal, if embryonic days, the notion of machines thinking for themselves seemed as far-fetched and futuristic as landing a rover on Mars. News outlets were thrilled by the possibilities. One exception was Marvin Minsky, a mathematician and computer scientist at M.I.T., better known as a founding “father of artificial intelligence.” While Minsky believed that A.I. might solve the world’s problems, he also recognized how it could all go drastically awry. In an interview with Life magazine in November of 1970, Minsky warned: “Once the computers get control, we might never get it back. We would survive at their sufferance.” In one of his more famous premonitions, he posited, “If we’re lucky, the [machines] might decide to keep us as pets.”
Almost half a century later, we’re starting to see the outlines of the world Minsky fearfully presaged, one in which the machines really do control us more than we control them. No, we haven’t invented Skynet, the fictional neural network from the Terminator franchise, nor are we threatened by rogue Mechanical Men as predicted by Asimov. If HAL 9000 existed today, he’d be piloting a Tesla, not a spaceship. But in other meaningful ways, our present reality begins to suggest the early stages of a sci-fi dystopia. Google and Facebook track your every move as you float around the Web, aggregating the most intimate details of your private life for sale to advertisers. Surveillance cameras, some now powered by Amazon, track our wanderings, while Alexa, Google Home, and Facebook’s Portal monitor our conversations. Make a dumb joke about Robin Byrd and then turn on your iPhone—Instagram may be surfacing an ad for Allbirds.
At the same time, the randomness of human experience has become constrained. Netflix will recommend a new show you’re guaranteed to love, but at what cost? You’re no longer going to stroll down the wrong aisle in the video store and stumble across an incredible indie film, or pick up a book someone misplaced in the bookstore. Serendipity, once a totem of the tech lingua franca, has now been hacked. You’re certainly not going to try a new restaurant unless an algorithm recommends it. The same products that provide entertaining diversions or make our lives easier also come with hidden costs. Scrolling through photos of celebrities on Instagram also provides Facebook with the tracking data it sells to advertisers. The same YouTube recommendation engine that provides children with educational videos can also be hijacked by algorithmic content producers to churn out endless digital nightmares. Soon, self-driving cars will be determining who should live and die in an accident, which will be great if you’re the one who gets to live. Soon enough, as Marc Andreessen noted recently, we’re going to have the health-care technology to predict our heart attacks, our strokes. We may one day be able to bootstrap our lives so that we know when we’ll die, perhaps forcing us to reverse engineer when we need to go on Tinder to find our spouses, or check LinkedIn to change jobs.
Could 2019 be the year that these and other emergent technologies evolve from merely creepy to potentially totalitarian? In a New Year’s Day column published on Edge, a Web site devoted to discussions about science, technology, and philosophy, George Dyson, the science historian and author, argues that we’ve reached an inflection point. “Once it was simple: programmers wrote the instructions that were supplied to the machines,” Dyson writes. “Since the machines were controlled by these instructions, those who wrote the instructions controlled the machines.” Today, code itself has come alive: algorithms sift through our search histories, credit-card purchases, and geolocation to model our personalities and anticipate our desires. For this, a small number of people such as Mark Zuckerberg, Jeff Bezos, Sergey Brin, and Larry Page, have become unimaginably rich.
At the beginning of the essay, Dyson cites the novel Childhood’s End, written by Arthur C. Clarke in 1953, which tells the story of a peaceful alien invasion of Earth by mysterious “Overlords” who “bring many of the same conveniences now delivered by the Keepers of the Internet to Earth.” As Dyson points out, this story, much like our own story, “does not end well.”
Are there still human hands at the wheel? Or, like a passenger in a self-driving Uber, are we condemned to the back seat, with only our smartphones to provide feedback? Dyson posits that Silicon Valley is “through the looking glass,” to the point where tech products take on a life of their own. Google’s search engine, for instance, “is no longer a model of human knowledge, it is human knowledge,” he writes. “What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls.” Each time we mere humans type a query into Google, we’re altering the results of the algorithm, which in turn affects the next person. That might not seem like much of a problem, but when the results control what we do and think—which road we drive down and when, the opinions we form based on the news we see—and there is no single person actively defining those results, we’re simply at the mercy of the machine that delivers them to us.
The Internet, in other words, has become like a living organism. “If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself,” Dyson writes. “The successful social network is no longer a model of the social graph, it is the social graph.” Going to Facebook, Twitter, Amazon, or Google may seem harmless and may feel like it’s making our life easier, but in reality every time you Like that photo or type into that search box or order that next Kindle book, all because it’s easy, it’s akin to feeding “Audrey” in Little Shop of Horrors.
This, Dyson and countless others argue, is why we should be terrified by these massive companies that currently control the world. Perhaps most prescient is Dyson’s warning about what we have let these tech giants become. “We imagine that individuals, or individual algorithms, are still behind the curtain somewhere, in control. We are fooling ourselves,” he writes. “The new gatekeepers, by controlling the flow of information, rule a growing sector of the world.” And yet the companies that do rule the world are no longer in control of the machines they’ve built and the algorithms that have been touched by hundreds of thousands of engineers, none of whom control a single input or output on the platform anymore. The result is a runaway train the size of a continent that has led to things like fake news and Russian bots and Donald J. Trump and Sears going bankrupt and that mom-and-pop store on Main Street going out of business. There is no question that democracy is deteriorating, and technology is hastening that atrophy.
In part this is because no single person controls the algorithm anymore—especially the people at the very top of these companies, who gain the most financially from how much we use their products and services. There are over a million people working for Google, Amazon, Apple, and Microsoft. While a large number of them are stocking things in warehouses or helping you fix your iPhone, tens of thousands of engineers are rewriting the code that answers all of our questions and desires. It’s almost as if these companies have become too big not to fail society. No single engineer, or even thousands of them, can see every angle of how these platforms are becoming rulers over our own thinking, and what they might be used for. And yet at the same time, they are being used for everything. When I recently asked a former Facebook engineer what it was like working for the social network, and whether they felt any responsibility for how it was being used nefariously, they replied, “You don’t really see it from that perspective. You’re just excited to see two-plus billion people using something you partially coded.” In other words: I don’t care if the thing I make is bad for the world, just as long as people are using it. Zuckerberg, I’m sure, is more concerned with how many people don’t use his platform than with how he affects the people who do.
Sooner than you think, every facet of your life, from the car in your driveway to the lights in your living room, will be ruled by algorithms, too. The next version of Amazon won’t just suggest which book you should buy next—it will automatically restock your fridge with Amazon products that an algorithm has decided you’ll like or need. Sure, it sounds like a utopia not to have to go to the grocery store, but it also sounds an awful lot like the end game Minsky predicted in 1970, where we become “pets” to the computers. Can it be stopped? If we really want it to. The most obvious logic would be to unplug the computers, realize we screwed it all up this time around, and start fresh. But as history has proven over and over, we humans are not big on obvious, or logic. We can, however, take it into our own hands, and each of us has the ability to stop feeding the machines—to stop feeding Audrey that drop of blood—as much. This can be done by using the tech giants less, and trying to find alternatives. Or, even better, by using technology less than we do, and returning to our analog life of getting lost and trying new things, not at the behest of an algorithm. But we likely won’t. Let’s just hope our new Overlords treat us well and, when they fully take over, they let us sleep curled up at the end of the bed.
