A Blog by Jonathan Low


Mar 19, 2017

Why Technology Doesn't Impress Us As Much As It Used To

Have we become too dependent? Bored with the miraculous we have come to take for granted? Or is technology evolving towards its own interests, not necessarily aligned with those of humans? JL

Ian Bogost comments in The Atlantic:

So many ordinary objects and experiences have become technologized—made dependent on computers, sensors, and other apparatuses meant to improve them—that they have also ceased to work in their usual manner. Technology is also more precarious than it once was. Unstable, and unpredictable. At least from the perspective of human users. From the vantage point of technology, if it can be said to have a vantage point, it's evolving separately from human use.
“No… it’s a magic potty,” my daughter used to lament, age 3 or so, before refusing to use a public restroom stall with an automatic-flush toilet. As a small person, she was accustomed to the infrared sensor detecting erratic motion at the top of her head and violently flushing beneath her. Better, in her mind, just to delay relief than to subject herself to the magic potty’s dark dealings.
It’s hardly just a problem for small people. What adult hasn’t suffered the pneumatic public toilet’s whirlwind underneath them? Or again when attempting to exit the stall? So many ordinary objects and experiences have become technologized—made dependent on computers, sensors, and other apparatuses meant to improve them—that they have also ceased to work in their usual manner. It’s common to think of such defects as matters of bad design. That’s true, in part. But technology is also more precarious than it once was. Unstable, and unpredictable. At least from the perspective of human users. From the vantage point of technology, if it can be said to have a vantage point, it's evolving separately from human use.
* * *
“Precarity” has become a popular way to refer to economic and labor conditions that force people—and particularly low-income service workers—into uncertainty. Temporary labor and flexwork offer examples. That includes hourly service work in which schedules are adjusted ad hoc and just-in-time, so that workers don’t know when or how often they might be working. For low-wage food service and retail workers, for instance, that uncertainty makes budgeting and time-management difficult. Arranging for transit and childcare is difficult, and even more costly, for people who don’t know when—or if—they’ll be working.
Such conditions are not new. As union-supported blue-collar labor declined in the 20th century, the service economy took over its mantle absent its benefits. But the information economy further accelerated precarity. For one part, it consolidated existing businesses and made efficiency its primary concern. For another, economic downturns like the 2008 global recession facilitated austerity measures both deliberate and accidental. Immaterial labor also rose—everything from the unpaid, unseen work of women in and out of the workplace, to creative work done on-spec or for exposure, to the invisible work everyone does to construct the data infrastructure that technology companies like Google and Facebook sell to advertisers.
But as it has expanded, economic precarity has birthed other forms of instability and unpredictability—among them the dubious utility of ordinary objects and equipment.
The contemporary public restroom offers an example. Infrared-sensor flush toilets, fixtures, and towel-dispensers are sometimes endorsed on ecological grounds—they are said to save resources by regulating them. But thanks to their overzealous sensors, these devices increase water or paper consumption substantially. Toilets flush three times instead of once. Faucets open at full blast. Towel dispensers mete out paper in portions so miserly that people take more than they need. Instead of saving resources, these apparatuses mostly save labor and management costs. When a toilet flushes incessantly, or when a faucet shuts off on its own, or when a towel dispenser discharges only six inches of paper as a hand waves under it, it reduces the need for human workers to oversee, clean, and supply the restroom.
Given its connection to the hollowing-out of labor in the name of efficiency, automation is most often lamented for its inhumanity, a common grievance of bureaucracy. Take the interactive voice response (IVR) telephone system. When calling a bank or a retailer or a utility for service, the IVR robot offers recordings and automated service options to reduce the need for customer service agents—or to discourage customers from seeking them in the first place.
Once decoupled from their economic motivations, devices like automatic-flush toilets acclimate their users to apparatuses that don’t serve users well in order that they might serve other actors, among them corporations and the sphere of technology itself. In so doing, they make that uncertainty feel normal.
It’s a fact most easily noticed when using old-world gadgets. To flush a toilet or open a faucet by hand offers almost wanton pleasure given how rare it has become. A local eatery near me whose interior design invokes the 1930s features a bathroom with a white steel crank-roll paper towel dispenser. When spun on its ungeared mechanism, an analog, glorious measure of towel appears directly and immediately, as if sent from heaven.
* * *
Rolling out a proper portion of towel feels remarkable largely because that victory also seems so rare, even despite constant celebrations of technological accomplishment. The frequency with which technology works precariously has been obscured by culture’s obsession with technological progress, its religious belief in computation, and its confidence in the mastery of design. In truth, hardly anything works very well anymore.
The other day I attempted to congratulate my colleague Ed Yong for becoming a Los Angeles Times Book Prize finalist. I was tapping “Awesome, Ed!” into my iPhone, but it came out as “Aeromexico, Ed!” What happened? The iPhone’s touchscreen keyboard works, in part, by trying to predict what the user is going to type next. It does this invisibly, by increasing and decreasing the tappable area of certain keys based on the previous keys pressed. This method—perhaps necessary to make the software keyboard work at all—amplifies a mistype that autocorrect then completes. And so goes the weird accident of typing on today’s devices, when you hardly ever say what you mean the first time.
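The mechanism described here—invisibly resizing keys’ tappable areas based on what a character model expects next—can be sketched in a few lines. The probability table, key geometry, and function names below are invented for illustration; Apple’s actual keyboard model is proprietary and far more elaborate.

```python
# Sketch of dynamic key-target resizing: keys a simple character
# model predicts as likely next letters get larger invisible hit
# areas, so near-miss taps snap to them. All values are invented
# for illustration, not Apple's real implementation.

BASE_WIDTH = 1.0  # nominal key width in arbitrary units

# Toy model: after typing "awe", 's' is far likelier than 'r'.
NEXT_CHAR_PROB = {
    ("awe", "s"): 0.6,
    ("awe", "r"): 0.05,
}

def hit_width(context: str, key: str) -> float:
    """Scale a key's tappable width by its predicted likelihood."""
    p = NEXT_CHAR_PROB.get((context, key), 0.1)  # default prior
    return BASE_WIDTH * (0.5 + p)  # likely keys grow, unlikely ones shrink

def resolve_tap(context: str, nearby_keys: list[str]) -> str:
    """Given keys near the tap point, pick the one with the widest target."""
    return max(nearby_keys, key=lambda k: hit_width(context, k))
```

After “awe,” a tap landing between *s* and *r* resolves to *s*, because the enlarged target absorbs the near miss—and when the model guesses wrong, that same enlargement amplifies the mistype for autocorrect to finish off.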
The effects of business consolidation and just-in-time logistics offer another example. Go to Amazon.com and search for an ordinary product like a pair of shoes or a toaster. Amazon wants to show its users as many options as possible, so it displays anything it can fulfill directly or whose fulfillment it can facilitate via one of many catalog partnerships. In some cases, one size or color of a particular shoe might be available direct from Amazon, shipped free or fast or via its Prime two-day delivery service, while another size or color might come from a third party, shipped later or at increased cost. There is no easy way to discern what’s truly in stock.
Digital distribution has also made media access more precarious. Try explaining to a toddler that the episodes of “Mickey Mouse Clubhouse” that were freely available to watch yesterday via subscription are suddenly available only via on-demand purchase. Why? Some change in digital licensing, probably, or the expiration of a specific clause in a distribution agreement. Then try explaining that when the shows are right there on the screen, just the same as they always have been.
Or, try looking for some information online. Google’s software displays results based on a combination of factors, including the popularity of a web page, its proximity in time, and the common searches made by other people in a geographic area. This makes some searches easy and others difficult. Looking for historical materials almost always brings up Wikipedia, thanks to that site’s popularity, but it doesn’t necessarily fetch results based on other factors, like the domain expertise of its author. As often as not, Googling obscures more than it reveals.
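The factor-combination the paragraph describes can be caricatured as a weighted sum. The weights and field names here are invented for illustration—real search ranking combines hundreds of undisclosed signals—but the sketch shows how heavily weighting popularity lets a Wikipedia-like page outrank a domain expert’s page on every query.

```python
# Toy ranking score combining the factors named in the text:
# page popularity, recency, and regional query interest.
# Weights and fields are invented for illustration; real search
# engines use hundreds of undisclosed signals.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    popularity: float      # e.g. normalized link-based score, 0..1
    recency: float         # 0..1, newer pages closer to 1
    local_interest: float  # 0..1, how often nearby users query the topic

def score(page: Page, w_pop=0.6, w_rec=0.25, w_loc=0.15) -> float:
    """Weighted combination of ranking signals."""
    return (w_pop * page.popularity
            + w_rec * page.recency
            + w_loc * page.local_interest)

def rank(pages: list[Page]) -> list[Page]:
    """Order results by descending score."""
    return sorted(pages, key=score, reverse=True)
```

With popularity weighted this heavily, a page like Wikipedia (popularity 0.95) beats a specialist’s page (popularity 0.2) even if the specialist scores higher on every other factor—precisely the dynamic by which Googling can obscure expertise.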
Most of these failures don’t seem like failures, because users have so internalized their methods that they apologize for them in advance. The best defense against instability is to rationalize uncertainty as intentional—and even desirable.
* * *
The common response to precarious technology is to add even more technology to solve the problems caused by earlier technology. Are the toilets flushing too often? Revise the sensor hardware. Is online news full of falsehoods? Add machine-learning AI to separate the wheat from the chaff. Are retail product catalogs overwhelming and confusing? Add content filtering to show only the most relevant or applicable results.
But why would new technology reduce rather than increase the feeling of precarity? The more technology multiplies, the more it amplifies instability. Things already don’t quite do what they claim. The fixes just make things worse. And so, ordinary devices aren’t likely to feel more workable and functional as technology marches forward. If anything, they are likely to become even less so.
Technology’s role has begun to shift, from serving human users to pushing them out of the way so that the technologized world can service its own ends. And so, with increasing frequency, technology will exist not to serve human goals, but to facilitate its own expansion.
This might seem like a crazy thing to say. What other purpose do toilets serve than to speed away human waste? But no matter its ostensible function, precarious technology separates human actors from the accomplishment of their actions, acclimating people to the idea that devices are not really there for them, but as means to accomplish those devices’ own, secret goals.
This truth has been obvious for some time. Facebook and Google, so the saying goes, make their users into their products—the real customer is the advertiser or data speculator preying on the information generated by the companies’ free services. But things are bound to get even weirder than that. When automobiles drive themselves, for example, their human passengers will not become masters of a new form of urban freedom, but rather a fuel to drive the expansion of connected cities, in order to spread further the gospel of computerized automation. If artificial intelligence ends up running the news, it will not do so in order to improve citizens’ access to the information necessary to make choices in a democracy, but to further cement the supremacy of machine automation over human editorial in establishing what is relevant.
There is a dream of computer technology’s end, in which machines become powerful enough that human consciousness can be uploaded into them, facilitating immortality. And there is a corresponding nightmare in which the evil robot of a forthcoming, computerized mesh overpowers and destroys human civilization. But there is also a weirder, more ordinary, and more likely future—and it is the one most similar to the present. In that future, technology’s and humanity’s goals split from one another, even as the latter seems ever more yoked to the former. Like people ignorant of the plight of ants, and like ants incapable of understanding the goals of the humans who loom over them, so technology is becoming a force that surrounds humans, that intersects with humans, that makes use of humans—but not necessarily in the service of human ends. It won’t take a computational singularity for humans to cede their lives to the world of machines. They’ve already been doing so, for years, without even noticing.

