A Blog by Jonathan Low

 

Jun 10, 2017

Who Actually Benefits From the Internet of Things?

Those who consume, interpret and re-sell data do.

But it isn't apparent that that includes the consumers who purchase the devices. Yet.  JL

Adam Greenfield reports in The Guardian:

Data is never “just” data. It is an asset, and Amazon will exploit it in every way its terms and conditions permit – including by using it to develop behavioural models that map our desires, so as to target them with even greater efficiency in the future. The aim of devices such as the Dash Button is to permit the user to accomplish commercial transactions with as little conscious thought as possible.
In San Francisco, a young engineer hopes to “optimise” his life through sensors that track his heart rate, respiration and sleep cycle. In Copenhagen, a bus running two minutes behind schedule transmits its location and passenger count to the municipal traffic signal network, which extends the time of the green light at each of the next three intersections long enough for its driver to make up some time. In Davao City in the Philippines, an unsecured webcam overlooks the storeroom of a fast food stand, allowing anyone to peer in on all its comings and goings.

Though it can often feel as if this colonisation of everyday life by connected devices proceeds of its own momentum, distinct ambitions are being served wherever and however the internet of things appears. The internet of things isn’t a single technology. About all that connects the various devices, services, vendors and efforts involved is the end goal they serve: capturing data that can then be used to measure and control the world around us.
Whenever a project has such imperial designs on our everyday lives, it is vital that we ask just what ideas underpin it and whose interests it serves. Although the internet of things retains a certain sprawling and formless quality, we can get a far more concrete sense of what it involves by looking at how it appears at each of three scales: that of our bodies (where the effort is referred to as the “quantified self”), our homes (“the smart home”) and our public spaces (“the smart city”). Each of these examples illuminates a different aspect of the challenge presented to us by the internet of things, and each has something distinct to teach us.

At the most intimate scale, the internet of things is visible in the form of wearable biometric sensors. The simplest of these are little more than networked digital pedometers, which count steps, measure the distance a person has traversed, and furnish an estimate of the calories burned in the course of this activity. More elaborate models measure heart rate, breathing, skin temperature and even perspiration.

If wearable biometric devices such as Fitbits and Apple Watches are, in theory, aimed at rigorous self-mastery, the colonisation of the domestic environment by similarly networked products and services is intended to deliver a very different experience: convenience. The aim of such “smart home” efforts is to short-circuit the process of reflection that stands between having a desire and fulfilling that desire by buying something.
Right now, the perfect example of this is a gadget being sold by Amazon, known as the Dash Button. Many internet-of-things devices are little more than some conventional object with networked connectivity tacked on. The Dash Button is the precise opposite, a thing in the world that could not have existed without the internet. I cannot improve on Amazon’s own description of this curious object and how it works, so I’ll repeat it here: “Amazon Dash Button is a Wi-Fi-connected device that reorders your favourite item with the press of a button. To use Dash Button, simply download the Amazon app from the Apple App Store or Google Play Store. Then, sign into your Amazon Prime account, connect Dash Button to Wi-Fi, and select the product you want to reorder. Once connected, a single press on Dash Button automatically places your order.”
In other words: single-purpose electronic devices, each dedicated to an individual branded item, that you press when you’re running low. Pressing a Dash Button specific to your preferred pet food, washing powder or bottled water automatically composes an order request to Amazon for that one product. I don’t for a second want to downplay the value of such a product for people who have ageing parents to look after, or kids to drop off at daycare, or for whom simply getting in the car to pick up some cat food may take an hour or more out of their day. But the benefit to the individual customer is tiny compared with what Amazon gains. Sure, you never run out of cat food. But Amazon gets data on the time and place of your need, as well as its frequency and intensity, and that data has value. It is an asset, and you can be sure that Amazon will exploit it in every way its terms and conditions permit – including by using it to develop behavioural models that map our desires in high resolution, so as to target them with even greater efficiency in the future.
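To make the mechanics concrete, a single-purpose ordering button amounts to very little code. The sketch below is hypothetical: the product ID, account token and payload fields are invented for illustration, since Amazon has not published its implementation.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a single-purpose ordering button. The product ID
# and account token are invented; this is not Amazon's actual API.
PRODUCT_ID = "cat-food-2kg"            # the one item this button can ever order
ACCOUNT_TOKEN = "owner-account-token"  # bound to a shopping account at setup time

def press_button():
    """A single physical press composes a complete order request."""
    order = {
        "product_id": PRODUCT_ID,
        "quantity": 1,
        "account": ACCOUNT_TOKEN,
        # The press itself is the valuable part: when and how often
        # the need arises is recorded alongside the purchase.
        "pressed_at": datetime.now(timezone.utc).isoformat(),
    }
    # In a real device this payload would be sent to the retailer's order
    # service over Wi-Fi; here we simply show what leaves the house.
    print(json.dumps(order, indent=2))

press_button()
```

Note what is absent: no price check, no confirmation screen, no moment of reflection. The press is the order, and the timestamp is the data.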
Again, the aim of devices such as the Dash Button is to permit the user to accomplish commercial transactions with as little conscious thought as possible – not even the few moments it takes to tap out commands on the touchscreen of a phone or tablet. The data on what the industry calls “conversion” is as clear as it is unremitting: with every box to tick or form to fill, the percentage of users that make it all the way to checkout tumbles. The fewer steps there are in a transaction, the more likely people are to spend their money.
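The arithmetic behind that claim is easy to illustrate. A minimal sketch, assuming an invented 10% drop-off at each step of a checkout flow (not an industry figure):

```python
# Hypothetical illustration of checkout attrition. The 10% drop-off per
# step is an invented round figure, not an industry statistic.
drop_off_per_step = 0.10
shoppers = 1000

for steps in (1, 3, 5, 8):
    completed = shoppers * (1 - drop_off_per_step) ** steps
    print(f"{steps} step(s): {completed:.0f} of {shoppers} shoppers complete the purchase")
```

Even with this made-up figure, an eight-step checkout loses more than half the shoppers who set out to buy; a button collapses the funnel to a single step.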
Manufacturers, enticed by the revenue potential of conquering the domestic environment, keep trying to eliminate these steps, in the hope that one of their connected products will become as essential to everyday life as the smartphone. The recent industry push toward the “smart home” is simply the latest version of this.
For the moment, this strategy is centred on so-called smart speakers, a first generation of which have now reached the market. These products include the Amazon Echo and Google Home, each of which is supposed to function as the command hub of a connected domestic environment. Amazon’s Echo is a simple cylinder, while the Google Home is a bevelled ovoid. But the physical form of such speakers is all but irrelevant, as their primary job is to function as a branded “virtual assistant”, providing a simple, integrated way to access the numerous digital controls scattered throughout the contemporary home – from lighting and entertainment to security, heating, cooling and ventilation systems.
Google, Microsoft, Amazon and Apple each offer their own such assistant, based on natural-language speech recognition. Most are given female names, voices and personalities, presumably based on research indicating that users of all genders prefer to interact with women. Apple’s is called Siri and will, according to reports, soon be getting its own device; Amazon’s is Alexa and Microsoft’s is Cortana, while one simply addresses Google’s Home offering as “Google.”

At first, such devices seem harmless enough. They sit patiently and quietly at the periphery of our awareness, and we only speak to them when we need them. But when we consider them more carefully, a more problematic picture emerges.
This is how Google’s assistant works: you mention to it that you’re in the mood for Italian food and, in the words of one New York Times article, it “will then respond with some suggestions for tables to reserve at Italian restaurants using, for example, the OpenTable app”.
This example shows that though the choices these assistants offer us are presented as neutral, they are based on numerous inbuilt assumptions that many of us would question if we were to truly scrutinise them.
Ask restaurateurs and front-of-house workers what they think of OpenTable, for example, and you will swiftly learn that one person’s convenience is another’s accelerated pace of work, or worse. You’ll learn that restaurants offering reservations via the service are, according to the website Serious Eats, “required to use the company’s proprietary floor-management system, which means leasing hardware and using OpenTable-specific software”, and that OpenTable retains ownership of all the data generated in this way. You’ll also learn that OpenTable takes a cut on reservations per seated diner, which obviously adds up to a significant amount on a busy night.
Conscientious diners have therefore been known to bypass the ostensible convenience of OpenTable, and make whatever reservations they have to by phone. By contrast, Google Home’s frictionless default to making reservations via OpenTable normalises the choice to use that service.
This is not accidental. It reflects the largely preconscious valuations, priorities and internalised beliefs of the people who devised Google Home. As throughout the industry, that is a remarkably homogeneous cohort of young designers and engineers. But more important than the degree of similarity they bear to one another is how different they are from everyone else. Internet-of-things devices are generally conceived by people who have completely assimilated services such as Uber, Airbnb and Apple Pay into their daily lives, at a time when figures from the Washington DC-based Pew Research Center suggest that a significant percentage of the population has never used or even heard of them. For the people who design these products, these services are normal, and so, over time, they become normalised for everyone else.
There are other challenges presented by this way of interacting with networked information. It’s difficult, for example, for a user to determine whether the options they are being offered by a virtual assistant result from what the industry calls an “organic” return – something that legitimately came up as the result of a search process – or from paid placement. But the main problem with the virtual assistant is that it fosters an approach to the world that is literally thoughtless, leaving users disinclined to sit out any prolonged frustration of desire, and ever less critical about the processes that result in gratification.
Virtual assistants are listening to everything that transpires in their presence, and are doing so at all times. As voice-activated interfaces, they must be constantly attentive in order to detect when the “wake word” that rouses them is spoken. In this way, they are able to harvest data that might be used to refine targeted advertising, or for other commercial purposes that are only disclosed deep in the terms and conditions that govern their use. The logic operating here is that of preemptive capture: the notion that companies such as Amazon and Google might as well trawl up everything they can, because no one knows what value might be derived from it in the future.
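A crude sketch of that always-listening loop makes the structure plain. The “audio” below is simulated as text so the example runs anywhere; real assistants use dedicated low-power hardware and proprietary keyword-spotting models, but the shape of the loop is the same: everything is heard, and only the wake word decides what happens next.

```python
import collections

# Hypothetical simulation of an always-listening wake-word loop.
# Audio is stood in for by text snippets so the sketch is runnable.
WAKE_WORD = "alexa"
ROOM_AUDIO = [
    "footsteps in the hallway",
    "a radio story about smart speakers",
    "alexa order more cat food",          # the wake word appears mid-sentence
    "dinner-table conversation",
]

recent = collections.deque(maxlen=3)      # rolling buffer of what was just heard

def send_to_cloud(context, command):
    # Stand-in for the upload step; in practice this is where data leaves the home.
    print("uploading:", list(context), "| command:", command)

for chunk in ROOM_AUDIO:                  # the microphone is, in effect, never off
    recent.append(chunk)
    if WAKE_WORD in chunk:                # only the wake word changes the device's behaviour
        send_to_cloud(recent, chunk)
```

How much of that rolling buffer is uploaded, retained or analysed is a policy choice made by the vendor, not a technical necessity.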
This leads to situations that might be comical, were it not for what they imply about the networking of our domestic environments. These stories circulate as cautionary tales: one of the best-known was the time the US National Public Radio network aired a story about the Amazon Echo, and various cues spoken on the broadcast were interpreted as commands by Echos belonging to members of the audience, causing domestic chaos.
Put aside for one moment the question of disproportionate benefit – the idea that you as the user derive a little convenience from your embrace of a virtual assistant, while its provider gets everything – all the data about your life and all its value. Let’s simply consider what gets lost in the ideology of convenience that underlies this conception of the internet of things. Are the constraints presented to us by life in the non-connected world really so onerous? Is it really so difficult to wait until you get home before you preheat the oven? And is it worth giving away so much, just to be able to do so remotely?
Most of us are by now aware that our mobile phones are constantly harvesting information about our whereabouts and activities. But we tend to be relatively ignorant of the degree to which the contemporary streetscape has also been enabled to collect information. This development is often called the “smart city”. If the ambition beneath the instrumentation of the body is ostensible self-mastery, and that of the home is convenience, the ambition at the heart of the smart city is nothing other than control – the desire to achieve a more efficient use of space, energy and other resources.
A broad range of networked information-gathering devices are increasingly being deployed in public space, including CCTV cameras; advertisements and vending machines equipped with biometric sensors; and the indoor micropositioning systems known as “beacons” that, when combined with a smartphone app, send signals providing information about nearby products and services.
The picture we are left with is that of our surroundings furiously vacuuming up information, every square metre of seemingly banal pavement yielding so much data about its uses and its users that nobody yet knows what to do with it all. And it is at this scale of activity that the guiding ideology of the internet of things comes into clearest focus.
The strongest and most explicit articulation of this ideology in the definition of a smart city has been offered by the house journal of the engineering company Siemens: “Several decades from now, cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service ... The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”
There is a clear philosophical position, even a worldview, behind all of this: that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion. As applied to the affairs of cities, this is effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)
Every aspect of this argument is questionable. Perhaps most obviously, the claim that anything at all is perfectly knowable is perverse. However thoroughly sensors might be deployed in a city, they will only ever capture what is amenable to being captured. In other words, they will not be able to pick up every single piece of information necessary to the formulation of sound civic policy.
Other, all-too-human distortions inevitably colour the data collected. For instance, people may consciously adapt to produce metrics favourable to them. A police officer under pressure to “make quota” may focus on infractions that she would ordinarily overlook, while conversely, her precinct commander, under pressure to present the city as ever-safer, may downgrade a felony assault to a simple misdemeanour. This is the phenomenon known to viewers of The Wire as “juking the stats,” and it is particularly likely to occur when financial or other incentives depend on achieving a performance threshold.
There is also the question of interpretation. Advocates of smart cities often seem to proceed as if it is self-evident that each of our acts has a single, salient meaning, which can be recognised, made sense of and acted upon remotely by an automated system, without any possibility of error. The most prominent advocates of this approach appear to believe that no particular act of interpretation is involved in making use of any data retrieved from the world in this way.
But data is never “just” data, and to assert otherwise is to lend inherently political and interested decisions an unmerited gloss of scientific objectivity. The truth is that data is easily skewed, depending on how it is collected. Different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few metres. Perceptions of risk in a neighbourhood can be transformed by slightly altering the taxonomy used to classify reported crimes. And anyone who has ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey.
The bold claim of “perfect” knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it is astonishing that any experienced engineer would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful.
The notion that there is one and only one solution to urban problems is also deeply puzzling. Cities are made up of individuals and communities who often have competing preferences, and it is impossible to fully satisfy all of them at the same time.
That such a solution, if it even existed, could be arrived at algorithmically is also implausible. Assume, for the sake of argument, that there did exist a master formula capable of balancing the needs of all of a city’s competing constituencies. It certainly would be convenient if this golden mean could be determined automatically and consistently. But the wholesale surrender of municipal management to an algorithmic toolset seems to place an undue amount of trust in the party responsible for authoring the algorithm.
If the formulas behind this vision of future cities turn out to be anything like the ones used in the current generation of computational models, life-altering decisions will hinge on the interaction of poorly defined and subjective values. The output generated by such a procedure may turn on half-clever abstractions, in which complex circumstances resistant to direct measurement are reduced to more easily determined proxy values: average walking speed stands in for the “pace” of urban life, while the number of patent applications constitutes an index of “innovation”, and so on.
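A toy version of such a model shows how much gets decided before any “objective” number appears. Every proxy, weight and figure below is invented for illustration, which is precisely the point: the choices are the model.

```python
# Hypothetical illustration of a proxy-based "city performance" score.
# The proxies, weights and figures are all invented for this sketch.
measurements = {
    "average_walking_speed_mps": 1.4,    # stands in for the "pace" of urban life
    "patent_applications_per_10k": 12,   # stands in for "innovation"
    "reported_crimes_per_10k": 35,       # stands in for "safety", and depends on how crimes are classified
}

weights = {
    "average_walking_speed_mps": 2.0,    # why 2.0 rather than 0.5? someone decided
    "patent_applications_per_10k": 1.0,
    "reported_crimes_per_10k": -0.5,     # a negative weight encodes a value judgment
}

score = sum(weights[k] * value for k, value in measurements.items())
print(f"composite city score: {score:.1f}")   # a single number, hiding every assumption above
```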
Quite simply, we need to understand that creating an algorithm intended to guide the distribution of civic resources is itself a political act. And, at least for now, nowhere in the current smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.

And finally, it is difficult to believe that any such findings would ever be translated into public policy in a manner free from politics. Policy recommendations derived from computational models are only rarely applied to questions as politically sensitive as resource allocation without some intermediate tuning taking place. Inconvenient results may be suppressed, arbitrarily overridden by more heavily weighted decision factors, or simply ignored.
As matters now stand, the claim of perfect competence that is implicit in most smart-city rhetoric is incommensurate with everything we know about the way technical systems work. It also flies in the face of everything we know about how cities work. The architects of the smart city have failed to reckon with the reality of power, and the ability of elites to suppress policy directions that don’t serve their interests. At best, the technocratic notion that the analysis of sensor-derived data would ever be permitted to drive municipal policy is naive. At worst, though, it ignores the lessons of history.
So, yes: the internet of things presents many new possibilities, and it would be foolish to dismiss those possibilities out of hand. But we would also be wise to approach the entire domain with scepticism, and in particular to resist the attempts of companies to gather ever more data about our lives – no matter how much ease, convenience and self-mastery we are told they are offering us.
