A Blog by Jonathan Low


Nov 29, 2019

Why There Is No Truth Anymore, Only Inputs and Outputs

As algorithmic social engineering becomes more sophisticated and personalized, human notions of desire are increasingly mediated by forces seeking results tailored both to the individual and to the organization generating the targeted impulse.

Facts become perceptions that can be manipulated with ever more powerful data. Truth, in such a system, becomes the degree to which an outcome can be predicted. JL


Jamie Bartlett reports in OneZero:

Far more data plus far better computers equals more sophisticated ways of understanding you. Much of this will involve pattern spotting. This profiling will be inferred, meaning automated systems generate insights about people based on their behavior and not because they’ve stated a preference. The social media influencer will become algorithmic and automated. Ads you receive will be for you alone. The more personal and intrusive the profiling, the greater the opportunity to manipulate and control. The result will be ad targeting so effective that you may question the notion of free will. There is no truth anymore, only inputs and outputs.
It’s obvious that in 50 years the amount of data collected about us will be much, much larger than it is now. By the middle of the century, all of us will leave a comprehensive, high-definition, information-rich digital exhaust everywhere we go. As the cost of adding computer chips to objects falls, our baby monitors, coffee machines, Fitbits, energy meters, clothes, books, fridges, and facial expressions will all create data points. So will our public spaces, lampposts, storefronts, and traffic lights. A 70-year-old in 2069 will have had most of her life datafied. This is what most analysts complaining about Facebook or Google miss: The profiling and targeting of people has only just started.
Far more powerful computers than we have now will crunch through this. Though Moore’s law, the “golden rule” dictating that computing power doubles every two years, has recently slowed, it’s a safe bet that future computers will be several orders of magnitude more powerful and cheaper than ours.
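As a rough sanity check on that claim, here is a hypothetical back-of-envelope calculation, assuming (generously, since the author notes the pace has slowed) that the two-year doubling held for the full 50 years:

```python
# Back-of-envelope: what an unbroken Moore's-law pace would imply over the
# essay's 50-year horizon. Purely illustrative; the essay only claims
# "several orders of magnitude," partly because the doubling has slowed.
years, doubling_period = 50, 2
doublings = years / doubling_period        # 25 doublings
factor = 2 ** doublings
print(f"~{factor:,.0f}x more compute")     # ~33,554,432x, about seven orders of magnitude
```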
Far more data plus far better computers equals significantly more sophisticated ways of understanding you. Much of this will involve pattern spotting — supercomputers churning through vast fields of random information to draw weird and unsettling correlations between the data you create and what you like. Though this kind of targeting is already obfuscated to some extent — it can be hard to understand exactly why a particular video has been served to you on YouTube, for example — these correlations will eventually make even less sense to us, creating new ethical problems.
People do things, feel things, think things, and buy things for reasons they don’t understand.
The driving force behind this is a simple but powerful point: People do things, feel things, think things, and buy things for reasons they don’t understand. Who knows what correlations will be thrown up? I have no idea, and neither will anyone else. All that matters is that these correlations exist. There is no truth anymore, the experts will say, only inputs and outputs.
Some theoretical examples:
  • People aged 30 to 35 who eat eggs on Thursday and have a below-average heart rate are more likely to be adrenaline junkies who enjoy…
  • People who watched YouTube videos between 7 and 9 a.m. as teenagers and travel by public transit are more likely to be traditionalists and…
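A toy sketch of what this kind of blind correlation mining looks like in practice, borrowing the invented features from the examples above; all data, feature names, and effect sizes here are synthetic assumptions, not anyone’s real pipeline:

```python
# Toy "inputs and outputs" correlation mining: churn through arbitrary
# behavioral features and surface whatever predicts an outcome, with no
# causal story attached. All features and data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

behavior = pd.DataFrame({
    "eggs_on_thursday":   rng.integers(0, 2, n),
    "avg_resting_hr":     rng.normal(65, 8, n),
    "teen_morning_video": rng.integers(0, 2, n),
    "transit_trips_week": rng.poisson(6, n),
})

# The "output" the advertiser cares about (here: clicked an extreme-sports ad).
clicked = (0.4 * behavior["eggs_on_thursday"]
           - 0.03 * behavior["avg_resting_hr"]
           + rng.normal(0, 1, n)) > -1.8

# Rank features purely by strength of association; nobody asks why.
print(behavior.corrwith(clicked.astype(float)).abs().sort_values(ascending=False))
```

The point of the sketch is that the system never needs a theory of eggs or heart rates; association strength alone decides what gets targeted.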
Far more of this profiling will be inferred, meaning automated systems will generate insights about people based on their behavior and not because they’ve explicitly stated a preference. Good examples of this in practice today are the psychographic techniques used by Cambridge Analytica. The group calculated personality types based on surveys and cross-referenced the results against Facebook likes to build a predictive personality model based on likes alone. Despite all the outrage, there’s little evidence this actually worked. But at some point, it will.
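To make that inference step concrete, here is a minimal, hypothetical sketch of the likes-to-personality approach: a small survey group provides trait scores, a model is fit from their likes, and everyone else’s personality is then inferred from likes alone. The data is synthetic and the ridge-regression choice is my assumption:

```python
# Minimal sketch of likes-to-personality inference, in the spirit of the
# psychographic approach described above. All data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5_000, 300
likes = rng.integers(0, 2, size=(n_users, n_pages))   # 1 = user liked the page

# Pretend a survey gave a seed group an "openness" score; in reality only
# that group is surveyed, and the model extrapolates to everyone else.
true_weights = rng.normal(0, 1, n_pages)
openness = likes @ true_weights + rng.normal(0, 5, n_users)

X_train, X_test, y_train, y_test = train_test_split(likes, openness, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)        # learn likes -> trait
print("held-out R^2:", round(model.score(X_test, y_test), 2))

# From here on, personality is inferred: no survey needed, just likes.
predicted_openness = model.predict(X_test)
```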
In fact, psychographics will be just one of a cluster of techniques that extract something personal about you without your knowledge. Your facial expressions give away your innermost feelings. (One area of research around this, called emotion analysis, tries to determine emotional states from images and video via facial-expression analysis.) Your Fitbit data, combined with data generated by your clothing, fridge, and smart meter, will determine when you are depressed. I expect inferred data sets to become the big data scandal of the 2030s and 2040s, since it won’t be clear whether this is personal data that should be kept private.
I should quickly add another trend, even though it will be with us far sooner than 2069: total personalization. I don’t know what devices or platforms you’ll be using in 50 years, but any ads you receive through them will be for you and you alone — and not because you’re in a bucket called “Medium readers ages 18 to 60 using an Apple device.”
The end result will be ad targeting so effective that you may well question the notion of free will altogether. Via complex analysis that no one will understand, taking in data from your fridge, smart car, work calendar, facial expressions, and toilet, your smart TV will fire off a personalized ad about buying that shiny new handgun just at the moment you’re starting to feel… well, in fact, you won’t know what the reason is. It won’t matter, either.
Of course, politicians will also pick up these techniques, which will present significant questions around manipulation and election legitimacy. But that’s for another essay.
People will complain that it’s unfair, illegal, creepy, a breach of privacy. There will, surely, be a concurrent wave of tech-based countermeasures: personalized Faraday cages and facial recognition–blocking masks. I hope someone’s working on these. There may even be a full-scale social rejection of these tracking technologies. But experience suggests this will only delay the development of advanced ad targeting rather than stop it. Privacy, after all, is increasingly about controlling the data that’s collected about you, rather than preventing it from being collected in the first place.
We will also have to decide what to do with the data of the dead. It’s estimated that there will be more dead people than living ones on Facebook by 2065. Should their data be part of the complex algorithm that helps work out other people’s preferences?
There is no truth anymore, the experts will say, only inputs and outputs.
But the most significant change of all will be automation. The trends affecting other industries are coming to advertising as well: The future is a fully automated ad creation and delivery system capable of reaching millions of consumers with personalized, dynamic content. Ad people will of course say something warm about intuition and the human touch, and no doubt general strategy and creative direction will be set by the well-paid higher-ups. Premium advertisers will promise “human-only” work. But according to a recent survey, A.I. experts reckon there is a 50 percent chance of A.I. outperforming us in all tasks within 50 years. Gifted, intelligent, and talented as I’m sure they are, ad execs are presumably included.
I anticipate a company called iData, which will stand for something like: Individualized Dynamic Automated Targeting Applications. (I can even imagine its motto: “Know your customers better than they know themselves.”) iData will use natural language generation, a machine learning technique that works out what sort of message each person would best respond to and automatically creates it without any human involvement. It will iterate as it goes, constantly testing and refining. The underlying technology is being developed now but will take several years to mature. Some advertisers are already experimenting with early prototypes. Campbell’s, for example, has used this approach to advertise soup when weather data in a local area indicates it is cold or raining and to suggest a set of dishes and recipes given an ingredient provided by a user. In 2017, the senior digital director of Coca-Cola signaled his intention to use A.I. to help generate music and scripts for the company’s ads.
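As a concrete sketch of that “constantly testing and refining” loop, here is a Thompson-sampling bandit choosing among machine-written message variants. iData is the author’s invention, and the variant names and click rates below are mine; the generation step is reduced to a fixed set of candidates:

```python
# Hypothetical "iterate, test, refine" loop for auto-generated ad copy.
# Thompson sampling shifts traffic toward whichever variant earns clicks.
import random

variants = {"warm": 0.030, "urgent": 0.045, "smug": 0.020}  # hidden true CTRs
stats = {v: {"clicks": 0, "shows": 0} for v in variants}

def choose() -> str:
    # Draw a plausible CTR for each variant from its Beta posterior; pick the best.
    draws = {v: random.betavariate(s["clicks"] + 1, s["shows"] - s["clicks"] + 1)
             for v, s in stats.items()}
    return max(draws, key=draws.get)

for _ in range(100_000):                     # each impression is an experiment
    v = choose()
    stats[v]["shows"] += 1
    if random.random() < variants[v]:        # simulated user response
        stats[v]["clicks"] += 1

for v, s in stats.items():                   # traffic concentrates on the winner
    print(v, s["shows"], round(s["clicks"] / max(s["shows"], 1), 4))
```

In a full system, the fixed variant list would be replaced by freshly generated text, but the feedback loop is the same.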
Taking humans out of the loop will introduce a troubling dynamic where gut checks are jettisoned for greater efficiency. What if it turns out that we respond best to the most outrageous base stereotyping? Or perhaps you can sell 15.3 percent more antidepressants to someone if you catch them at a certain point in the week and use messages that play to low self-esteem. What if payday loan ads and gambling offers were precisely worded and targeted at the very moment when someone is most vulnerable or short on cash? To a machine, all of this is irrelevant.
Automation will have other uses as well. The past decade has seen the rise of the social media influencer who posts branded content that appears more authentic than typical advertising. This, too, will become algorithmic and automated, perhaps leading to nano-influencers getting automatically paid nano-amounts whenever an algorithm spots them sharing brands with their small number of followers. If you happen to be wearing a pair of Doc Martens in a photo that’s shared by 100 people, for instance, you’ll receive 15 cents.
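Reverse-engineering the essay’s own numbers gives a trivially simple payout rule; the per-share rate and function below are assumptions for illustration, and the brand-spotting step (image recognition) is taken as given:

```python
# Toy payout rule for the nano-influencer scenario above. The rate is
# derived from the essay's example: 15 cents for a photo shared 100 times.
RATE_PER_SHARE = 0.15 / 100   # dollars per share, an assumed flat rate

def nano_payout(shares: int) -> float:
    return round(shares * RATE_PER_SHARE, 4)

print(nano_payout(100))   # 0.15, matching the Doc Martens example
print(nano_payout(37))    # 0.0555
```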
In the end, despite all the whizzy innovation, correlation, and automation, we will still face the same problem we’ve always had: The more personal and intrusive the profiling, the more effective the ads, which also creates greater opportunity to manipulate and control. When that’s powered by machines and algorithms that no one understands, the underlying tension won’t change — but the stakes will be higher.
