A Blog by Jonathan Low

 

Jan 21, 2018

Gaming the Growth in Algorithmic Surveillance

Will awareness lead to action, however subtle and passive-aggressive, that may ultimately result in vast quantities of tainted data? JL

John Herrman reports in the New York Times:

Maybe knowing that we’re being monitored by judgmental algorithms could affect our behavior. Machine surveillance can be fooled in small ways and big ones: by the creation of blacked-out spaces, the introduction of strategically tainted data and the performance of a particular version of the self. Knowing that our maps were watching us produced individual acts of defiance that amounted to collective acts of resistance.
In a series of studies published in 2012, two psychologists, Ara Norenzayan and Will Gervais, set out to test a simple question: When people think about God, do they feel or act as if they are being monitored? Subjects of varying religiosity were primed to think about God, then asked to complete self-evaluations based on statements like “I am sometimes irritated by people who ask favors of me” and “No matter whom I’m talking to, I’m always a good listener.” Results, researchers said, were consistent with what they call the “supernatural-monitoring hypothesis” — that is, that “thinking of God triggers the same social cognitive processes that are activated by real-time social surveillance.” A sense that you’re being seen — whether by a fellow human or by a supreme consciousness from which no thought can be withheld — does seem to change how you see yourself as well as how you behave.
Since 2012, online platforms have moved to the center of hundreds of millions more lives, popularizing their particular brands of social surveillance. Services like Facebook and Twitter and Instagram are inextricably tied to the experience of being monitored by others, which, if it doesn’t always produce “prosocial” behavior in the broad psychological sense, seems to have encouraged behaviors useful to the platforms themselves — activity and growth. These businesses serve many different purposes, but the one thing they have in common is that they have figured out new ways to monetize the powerful twin sensations of seeing and being seen by others.
But alongside this exhilarating, debilitating, empowering and terrifying human spectacle, these sites have also fostered — inadvertently — a new but not entirely unfamiliar feeling, that of being watched from above. There is little about the ruthless, marketlike social-media experience that primes you to think about any sort of god. But there are, I’ve noticed, persistent and ever-more-obvious experiences that remind you of another kind of higher power: the automated systems on which these platforms run.
Services that used to depend on asking me questions — What do you like? Whom do you want to follow? — have started making more assumptions based on my behavior. I’m reminded that my actions are being recorded and factored into my experience when I am greeted on Twitter by a series of recommendations derived, apparently, from things I’ve liked, looked at or reposted. I’m implicitly prompted to think about algorithmic surveillance when, the day after I wasted a few minutes scrolling through a cycling publication’s account, my Instagram app floods my Explore tab with photos of mountain bikes. Weeks after tapping an unfamiliar name on Facebook and scrolling through the recent posts — a postmarriage surname change, a new profile photo — this person, a former acquaintance I haven’t spoken to in years, is given the same feed placement as current close friends.
These experiences manifest beyond social networks as well. Searches for holiday gifts haunt web browsing, as Amazon “retargeting” ads place the products I looked at, briefly — a thermos, headphones, binoculars — next to, above and in the middle of news articles. They find their way back over the walls too, placing reminders of my recorded behavior in my Instagram feed — ads that say, “We’d like you to buy running gear,” not quite so loudly as they say, “We’ve got eyes everywhere.”
Of course, this is the deal we have entered into with such services: our data for their products. That this surveillance is happening is obvious, even if the ways we’re reminded of it can still be jarring. Spotify, the music-streaming service, recently began an ad campaign that mined and cheekily broadcast actual individual listening habits: “Dear person who played ‘Sorry’ 42 times on Valentine’s Day: What did you do?”; “To the person in NoLIta who started listening to holiday music way back in June: You really jingle all the way, huh?” The company was boldly showing off its intimate knowledge of current users in an effort to gain new ones — a harmlessly creepy example of a strategy that could easily backfire.
Lately I’ve been wondering if a growing awareness of this peculiar arrangement might have secondary consequences. Maybe knowing that we’re being monitored by judgmental algorithms could affect our behavior, too. If this is the case — if awareness of mechanical all-seeing eyes changes how we see and comport ourselves — then, well, how? I started thinking about a version of this question a few months ago, recording small instances where I could. What I ended up with was a list not of behavioral improvements or of flashes of self-aware accountability but of tiny, neurotic evasions.
For example, I thought better of searching for a friend with whom I had lost touch, knowing that his posts would then climb the ladder of my Facebook feed, turning a flash of guilt into a persistent reminder of my failure. For work, I sometimes need to watch videos made by a range of toxic, hateful or just unappealing vloggers, and I found that I’d avoid watching them on my phone, waiting instead until I could get to my laptop, where I could more quickly and totally log out, go incognito and avoid filling my future YouTube recommendations with bile. During the holidays, I shopped in anonymous tabs, aware that retargeted ads could otherwise reveal gifts during an idle Instagram scroll on the couch.
As with performing on social media for friends, family or co-workers — posting your best pictures, documenting your most exciting activities or professionally marketing yourself — these individually tiny elisions weren’t entirely honest. But unlike these social displays, they were first and foremost intentionally deceptive, and they were rooted in antagonism.
Believers in an omniscient God tend to assume that nothing can truly be hidden, including doubt. Pascal’s Wager, for example, which supposes that believing in God is simply a safer bet against even vanishingly narrow odds of eternal damnation, leans on a potentially fatal premise: Surely any conceivable God would know you’re just making a bet? But we ascribe no such powers to algorithms. Machine surveillance can be fooled in small ways and big ones: by the creation of blacked-out spaces, the introduction of strategically tainted data and the performance of a particular version of the self.
Social platforms also expect faith from their users: that data will be used responsibly, that at least some forms of privacy among people will be respected. But the flashing reminders of the automatic surveillance on which they are increasingly built can undermine those expectations and remind us instead of our ability to withdraw. Residents of neighborhoods that have been flooded by drivers following directions from Waze — which is owned by Google and supplies data to Google Maps — have resorted to filing false accident reports to divert drivers. Waze has sought to curb these reports, but in so doing, it has come across less as omniscient than as unthinking and defensive — and above all, corporate. Knowing that our maps were watching us produced individual acts of defiance that amounted to collective acts of resistance.
Increased awareness of automated surveillance, in other words, is most effective at demystifying the systems doing the watching, not reifying their wisdom and authority. It clarifies our relationships with them: each recommendation or subtle change reminds us not only that we’re being watched, but also that we’ve consented to it. And such awareness will also be a necessity should one or all of these platforms — through more brazen exploitation, calamitous hacking or even greater sharing with governments — cause us to well and truly lose faith. They may aspire to divine levels of omniscience, but they risk driving away their followers in the process.
