A Blog by Jonathan Low


Jul 2, 2014

The Facebook Experiment: Hold the Outrage, When Are Big Internet Companies NOT Messing With Your Head?

Whoa! You'd think someone's pocket had been picked, their car stolen or their restaurant bill incorrectly added.

It's not that the experiment Facebook ran, testing a theory about whether other people's postings depress their friends, was right. It's that Facebook was doing exactly what most people do with the internet every day.

They were filtering and testing and attempting to create the reality that best suits their needs. Like favoriting only conservative or liberal websites, or requesting updates from the Red Sox rather than the Yankees, or Man U instead of Chelsea. Does it make us all evil? Does it make Facebook evil? Well, yes, no and maybe.

The fact is that our ability to identify, measure and track internet activity has created an utterly transparent world in which most data are available. Not to everyone, of course; this is a business! And much of it is only for sale, but the data exist if we want them badly enough. So anyone with a commercial stake in the internet is constantly testing.

Now, as we become more knowledgeable we may, as a society, begin to have doubts about whether all of this is healthy, kosher and just. And companies with the sort of reputation for dissembling that Facebook now has are learning that they have forfeited the benefit of the doubt. They may scoff at the impact of this, at present, but we suspect it is beginning to wear on them, because it comes at a cost. And costs are bad for business.

But the outrage being focused on Facebook right now feels overdone. Because Facebook is nothing but a reflection of our own desires. Which is how it got so big. And which is maybe why this is part of a much broader conversation about what we have willingly let our society become. JL

David Weinberger comments in CNN:

We have given control over the flow of our social information to commercial entities that have as their primary interest not the health of our society and culture, but their bottom line.
Many people are outraged about the just-revealed psychological experiment Facebook performed in 2012 on 690,000 unwitting people, altering the mix of positive and negative posts in their feeds. Playing with people's emotions without their consent is a problem. But it would be even worse if we think -- after Facebook posts one of its all-too-common apologies -- that Facebook is done manipulating its users.
No. The experiment was only a more intrusive version of what the company does every time we visit our Facebook page.
Facebook's experiment was a version of so-called "A/B" testing, one of the most widely used and effective techniques large websites use to "provide a better customer experience" -- that is, to sell us more stuff.
For example, for years Amazon has routinely experimented with seemingly insignificant changes to its pages, such as showing half of its visitors a discount offer on the left side of the page and the same offer on the right side to the other half. If Amazon finds a statistically significant uptick in clicks when the offer is on one side, from then on that's where it puts the offers. Companies A/B test every parameter of a page, from font sizes to colors to the depth of the drop shadows.
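As a concrete illustration of what such a test looks like, here is a minimal sketch in Python with made-up click counts. The numbers, the helper function and the 5% significance threshold are assumptions for illustration only, not Amazon's or Facebook's actual tooling.

```python
"""Minimal sketch of an A/B test on offer placement (hypothetical numbers).

Half the visitors see the offer on the left ("A"), half on the right ("B").
A two-proportion z-test tells us whether the difference in click rates is
unlikely to be chance; if so, the better-performing placement is kept.
"""
from math import sqrt, erf

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Return (z, two-sided p-value) for the difference in click rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)             # pooled click rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided
    return z, p_value

# Hypothetical results: offer on the left vs. offer on the right
z, p = two_proportion_ztest(clicks_a=620, n_a=10_000, clicks_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Statistically significant: keep the better-performing placement.")
else:
    print("No significant difference: placement doesn't matter here.")
```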
But the Facebook experiment was not normal A/B testing. Usually a test alters some seemingly irrelevant factor. But Facebook's experiment changed the site's core service: showing us what's up with our friends. Worse, Facebook did so in a way likely to affect the emotional state of its users. And that's something to be concerned about.
But much of the outrage is driven by a false assumption: that there is a "real" mix of news about our friends.
There isn't. Facebook always uses algorithms to figure out what to show us and what to leave obscure. Facebook is in the business of providing us with a feed that filters the Colorado River rapids into a tinkling stream we can drink from.
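To make that filtering concrete, the sketch below treats a feed as a scoring-and-ranking step: score every candidate post, keep the top few, leave the rest obscure. The post fields, weights and cutoff are invented for illustration; they are not Facebook's actual ranking signals.

```python
"""Purely illustrative sketch of feed filtering as scoring and ranking."""
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    recency: float               # 1.0 = just posted, 0.0 = old
    friend_affinity: float       # how often you interact with this friend
    predicted_engagement: float  # how likely you are to click or comment

def score(post: Post) -> float:
    # Hypothetical weights; whoever sets them decides what you see.
    return (0.2 * post.recency
            + 0.3 * post.friend_affinity
            + 0.5 * post.predicted_engagement)

def build_feed(posts: list[Post], limit: int = 2) -> list[Post]:
    """Keep only the top-scoring posts; everything else is left obscure."""
    return sorted(posts, key=score, reverse=True)[:limit]

posts = [
    Post("Ana", recency=0.9, friend_affinity=0.2, predicted_engagement=0.3),
    Post("Ben", recency=0.4, friend_affinity=0.9, predicted_engagement=0.8),
    Post("Cal", recency=0.7, friend_affinity=0.5, predicted_engagement=0.9),
]
for p in build_feed(posts):
    print(p.author, round(score(p), 2))
```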
The 2012 experiment is a window onto this larger concern: Facebook, an important and even dominant part of our social infrastructure, makes decisions about what we'll know about our friends based on what works for Facebook, Inc., and only secondarily based on what works for us as individuals and as a society.
This point is illustrated in Eli Pariser's excellent book (and terrific TED Talk), The Filter Bubble. Facebook filters our feeds to make us happier customers. But Facebook defines a happy customer as one who comes back often and clicks on a lot of links.

When it comes to politics, we can easily see the problem: Showing us news that excites our click finger is a formula for promoting shouting and political divisiveness. Too much of that is bad, but in both politics and social relationships more broadly, do we know what the "right mix" is?
Are we sure that filtering social news so that it includes more of the negative is bad? And positive filtering can paint a too-rosy picture of our social network, shielding us from the full force of life as it is actually lived. I don't know the answer, but it can't come from a commercial entity whose overriding aim is to keep us coming back so we can buy more from its advertisers.
There are many options to play with here. For example, we could be given more individual control over our own filters. Or a site could "nudge" us toward feeds that achieve socially desirable aims like making us more willing to explore and embrace differences.
But we're unlikely to see such options so long as we have given control over the flow of our social information to commercial entities that have as their primary interest not the health of our society and culture, but their bottom line. Sometimes those interests may align, but not reliably or often enough.
So, I'm upset about Facebook's cavalier toying with our emotions, but I'm far more disturbed about what Facebook and other such sites do all the time.
