Its success depends on promoting emotion, even in spite of - or perhaps precisely at the expense of - cultural norms, civilizational needs, and citizens' physical and digital safety. JL
Alexis Madrigal reports in The Atlantic:
Donald Trump and Mark Zuckerberg each told partial truths.
First, Trump tweeted that “Facebook was always anti-Trump.” From all available information, it does seem true that the vast majority of Facebook’s employees did not want Donald Trump elected president of the United States. They are disproportionately young, urban, and socially liberal, living in California’s most left-wing region. Trump lost all these demographic groups.
Zuckerberg, Facebook’s CEO, responded to Trump with a post about the company’s role in the election. “Trump says Facebook is against him,” he wrote. “Liberals say we helped Trump. Both sides are upset about ideas and content they don’t like. That’s what running a platform for all ideas looks like.”
Trump wants Facebook to be seen as having a traditional anti-Trump bias. Mark Zuckerberg wants the service to be seen as neutral. And they’re both wrong.
Zuckerberg’s statement begins with a play right out of the D.C. congressional playbook: The tough-minded, get-things-done pragmatist knows in his heart that if everyone is mad, he must have done something right.
But the sophisticated critiques of Facebook are not about ideas and content that people don’t like, but rather the new structural forces that Facebook has created. News and information flow differently now than they did before Facebook; capturing the human attention that constitutes that flow is Facebook's raison d’être (and value to advertisers). Now that it has done so, Zuckerberg would like to pretend that his software is a pure conduit through which social and political truths can flow.
The conduit must be pure. The platform must be neutral. Because Mark Zuckerberg wants his company’s role in the election to be seen like this: Facebook had a huge effect on voting—and no impact on votes.
Zuckerberg describes Facebook’s central role in the election himself. “More people had a voice in this election than ever before. There were billions of interactions discussing the issues that may have never happened offline. Every topic was discussed, not just what the media covered,” he wrote. “This was the first U.S. election where the internet was a primary way candidates communicated. Every candidate had a Facebook page to communicate directly with tens of millions of followers every day.”
Facebook even registered 2 million people to vote, which Zuckerberg notes was “bigger than the get-out-the-vote efforts of the Trump and Clinton campaigns put together.”

Half apologizing for calling the idea that the spread of misinformation on his platform swung the election “crazy,” he continued, “the data we have has always shown that our broader impact—from giving people a voice to enabling candidates to communicate directly to helping millions of people vote—played a far bigger role in this election.”
But given all that, couldn’t even small structural wrinkles in Facebook have provided more support for one candidate over another? Are we to believe that despite this admittedly enormous impact on 2016, the platform somehow maintained perfect neutrality with respect to the candidates?
One of the foundational documents of the academic field of science and technology studies is a talk given by Melvin Kranzberg, of the Georgia Institute of Technology. In it, he declared six incisive, playful “laws” of technology. The first one is: “Technology is neither good nor bad; nor is it neutral.”
He explains:
“Technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

And this is the rub with Facebook right now. The technology that has, more or less, created Facebook’s current status as the world’s leading purveyor of information is News Feed.

News Feed had to solve a basic problem: There were always too many posts by people you know and organizations you follow to show them all to you. So, Facebook’s engineers decided to find a way to rank them that wasn’t just chronological. It makes some sense: Why show you four posts from Sports Illustrated before showing you a post from your father?
The News Feed, then, takes many pieces of data about each story and the way that story is playing in your social network to order your personal feed. As a technology, it is one of the most successful products of all time. People spend countless hours on Facebook precisely because News Feed keeps showing them stuff that they want to interact with.
During this time, the actions they take on the platform signal to the News Feed that they’re interested in something. The industry calls this engagement. Reading, watching, sharing, commenting, reposting to your own page: That’s all engagement. Posts that generate a lot of it (within one’s network and beyond) are more likely to show up on your feed.
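To make the contrast concrete, here is a deliberately simplified sketch (in Python) of the two orderings described above: a purely chronological feed versus one ranked by engagement signals. The signals, weights, and time discount are invented for illustration; Facebook's actual ranking model is far more complex and not public.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    text: str
    age_hours: float   # how long ago it was posted
    likes: int         # engagement signals observed so far
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    """Toy scoring: weight each engagement signal, discount older posts.
    The weights here are made up purely for illustration."""
    raw = 1.0 * post.likes + 4.0 * post.comments + 8.0 * post.shares
    return raw / (1.0 + post.age_hours)

def chronological_feed(posts: List[Post]) -> List[Post]:
    """Newest first, regardless of how people reacted."""
    return sorted(posts, key=lambda p: p.age_hours)

def engagement_ranked_feed(posts: List[Post]) -> List[Post]:
    """Most 'engaging' first: whatever drew reactions rises to the top."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("Sports Illustrated", "Game recap", age_hours=1, likes=40, comments=2, shares=1),
    Post("Dad", "Family photo", age_hours=6, likes=12, comments=3, shares=0),
    Post("Viral outrage post", "Hot-button meme", age_hours=3, likes=900, comments=400, shares=300),
]

for p in engagement_ranked_feed(posts):
    print(f"{engagement_score(p):8.1f}  {p.author}: {p.text}")
```

In a sketch like this, the chronological ordering and the engagement ordering diverge as soon as one post draws outsized reactions, which is the structural point the article is making: the ranking optimizes for reactions, not for anything else.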
There are a lot of factors to this, Facebook’s engineers would tell you. But the complexity can’t hide the most basic fact: The goals of News Feed have nothing to do with democracy. They might overlap sometimes. But they are not the same.
Zuckerberg’s note ends saying that he wants Facebook to be a “force for good in democracy.” To recognize that one’s massive platform can do good, however, requires an understanding that it could also do bad. You can’t have one without the other. This sin of omission runs throughout Silicon Valley: “Change the world.” “Have an impact.” These are incomplete phrases that render incomplete thoughts.
If Facebook wants to be a force for good in democracy, it needs to answer some questions. Does maximizing engagement, as it is understood through News Feed’s automated analysis, create structural problems in the information ecosystem? More broadly, do the tools that people use to communicate on Facebook influence what they actually talk about?

Facebook might offer the defense that any changes would reflect equally across partisan lines or that there is no systemic bias that gets baked into the system. But let’s just say that one candidate, in a hypothetical election, was very good at driving engagement on Facebook. Perhaps this candidate was hyperbolic and prone to extreme statements that generated controversy. Perhaps this candidate hit hot-button issues and denigrated opponents personally. Perhaps this candidate used the preexisting fractures among the country’s polity to drive a lot of shares and comments, positive and negative.
The other candidate in this hypothetical election was more measured. The remarks the candidate made were primarily about policy. The candidate tried to calm the passions of political followers. Does anyone doubt that this candidate’s engagement would not be as good?
Now multiply that by all the media that both these candidates generate. Multiply that by the people on Facebook who come to understand that posting an anti-Trump meme gets more engagement than a pro-Clinton meme.
The fake news that ran rampant on Facebook was a symptom of a larger issue. The real problem lies at the very heart of Facebook’s most successful product: Perhaps virality and engagement cannot be the basis for a ubiquitous information service that acts as a “force for good in democracy.”
And if this is true, how much is Facebook willing to change?