A Blog by Jonathan Low


Apr 30, 2018

Does Facebook Just Harbor Extremists - Or Does It Create Them?

All of the social and psychological reward structures are designed to encourage behavior that incites engagement, whatever the means or the consequences. JL

Max Fisher and Amanda Taub report in the New York Times:

The incentive structures and social cues of algorithm-driven social media like Facebook train (users) - perhaps without their awareness - to pump anger and fear. Feeding one another, users arrive at hate speech on their own. Primal emotions draw the most engagement. Posts that affirm group identity by attacking another group perform well. That delivers a dopamine boost, training you to repeat whatever behavior wins the most engagement. To the algorithm, the content of an idea is irrelevant: only its ability to engage counts.
When they talk about incitement to violence on Facebook — a growing problem in developing markets — representatives and critics of the platform alike tend to describe it as a problem created by small factions of extremists.
The extremists, in this view, push out rumors and inflammatory claims to everyday users, who become ideologically infected. So stopping the violence should be as simple as silencing the extremists.
Mark Zuckerberg, Facebook’s chief executive, made that argument recently when asked about his platform’s role in violence in Myanmar. “People were trying to spread sensational messages through — in this case, it was Facebook messenger,” he said, by way of example, in an April podcast interview with Vox.
“That’s the kind of thing where I think it is clear that people were trying to use our tools in order to incite real harm,” he said. Facebook’s response? “We stop those messages from going through.”
But a reconstruction of how Facebook-based misinformation and hate speech contributed to anti-Muslim riots in Sri Lanka last month, along with research on how people use social media, suggests that those who set out to be provocateurs are not the only danger — or even the biggest one.
Everyday users might not intend to participate in online outrage, much less lead it. But the incentive structures and social cues of algorithm-driven social media sites like Facebook can train them over time — perhaps without their awareness — to pump up the anger and fear. Eventually, feeding into one another, users arrive at hate speech on their own. Extremism, in other words, can emerge organically.
We saw this firsthand in the small town of Digana, Sri Lanka, a week after anti-Muslim mobs had torn through.
One of the posts seen as inspiring the attacks was a Facebook video posted by Amith Weerasinghe, who has gained a large online following. Just before the mob arrived, he’d filmed himself walking Digana’s shops, warning that too many were owned by Muslims and calling on Sinhalese to take the town back.
But Mr. Weerasinghe’s online stardom was newfound and, according to neighbors, a mere persona.
“He’s from the area, he went to school here,” Jainulabdeen Riyaz, a member of the local Muslim community, said, laughing at the absurdity of this standoffish local boy posing as a crusading outsider. “His father is a carpenter. He’s a normal person.”
When Mr. Weerasinghe began assuming his angry online persona, Mr. Riyaz said, some Muslims in town approached his father to ask him to intervene. It didn’t work.
We met Mr. Riyaz at a gathering for the family of another young man, Abdul Basith, who was killed by the mob. These three men had grown up side by side, only to have one of them, Mr. Weerasinghe, become a social media celebrity who helped pull the online world into the real Digana. When it was over, a man who had known him his entire life was dead.
Though each individual’s story has its own unique turns, there are some common points that have to do with how social media can amplify certain elements of human nature.
Facebook’s news feed, for instance, runs on an algorithm that promotes whatever content wins the most engagement. Studies find that negative, primal emotions — fear, anger — draw the most engagement. So posts that provoke those emotions rise naturally.
Tribalism — a universal human tendency — also draws heavy engagement. Posts that affirm your group identity by attacking another group tend to perform well.
Finally, social media platforms use color and sound to reward engagement, which humans naturally seek out. Comments and likes are presented like a set of diamonds clicking into place on a slot machine. That delivers a little dopamine boost, training you to repeat whatever behavior wins the most engagement.
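The engagement-first ranking described above can be sketched in a few lines. This is a toy illustration, not Facebook's actual algorithm: the scoring weights, signal names, and sample posts are invented for the example. The one faithful property is that the ranker is content-blind, scoring posts only by the reactions they draw.

```python
# Toy sketch of an engagement-ranked feed (illustrative only;
# weights and post data are invented, not Facebook's real system).

def engagement_score(post):
    # The ranker never sees what a post says, only how people react to it.
    # Comments and shares are weighted higher than likes, since they
    # represent more active engagement.
    return (post["comments"] * 3
            + post["shares"] * 2
            + post["likes"])

def rank_feed(posts):
    # Highest-scoring posts surface first, whatever their content.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm_news",    "likes": 120, "comments": 4,  "shares": 2},
    {"id": "angry_rumor",  "likes": 60,  "comments": 90, "shares": 40},
    {"id": "group_attack", "likes": 80,  "comments": 50, "shares": 25},
]

for p in rank_feed(posts):
    print(p["id"], engagement_score(p))
```

Note that the measured, widely liked post scores lowest here: the two posts that provoke arguments in the comments float to the top, which is the dynamic the article describes.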
The home of Abdul Basith, who died when a mob set fire to his family home in Digana. Credit Adam Dean for The New York Times
These are hardly the only things driving Mr. Weerasinghe or others like him. But that path to attention, praise and a sense of importance and agency can appeal to anyone — even a carpenter’s son railing against his own neighbors.
“It’s disturbing. The radicalization is happening at a very young age,” said Sanjana Hattotuwa, a researcher at the Colombo-based Center for Policy Alternatives, which tracks online hate speech.
“They’re kids; they’re schoolchildren,” he said of the users on Facebook groups prone to extremism. “The parents don’t have a clue that they’re participating. The schoolteachers don’t have a clue that they’re participating. So this is their initiation into communal relations. And it’s hate. It’s really, really bad.”
This dynamic, far from unique to developing countries, bears similarities to the rise of the “alt-right” movement in the United States. Elements of that movement first gained prominence among young people — mostly men — on sites like Facebook and Reddit, which is also driven by an algorithm meant to surface the most engaging content.
The alt-right, for all its origins in old hatreds, took a distinctly modern route out of the margins of society, according to a recent paper by Jessie Daniels, a sociologist at Hunter College. Algorithm-driven platforms, she wrote, “amplify and systematically move white supremacist talking points into the mainstream.”
Studies have found that people tend to shut out ideas when they believe society has deemed them extreme. But they become much more open to ideas that they believe are considered mainstream.
Traditionally, someone hearing an extremist idea for the first time might have encountered it through friends or relatives, who might also convey that the idea is outside the mainstream. Ideas are now delivered through news feeds governed by raw, engagement-driven popular will.
This leads to an ideological flattening. To the algorithm, the content of an idea is irrelevant. Whether it’s extremist or mainstream doesn’t matter; only its ability to draw engagement counts.
In 2014 and 2015, for instance, critical masses of users on Reddit promoted hate against feminists and against people they deemed overweight. Such ideas can naturally proliferate under social media algorithms, which indulge anger against vulnerable targets and us-versus-them tribalism.
Once those ideas had popped to the top of the user-driven algorithm a few times, they quickly became perceived as mainstream and were adopted — and even angrily defended — by much of the site, which is one of the most widely used on the internet. Many users pushed that extremism into the real world, harassing their targets on and offline.
Reddit was able to expel those ideas, a study found, only when it went much further than banning a handful of extremists. Instead, it began heavily regulating online discussion — imposing a degree of social control far beyond what other platforms like Facebook and Twitter have done.
Anyone who regularly uses Twitter will recognize the way that it inadvertently trains users — by providing bursts of affirmation when a post goes viral — to win plaudits from fellow travelers by putting down users on the other side of any argument and, if no such argument exists, to start one. It is akin to Facebook’s inadvertent promotion of anger and tribalism with a less active algorithm but on a far larger scale.
Online outrage can sometimes be a necessary way to channel popular opposition to, say, the injustices of systemic racism or other forms of discrimination. It’s not always bad.
The problem arises when negative, tribal emotions begin to permeate social media, which increasingly dominates users’ lives and therefore shapes their perceptions of the world offline. As in Sri Lanka, that can have real-world consequences, albeit more subtle ones, by deepening the social and political polarization that is one of American democracy’s toughest problems at the moment.
And by contributing to the sense that every issue is just a way to keep score in a zero-sum game between political tribes, that dynamic can make serious problems more difficult to solve.
That sort of process is not typically considered to be a form of radicalization — or at least the label is not applied as easily as it might be to Sri Lankan Buddhists spinning up anger against Muslims. But the dynamics and platforms bear similarities.
As Mr. Hattotuwa put it when describing how online hate had spread in Sri Lanka: “The cancer has grown such that you’re looking at ordinary people.”