A Blog by Jonathan Low

 

Jun 7, 2020

Why Big Tech Is Being Forced To Wrestle With Content Moderation and Responsibility

The self-serving theory that tech companies are just innocent platform providers - allowed to profit but not required to take responsibility - is no longer working.

The question is whether whatever replaces it will make things better. JL

Nellie Bowles reports in the New York Times:

As protests of police brutality continue across the country, many in the tech industry are questioning the wisdom of letting all flowers bloom online. Moderation by the platforms could threaten their protected legal status. And intervention goes against the apolitical self-image some in the tech world have. (But) a hands-off approach by the companies has allowed harassment and abuse to proliferate online. “These platforms have achieved incredible power and influence.” Moderation was a necessary response. “There’s a greater risk to American democracy in allowing unbridled speech on these private platforms.”
The existential question that every big tech platform from Twitter to Google to Facebook has to wrestle with is the same: How responsible should it be for the content that people post?
The answer that Silicon Valley has come up with for decades is: Less is more. But now, as protests of police brutality continue across the country, many in the tech industry are questioning the wisdom of letting all flowers bloom online.
After years of leaving President Trump’s tweets alone, Twitter has taken a more aggressive approach in recent days, in several cases adding fact checks and labels indicating that the president’s tweets were misleading or that they glorified violence. Many Facebook employees want their company to do the same, though the chief executive, Mark Zuckerberg, said he was against it. And Snapchat said on Wednesday that it had stopped promoting Mr. Trump’s content on its main Discover page.
In the midst of this notable shift, some civil libertarians are raising a question in an already complicated debate: Any move to moderate content more proactively could eventually be used against speech loved by the people now calling for intervention.
“It comes from this drive to be protected — this belief that it’s a platform’s role to protect us from that which may harm or offend us,” said Suzanne Nossel, the head of PEN America, a free-speech advocacy organization. “And if that means granting them greater authority, then that’s worth it if that means protecting people,” she added, summarizing the argument. “But people are losing sight of the risk.”
Civil libertarians caution that adding warning labels or additional context to posts raises a range of issues — issues that tech companies until recently had wanted to avoid. New rules often backfire. Fact checks and context, no matter how sober or accurate they are, can be perceived as politically biased. More proactive moderation by the platforms could threaten their special protected legal status. And intervention goes against the apolitical self-image that some in the tech world have.
But after years of shrugging off concerns that content on social media platforms leads to harassment and violence, many in Silicon Valley appear willing to accept the risks associated with shutting down bad behavior — even from world leaders.
“Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves,” Twitter’s chief executive, Jack Dorsey, wrote.
A group of early Facebook employees wrote a letter on Wednesday denouncing Mr. Zuckerberg’s decision not to act on Mr. Trump’s content. “Fact-checking is not censorship. Labeling a call to violence is not authoritarianism,” they wrote, adding: “Facebook isn’t neutral, and it never has been.”
Timothy J. Aveni, a Facebook employee, wrote in a separate letter that he was resigning and said: “Facebook is providing a platform that enables politicians to radicalize individuals and glorify violence.”
Ellen Pao, once the head of Reddit, the freewheeling message board, publicly rebuked her former company. She said it was hypocritical for the Reddit leader Steve Huffman to signal support for the Black Lives Matter movement as he recently did in a memo, since he had left up the main Trump fan page, The_Donald, where inflammatory memes often circulate.
“You should have shut down the_donald instead of amplifying it and its hate, racism, and violence,” Ms. Pao wrote on Twitter. “So much of what is happening now lies at your feet. You don’t get to say BLM when reddit nurtures and monetizes white supremacy and hate all day long.”
A hands-off approach by the companies has allowed harassment and abuse to proliferate online, Lee Bollinger, the president of Columbia University and a First Amendment scholar, said last week. So now the companies, he said, have to grapple with how to moderate content and take more responsibility, without losing their legal protections.
“These platforms have achieved incredible power and influence,” Mr. Bollinger said, adding that moderation was a necessary response. “There’s a greater risk to American democracy in allowing unbridled speech on these private platforms.”
Section 230 of the federal Communications Decency Act, passed in 1996, shields tech platforms from being held liable for the third-party content that circulates on them. But taking a firmer hand to what appears on their platforms could endanger that protection, above all for political reasons.
One of the few things that Democrats and Republicans in Washington agree on is that changes to Section 230 are on the table. Mr. Trump issued an executive order calling for changes to it after Twitter added labels to some of his tweets. Former Vice President Joseph R. Biden Jr., the presumptive Democratic presidential nominee, has also called for changes to Section 230.
“You repeal this and then we’re in a different world,” said Josh Blackman, a constitutional law professor at the South Texas College of Law Houston. “Once you repeal Section 230, you’re now left with 51 imperfect solutions.”
Mr. Blackman said he was shocked that so many liberals — especially inside the tech industry — were applauding Twitter’s decision. “What happens to your enemies will happen to you eventually,” he said. “If you give these entities power to shut people down, it will be you one day.”
Brandon Borrman, a spokesman for Twitter, said the company was “focused on helping conversation continue by providing additional context where it’s needed.” A spokeswoman for Snap, Rachel Racusen, said the company “will not amplify voices who incite racial violence and injustice by giving them free promotion on Discover.” Facebook and Reddit declined to comment.
Tech companies have historically been wary of imposing editorial judgment, lest they have to act more like a newspaper, as Facebook learned several years ago when it ran into trouble with its Trending feature.
It gets complicated when Mr. Dorsey begins exercising that kind of editorial judgment at Twitter. Does that mean a person who is libeled on the site and asks for a fact check gets one? And if the person doesn’t get one, is that grounds for a lawsuit?
The circumstances around fact checks and added context can quickly turn political, the free-speech activists said. Which tweets should be fact-checked? Who does that fact-checking? Which get added context? What is the context that’s added? And once you have a full team doing fact-checking and adding context, what makes that different from a newsroom?
“The idea that you would delegate to a Silicon Valley board room or a bunch of content moderators at the equivalent of a customer service center the power to arbitrate our landscape of speech is very worrying,” Ms. Nossel said.
There has long been a philosophical rationale for the hands-off approach still embraced by Mr. Zuckerberg. Many in tech, especially the early creators of the social media sites, embraced a near-absolutist approach to free speech. Perhaps because they knew the power of what they were building, they did not trust themselves to decide what should go on it.
Of course, the companies already do moderate to some extent. They block nudity and remove child pornography. They work to limit doxxing — when someone’s phone number and address are shared without consent. And promoting violence is out of bounds.
They have rules that would bar regular people from saying what Mr. Trump and other political figures say. Yet they did not do anything to mark the president’s recent false tweets about the MSNBC host Joe Scarborough. They did do something — a label, though not a deletion — when Mr. Trump strayed into areas that Twitter has staked out: election misinformation and violence.
Many of the rules that Twitter used to tag Mr. Trump’s tweets have existed for years but were rarely applied to political figures. Critics like the head of the Federal Communications Commission, Ajit Pai, have pointed out, for example, that the Iranian leader, Ayatollah Ali Khamenei, has a Twitter account that remains unchecked.
“What does and does not incite violence is often in the eyes of the reader, and historically it has been used to silence progressive antiracist protest leaders,” said Nadine Strossen, a former head of the American Civil Liberties Union and an emerita professor at New York Law School.
“I looked at Twitter’s definition of inciting violence, and it was something like it could risk creating violence,” she added. “Oh? Well, I think that covers a lot of speech, including antigovernment demonstrators.”
Corynne McSherry, the legal director of the Electronic Frontier Foundation, an organization that defends free speech online, said people could be worried about Mr. Trump’s executive order targeting Twitter “without celebrating Twitter’s choices here.”
“I’m worried about both,” she said.

1 comment:

Anonymous said...

In my opinion social media providers are publishers and should be held to the same laws and bear the same liabilities as other types of publishers. Plus it would be a boon to all those poor starving lawyers.
