A Blog by Jonathan Low

 

Nov 22, 2019

Should Juries Decide Which Social Media Ads Are Fake?

It has helped civilized societies administer justice for a thousand years or so - and it certainly couldn’t be any worse than leaving it to contractors and their wunderkind bosses from Facebook. JL

Jonathan Zittrain reports in The Atlantic:

No matter how Facebook and its counterparts tweak their policies, whatever these companies do will prompt broad anxiety among experts and their own users. That’s because there are two problems underlying the debate: we the public don’t agree on what we want, and we don’t trust anyone to give it to us. That brings us to juries. For all that people might disparage them, and try to avoid serving on them, that small group of citizens has been designed to play a vital role in the high-stakes administration of justice, not as much because 12 randos have special expertise, but because they stand in for the rest of us.
Facebook has been weathering a series of disapproving news cycles after clarifying that its disinformation policies exempt political ads from review for truthfulness. There are now reports that the company is considering reducing the targeting options available to political advertisers.
No matter how Facebook and its counterparts tweak their policies, whatever these companies do will prompt broad anxiety and disapprobation among experts and their own users. That’s because there are two fundamental problems underlying the debate. First, we the public don’t agree on what we want. And second, we don’t trust anyone to give it to us. At one moment someone can reasonably question how Facebook thinks itself king, deciding what each of its 2.4 billion users can see or post—and at another ask how Facebook can make a profit of $22 billion a year from distributing others’ content, and yet take no consistent responsibility for what’s inside it.
What we need are ways for decisions about content to be made, as they inevitably must be when platforms rank and recommend content for us to see; yet for those decisions not to be too far-reaching or stiflingly consistent, so there is play in the joints; and for the deep stakes of those decisions to be matched by the gravity and reflectiveness of the process used to make them. Facebook recently announced plans for an “independent oversight board,” a tribunal that would render the company’s final judgment on whether a disputed posting should be taken down. But far more than its own version of the Supreme Court, Facebook needs a way to tap into the everyday common sense of regular people. Even Facebook does not trust Facebook to decide unilaterally which ads are false and misleading. So if the ads are to be weighed at all, someone else has to render judgment.
In the court system, legislators write laws, and lawyers argue cases, but juries of ordinary people are typically the finders of fact and judges of what counts as “reasonable” behavior. This is less because a group of people plucked from the phone book is the best way to ascertain truth—after all, we don’t use that kind of group for any other fact-finding. Rather, it’s because, when done honorably, with duties taken seriously, deliberation by juries lends legitimacy and credibility to the workings of the legal system.
The lack of consensus around how to handle false or dangerous material online can’t simply be credited to the usual partisan divides. A congressional coalition broad enough to include Ted Cruz, Josh Hawley, Nancy Pelosi, and Maxine Waters has expressed deep reservations about current federal law that immunizes social-media platforms for large swaths of content they circulate that originates with others, whether an angry boomer, an oil company, a political candidate, or a Russian-government propagandist.
Rather, there are two distinct ways of thinking about online freedom and dangers, each powerful in its own way and time. From the internet’s first mainstreaming, around 1995, comes an emphasis on individual rights. In that era, tech commentators focused on the amazing, unprecedented ways in which strangers could connect and exchange views—and on making sure that internet service providers, content platforms, and government regulators would leave their liberty to converse, and privately at that, alone except in the most dire of circumstances. The emphasis on personal liberty in the rights framework tends to see people’s abuse of that liberty to harass or misinform one another as the unfortunate but unavoidable price of freedom, because interventions to stop it carry their own steep costs in surveillance that chills speech, or in poor, overbroad decisions that censor it. The specific ill of misinformation or outright lies is thought of as best met by further speech—the kind of “marketplace of ideas” metaphor championed in the American tradition by Thomas Jefferson, John Stuart Mill, and Supreme Court Justices Oliver Wendell Holmes and William O. Douglas.
Since around 2010, another compelling framework has gained traction: public health. This framework is often at odds with the one grounded in rights. It recognizes the harm that comes from bad speech generated and distributed at internet scale: a reasonable and chilling fear among people that if they speak up online, they’ll be harassed; lurid misinformation about science and health placed on par with, and seen far more than, the considered views of scientists and doctors; falsehoods that incite violence or mislead voters about candidates’ positions, or that try to trick people—for instance, by urging them to “be sure to vote next Wednesday” when Election Day is Tuesday.
Balancing rights and public health hasn’t been easy in theory or practice. Private companies such as Facebook haven’t figured out how to share data about what’s going on within their platforms in a way that helps us measure and understand the real-world effects of false speech. And even if we knew what thresholds for falsity or harassment we wanted, there remains reasonable skepticism about whether these private companies should be making the kinds of speech-limiting decisions that, when governments made them about protests in public squares, became fodder for the most debated decisions from the Supreme Court. And the First Amendment as a public “terms of service” is a very permissive one: Much more speech is restricted by the rules of Facebook and Twitter.
Figuring out what someone can say and others will see is no longer a mere “customer service” function of a social-network help desk, if ever it was. I join those legal academics who are intrigued by the possibilities of Facebook’s independent advisory board. A bunch of retired judges or other thoughtful people on that board can, perhaps, deliberate, show their reasoning, and thus convince even those who don’t agree with them that the process wasn’t rigged against them. It borrows from the design of a legal system, which, when it works, brings otherwise intractable conflicts to resolution and legitimacy, even though some people, even many people, will understandably be disappointed by any given decision that emerges from it. And for Facebook, it’s a way to say: Don’t blame us for that decision that you’re convinced came out wrong; the board did it.
Yet while an independent oversight board might help with the interpretation of content policies, the job of fact-checking questionable ads is, naturally, fact-specific. The 2020 campaign could see the placement of hundreds of thousands of distinct ad campaigns—far more than Facebook’s oversight board could handle either directly or on some kind of appeal. And there won’t be easy consensus—outside of those obviously deceptive vote-next-Wednesday messages—around what’s “demonstrably false.” That’s not a reason not to vet the ads, especially when the ability to adapt and target them in so many configurations makes it difficult for an opposing candidate or fact-checking third party to catch up to them and rebut them. Instead, we should be thinking as boldly as we can about process.
That brings us back to juries. For all that people might disparage them, and try to avoid serving on them, that small group of citizens has been designed to play a vital role in the high-stakes administration of justice, not as much because 12 randos have special expertise, but because they stand in for the rest of us: I might not agree with what they did, but I wasn’t there, and they heard the evidence, and next time it could be me asked to play their role.
In that spirit, why shouldn’t public librarians be asked, in small panels convened in person or virtually, to evaluate ads? Today only 33 percent of Americans have trust in the news media, but 78 percent trust libraries to help them find information that is “trustworthy and reliable.” (That number jumps to 87 percent among Millennials.) Or, better, we could ask librarians and public-school teachers to help draw others into the vetting process. High-school students across the nation could be empaneled in “advertisement juries” as part of their schoolwork to help evaluate the truth of ads, guided and graded by their librarians and teachers, with their anonymized decisions and written explanations carrying the same power on Facebook as the rulings of its external oversight board on binding interpretations of its content policies. Who better to assess these ads than a representative slice of the people they’re trying to persuade, given time and structure to discover the parlor tricks found within them?
Facebook plans to pay to establish and support its oversight board, structured through a trust so that decisions won’t be influenced by who’s writing the checks. Here, Facebook—and other companies confronting similar content decisions—could pay into a fund supporting the schools participating in the program for ad juries, allowing complete independence for its design and implementation by public and private educators.
Such a process would be big enough to scale to the number of ads in play; loose enough to generate different or even incompatible decisions for similar content when there remains debate about what counts as demonstrably false; and broad enough to incorporate the diverse views and perspectives that otherwise preclude top-level consensus. (And for the Russians to astroturf the process, they’d have to enroll their kids in American schools.) Student decision makers will soon enough be trusted to vote responsibly, and to have to parse as adults the puzzling cacophony of content that unrelentingly spills from social media.
Ideas like this are, of course, flawed in nearly countless ways, to the point of being crazy. But the status quo is also profoundly unsatisfying, even dangerous, and a turn to process offers a third era to match the complexities of sorting through rights and public-health concerns in today’s digital environment. We must think creatively about how to agree on resolving questions, without needing to fully agree on the questions’ actual answers.
