A trust, worthiness and reputation rating? From Facebook? Oh wait, you mean this isn't a Saturday Night Live joke?
The spread of fake news and its impact on the 2016 US elections has elevated the need for better evaluation and prevention. But is Facebook - or any tech company - um, trustworthy? And does anyone want their government doing this after what's being done in China? Probably not. The solution: not yet clear. JL
Elizabeth Dwoskin and Molly Roberts report in the Washington Post:
A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.
Roberts - Trust no one — or, alternatively, trust only those users whom Facebook has awarded a satisfactory trustworthiness score. The Post’s Elizabeth Dwoskin reported on Tuesday that the tech platform has started assigning its users “reputation assessments” ranging from zero to 1 based on their behavior on the site. It turns out this is not quite as dystopian as it sounds. What the story does show is the extent to which we’ve allowed companies such as Facebook access to our minds.

Facebook’s effort focuses only on whether users who flag news stories as false are doing so accurately, which helps the company triage as it slogs through the trenches of information warfare. Having a human immediately check every single story every single user alerts them to is unrealistic, and Facebook says some users lash out against articles they disagree with — even when they’re perfectly true. Those who want Facebook to fight falsehoods wailed when the company first announced it would rely on users for help. Now they’re wailing again as its executives treat that crowdsourcing with care. The critics can’t have it both ways.

Still, small wonder the “trustworthiness score” makes so many people’s personal doomsday clocks start ticking. The rating may serve a specific purpose, but it’s easy to imagine how it could extend into other realms of online life and then trickle offline as well. What if, for example, a lender wanted to translate trustworthiness into creditworthiness? The scores are also part of a trend. As companies take a more active role in moderating their platforms, they will employ algorithms to suss out who might pose a threat, and whose posts deserve promotion or demotion in people’s feeds.

The systematization of such sweeping determinations worries us even more than it otherwise might because we don’t understand how it’s happening. Executives won’t explain what’s behind their mathematical models for fear that users will try to game them.
They also won’t explain because, in some cases, even they don’t know.

The anxiety around that lack of knowledge suggests a deeper source for the uneasiness we feel when we read reports like Tuesday’s. We don’t like to see a corporation tell us what sort of people we are, and “trustworthiness” is essential to human-to-human interaction. It’s as though an algorithm, or the technology overlords who run that algorithm, were telling us whether we’re kind, or lovable.

The thing is, Facebook already decides what sort of people we are. That’s how, as a smiling Mark Zuckerberg told a congressman in his testimony on Capitol Hill in April, the company makes its money: “Senator, we sell ads.”

Facebook, for instance, thinks I like journalism (yes), linen (also yes), money (ouch, but okay) and . . . “9/11 Truth movement” (evidently the system sometimes misses the mark). And it places me into various categories — say, young professional white women — according to those interests. It’s these categories that Russian actors exploited during the 2016 presidential election, when they dropped rubles on ads aimed at groups such as “Indigenous People of the Americas.” Facebook has since removed one-third of the segments the trolls seized on, but the microtargeting continues even as skepticism increases.

The trustworthiness episode is frightening not because the technology itself is frightening, but because it reminds us of the slim difference between policies we see as proto-dystopian and policies we accept as the price we pay for connection. It’s just as simple for Facebook to tell us we’re “Trendy Moms” as it is for the platform to tell us whether we’re trustworthy. In both cases, we’ve given the company all the information it needs.

That may, in the end, be the most sobering lesson from this story. These sites wouldn’t have the data to define us if users didn’t pour their souls onto Web page after Web page.
They also wouldn’t need to clean up their online communities to solve our offline ills if we hadn’t invited them into town and told them to go ahead and take it over. The question now is, can we trust them?