A Blog by Jonathan Low

 

Sep 15, 2017

Is Facebook Even Capable of Stopping an Influence Campaign On Its Platform?

Given how heavily its operations and oversight are automated to manage its vast scale, it is not clear to what extent the company is even capable of knowing how it is being used, let alone what it can do to stop such manipulation. JL

Sam Thielman reports in TPM:

The company employed 17,048 people at the end of last year; with its user base of 1.32 billion, the company has one staffer for every 77,000 people. That employee-to-customer ratio is unmanageable in human terms, forcing the company to run itself in part through a kind of ad hoc artificial intelligence: a collection of automated customer interfaces that shift to meet preference and advertiser demand. The problem isn’t that Facebook isn’t behaving responsibly; it’s that “the people running it don’t have a full grasp of how information is manipulated across it.”
Recent weeks have clarified a few things about Facebook and the broad campaign of Russian interference in the 2016 election. We know a Kremlin-affiliated troll farm spent $100,000 on divisive political ads on the platform, and that Facebook has located and removed a huge number of ads and posts related to the campaign.
The Daily Beast has reported that Facebook won’t share anything but the most cursory assessment of the damage from those ads with the public. The company “declined to commit to releasing information about Russian government-backed Facebook posts, groups, and paid advertisements to the users who encountered them,” according to the Beast.
The obvious question is whether Facebook will do anything more to inform or protect users affected by that abuse of its platform, or else exercise a greater measure of control over the material that appears on it. Experts say the company is unlikely to be compelled to do either unless public pressure on Facebook reaches critical mass.
According to Aaron Mackey, staff attorney at the Electronic Frontier Foundation, there’s not much in the law that can force Facebook to disclose, well, anything.
“The short answer is that as a private entity, there’s no requirement that they provide any additional information,” Mackey said. Facebook did not respond to a request for comment.
Russian operatives manipulating Facebook’s largely automated ad-buying platform are dealing with a kind of information that is totally unprotected by regulation, he said—that makes it different from, say, credit monitoring service Equifax dealing with a huge breach of user data.
A less obvious but perhaps more serious concern is whether the company is even capable of monitoring its 1.3 billion daily active users well enough to stop such sophisticated, clandestine political influence campaigns. Until its disclosure last week about the $100,000 in political ads bought by Russians, Facebook had publicly maintained it had “no evidence” of such buys.
“[Facebook] is something we’ve never seen in history before,” said Rebecca MacKinnon, director of the New America Foundation’s Ranking Digital Rights program. The platform, she said, is a leviathan too large to behave like any other business, one that to some extent forms its own online zone of lawlessness.
“There’s no precedent,” she said. “How do you govern this thing? How do you hold it accountable? What are the expectations that should be placed on Facebook to be a responsible corporate citizen?”
The company has responded to the pressure of public shaming before, MacKinnon noted. In 2013, when its competitors at Twitter and Google were publishing transparency reports itemizing their dealings with law enforcement, Facebook was a notable holdout—until Edward Snowden’s name started making headlines. MacKinnon credits Facebook’s change of heart to Snowden’s revelation of tech industry cooperation with spy agencies.
“They don’t have a business if they don’t have a basic level of trust,” she said. “I think there are a lot of users with healthy cynicism but if trust drops below a certain level the bottom line is very much affected.”
Facebook’s algorithms and advertising metrics have long been closely held. Details of its operations do leak out from time to time, though: Facebook flatly denied the very existence of an editorial team curating its “trending topics” module—until The Guardian published editorial guidelines for that team. When Facebook fired that secret team of editors in the wake of accusations by Gizmodo that they exercised bias in the topics they promoted, the module went on to be operated solely by an algorithm—and promptly went nuts.
According to a ProPublica story published Thursday, Facebook’s ad business is automated to such a degree that it generated bespoke targeting categories for Nazis. The company’s automatically populated categories of interest were sold to advertisers, and those included “jew haters” and people who’d listed their employer as the Nazi Party, according to the report.
Such reliance on automation has to do with Facebook’s scale. The company employed 17,048 people at the end of last year; with its user base of 1.32 billion, the company has one staffer for every 77,000 people and change. That lopsided employee-to-customer ratio is unmanageable in human terms, forcing the company to run itself in part through a kind of ad hoc artificial intelligence: a collection of automated user and customer interfaces that shift and blend to meet Facebooker preference and advertiser demand.
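As a back-of-the-envelope check on those figures (a minimal sketch in Python; the head count and user base are the article’s own numbers, not independently verified):

```python
# Back-of-the-envelope check of the staffing ratio cited above.
# Head count (~17,048 employees) and user base (~1.32 billion)
# are the article's figures, not independently verified.
employees = 17_048
users = 1_320_000_000

users_per_employee = users / employees
print(f"one staffer for every {users_per_employee:,.0f} users")
# -> one staffer for every 77,428 users ("77,000 and change")
```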
That business, as it is run today, is gobsmackingly profitable: 45 percent of Facebook’s $26 billion annual revenue is pure gravy. But without major changes to its daily operations, Facebook runs the risk of eroding the public trust that, as MacKinnon observed, is fundamental to its success.
CEO Mark Zuckerberg has experimented with building more intentional AIs that moderate Facebook content for benign purposes like preventing suicides, presumably with an eye toward expanding that software into the profitable parts of Facebook’s business. One such experiment ended when the AIs began talking to each other in an improvised language the coders were worried they would soon find themselves unable to translate. This was misreported—several publications said the AIs rapidly became too smart—but the truth is a little more worrying: AIs can learn to make incomprehensible decisions, even when they’re not particularly smart.
And if Facebook remains unable to control its leviathan, no one wins.
“In dealing with these problems in a way that serves the public interest from anybody’s perspective—is that just beyond human capability?” MacKinnon asked. “Do you need artificial intelligence? And then do you lose control of the artificial intelligence and stop understanding how that’s working? And then do you have to build another AI to fight it?”
The problem isn’t that Facebook’s management isn’t behaving responsibly, MacKinnon said: It’s that Facebook may be unmanageable.
“The people running it don’t have a full grasp of how information is manipulated across it,” she said. “It’s like a living organism that’s evolving every day.”
