A Blog by Jonathan Low

 

Mar 21, 2018

Why Facebook's Real Problem Is Its Business Model: Will Data Regulation Result?

The real issue is not Cambridge Analytica's sleazy use of Facebook-generated personal data (to say nothing of the fact that its now-deposed chief thought it was OK to brag about it, or that Facebook's leaders have offered only defensively tone-deaf excuses).

The problem for a data-centric economy is that the real and perceived breaches of which Facebook has been routinely accused - and for which it has paid fines over time - may, finally, have created a wave of consumer/voter/government opposition to the core of the business model: trading free but eminently monetizable personal information for access to web services. An entire economic system has been built around this exchange.

The looming question is to what degree that trust-based model - and the extraordinary margins it provided - may now be at risk. JL


Will Oremus reports in Slate, and Roger McNamee and Sandy Parakilas report in The Guardian:

It isn’t just that Facebook was careless with its users’ data or that its policy was cavalier and misguided. It’s that Facebook is the chief architect of the socio-commercial arrangement by which people offer their personal information in exchange for free online services. Facebook isn’t just the source of the data that Cambridge Analytica used. It’s the reason this sort of data exists in the first place. What Cambridge Analytica did was what Facebook was optimized for.
Slate: The plot was made for front-page headlines and cable-news chyrons: A scientist-turned-political-operative reportedly hoodwinked Facebook users into giving up personal data on both themselves and all their friends for research purposes, then used it to develop “psychographic” profiles on tens of millions of voters—which in turn may have helped the Trump campaign manipulate its way to a historic victory.
No wonder Facebook is in deep trouble, right? Investigations are being opened; calls for regulation are mounting; Facebook’s stock plunged 7 percent Monday.
Sensational as it sounds, however, the Cambridge Analytica scandal doesn’t indict Facebook in quite the way it might seem. It reveals almost nothing about the social network or its data policies that wasn’t already widely known, and there’s little evidence of blatant wrongdoing by Facebook or its employees. It’s also far from clear what impact, if any, the ill-gotten personal data had on the election’s outcome.
In short, the outrage now directed at Facebook feels disproportionate to the company’s culpability in this specific episode. But that doesn’t mean people are wrong to be outraged. For Facebook, the larger scandal here is not what shadowy misdeeds it allowed Cambridge Analytica to do. It’s what Facebook allowed anyone to do, in plain sight—and, more broadly, it’s the data-fueled online business model that Facebook helped to pioneer.
The Facebook tools and policies that allowed researcher Aleksandr Kogan in 2014 to obtain information for the political-data firm Cambridge Analytica—via an app called thisisyourdigitallife—were public and well-known. They were also quite permissive, allowing developers to collect data not only on users who signed up for their app, but also on those users’ Facebook friends. (Facebook has since changed that policy.) As the Washington Post points out, entities ranging from Tinder to FarmVille to Barack Obama’s 2012 presidential campaign used the same tool to collect many of the same kinds of information. As late as 2015, this was simply how Facebook worked. The people who used Kogan’s app explicitly granted it access to that data, albeit for academic, not commercial, purposes. (There’s a strong case to be made, of course, that Facebook should never have allowed users to sign away their friends’ privacy in that way.) For what it’s worth, Facebook’s policies did not permit the sort of misrepresentation that Kogan appears to have engaged in. And by Facebook’s own account, when the company found out that he had used the data for unauthorized purposes, it required both him and Cambridge Analytica to delete it and to certify to the company that they had done so. It now appears that they may have lied. But it’s not clear that Facebook had any way of knowing that.
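To make that mechanism concrete, here is a minimal sketch of the kind of request a third-party app could make under Graph API v1.0, which Facebook retired in 2015. The endpoint shape reflects that era's friend-data permissions (such as friends_likes); the token value, field list, and app context are hypothetical illustrations, not details drawn from Kogan's app.

    import requests

    # Illustrative only: under the retired Graph API v1.0, an app authorized by one
    # user with friend permissions (e.g. friends_likes, friends_location) could
    # read those fields for the user's friends as well as for the user.
    ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # hypothetical token from one consenting user
    BASE = "https://graph.facebook.com/v1.0"
    FIELDS = "id,name,likes,location"   # illustrative field list

    # Profile data for the one user who actually installed the app...
    me = requests.get(f"{BASE}/me",
                      params={"fields": FIELDS, "access_token": ACCESS_TOKEN}).json()

    # ...and the same fields for every friend, none of whom installed the app.
    friends = requests.get(f"{BASE}/me/friends",
                           params={"fields": FIELDS, "access_token": ACCESS_TOKEN}).json()

    print(len(friends.get("data", [])), "friend records obtained via one consenting user")

The point of the sketch is the asymmetry: one user's consent unlocked records for their entire friend list.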
As for what Cambridge Analytica did with that information, you could argue that has been overblown, too. Sinister as it sounds, “psychographic” targeting—advertising to people based on information about their attitudes, interests, and personality traits—is an imprecise science at best and “snake oil” at worst. Distilled to its essence, the Cambridge Analytica scandal is a case of campaign consultants using some shady tactics to try to get their message out to the most receptive audience in the most effective way they can. That is, in a word, politics.
From Facebook’s perspective, then, the company is on the ropes mostly because an unscrupulous developer abused a permissive data policy that it has since tightened. (There’s also the fact that it apparently kept the 2015 leak of user data quiet for years and that it hired and continues to employ a researcher who was connected to it. Facebook has not commented on either of those reports.) It’s possible to imagine rogue app developers exploiting other platforms, such as Twitter, Android, or Apple’s iOS, in an analogous fashion.
That helps to explain why Facebook executives mounted a tone-deaf defense of their company on Twitter this past weekend, even as the outcry kept growing: They really don’t think they did much wrong here. The company’s chief security officer, among others, pushed back on the March 17 Guardian story that broke the scandal for referring to it as a “data breach.” (That executive, Alex Stamos, has since deleted his tweets, and he confirmed reports Monday that he is no longer Facebook’s CSO and is planning to leave the company in August.) As Tiffany C. Li pointed out in Slate, the semantic point matters because a data breach could expose Facebook to legal action from state governments and the Federal Trade Commission, including fines and other remedies.
But if there was no data breach, and Facebook’s security wasn’t compromised, then why isn’t it just Cambridge Analytica that’s in the barrel this week?
It’s partly because the stakes in this particular data scandal are so high. Had the same data been used to sell people refrigerators or send them email spam, the story would not be playing out on such a big stage. In other words: Almost any significant role Facebook played in Donald Trump’s success would be a momentous one, because his victory altered the course of history. Many of those who opposed Trump are still furious and still searching for people to blame. And we already know Facebook was a key part of his strategy, as it was for the U.K.’s Brexit campaign, in which Cambridge Analytica was also involved.
But there’s another reason Facebook is getting pilloried over this in a way that another technology company—say, Apple or Microsoft—might not. It isn’t just that Facebook was careless with its users’ data in this instance or that its policy of allowing third-party apps access to information on users’ friends was cavalier and misguided (though it certainly was both of those). It’s that Facebook is the chief architect of the entire socio-commercial arrangement by which people around the world routinely offer up their personal information in exchange for the free use of online services.
Facebook isn’t just the source of the data that Cambridge Analytica used. It’s the reason this sort of data—organized in this way—exists in the first place. Sure, Google and Twitter and plenty of other companies employ similar business models. And the idea of supporting a website by showing people ads has been around longer still. But it was Facebook, more than any of these, that taught people around the world to freely give themselves online and to accept the use of their personal data in targeted advertisements as the price of admission to the modern internet.
If you think of that data, and the ads, as a relatively small price to pay for the privilege of seamless connection to everyone you know and care about, then Facebook looks like the wildly successful, path-breaking company that made it all possible. But if you start to think of the bargain as Faustian—with hidden long-term costs that overshadow the obvious benefits—then that would make Facebook the devil.
What this scandal did, then, was make the grand bargain of the social web look a little more Faustian than it did before.
From that perspective, the real scandal is that this wasn’t a data breach or some egregious isolated error on Facebook’s part. What Cambridge Analytica did was, in many ways, what Facebook was optimized for—collating personal information about vast numbers of people in handy packets that could then be used to try to sell them something.
Yes, the rules were supposed to prohibit these specific packets from being used in this specific way. But with high enough stakes, it was probable, if not inevitable, that those rules would be broken. Facebook appears to have given little thought to how to enforce them, beyond shaking hands and hoping for the best. All indications are that it simply cared more about growth in 2014 than it did about users’ privacy. That the company has evidently matured in recent years doesn’t excuse the way it was built.
TechCrunch’s Josh Constine has followed Facebook as closely as anyone in the media over the past five years, and he’s been known to defend the company when it seems just about everyone else is attacking it. Not this time. In a piece headlined “Facebook and the Endless String of Worst-Case Scenarios,” he catalogues nearly a dozen instances over the years in which the company has launched products without the safeguards needed to prevent abuse, then ignored or downplayed the consequences.
That habit may be catching up to it at last: Facebook is not getting the benefit of many doubts when it comes to Cambridge Analytica, and it’s hard to feel much sympathy for it. The time for Facebook to self-regulate its way out of the hot seat has probably passed. Now it’s up to the public, legislators, and regulators to rework the terms of that agreement by which people sign away their personal data—and one another’s—for the benefit of tech platforms, their advertising clients, and whoever else might be sneaky enough to get their hands on it.

Guardian: Cambridge Analytica acquired 50m Facebook profiles from a researcher in 2014. This appears to have been among the most consequential data breaches in history, with an impact that may rival the breach of financial records from Equifax.
There are many problematic aspects to this:
  • It appears the information was harvested by a researcher who collected data not only on the 270,000 or so users who Facebook said took his survey but also on their friends, who knew nothing about the survey, and then passed it to Cambridge Analytica in violation of Facebook’s terms of service. There are questions now over whether the data was destroyed.
  • Facebook waited more than two years before revealing what the Observer described as “unprecedented data harvesting”.
  • Facebook did not notify the affected users, as may be required by its 2011 consent decree with the Federal Trade Commission (FTC).
  • Cambridge Analytica appears to have used the profiles to develop techniques for influencing voters.
The company has denied wrongdoing, saying “no data from [the researcher] was used by Cambridge Analytica as part of the services it provided to the Donald Trump 2016 presidential campaign”. But there are questions over whether the Trump campaign nonetheless gained an advantage in the election from the data.
The Observer report contradicts Cambridge Analytica’s chief executive, who said the company did not have Facebook data. Facebook waited more than two years after it discovered the breach before suspending Cambridge Analytica from its platform. The New York Times reported that at least some of the data is still available on the internet.

Cambridge Analytica has denied inappropriate use of Facebook user profiles, but a former employee who is now a whistleblower has emphatically contradicted that claim.
Facebook now has 2.1bn active users, 1.4bn of whom use the site every day. As a social networking platform, it enables people to share ideas, photos and life events with friends, which collectively gives Facebook a higher-resolution image of each user – with an emphasis on emotions – than any other media company possesses.
For advertisers, Facebook is exceptional for its ability to target more than half of all the people in every developed market, and for the power that targeting gives them. On Facebook, advertisers can buy the equivalent of the Super Bowl audience – or any other audience – any day of the year.
Five years ago, researchers hypothesized that Facebook algorithms could be used to predict things like product and political preferences from just a handful of “likes”. Those researchers were concerned about the privacy implications, in part because the default Facebook setting for likes was “public”.
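As a rough illustration of the approach those researchers described, the sketch below trains a toy “likes to traits” model: a binary user-by-page like matrix is compressed with SVD, and a linear classifier predicts a binary trait from the components. The data, dimensions, and trait here are all synthetic and invented for illustration; this is not the researchers’ actual pipeline.

    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in data: rows are users, columns are pages, 1 = user "liked" the page.
    rng = np.random.default_rng(0)
    likes = rng.binomial(1, 0.05, size=(5000, 2000))

    # Hypothetical binary trait to predict (e.g. a political preference), correlated
    # with a handful of pages so the model has signal to find.
    trait = (likes[:, :25].sum(axis=1) + rng.normal(0, 1, 5000) > 1).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

    # Reduce the sparse like matrix with SVD, then fit a simple linear classifier
    # on the components -- the general shape of likes-based trait prediction.
    model = make_pipeline(TruncatedSVD(n_components=100, random_state=0),
                          LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

The unsettling part the researchers flagged is not the modeling, which is routine, but the input: likes were public by default, so anyone could assemble that matrix.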
Cambridge Analytica thought it could transform US politics by exploiting that insight.

With the 2016 election cycle fast approaching, Cambridge Analytica did not have time to create its own custom profiles. So it went to researcher Aleksandr Kogan, who created a Facebook app that paid users to take a personality test.
There were problems with this arrangement. First, Kogan did not have permission from Facebook to use the data he gathered for commercial purposes, which is the best characterization of his relationship with Cambridge Analytica. Second, the app harvested not only each test taker’s profile data – which could be compared with the results of the personality test – but also the profile data of each test taker’s friends, none of whom were notified.

Was any of this illegal? Facebook may be liable for a data breach, which may create legal problems under state law. The attorney general of Massachusetts has announced an investigation. Cambridge Analytica may face charges that it broke US election laws by employing people who were neither US citizens nor green card holders on a US presidential election campaign. Both may be subject to action by the FTC. Or perhaps not.
We live in a world of big data, where companies get rich off our personal information with few constraints and almost no supervision. Companies offer us free applications that are convenient, useful and fun in exchange for perpetual rights to the data they can harvest from our actions online (and sometimes offline).
The big data companies are opaque to consumers and regulators alike, so few people understand the risks and companies can often hide data breaches for a long time. US law provides very little privacy protection, leaving consumers with little or no recourse when they are harmed.
It is past time that the US recognize that data is too important to be unregulated. Equifax has yet to face significant consequences, despite losing control of the financial data of most adult Americans. Is that appropriate? Will Facebook face consequences for the data it lost to Cambridge Analytica? Will Cambridge Analytica or the Trump campaign be held to account?
  • Roger McNamee was an early investor in Facebook and a mentor to founder Mark Zuckerberg. Sandy Parakilas was an operations manager at Facebook in 2011 and 2012, and was responsible for privacy and policy issues on Facebook Platform.


