A Blog by Jonathan Low

 

Nov 14, 2021

Why Ending Data Extraction May Be the Only Answer To Surveillance Capitalism

If people want to end or moderate an economy in which knowledge about them is the product, the only way to do so is to halt the extraction and monetization of their data. JL 

Shoshana Zuboff comments in the New York Times:

The world’s democracies now confront a tragedy of the “un-commons.” Information spaces people assume to be public are ruled by private commercial interests for profit. The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake, including the demolition of social norms and the weakening of democratic institutions. Prediction was the first imperative, and it determined the second: extraction. Lucrative predictions required flows of human data at unimaginable scale. An economy founded on the extraction of human data assumes the destruction of privacy as a nonnegotiable condition of business.

Facebook is not just any corporation. It reached trillion-dollar status in a single decade by applying the logic of what I call surveillance capitalism — an economic system built on the secret extraction and manipulation of human data — to its vision of connecting the entire world. Facebook and other leading surveillance capitalist corporations now control information flows and communication infrastructures across the world.

These infrastructures are critical to the possibility of a democratic society, yet our democracies have allowed these companies to own, operate and mediate our information spaces unconstrained by public law. The result has been a hidden revolution in how information is produced, circulated and acted upon. A parade of revelations since 2016, amplified by the whistle-blower Frances Haugen’s documentation and personal testimony, bears witness to the consequences of this revolution.

The world’s liberal democracies now confront a tragedy of the “un-commons.” Information spaces that people assume to be public are strictly ruled by private commercial interests for maximum profit. The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake: the wholesale destruction of privacy, the intensification of social inequality, the poisoning of social discourse with defactualized information, the demolition of social norms and the weakening of democratic institutions.

These social harms are not random. They are tightly coupled effects of evolving economic operations. Each harm paves the way for the next and is dependent on what went before.

There is no way to escape the machine systems that surveil us, whether we are shopping, driving or walking in the park. All roads to economic and social participation now lead through surveillance capitalism’s profit-maximizing institutional terrain, a condition that has intensified during nearly two years of global plague.

Will Facebook’s digital violence finally trigger our commitment to take back the “un-commons”? Will we confront the fundamental but long-ignored questions of an information civilization: How should we organize and govern the information and communication spaces of the digital century in ways that sustain and advance democratic values and principles?

Search and Seizure

Facebook as we now know it was fashioned from Google’s rib. Mark Zuckerberg’s start-up did not invent surveillance capitalism. Google did that. In 2000, when only 25 percent of the world’s information was stored digitally, Google was a tiny start-up with a great search product but little revenue.

By 2001, in the teeth of the dot-com bust, Google’s leaders found their breakthrough in a series of inventions that would transform advertising. Their team learned how to combine massive data flows of personal information with advanced computational analyses to predict where an ad should be placed for maximum “click through.” Predictions were computed initially by analyzing data trails that users unknowingly left behind in the company’s servers as they searched and browsed Google’s pages. Google’s scientists learned how to extract predictive metadata from this “data exhaust” and use it to analyze likely patterns of future behavior.
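
For technically inclined readers, a minimal, hypothetical sketch may make that mechanism concrete: a click-through model fitted on a user’s behavioral trail. The feature names and data below are invented for illustration, and the code (plain Python with scikit-learn) is not a description of Google’s actual systems; it only shows the general shape of predicting future behavior from “data exhaust.”

```python
# Hypothetical sketch of click-through prediction from behavioral signals.
# Feature names and data are invented; this is NOT Google's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one ad impression:
# [searches_on_topic, pages_viewed_today, prior_clicks_on_similar_ads]
behavioral_signals = np.array([
    [12, 40, 3],
    [0,  5,  0],
    [7,  22, 1],
    [1,  3,  0],
    [15, 55, 4],
    [2,  8,  0],
])
clicked = np.array([1, 0, 1, 0, 1, 0])  # 1 = the user clicked the ad

# Fit a simple model on the behavioral trail left behind by users.
model = LogisticRegression().fit(behavioral_signals, clicked)

# Score a new user: the predicted probability drives ad placement.
new_user = np.array([[9, 30, 2]])
print(f"Predicted click probability: {model.predict_proba(new_user)[0, 1]:.2f}")
```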

Prediction was the first imperative that determined the second imperative: extraction. Lucrative predictions required flows of human data at unimaginable scale. Users did not suspect that their data was secretly hunted and captured from every corner of the internet and, later, from apps, smartphones, devices, cameras and sensors. User ignorance was understood as crucial to success. Each new product was a means to more “engagement,” a euphemism used to conceal illicit extraction operations.

When asked “What is Google?” the co-founder Larry Page laid it out in 2001, according to a detailed account by Douglas Edwards, Google’s first brand manager, in his book “I’m Feeling Lucky”: “Storage is cheap. Cameras are cheap. People will generate enormous amounts of data,” Mr. Page said. “Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”

Instead of selling search to users, Google survived by turning its search engine into a sophisticated surveillance medium for seizing human data. Company executives worked to keep these economic operations secret, hidden from users, lawmakers, and competitors. Mr. Page opposed anything that might “stir the privacy pot and endanger our ability to gather data,” Mr. Edwards wrote.

Massive-scale extraction operations were the keystone of the new economic edifice and superseded other considerations, beginning with the quality of information, because in the logic of surveillance capitalism, information integrity is not correlated with revenue.

This is the economic context in which disinformation wins. As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.

Facebook, the First Follower

Mr. Zuckerberg began his entrepreneurial career in 2003 while a student at Harvard. His website, Facemash, invited visitors to rate other students’ attractiveness. It quickly drew outrage from his peers and was shuttered. Then came TheFacebook in 2004 and Facebook in 2005, when Mr. Zuckerberg acquired his first professional investors.

Facebook’s user numbers quickly grew; its revenues did not. Like Google a few years earlier, Mr. Zuckerberg could not turn popularity into profit. Instead, he careened from blunder to blunder. His crude violations of users’ privacy expectations provoked intense public backlash, petitions and class-action suits. Mr. Zuckerberg seemed to understand that the answer to his problems involved human data extraction without consent for the sake of advertisers’ advantage, but the complexities of the new logic eluded him.

He turned to Google for answers.

In March 2008, Mr. Zuckerberg hired Google’s head of global online advertising, Sheryl Sandberg, as his second in command. Ms. Sandberg had joined Google in 2001 and was a key player in the surveillance capitalism revolution. She led the build-out of Google’s advertising engine, AdWords, and its AdSense program, which together accounted for most of the company’s $16.6 billion in revenue in 2007.

A Google multimillionaire by the time she met Mr. Zuckerberg, Ms. Sandberg had a canny appreciation of Facebook’s immense opportunities for extraction of rich predictive data. “We have better information than anyone else. We know gender, age, location, and it’s real data as opposed to the stuff other people infer,” Ms. Sandberg explained, according to David Kirkpatrick in “The Facebook Effect.”

The company had “better data” and “real data” because it had a front-row seat to what Mr. Page had called “your whole life.”

Facebook paved the way for surveillance economics with new privacy policies in late 2009. The Electronic Frontier Foundation warned that new “Everyone” settings eliminated options to restrict the visibility of personal data, instead treating it as publicly available information.

TechCrunch summarized the corporation’s strategy: “Facebook is forcing users to choose their new privacy options to promote the ‘Everyone’ update, and to clear itself of any potential wrongdoing going forward. If there is significant backlash against the social network, it can claim that users willingly made the choice to share their information with everyone.”

Weeks later, Mr. Zuckerberg defended these moves to a TechCrunch interviewer. “A lot of companies would be trapped by the conventions and their legacies,” he boasted. “We decided that these would be the social norms now, and we just went for it.”

Mr. Zuckerberg “just went for it” because there were no laws to stop him from joining Google in the wholesale destruction of privacy. If lawmakers wanted to sanction him as a ruthless profit-maximizer willing to use his social network against society, then 2009 to 2010 would have been a good opportunity.

A Sweeping Economic Order

Facebook was the first follower, but not the last. Google, Facebook, Amazon, Microsoft and Apple are private surveillance empires, each with distinct business models. Google and Facebook are data companies and surveillance-capitalist pure plays. The others have varied lines of business that may include data, services, software and physical products. In 2021 these five U.S. tech giants represent five of the six largest publicly traded companies by market capitalization in the world.

As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information. The promise of the surveillance dividend now draws surveillance economics into the “normal” economy, from insurance, retail, banking and finance to agriculture, automobiles, education, health care and more. Today all apps and software, no matter how benign they appear, are designed to maximize data collection.

Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic. The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage.

All of it begins with extraction. An economic order founded on the secret massive-scale extraction of human data assumes the destruction of privacy as a nonnegotiable condition of its business operations. With privacy out of the way, ill-gotten human data are concentrated within private corporations, where they are claimed as corporate assets to be deployed at will.

The social effect is a new form of inequality, reflected in the colossal asymmetry between what these companies know about us and what we know about them. The sheer size of this knowledge gap is conveyed in a leaked 2018 Facebook document, which described its artificial intelligence hub ingesting trillions of behavioral data points every day and producing six million behavioral predictions each second.

Next, these human data are weaponized as targeting algorithms, engineered to maximize extraction and aimed back at their unsuspecting human sources to increase engagement. Targeting mechanisms change real life, sometimes with grave consequences. For example, the Facebook Files depict Mr. Zuckerberg using his algorithms to reinforce or disrupt the behavior of billions of people. Anger is rewarded or ignored. News stories become more trustworthy or unhinged. Publishers prosper or wither. Political discourse turns uglier or more moderate. People live or die.

Occasionally the fog clears to reveal the ultimate harm: the growing power of tech giants willing to use their control over critical information infrastructure to compete with democratically elected lawmakers for societal dominance. Early in the pandemic, for example, Apple and Google refused to adapt their operating systems to host contact-tracing apps developed by public health authorities and supported by elected officials. In February, Facebook shut down many of its pages in Australia as a signal of refusal to negotiate with the Australian Parliament over fees for news content.

That’s why, when it comes to the triumph of surveillance capitalism’s revolution, it is the lawmakers of every liberal democracy, especially in the United States, who bear the greatest burden of responsibility. They allowed private capital to rule our information spaces during two decades of spectacular growth, with no laws to stop it.

Fifty years ago the conservative economist Milton Friedman exhorted American executives, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” Even this radical doctrine did not reckon with the possibility of no rules.

Democracy’s Counterrevolution

Democratic societies riven by economic inequality, climate crisis, social exclusion, racism, public health emergency and weakened institutions have a long climb toward healing. We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications. The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.

Neither Google, nor Facebook, nor any other corporate actor in this new economic order set out to destroy society, any more than the fossil fuel industry set out to destroy the earth. But like global warming, the tech giants and their fellow travelers have been willing to treat their destructive effects on people and society as collateral damage — the unfortunate but unavoidable byproduct of perfectly legal economic operations that have produced some of the wealthiest and most powerful corporations in the history of capitalism.

Where does that leave us? Democracy is the only countervailing institutional order with the legitimate authority and power to change our course. If the ideal of human self-governance is to survive the digital century, then all solutions point to one solution: a democratic counterrevolution. But instead of the usual laundry lists of remedies, lawmakers need to proceed with a clear grasp of the adversary: a single hierarchy of economic causes and their social harms.

We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means moving beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces. Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.

Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.” Remedies that focus on regulating extraction are content neutral. They do not threaten freedom of expression. Instead, they liberate social discourse and information flows from the “artificial selection” of profit-maximizing commercial operations that favor information corruption over integrity. They restore the sanctity of social communications and individual expression.

No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests. Regulating extraction would eliminate the surveillance dividend and with it the financial incentives for surveillance.

While liberal democracies have begun to engage with the challenges of regulating today’s privately owned information spaces, the sober truth is that we need lawmakers ready to engage in a once-a-century exploration of far more basic questions: How should we structure and govern information, connection and communication in a democratic digital century? What new charters of rights, legislative frameworks and institutions are required to ensure that data collection and use serve the genuine needs of individuals and society? What measures will protect citizens from unaccountable power over information, whether it is wielded by private companies or governments?

Liberal democracies should take the lead because they have the power and legitimacy to do so. But they should know that their allies and collaborators include the people of every society struggling against a dystopian future.

The corporation that is Facebook may change its name or its leaders, but it will not voluntarily change its economics.

Will the call to “regulate Facebook” dissuade lawmakers from a deeper reckoning? Or will it prompt a heightened sense of urgency? Will we finally reject the old answers and free ourselves to ask the new questions, beginning with this: What must be done to ensure that democracy survives surveillance capitalism?
