A Blog by Jonathan Low


Jan 1, 2019

Which Should Worry Us More: Google or Facebook?

As the new year dawns, it is probably prudent to treat interactions with all tech companies cautiously. JL


Gillian Tett comments in the Financial Times:

Google hangs on to the data it collects, then uses it to create targeted search-and-advertising offerings, customised for users. Facebook lets third-party developers access its data. Extensive research with psychologists and data scientists into Google’s “search”, or “autocomplete”, function (suggests) that search engines can sway our minds in extraordinarily powerful and largely unnoticed ways too — and not only about politics. The better auto-prediction becomes, the greater the potential risk that users will become lazily sucked into digital echo chambers.
A couple of months ago, a veteran investor in Silicon Valley conducted an experiment: he extracted all the data that Facebook and Google each held about him and compared the files.
The results startled him — Google held dramatically more information, by a large multiple. “It’s amazing,” he told me over breakfast in San Francisco. “Why is nobody talking about that?”
It is an interesting question, particularly if you use Google’s services numerous times each day, as I do. One answer might be that Google executives have been savvy in building political support networks. Another is that Google hangs on to the data it collects itself, and then uses it to create targeted search-and-advertising offerings, customised for users. Facebook lets third-party developers access its data, which is why the antics of Cambridge Analytica have sparked so much furore.
This distinction may make Google sound more benign, but does it mean we can relax? Robert Epstein, a psychologist with the American Institute for Behavioral Research and Technology in California, thinks not. In recent years, he has conducted extensive research with fellow psychologists and data scientists into Google’s “search”, or “autocomplete”, function. This has left him convinced that search engines can sway our minds in extraordinarily powerful and largely unnoticed ways too — and not only about politics.
“A search engine has the power to manipulate people’s searches from the very first character people type into the search bar,” says a research paper that this group presented to a psychology conference in Oregon last month. “A simple yet powerful way for a search-engine company to manipulate elections is to suppress negative search suggestions for the candidate it supports, while allowing one or more negative search suggestions to appear for the opposing candidate.”
Epstein’s group asked 661 Americans to pick one of two candidates in an Australian election. Since it was presumed they did not know much about Antipodean politics, the participants were instructed to research them with a Google-type search engine that offered the usual autocomplete suggestions when words were typed in.
However, the researchers also varied the search suggestions shown beneath a candidate’s name, including a range of positive and negative words. The results were stark. When participants were later questioned about their voting preferences, changing the ratio of positive to negative suggestions in the autocomplete was shown to be capable of shifting the preferences of undecided voters by nearly 80 per cent — even though participants seemed free to search for any material they wanted. Another study found that when participants were only offered four autocomplete suggestions, they were very easily manipulated; when there were 10 to choose from, they were not.
These results do not demonstrate that Google — or any other search-engine company such as Bing or Yahoo — has used this power to manipulate its users. But Epstein’s paper highlights some patterns that he considers strange. His group discovered that if you type the names of Google competitors into its search engine, followed by the word “is”, phrases such as “Yahoo is dead” or “Bing is trash” may surface in the autocomplete bar. According to Epstein, at that time the same did not happen on Yahoo or Bing’s own search engines.

Another striking pattern cropped up in August 2016. When the words “Hillary Clinton is” were typed into Google’s search engine, the autocomplete offered phrases such as “Hillary Clinton is winning”; on Yahoo and Bing, the autocomplete suggested “Hillary Clinton is a liar” and “Hillary Clinton is a criminal”.
Google executives say these different auto-suggestion patterns arose because the company has a policy of removing offensive auto-predictions. “Google removes predictions that are against our autocomplete policies, which bar . . . hateful predictions against groups and individuals on the basis of race, religion or several other demographics,” wrote Danny Sullivan, a company senior executive, in a blog post last month.
They have also firmly denied they have ever tried to use the autocomplete tool to manipulate users. They have said Epstein’s work was based on a small sample size, using a Google-style search engine rather than Google’s own data. In a rare official comment in 2015 about some of Epstein’s work, executives said: “Google has never ever re-ranked search results on any topic (including elections) to manipulate user sentiment.”
If nothing else, this research should make us all ponder the way in which we use that “autocomplete” function. The better auto-prediction becomes, the greater the potential risk that users will become lazily sucked into digital echo chambers. Epstein believes there is a simple fix. He thinks the search engines should put a simple “health warning” about the dangers of echo chambers — and manipulation — on their sites to counter these possible risks.
Whether or not you accept Epstein’s research, this seems a good idea. But don’t expect it to happen soon — or not unless more consumers, and regulators, do what my Silicon Valley breakfast companion did: namely look at the data that all the biggest tech companies hold on us, starting — but not finishing — with Facebook.
