A Blog by Jonathan Low


Feb 4, 2017

Can the Internet Defeat 'Alternative Facts?'

Maybe. But only with the concerted efforts of those motivated enough to contribute to exposing falsehood and promoting truth.

Ultimately, it's not just the data, but the origins, the context and the process.  JL

Jimmy Wales, founder of Wikipedia, comments in The Guardian:

Open source showed us that “given enough eyeballs, all bugs are shallow.” If there is kryptonite to false information, it’s transparency. Platforms expose information about the content people are seeing, and why they’re seeing it. But the internet has always been a messy, complicated place. Users range from those committed to truth to those intent on worse. We need visibility because it sheds light on the process and origins of information and creates a structure for accountability.
Last year we saw a proliferation of disinformation online. “Fake news” sites interfered with political discourse and sentiment around the world. Filter bubbles limited our perspectives. Oxford Dictionaries named “post-truth” its international word of the year. As we started 2017, we heard new terms, such as “alternative facts”. It’s tempting to wonder: is this the beginning of the end of reliable information? Were the hopes for an open and inclusive web misplaced? Is this the dark age of the internet?
The spread of false information online is a real threat. The decline of our fact base undermines our capacity to have meaningful conversations and solve problems across the globe.
But the internet has always been a messy and complicated place. Users have long ranged from those committed to finding and sharing truth to those intent on pranks, vandalism and worse. Instead of fake news articles on your cousin’s social media feed, the 1990s saw your older relatives finally get online, only to start emailing the entire family hoaxes with “Fwd: Fwd: Fwd: Fwd:” in the subject line. Yesterday’s fake news travelled via email because that’s the main way people communicated online.
The internet has always been a place for experimentation and ingenuity. Its power comes from its participants. For every harmful action online, there are countless positive contributions by people seeking to connect with one another, express themselves, and expand our shared knowledge base.
I’m an optimist. I had to be to start Wikipedia, a project that sounded impossible 16 years ago. How could we get millions of people to work together, across borders and perspectives, without pay, to build a reliable, accurate encyclopedia? But it worked.
While fake news is not new, delivery methods have evolved. Social media feeds, doctored videos and instant messaging have largely replaced email as the main vehicles for misinformation. Today’s internet is much vaster and more linked, and moves much more relentlessly than it did at the turn of the century. Last year in India, when a new 2,000 rupee bill was introduced, fake news claimed that the bill was equipped with a surveillance chip. Though later debunked, the “news” spread like wildfire on the messaging platform WhatsApp, which has 50 million monthly users in India. The question is: how do we, as consumers and institutions, respond?

In restoring our common fact base, legacy media organisations (newspapers, television networks, book publishers) have a crucial role to play. Venerable journalistic institutions have extraordinary reporting, research and marketing resources at their disposal, and must redouble their efforts to remain trusted public mediators of what is true. Given the ideological polarisation of the media and filter bubbles on social media, this is a tall order.
Fake news sites seemingly evolve overnight. Many have the same layout style as legitimate newspaper websites, but lead with alarming headlines that bait readers to click and immediately share. Even if they don’t recognise the source by name, the visuals can look legitimate enough for a casual reader not to notice. In this messy age, we need new tools to distinguish truth from falsehood across the digital sprawl. Many social and digital platforms are trying to address the problem by creating algorithms that can identify fake sources, but what’s missing from this solution is the human element.
Everyone can agree that social platforms need to do something when falsehoods are being shared millions of times, but none of us is comfortable with the social media giants deciding what’s valid or not. It’s impossible to fully automate the process of separating truth from falsehood, and it’s dubious to cede such control to for-profit media giants. What’s needed are human solutions that rely not just on third-party fact-checking bots but on the power of collaboration. We need people from across the political spectrum to help identify bogus websites and point out fake news. New systems must be developed to empower individuals and communities – whether as volunteers, paid staff or both.
To tap into this power, we need openness. Consider the open-source software movement. Beginning in the 1980s, communities of software developers released code under open licences that allowed other developers to access, reuse and improve code, leading to innovation at scale. Open source showed us that, as the developer Eric Raymond put it, “given enough eyeballs, all bugs are shallow”. Today, some of the world’s most popular technologies are open source.
Wikipedia has some lessons to offer the builders of new systems. Its editors sift through the online cacophony to differentiate reliable sources from those that traffic in falsehoods. They produce massive amounts of accurate content through an open model. Anyone in the world can add material to articles; anyone can challenge that material and start a discussion. This means more eyeballs on more information and more accountability. No matter what their political leanings, editors have to play by the same rules in creating, refining and fact-checking content: verifiability, neutrality, and no original research. On the discussion pages behind every article, differing viewpoints are displayed.

By being exposed to this process, people can become more balanced and information more reliable over time. A recent Harvard Business School study found that as more volunteers revised and moderated content on the platform, bias and inconsistencies wore away, and that individual editors tended to become less biased over time.
If there is any kryptonite to false information, it’s transparency. Technology platforms can choose to expose more information about the content people are seeing, and why they’re seeing it. We need this visibility because it sheds light on the process and origins of information and creates a structure for accountability. We need online spaces for open dialogue across a variety of viewpoints. These spaces must be inclusive by design – toxic behaviour, including harassment, is unfortunately a fact of the internet. We need ground rules, commitment to verification, civil dialogue and active participation. And we need to apply these principles to all our online activity.
The rise of the internet may have created our current predicament, but the people who populate the internet can help us get out of it. Next time you go back and forth with someone over a controversial issue online, stick to facts with good sources, and engage in open dialogue. Most importantly, be nice. You may end up being a small part of the process whereby information chaos becomes knowledge. And you will be helping eradicate fake news at the same time.
