A Blog by Jonathan Low


Jul 14, 2019

Why Content Moderation At Scale Is Impossible

The genie is out of the bottle. Information may not want to be free, but it wants to be available -- and there is almost always an audience that can legalistically (if not legally) justify its reasoning. JL

Mike Masnick reports in Techdirt:

Highlighting the impossibility of content moderation at scale, you can understand why someone might decide that videos explaining how to "bypass secure computer systems or steal user credentials and personal data" are bad and potentially dangerous -- and you can understand the thinking that says "ban it." It's much like the debate over "terrorist content" online, where many demand that it be taken down immediately -- but when we look at the actual impact of that decision, we find that removing such content appears to make it harder to stop actual terrorist activity, because it's now harder to track.
Last week there was a bit of an uproar about YouTube supposedly implementing a "new" policy that banned "hacking" videos on its platform. It came to light when Kody Kinzie from Hacker Interchange tweeted about YouTube blocking an educational video he had made about launching fireworks via WiFi.
Kinzie noted that YouTube's rules on "Harmful or dangerous content" now listed the following as an example of what kind of content not to post:
Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.
This resulted in some quite reasonable anger at what appeared to be a pretty dumb policy. Marcus "MalwareTech" Hutchins posted a detailed blog post on this change and why it was problematic, noting that it simply reinforces the misleading idea that all hacking is bad.
Computer science/security professor J. Alex Halderman chimed in as well, to highlight how important it is for security experts to learn how attackers think and function.
Of course, some noted that while this change to YouTube's description of "dangerous content" appeared to date back to April, there were complaints about YouTube targeting "hacking" videos last year as well.
Eventually, YouTube responded to all of this and noted a few things: First, and most importantly, the removal of Kinzie's videos was a mistake, and the videos have been restored. Second, this wasn't a "new" policy, but rather the company adding some "examples" to its existing policy.
This raises a few different points. Some will say that since this was just another moderation mistake, it's a non-story; but it is actually an important illustration of the impossibility of content moderation at scale. You can certainly understand why someone might decide that videos that explain how to "bypass secure computer systems or steal user credentials and personal data" would be bad and potentially dangerous -- and you can understand the thinking that says "ban it." And, on top of that, you can see how a less sophisticated reviewer might not be able to carefully distinguish between "bypassing secure computer systems" and some sort of fun hacking project like "launching fireworks over WiFi."
But it also demonstrates that there are different needs for different users -- and having a single, centralized organization make all the decisions about what's "good" and what's "bad" is inherently a problem. Even if the Kinzie video was taken down by mistake, and even if the policy is really supposed to be focused on nefarious hacking techniques, there is still value in security researchers and security professionals being able to keep on top of what more nefarious hackers are up to.
This is not all that different from the debate over "terrorist content" online, where many are demanding that it be taken down immediately. And, conceptually, you can understand why. But when we look at the actual impact of that decision, we find that removing such content appears to make it harder to stop actual terrorist activity, because it's now harder to track.
There is no easy solution here. Some people seem to think there is a magic wand that can be waved to "leave up the bad content for good people with good intentions to use to stop that bad behavior, but block it from the bad people who want to do bad things." But that's not really possible. Yet, if we're increasingly demanding that these centralized platforms rid the world of "bad" content, at the very least we owe it to ourselves to examine whether that set of decisions has negative consequences -- perhaps even worse than just letting that content stay up.
