A Blog by Jonathan Low

 

Oct 25, 2017

What Is #MeToo Teaching Artificial Intelligence?


The answer is not yet clear. But the concern is that responding, or not responding, may affect the algorithms driving news feeds, advertisements, promotions, offers and, perhaps, governmental assessments in ways consumers are completely unaware of, let alone permitted to influence. JL

Shelly Palmer reports in Advertising Age:

How will a self-training A.I. system be biased when learning from the #MeToo hashtagged posts? Would a lack of engagement teach the algorithm that you are not interested in the subject or not empathetic to the cause? What if you were saddened by the content of a post but preferred not to comment? Would posting or sharing graphic details of a traumatic event re-characterize your profile and associate you with a kind of content you're not used to seeing?
Artificial intelligence is getting smarter every day. Google's AutoML project has learned to replicate itself—early steps on the path to superintelligence. Just down the hall at Google parent Alphabet, DeepMind's AlphaGo Zero trained itself to beat the human-trained AlphaGo 100 games to zip! As we move closer to a world where machines train themselves—but think for us—complicated questions about fairness and biases arise.
#MeToo
In response to the Harvey Weinstein allegations, the hashtag #MeToo began to surface on social media. The Twitter and Facebook posts were heart-wrenching and, in some cases, gut-wrenching. Not surprisingly, many of the personal stories included words, phrases, and concepts not usually associated with the profiles, the previous behaviors, or even the genders of the authors.
The coincidental emergence of the #MeToo hashtag and self-training, self-replicating A.I. systems got me thinking. How will a self-training A.I. system be biased when learning from the #MeToo hashtagged posts? And how would the advent of self-training A.I. affect the systems that control our news feeds and other curated content presented to us?
Silence is an action
Would a lack of engagement with any given post teach the algorithm that you are not interested in the subject or not empathetic to the cause? What if you were stunned and saddened by the content of a post but didn't know how (or preferred not) to comment? Would posting or sharing graphic details of a traumatic event re-characterize your profile and associate you with a kind of content you're not used to seeing? There is an endless list of questions one could ask.
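To make the worry concrete, here is a deliberately simplified sketch, in Python, of how an engagement-driven interest model could read silence as a signal. It is a hypothetical toy, not any platform's actual code: every post you see but don't react to nudges a topic's score toward zero, and a lower score typically means you are shown less of that topic.

```python
# Hypothetical sketch (not any platform's real system): a naive
# engagement-based interest model that treats an impression without a
# reaction as negative implicit feedback, so "silence" on a topic
# quietly lowers how often that topic is surfaced.

from collections import defaultdict

LEARNING_RATE = 0.1

def update_interest(scores, topic, engaged):
    """Nudge a topic's interest score up on engagement, down on silence."""
    signal = 1.0 if engaged else 0.0
    scores[topic] += LEARNING_RATE * (signal - scores[topic])
    return scores[topic]

interests = defaultdict(lambda: 0.5)  # neutral prior for topics the user hasn't rated

# A user sees five #MeToo posts, is moved by them, but never reacts.
for _ in range(5):
    update_interest(interests, "#MeToo", engaged=False)

print(round(interests["#MeToo"], 3))  # drifts toward 0 -> fewer such posts shown
```

In a model like this, not reacting is indistinguishable from not caring, which is exactly the ambiguity the questions above point at.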
A.I. biases
In practice, Facebook, Google, Twitter, and all other information systems that rely heavily on A.I. and machine learning have a problem they have been reluctant to discuss: A.I. bias.
This problem is not new. There are several popular examples of algorithms getting it "wrong." In September 2017, the Guardian reported on an Instagram ad on Facebook that included Olivia Solon's image and her most "engaged" post, "I will rape you before I kill you, you filthy whore!" Later that month, Facebook's A.I. blocked an ad for a march against white supremacists.
While it's easy for a human to say that the A.I. system "got it wrong," that's not what happened at all. What happened was that the output the A.I. system scored highest for a given input, and therefore surfaced as its best answer, was deemed either objectively or subjectively "wrong" by the humans it affected.
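Put differently, the system did exactly what it was built to do. Here is a minimal sketch of that logic, with a made-up engagement objective and invented scores loosely echoing the Solon example; none of this is Facebook's or Instagram's actual code.

```python
# Minimal sketch: a ranking system is never "wrong" by its own lights;
# it simply surfaces whichever candidate its scoring function rates
# highest. The candidates and scores below are invented for illustration.

def surface_best(candidates, score):
    """Return the candidate the model scores highest -- its 'best' output."""
    return max(candidates, key=score)

# Suppose the model's only objective is predicted engagement.
predicted_engagement = {
    "Photo of a violent threat Solon received (heavily commented on)": 0.92,
    "Link to Solon's latest tech article": 0.31,
    "Photo from Solon's holiday": 0.18,
}

ad_creative = surface_best(predicted_engagement, predicted_engagement.get)
print(ad_creative)  # the disturbing post wins, because engagement was the goal
```

The output is only "wrong" relative to values the objective never encoded.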
A set of machine-learning algorithms or a neural network at Facebook, trained to prevent fraudulent profiles, determined that certain Native American and drag queen names looked fake and prevented them from being used in profiles, Vice reported. When Shane Creepingbear's profile problem was brought to the company's attention, Facebook reportedly didn't do a great job of responding or fixing it. This isn't hugely surprising. It's not like you can flip a switch or change one line of code. To truly solve the problem, the A.I. needs to be retrained.
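A toy example of why that is, using a small scikit-learn pipeline that merely stands in for whatever Facebook actually runs: the bias lives in the training data, so the remedy is better data and another round of training, not an edited line of code.

```python
# Toy sketch of why this kind of bias can't be patched with one line of code.
# A name classifier learns whatever patterns dominate its training data;
# the only real fix is to correct the data and retrain. The tiny dataset and
# model here are illustrative, not Facebook's system.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Skewed training data: the "real" examples all look Anglo-European, so
# unusual surnames end up associated with the "fake" label.
names  = ["John Smith", "Mary Jones", "Emma Brown", "Xx Sparkle Xx", "Lil Troll"]
labels = ["real", "real", "real", "fake", "fake"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)),
    LogisticRegression(),
)
model.fit(names, labels)

# Retraining on corrected data, not a code patch, is how behavior changes.
# (Creepingbear's name is from the story above; the second is illustrative.)
names  += ["Shane Creepingbear", "Roberta Little Thunder"]
labels += ["real", "real"]
model.fit(names, labels)
```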
What should we do?
The last machine we will ever need to build is a machine that can replicate itself. Google took the first steps toward building the brains of that machine this year. There are a couple of ways to look at this issue. In his book "Superintelligence," philosopher Nick Bostrom reasons, "The creation of a superintelligent being represents a possible means to the extinction of mankind." It's a great book, and it will get you thinking seriously about what precautions we should take as we quickly evolve thinking machines.
Then there are some who optimistically believe that the evolution of technology will take care of itself, as it has done in the past. As the machines become smarter, we will adapt and vice versa. Move along, move along, nothing to see here.
If we look exclusively through the lens of technological evolution, history suggests neither extreme will be the case. From stone tools to intelligent machines, we have always survived and prospered. If we think of A.I. as a tool (like a knife or a gun or a computer), then we are implicitly thinking that we control the tools. But I would urge caution.
You can also think about A.I. by likening it to an alien intelligence arriving on our shores. Will humanity writ large do any better against A.I. than did thousands of nations conquered by strangers with superior technology, weaponry, and tactical intelligence in the past?
Start asking questions. Make A.I. bias and fairness an action item for every A.I. and machine-learning meeting. It's time to bring out your inner philosopher. The future of humanity may depend on it.

Author's note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.
