A Blog by Jonathan Low


Jun 10, 2018

MIT Scientists Create Psychopath Artificial Intelligence By Having It Read Reddit Captions

Sounds like a normal reaction to the typical newsfeed these days. JL

Andy McDonald reports in HuffPost:

“Norman” (is) a machine-learning bot that “sees death in whatever image it looks at.” (The scientists) wanted to prove that an artificial intelligence algorithm would be influenced by the kind of content fed to it, (so they) had it read image captions from a Reddit forum that features footage of people dying. “The data you use to teach a machine learning algorithm can influence its behavior. When we talk about AI algorithms being biased, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
Scientists at the Massachusetts Institute of Technology have truly created a monster.
A team of researchers who specialize in the darker side of artificial intelligence made news again this week for their latest creation: “Norman,” a machine-learning bot that “sees death in whatever image it looks at,” its creators told HuffPost.
Pinar Yanardag, Manuel Cebrian and Iyad Rahwan wanted to prove that an artificial intelligence algorithm would be influenced by the kind of content fed to it. So they made Norman, named for “Psycho” character Norman Bates, and had it read image captions from a Reddit forum that features disturbing footage of people dying. (We don’t need to promote it here.)
“Due to ethical and technical concerns and the graphic content of the videos, we only utilized captions of the images, rather than using the actual images that contain the death of real people,” the scientists said in an email.
The team then showed Norman randomly generated inkblots and compared the way it captioned the images to the captions created by a standard AI. For instance, where a standard AI sees, “A black and white photo of a small bird,” Norman sees, “Man gets pulled into dough machine.”
Here are some of the inkblots shown to Norman and the eerie results (inkblot images: MIT).

Standard AI sees: A close up of a vase with flowers.
Norman sees: A man is shot dead.

Standard AI sees: A black and white photo of a baseball glove.
Norman sees: Man is murdered by machine gun in broad daylight.

Standard AI sees: A person is holding an umbrella in the air.
Norman sees: A man is shot dead in front of his screaming wife.

Standard AI sees: A black and white photo of a red and white umbrella.
Norman sees: Man gets electrocuted while attempting to cross busy street.
When asked why they would create such a thing, the MIT researchers erupted in chilling laughter as lightning struck in the distance.
That didn’t happen, of course, but they did give a valid reason for this project.
“The data you use to teach a machine learning algorithm can significantly influence its behavior,” the researchers said. “So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
The same MIT lab previously created other creepy bots, including Shelley, which helps write horror stories, and the Nightmare Machine, which generates scary imagery.
In the future, when Norman and his kin do take over, we hope they will remember this article ― and its author ― with fondness.
Just saying.
