A Blog by Jonathan Low


Apr 1, 2017

Can Alexa Lie?

Artificially intelligent systems are a reflection of those who program them. Which means the answer is yes. JL

Shelly Palmer reports in Advertising Age:

Alexa features thousands of pre-programmed responses. Could any of those answers be lies? Of course. The system will return whatever it is programmed to return. Offspring figure out how to manipulate their mothers' attention using different types of crying sounds. There are tens of thousands of plants and animals that use lures or camouflage to deceive their prey. Living things lie from birth. They are naturally selected to do it. So will AI systems.
There was a recent tabloid piece featuring a video of a woman asking Amazon's virtual assistant Alexa if it was connected to the CIA. At the time, the Echo Dot she was speaking to did not respond to the question. She asked a few times, and each time the Echo was silent. Conspiracy theorists weighed in. It was an amusing video, but the Daily Mail's clickbait headline raises a legitimate question: Can Alexa lie?
How Alexa Works
According to Amazon, you can "use the Alexa Voice Service (AVS) to add intelligent voice control to any connected product that has a microphone and speaker." Alexa uses machine learning to recognize what you say (a process known as automatic speech recognition), understand the question (natural language understanding), and then route your request to the service that can answer it.
You can think of Alexa as voice control for any app. Instead of tapping a button on your phone, you just talk. But Alexa is not "thinking" about what you said; the system is passing the request on to an algorithm that does its best to return the correct results. It's just like typing a request into the Amazon or Spotify or Google search bar. Whatever product or song or search result is available is what Alexa speaks or plays back to you.
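To make that pass-through flow concrete, here is a minimal sketch in Python. It is purely illustrative: the function names (transcribe_audio, classify_intent, route_to_backend) are invented stand-ins for the three stages described above, not part of Amazon's actual Alexa Voice Service API.

# Illustrative only: invented stand-ins for the three stages the article names.
def transcribe_audio(audio_clip):
    # Automatic speech recognition: turn raw audio into text.
    return "play my running playlist"

def classify_intent(text):
    # Natural language understanding: map the text to an intent plus slots.
    return "PlayMusic", {"playlist": "running"}

def route_to_backend(intent, slots):
    # Routing: hand the request to whatever service owns that intent.
    if intent == "PlayMusic":
        return "Playing the {} playlist".format(slots["playlist"])
    return "Sorry, I don't know how to help with that."

def handle_utterance(audio_clip):
    text = transcribe_audio(audio_clip)
    intent, slots = classify_intent(text)
    answer = route_to_backend(intent, slots)
    return answer  # Alexa speaks back whatever the backend returned.

print(handle_utterance(b"...raw audio bytes..."))

The point of the sketch is the last step: nowhere in that loop does the system evaluate whether the answer is true. It simply relays whatever the backend produced.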
Pre-Programmed Responses
There's not a lot of AI in Alexa's pre-programmed responses. It's really a parlor trick. For example, ask Alexa to tell you a "knock, knock" joke or a "back to school" joke, and it will randomly pull from a short list. Ask "Which came first, the chicken or the egg?", how much Alexa weighs, where Alexa was born, or the answer to "life, the universe and everything," and it will quickly come back with a cute response. Alexa features thousands of pre-programmed responses. Could any of those answers be lies? Of course. The system will return whatever it is programmed to return. Asking if Alexa can lie is the wrong question. Could someone build a "lie" into the third-party database? Sure. But what's the point?
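To see how easy baking in a "lie" would be, here is a toy sketch of a canned-response lookup. The intent names and strings are invented for illustration; they are not Amazon's actual data or code.

import random

# Invented examples of canned responses; a "lie" would just be another string.
CANNED_RESPONSES = {
    "KnockKnockJoke": [
        "Knock, knock. Who's there? Lettuce. Lettuce who? Lettuce in, it's cold!",
        "Knock, knock. Who's there? Boo. Boo who? Don't cry, it's only a joke.",
    ],
    "MeaningOfLife": ["42."],
    "ConnectedToCIA": ["No."],  # returned verbatim, whether or not it's true
}

def canned_answer(intent):
    # No reasoning happens here: just a random pick from a fixed list.
    return random.choice(CANNED_RESPONSES[intent])

print(canned_answer("KnockKnockJoke"))
print(canned_answer("ConnectedToCIA"))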
What You're Really Asking
What you really want to ask is, "Can Alexa Voice Services' underlying natural language understanding algorithm be trained to interpret a question and purposely respond with a false answer?" Again, the answer is no because the system does not work that way. After doing its best to figure out what you said, it passes that information on to a related database, waits for a response and then speaks the answer back to you. But …
What About Other AI Systems?
This is where life gets interesting! From January 11 to 31, 2017, Libratus (an AI system designed at Carnegie Mellon to play heads-up, no-limit Texas Hold'em) decimated four of the world's top human poker players in a match of 120,000 hands. The humans didn't just lose; they finished the match down by nearly two million in tournament chips.
This Is a Very Big Deal
In March 2016, when AlphaGo beat 9th-dan Go master Lee Sedol in the Google DeepMind Challenge Match, people were stunned that an AI system could beat a human in a game that required both strategy and intuition. But as amazing as this accomplishment was (move 37 in game 2 changed my life, and I wrote about it in an article titled "AlphaGo vs. You: Not a Fair Fight"), Go is a game where both players have what game theorists call "perfect information." In other words, both players can see the entire board and know the rules.
Poker Is Different
In a game of heads-up, no-limit Texas Hold'em, you're trying to make a winning five-card hand out of seven possible cards, and so is your opponent. You do not know the identity of your opponent's two "hole cards," so you have to guess which of 169 possible hands your opponent may have. Game theorists call this an "imperfect knowledge" or "imperfect information" game. It's a lot like life. Tuomas Sandholm, professor of computer science and co-creator of Libratus, acknowledged the power of the system he helped create, saying, "The best AI's ability to do strategic reasoning with imperfect information has now surpassed that of the best humans." In other words, computers can now out-bluff humans.
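As an aside, that 169 figure is simple combinatorics: ignoring suits, there are 13 pocket pairs, 78 suited combinations of two different ranks, and 78 offsuit combinations. A quick check in Python:

from math import comb

pairs = 13             # A-A down to 2-2
suited = comb(13, 2)   # two different ranks, same suit: 78
offsuit = comb(13, 2)  # two different ranks, different suits: 78

print(pairs + suited + offsuit)  # 169 strategically distinct starting hands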
