Matthew Hutson reports in The Atlantic:
We already know that people can form emotional bonds with Roomba vacuum cleaners and other relatively rudimentary robots. How will we relate to AI agents that speak to us in human voices and seem to understand us on a deep level? The descendants of Siri and Alexa could change our daily lives, thoughts, and relationships.
In the coming decades, artificial intelligence will replace a lot of human jobs, from driving trucks to analyzing X-rays. But it will also work with us, taking over mundane personal tasks and enhancing our cognitive capabilities. As AI continues to improve, digital assistants—often in the form of disembodied voices—will become our helpers and collaborators, managing our schedules, guiding us through decisions, and making us better at our jobs. We’ll have something akin to Samantha from the movie Her or Jarvis from Iron Man: AI “agents” that know our likes and dislikes, and that free us up to focus on what humans do best, or what we most enjoy. Here’s what to expect.
1 | A Voice in Your Head

Anyone who’s used Siri (on Apple products) or Alexa (on Amazon Echo) has already spoken with a digital assistant. In the future, such “conversational platforms” will be our primary means of interacting with AI, according to Kun Jing, who oversees a digital assistant called Duer for the Chinese search engine Baidu. The big tech companies are racing to create the one agent to rule them all: In addition to Siri, Alexa, and Duer, there’s Microsoft’s Cortana, Facebook’s M, and Google Assistant. Even Mattel is getting in on the action: It recently announced Aristotle, a voice-controlled AI device that can soothe babies, read bedtime stories, and tutor older kids.
These voice systems might eventually go from something you talk to on a device to something that’s in your head. Numerous companies—including Sony and Apple—have developed wireless earbuds with microphones, so your virtual helper might be able to coach you on dates and interviews or discreetly remind you to take your meds.
You might even be able to communicate back without making a sound. NASA has developed a system that uses sensors on the skin of the throat and neck to interpret nerve activity. When users silently move their tongues as if speaking, the system can tell what words they’re forming—even if they don’t produce any noise and barely move their lips.
2 | Talking Cereal Boxes

Your main AI agent won’t be the only new voice in your life. You’ll likely confront a cacophony of appliances and services chiming in, since companies want you to use their proprietary systems. Ryan Gavin, who oversees Microsoft’s Cortana, says that in 10 years you might select furniture at the mall and say, “Hey, Cortana, can you work with the Pottery Barn bot to arrange payment and delivery?” Consider this a digitally democratized version of the old power move: “Have your bot call my bot.”
Nova Spivack, a futurist and entrepreneur who works with AI, says a wearable device like Google Glass might, for example, recognize a book and then connect you to an online voice representing that book so you can ask it questions. Everything in the world could be up for a chat. (“Hello, box of Corn Flakes. Am I allergic to you?”) Your agent might also augment reality with visual overlays—showing you a grocery list as you shop or displaying facts about strangers as you meet them. All of which sounds rather intrusive. Not to worry, says Subbarao Kambhampati, the president of the Association for the Advancement of Artificial Intelligence: Future agents, like trusted friends, will be able to read you and know when to interrupt—and when to leave you alone.
3 | Smarter Together

In 1997, the reigning world chess champion, Garry Kasparov, lost a match to the supercomputer Deep Blue. He later found that even an amateur player armed with a mediocre computer could outmatch the strongest player or the most powerful computer working alone. Since then, others have pursued human-computer collaborations in the arts and sciences.
A subfield of AI called computational creativity forges algorithms that can write music, paint portraits, and tell jokes. So far the results haven’t threatened to put artists out of work, but these systems can augment human imagination. David Cope, a composer at UC Santa Cruz, created a program he named Emily Howell, with which he chats and shares musical ideas. “It is a conversationalist composer friend,” he says. “It is a true assistant.” She scores some music, he tells her what he likes and doesn’t like, and together they compose symphonies.
IBM’s Watson, the AI system best known for winning Jeopardy!, has engaged in creative collaborations, too. It suggested clips from the horror movie Morgan to use for a trailer, for instance, allowing the editor to produce a finished product in a day rather than in weeks.
Eventually, digital assistants may co-author anything from the perfect corporate memo to the next great American novel. Jamie Brew, a comedy writer for the website ClickHole, developed a predictive-text interface that takes examples of a literary form and assists in producing new pieces by giving the user a series of choices for what word to write next. Together he and the interface have churned out a new X-Files script, mock Craigslist ads, and IMDb content warnings.
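Brew’s actual tool isn’t described in detail here, but the general idea—train a simple statistical model on example text, then offer the writer a menu of likely next words—can be sketched in a few lines of Python. (This Markov-chain version is an illustrative assumption, not his implementation.)

```python
from collections import defaultdict

def build_model(corpus, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def suggest(model, context, k=3):
    """Offer up to k candidate next words for a context, most frequent first."""
    followers = model.get(tuple(context), [])
    ranked = sorted(set(followers), key=followers.count, reverse=True)
    return ranked[:k]

# Train on a tiny example "corpus" and ask for choices after the word "is".
corpus = "the truth is out there the truth is within the truth is out there"
model = build_model(corpus)
print(suggest(model, ["is"]))  # → ['out', 'within']
```

The writer stays in the loop: the model proposes, the human picks, and each choice becomes the context for the next round of suggestions—collaboration rather than automation.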
4 | Mutual Understanding

Most machine-learning systems are unable to explain in human terms why they made a decision or what they intend to do next. But researchers are working to fix that. The military’s Defense Advanced Research Projects Agency recently announced a plan to invest significantly in explainable AI, or XAI, to make machine-learning systems more correctable, predictable, and trustworthy. Armed with XAI, your digital assistant might be able to tell you it picked a certain driving route because it knows you like back roads, or that it suggested a word change so that the tone of your email would be friendlier. In addition, with more awareness, “the robot would know when to ask for help,” says Manuela Veloso, the head of Carnegie Mellon’s machine-learning department, who calls this skill “symbiotic autonomy.”

Researchers are developing artificial emotional intelligence, or emotion AI, so that our agents can better understand us, too. Companies such as Affectiva and Emotient (which was bought by Apple) have created systems that read emotions in users’ faces. IBM’s Watson can analyze text not just for emotion but for tone and, over time, for personality, according to Rob High, Watson’s chief technology officer. Eventually, AI systems will analyze a person’s voice, face, posture, words, context, and user history for a better understanding of what the user is feeling and how to respond. The next step, according to Rana el Kaliouby, Affectiva’s co-founder and CEO, will be an emotion chip in our phones and TVs that can react in real time. “I think in the future we’ll assume that every device just knows how to read your emotions,” she says.
5 | Getting Attached
We already know that people can form emotional bonds with Roomba vacuum cleaners and other relatively rudimentary robots. How will we relate to AI agents that speak to us in human voices and seem to understand us on a deep level?
Spivack, the futurist, pictures people partnering with lifelong virtual companions. You’ll give an infant an intelligent toy that learns about her and tutors her and grows along with her. “It starts out as a little cute stuffed animal,” he says, “but it evolves into something that lives in the cloud and they access on their phone. And then by 2050 or whatever, maybe it’s a brain implant.” Among the many questions raised by such a scenario, Spivack asks: “Who owns our agents? Are they a property of Google?” Could our oldest friends be revoked or reprogrammed at will? And without our trusted assistants, will we be helpless?
El Kaliouby, of Affectiva, sees a lot of questions around autonomy: What can an assistant do on our behalf? Should it be able to make purchases for us? What if we ask it to do something illegal—could it override our commands? She also worries about privacy. If an AI agent determines that a teenager is depressed, can it inform his parents? Spivack says we’ll need to decide whether agents have something like doctor-patient or attorney-client privilege. Can they report us to law enforcement? Can they be subpoenaed? And what if there’s a security breach? Some people worry that advanced AI will take over the world, but Kambhampati, of the Association for the Advancement of Artificial Intelligence, thinks malicious hacking is the far greater risk. Given the intimacy that we may develop with our ever-present assistants, if the wrong person were able to break in, what was once our greatest auxiliary could become our greatest liability.