A Blog by Jonathan Low


Oct 16, 2019

The Key To Building Robots People Can Relate To

Just as in human relationships, research increasingly shows that personality, stimulation and affection matter. JL

Maja Mataric reports in the Wall Street Journal:

What a robot says and what tone of voice it uses can determine whether the user will find it engaging or boring, intimidating or encouraging. (With) robot gestures, which ones and how often makes a difference in whether the user perceives it as lifelike, believable, competent and safe. Boring, personality-free machines won’t last: As robots become more present in our lives, we expect them to provide engaging social interactions, no matter their function. Good interaction design involves discovering the right level of robot imperfection. Imperfection is better tolerated within a character that is appealing and interesting.
When forming relationships, humans are wired to look for personality—something we can discover and relate to. We want to understand the other person and be understood in return.
Given that, how can we design robots that people will relate to and want to have as part of their lives?
It isn’t an easy question, but it’s one we’re going to have to answer, as robots continue to fill ever larger, and more complex, roles in our lives. Very often, those roles depend on the kind of trust and cooperation that is usually reserved for human beings. Without it, people won’t be able to get the most out of the machines.
At our Interaction Lab at the University of Southern California, we have been trying to understand what kinds of machines will foster that bond between human and robot, to create machines that will best help people help themselves.
In our work in the field known as socially assistive robotics, we have developed machines to help children with autism to talk and play more, obese adolescents to exercise without stigma, stroke patients to enjoy their rehabilitation exercises, patients about to receive an IV injection to experience less pain, babies to be stimulated to move their limbs to avoid developmental delays, Alzheimer’s patients to recognize and enjoy favorite songs, and healthy elderly to be more physically active.
We have learned, for instance, that what a robot says and what tone of voice it uses can determine whether the user will find it engaging or boring, intimidating or encouraging. Similarly, whether the robot uses gestures, which ones and how often makes a difference in whether the user perceives it as lifelike, believable, competent and safe.
Over the past 15 years of research, we have discovered as much about people as we have about machines. It has yielded numerous insights, some surprising, others obvious yet universally overlooked in technology development. Here are some of them.
A personality is crucial
While today robots are still rare and we are willing to put up with boring, bland, personality-free machines, that won’t last: As robots become more present in our lives, we will expect them to provide engaging social interactions, no matter what their function or use is.
One of our studies endowed robots with different personalities, specifically extroversion and introversion. We found that when we matched the robot’s personality to the user’s, the user worked harder and longer at the task (such as rehabilitation exercise) than when the personalities weren’t matched. This is similar to people working better with other people who are like themselves, and brings up questions about how to create robots with personalities that are believable and acceptable.
Another of our studies explored having robots learn to adapt their personalities to the user’s. This is tricky, because we find rapidly changing personalities, in humans or robots, to be disturbing, so trial-and-error robot learning won’t work. Instead, the robot’s personality can change only gradually. Additionally, not all human personality dimensions are appropriate for robots; some translate well, such as extroversion and introversion, while others may not, such as neuroticism.
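The gradual-adaptation idea above can be illustrated with a minimal sketch. This is purely hypothetical code, not the Interaction Lab's actual system: it assumes a single personality trait (extroversion, on a 0–1 scale) and caps how far that trait may drift per interaction, so the robot never changes personality abruptly.

```python
# Hypothetical sketch of gradual personality adaptation: nudge the
# robot's extroversion trait toward the user's, but clamp each step
# so the shift is slow. Names and numbers are illustrative only.

def adapt_trait(robot_trait: float, user_trait: float,
                max_step: float = 0.02) -> float:
    """Move robot_trait toward user_trait by at most max_step."""
    delta = user_trait - robot_trait
    # Clamp the per-interaction change to keep the shift gradual.
    delta = max(-max_step, min(max_step, delta))
    return robot_trait + delta

# Example: an introverted robot (0.2) adapting to an extroverted
# user (0.9) over ten interactions drifts only modestly.
trait = 0.2
for _ in range(10):
    trait = adapt_trait(trait, user_trait=0.9)
```

The key design choice, per the passage above, is the step cap: trial-and-error jumps in personality read as disturbing, so the trait converges toward the user's over many sessions rather than in one.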
Teach people not to bully their robots
Various studies have found that some people treat robots badly. This is important, because in those cases, robots, just like people, cannot be as helpful as intended. For instance, we found that stroke patients sometimes tricked and cheated the robot so they could move to the next exercise.
Why do we behave rudely to robots in ways we would not with other people? A part of the reason may have to do with our disappointment with robots: They are still not very effective, so they elicit frustration in many users. Another part is more fundamental: Robots are just different enough from us that they can be seen as “other” and elicit the worst of human behavior, since we humans are known to be tribal. No matter how well we design robots, and how capable they are of personalizing to their users, some people will treat them badly because some people behave badly.
Still, there may be ways to improve the situation. For instance, our lab has explored using robots to teach children how to stop bullying.
Help robots bring out our compassion
Fortunately, robots can also bring out the best in us. In our work with stroke patients, we discovered that when a robot’s mouth wasn’t moving because of a broken internal motor, and the robot said, “I’m sorry, my mouth isn’t working today,” users were immediately disarmed. They said something supportive to the robot, such as “That’s OK, my arm doesn’t work either,” and were then more forgiving of the robot’s later mistakes and imperfections. They enjoyed interacting with it more because they related to it.
Research has found that such feelings of empathy emerged for all kinds of robots, from toy dinosaur robots when they were “tortured” by being pulled apart, to larger-than-life, highly sophisticated humanoid robots when they fell over while trying to walk. Another finding is that people are willing to be compassionate and tolerant of flawed robots they like interacting with, such as simple vacuum-cleaning robots that people tend to treat like pets. At the same time, people quickly grow bored with purely functional robots that are no fun to interact with whenever they fail to perform their function perfectly, since such robots have nothing else to offer.
Good interaction design involves discovering the right level of robot imperfection. We already know that imperfection is better tolerated within a character that is appealing and interesting; one of our current projects is exploring how much robot vulnerability users enjoy, tolerate or find unacceptable and how vulnerability interacts with humor and other expressions of personality.
Surprise keeps a relationship from getting boring
People think they want reliable, dependable and repeatable machines, but they soon find such machines boring and then annoying and frustrating. We want our machines to do their tasks well, but we also want them to be interesting to interact with. Our research has found that when a robot acts in a surprising way, users pay more attention, do better at their task and like the robot more.
Surprising machine behavior can be unintended but effective, such as when Garry Kasparov tried and failed to understand an unexpected chess move Deep Blue made in the midst of an otherwise brilliant, strategically comprehensible game, because the move was in fact an AI algorithm error. Surprise can also be expected and welcome; children and adults alike love to ask AI assistants like Siri and Alexa all kinds of questions, hoping for unexpected and interesting answers.
The trick is to determine the timing and amount of surprise: The robot should not appear to be acting randomly; our brains drive us to explain everything we observe, so random behavior is unnerving to us. The robots in our lives will have to stay interesting by surprising us occasionally with what they do and say in ways we appreciate, but not in ways that are completely inexplicable and cryptic.
Don’t be creepy!
Most robot designers have heard of the Uncanny Valley, the notion that the more humanlike or animallike a robot is, the more creepy it appears when it fails to be perfect. This phenomenon applies not only to what the robot looks like, but also what it sounds like: When we gave one of our robots the voice of Frank Sinatra, even elders with advanced Alzheimer’s disease could tell that something was strange because “the machine isn’t Frank.” When we changed to a bad singing voice (provided by a researcher), the users found the robot endearing.
In fact, users notice any kind of a mismatch between what the robot promises—whether its looks, tone of voice, speech, movement or even promotional materials—and what it delivers, and become annoyed. What appears to some to be a small matter is often a deal breaker for user acceptance.
Robots don’t need to look like humans or animals to appear lifelike and be appealing; good design and user experience can make even the simplest and most abstract of robots appealing and effective, such as a sphere with lights. What matters is how the robot behaves and how its behavior matches its design and purpose and the user’s expectations.
Think in the long term
Numerous studies have shown that people are very willing to interact with robots, that we are curious about intelligent machines and that we openly share a great deal of personal information with them. However, there has been almost no research on truly long-term human-robot interactions, so we have a great deal to learn and discover about what happens after a few months or more of living together: Do we get bored with our robots? Do we grow more or less tolerant of their imperfections?
We recently finished a couple of studies that examined having robots in homes for extended periods. In one, elderly users interacted with a robot that reminded them to avoid sitting for too long; even though the robot was repetitive in what it said and did, the elderly users followed its advice faithfully and enjoyed its jokes. In another study, we left robots in the homes of families with children with autism-spectrum disorders for a month or longer. We found that children did get bored if the robot didn’t have something new to say or do over time, but still enjoyed having it in the home and didn’t want to part with it.
The real challenge will come in developing robot characters over months and years, just like with all relationships.

