A Blog by Jonathan Low

 

Jan 25, 2017

To Work Optimally, Robots Need Our Trust. They Don't Have It

If we don't trust it, we won't use it, which is part of the reason why robots are being designed to look friendly or cute.

But until we are assured they won't go haywire (or just soak up what they learn from us and then replace us), building trust is going to be a challenge. JL

Graham Templeton reports in Inverse:

Understanding the correlation between effectiveness and trust is difficult when it comes to robot-human interaction. Trust wasn’t critical to understanding robotics until robots became proficient enough at complicated tasks to transition from implements to intelligences. A robot that doesn’t inspire trust can’t help anyone, which is why roboticists are working with robots to measure trust. “Humans use trust in all our interactions with the world, even regular ‘machines.’”
Right now, DARPA is developing robots capable of rescuing humans from burning buildings even if that means knocking down walls and wading through flames. These potential savior-bots are evaluated on an obstacle course designed to test their speed, stability, versatility, and more. They’re getting better, stronger, and faster. But what will panicked people think when a machine crashes into their smoke-filled living room? Will these robots alleviate or compound the terror? Understanding the correlation between effectiveness and trust is profoundly difficult when it comes to robot-human interaction.
Trust wasn’t critical to understanding robotics and automation until robots became proficient enough at complicated tasks to transition from being implements to being intelligences. Today, robots work in an astonishing array of industries and in a growing number of service capacities. And whether a robot is helping an engineer prototype a flying car, an elderly person get into bed, or an arson victim escape a building, the interaction is almost as important as its core function. A robot that doesn’t inspire trust is a robot that can’t help anyone, which is why roboticists are working overtime (with robots) to come up with a way to measure trust — a critical step toward being able to study and promote it.
“We humans use trust in all our interactions with the world and that’s true with even regular ‘machines,’” says University of Central Florida researcher Florian Jentsch. “What is unique and different here is that we believe that artificial intelligence and robotics development is at the cusp of… moving away from something that’s really a tool, and toward something that’s a teammate, or a coworker.”
The development of more trustworthy robots (or more trustworthy-seeming robots; there often isn’t a difference) requires specific expertise. Traditionally, robots have been designed by people who are intimately familiar with what that robot can do and what it cannot do, but not with what it looks like it’s doing or how it seems to be going about any given task. Impression management is new territory for engineers and an increasingly important part of the field as all-purpose robots become more feasible. “A robot can move,” Jentsch points out. “It can hit you; it can drive into you; it can do bad things.” He adds that this mobility and flexibility is why trust is more important in robotics than in industrial design, a field that has long informed automaton construction.
The design goal becomes communicating the robot’s purpose and, through its behavior, its competence. That requires engineering a form of emotional shorthand, which is no easy task.
All the way back in 1998, the U.S. Air Force commissioned one of the more forward-thinking studies on this subject, hoping to figure out how to integrate automated systems and eventually robots into the military without jeopardizing troop dynamics. The analysis found that people react to robots and to each other in the same way, meaning that human-robot trust would be built on the same principles as human-human trust. Inasmuch as trust is a multidimensional concept, it is also a fairly consistent phenomenon. It is always won and lost the same way — even when motivation is removed from the picture.
The messaging here might be a bit on the nose.
Classically, trust has been evaluated along two numerical scales: motive and competence. Evil and competent doesn’t inspire trust, and neither does benevolent and incompetent. Benevolent and competent does. It’s more or less a quadrant system. But it’s one of many, and the others are far more complicated. One proposed system uses a 40-item scale, with each item rated between 0 and 100 percent. This questionnaire seems to get better, more predictive results by incorporating more subjective evaluations, like how likely a participant is to think a robot is honest or friendly.
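To make the two ideas concrete, here is a minimal Python sketch, purely illustrative; the function names, the 0.5 cutoff, and the simple averaging are assumptions for the sake of the example, not any published instrument:

```python
from statistics import mean

def quadrant_label(motive: float, competence: float, cutoff: float = 0.5) -> str:
    """Classify a robot on the classic two-axis view of trust.

    motive and competence are assumed normalized to [0, 1]; only the
    benevolent-and-competent quadrant inspires trust.
    """
    benevolent = motive >= cutoff
    competent = competence >= cutoff
    if benevolent and competent:
        return "benevolent and competent: inspires trust"
    if benevolent:
        return "benevolent but incompetent: does not inspire trust"
    if competent:
        return "competent but not benevolent: does not inspire trust"
    return "neither benevolent nor competent: does not inspire trust"

def questionnaire_score(item_ratings):
    """Average a multi-item trust questionnaire (e.g., 40 items,
    each rated between 0 and 100 percent) into a single score."""
    if not all(0.0 <= r <= 100.0 for r in item_ratings):
        raise ValueError("each item must be rated between 0 and 100 percent")
    return mean(item_ratings)

print(quadrant_label(motive=0.8, competence=0.9))
print(questionnaire_score([72.0, 85.5, 60.0, 90.0]))  # 76.875
```

A real instrument would weight and validate its items against observed behavior; the sketch only shows why the quadrant view is coarse while a many-item questionnaire yields a finer-grained score.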
More complex ratings systems might provide better predictions of real-world human interaction before a robot’s release, but not everyone thinks that user ratings are the right way to go. Over at the University of British Columbia, AJung Moon heads up a lab called the Open Roboethics Initiative, which is dedicated to studying human-robot interaction and the hard ethical choices that arise from it. “I don’t necessarily think questionnaires are the way to go,” she told Inverse. “I think much more of an in-situ experiment is necessary.”
To make the point that it’s very difficult to predict how people will react to a robot until you’ve actually seen them do so, multiple sources referenced a recent study on robot trust. In this experiment, well-educated people would follow a robot toward an exit after an alarm sounded — even if that robot had been lost in the facility minutes earlier. If people will follow a demonstrably untrustworthy robot when they believe their safety is on the line, the argument goes, then we can hardly take any sort of interaction for granted when planning a robot’s release.
“A very careful scientific analysis needs to be undertaken in order for us to begin to comprehend what it is that a relationship between a human and an intelligent machine will be like,” Dartmouth tech commentator and theoretical physicist Marcelo Gleiser told Inverse via email. “Learning by doing may be a very dangerous game to play.”
“We should definitely try to come up with a standardized metric so we can see how one particular platform does before a problem arises during real-world use,” Moon added. “Robotics is such a fast-paced field that we’ve started to adopt these technologies before we can even stop and think about it.”
She mentioned that the so-called “Wizard of Oz” approach to demonstrating a robot’s abilities — having a human serve as a puppeteer — potentially creates dangerous misunderstandings about robot abilities. This method of refining interaction could, she says, lead to unexpected consequences including, but not limited to, humans trusting the wrong robots at the wrong times.
Still, there needs to be a system and there needs to be a scale so there can be standards and regulations. If a robot’s overall trustworthiness can be put on a linear or even multi-dimensional scale and directly compared to previously released robots, it will be possible to say that a robot needs to be at least this trustworthy to be an in-home elder assistant. Or it could lead to a mandate that more trustworthy robots be capable of more tasks and incur more legal liability for their owners and operators.
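If such a scale ever existed, the regulatory check itself would be trivial. This hypothetical sketch (the role names and minimum scores are invented for illustration, not drawn from any real standard) shows a measured trust score being compared against a per-role minimum:

```python
# Hypothetical per-role minimum trust scores; the numbers and role
# names are invented for illustration, not drawn from any standard.
ROLE_MINIMUMS = {
    "warehouse_sorter": 40.0,
    "in_home_elder_assistant": 85.0,
}

def certified_for(role: str, trust_score: float) -> bool:
    """Return True if a robot's measured trust score meets the
    minimum required for the given role."""
    return trust_score >= ROLE_MINIMUMS[role]

print(certified_for("in_home_elder_assistant", 76.9))  # False: below the bar
print(certified_for("warehouse_sorter", 76.9))         # True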
Ultimately, though, Jentsch argues that all such speculation is useless until society comes to a more concrete understanding of what powers it wants robots to have. Right now the only real discussion of the importance of human trust in robots is going on with respect to self-driving cars. “When you get beyond that,” he says, “there’s really no standard for what a robot should or should not do.”
And so, in the absence of real public or governmental direction, roboticists are slowly attempting to find their own way forward. A patchwork of civilian and military research projects is building the first-ever understanding of the social dynamics between man and machine, but with no clear idea of what to do with that understanding once they’ve got it. This means that even the most trustworthy scholars are, in a sense, stalling for time.
