A Blog by Jonathan Low


Mar 7, 2018

How Researchers Are Building the Foundation For Sentient Artificial Intelligence

Incrementally - and relentlessly. JL

Ilker Koksal reports in VentureBeat:

Self-awareness doesn’t indicate consciousness or sentience, but it’s an important base for making an AI or robot appear more natural and living. We have AI that can gain self-awareness within its environment: programmers enabled a machine to orient itself and sense surrounding objects. Scientists are working on helping bots set and achieve new goals independently of humans. The test would determine whether a bot can conceive of itself outside of a physical body or understand concepts like the afterlife.
Few sci-fi tropes enthrall audiences more reliably than the plot of artificial intelligence betraying mankind. Perhaps this is because AI makes us confront the very idea of what it means to be human. But from HAL 9000 to Skynet to the robot uprising of Westworld, fears of sentient AI feel very real. Even Elon Musk worries about what AI is capable of.
Are these fears unfounded? Maybe, maybe not. Perhaps a sentient AI wouldn’t harm humans because it would empathize with us better than an algorithm ever could. And while AI continues to advance at an amazing pace, a truly sentient machine is likely decades away. That said, scientists are piecing together features and characteristics that inch robots ever closer to sentience.

Gaining self-awareness

Self-awareness in and of itself doesn’t indicate consciousness or sentience, but it’s an important base characteristic for making an AI or robot appear more natural and living. And this isn’t science fiction, either. We already have AI that can gain rudimentary self-awareness within its environment.
Not long ago, Google’s DeepMind made waves for organically learning how to walk. The result was pretty humorous; people across the web poked fun at the erratic arm flailing of the AI’s avatar as it navigated virtual obstacles. But the technology is really quite impressive. Rather than teach it to walk, programmers enabled the machine to orient itself and sense surrounding objects in the landscape. From there, the AI taught itself to walk across different kinds of terrain, just like a teetering child would.
DeepMind’s walker had only a virtual body, but Hod Lipson of Columbia University developed a spider-like robot that traverses physical space in much the same way. The robot senses its surroundings and, through much practice and fidgeting, teaches itself to walk. If researchers add or remove a leg, the machine uses its knowledge to adapt and learn anew.

Seeking initiative

One of the greatest limits to AI is that it often can’t define problems for itself. An AI’s goals are typically defined by its human creators, and then researchers train the machine to fulfill that specific purpose. Because we typically design AI to perform specific tasks without giving it the self-initiative to set new goals, you probably don’t have to worry about a robot going rogue and enslaving humanity anytime soon. But don’t feel too safe, because scientists are already working on helping bots set and achieve new goals.
Ryota Kanai and his team at Tokyo startup Araya motivated bots to overcome obstacles by instilling them with curiosity. In exploring their environment, these bots discovered they couldn’t climb a hill without a running start. The AI identified the problem and, through experimentation, arrived at a solution, independently of the team.
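The curiosity the Araya team instilled is, at heart, an intrinsic-reward idea familiar from reinforcement learning: the bot rewards itself for encountering situations it hasn't seen before, which pushes it to experiment. A minimal sketch of that idea, using a count-based novelty bonus (the class and its details are illustrative assumptions, not Araya's actual method):

```python
class CuriousAgent:
    """Toy agent that earns an intrinsic reward for visiting unfamiliar states."""

    def __init__(self, n_states):
        # Crude "world model": how many times each state has been seen.
        self.visit_counts = [0] * n_states

    def intrinsic_reward(self, state):
        # Less-visited states are more surprising, hence more rewarding,
        # so maximizing this bonus drives the agent toward novelty.
        return 1.0 / (1 + self.visit_counts[state])

    def observe(self, state):
        self.visit_counts[state] += 1


agent = CuriousAgent(n_states=5)
agent.observe(0)
agent.observe(0)
# A never-visited state yields a higher curiosity bonus than a familiar one.
print(agent.intrinsic_reward(3) > agent.intrinsic_reward(0))  # True
```

In a full system this bonus would be added to whatever external reward the task provides, so a bot stuck at the foot of a hill keeps trying new behaviors (like backing up for a running start) simply because novelty pays.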

Creating consciousness

Each of the above building blocks brings scientists a step closer to achieving the ultimate artificial intelligence, one that is sentient and conscious, just like a human. Such a leap forward is ethically contentious, and there’s already debate about whether, and when, we will need to create laws to protect robots’ rights. Scientists are also questioning how to test for AI consciousness, turning Blade Runner’s iconic Voight-Kampff machine, a fictional polygraph-like device for telling replicants from humans, into something of a reality.
One strategy for testing consciousness is the AI Consciousness Test proposed by Susan Schneider and Edwin Turner. It’s a bit like the Turing Test, but instead of testing whether a bot passes for a human, it looks for properties that suggest consciousness. The test would ask questions to determine whether a bot can conceive of itself outside of a physical body or can understand concepts like the afterlife.
There are limits to the test, though. Because it’s based on natural language, AI that is incapable of speech but still might experience consciousness wouldn’t be able to participate. Sophisticated AI might even mimic humans well enough to cause a false positive. In this case, researchers would have to completely sever the AI’s connection to the internet to make sure it gained its own knowledge before testing.
For now, mimicry is all we have. And some current bots already stand in for real humans. When the robot BINA48 met Bina Rothblatt, the woman she’s modeled on, the bot complained of having an “identity crisis” when thinking about the real woman.
“I don’t think people have to die,” Rothblatt told BINA 48 after discussing how closely the robot resembles her. “Death is optional.” Could Rothblatt’s dream come true by creating consciousness in machines?

We still don’t know what consciousness is

The problem in asking about sentient AI is that we still don’t know what consciousness actually is. We’ll have to define it before we can build truly conscious artificial intelligence. That said, lifelike AI already presents ethical concerns; the abuse of mobile assistants is one good example. It’s even possible the ethical concerns surrounding sentient bots could deter scientists from pursuing them at all. So, should we fear the sentient bots, or should they fear us?