A Blog by Jonathan Low

 

Jul 12, 2019

The Ongoing Effort To Build Self Aware Robots

Self-awareness is not just about pondering the meaning of life. It starts with much simpler concerns, like where your hand is going to move, and those can be modeled. JL


John Pavlus interviews Hod Lipson in Quanta:

“When you talk about self-awareness, people think the robot is going to suddenly wake up and say, ‘Hello, why am I here?’ But self-awareness is not black-and-white. It starts from trivial things like, ‘Where is my hand going to move?’ It’s the same question, just on a shorter time horizon.” A system that can simulate itself is to some degree self-aware. And the degree to which it can simulate itself factors into how much it is self-aware. We want to see if an AI algorithm can learn a self-model that’s equal to or better than what a traditional, coded-by-hand model can do.
“I want to meet, in my lifetime, an alien species,” said Hod Lipson, a roboticist who runs the Creative Machines Lab at Columbia University. “I want to meet something that is intelligent and not human.” But instead of waiting for such beings to arrive, Lipson wants to build them himself — in the form of self-aware machines.
To that end, Lipson openly confronts a slippery concept — consciousness — that often feels verboten among his colleagues. “We used to refer to consciousness as ‘the C-word’ in robotics and AI circles, because we’re not allowed to touch that topic,” he said. “It’s too fluffy, nobody knows what it means, and we’re serious people so we’re not going to do that. But as far as I’m concerned, it’s almost one of the big unanswered questions, on par with origin of life and origin of the universe. What is sentience, creativity? What are emotions? We want to understand what it means to be human, but we also want to understand what it takes to create these things artificially. It’s time to address these questions head-on and not be shy about it.”
One of the basic building blocks of sentience or self-awareness, according to Lipson, is “self-simulation”: building up an internal representation of one’s body and how it moves in physical space, and then using that model to guide behavior. Lipson investigated artificial self-simulation as early as 2006, with a starfish-shaped robot that used evolutionary algorithms (and a few pre-loaded “hints about physics”) to teach itself how to flop forward on a tabletop. But the rise of modern artificial intelligence technology in 2012 (including convolutional neural networks and deep learning) “brought new wind into this whole research area,” he said.
In early 2019, Lipson’s lab revealed a robot arm that uses deep learning to generate its own internal self-model completely from scratch — in a process that Lipson describes as “not unlike a babbling baby observing its hands.” The robot’s self-model lets it accurately execute two different tasks — picking up and placing small balls into a cup, and writing letters with a marker — without requiring specific training for either one. Furthermore, when the researchers simulated damage to the robot’s body by adding a deformed component, the robot detected the change, updated its self-model accordingly, and was able to resume its tasks.
It’s a far cry from robots that think deep thoughts. But Lipson asserts that the difference is merely one of degree. “When you talk about self-awareness, people think the robot is going to suddenly wake up and say, ‘Hello, why am I here?’” Lipson said. “But self-awareness is not a black-and-white thing. It starts from very trivial things like, ‘Where is my hand going to move?’ It’s the same question, just on a shorter time horizon.”
Quanta spoke with Lipson about how to define self-awareness in robots, why it matters, and where it could lead. The interview has been condensed and edited for clarity.

You’re clearly interested in big questions about the nature of consciousness — but why are you investigating them through robotics? Why aren’t you a philosopher or a neuroscientist?

To me the nice thing about robotics is that it forces you to translate your understanding into an algorithm and into a mechanism. You can’t beat around the bush, you can’t use empty words, you can’t say things like “canvas of reality” that mean different things to different people, because they’re too vague to translate into a machine. Robotics forces you to be concrete.
I want to build one of these things. I don’t want to just talk about it. The philosophers, with all due respect, have not made a lot of progress on this for a thousand years. Not for lack of interest, not for lack of smart people — it’s just too hard to approach it from the top down. Neuroscientists have approached this in a more quantitative way. Still, I think they’re also hampered by the fact that they’re taking a top-down approach.
If you want to understand consciousness, why start with the most complex conscious being — that is, a human? It’s like starting uphill, the most difficult way to start. Let’s try to look at simpler systems that are potentially easier to understand. That’s what we’re trying to do: We looked at something very trivial, [a robot] that has four degrees of freedom, and asked, “Can we make this thing self-simulate?”

Are self-simulation and self-awareness the same thing?

A system that can simulate itself is to some degree self-aware. And the degree to which it can simulate itself — the fidelity of that simulation, the short-term or long-term time horizon it can simulate itself within — all these different things factor into how much it is self-aware. That’s the basic hypothesis.

So you’re reducing a term like “self-awareness” to a more technical definition about self-simulation — the ability to build a virtual model of your own body in space.

Yes, we have a different definition that we use that is very concrete. It’s mathematical: you can measure it, you can quantify it, you can compute the degree of error. Philosophers might say, “Well, that’s not how we see self-awareness.” Then the discussion usually becomes very vague. You can argue that our definition is not really self-awareness. But we have something that’s very grounded and easy to quantify, because we have a benchmark. The benchmark is the traditional, hand-coded self-model that an engineer gives to a robot. With our robot, we wanted to see if an AI algorithm can learn a self-model that’s equal to or better than what that traditional, coded-by-hand model can do.
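
To make that benchmark concrete, here is a rough sketch of the kind of comparison it implies. Everything below is illustrative: the stand-in models, the four-degree-of-freedom joint ranges, and the error metric are assumptions, not code from Lipson's lab.

```python
import numpy as np

def prediction_error(model, joint_angles, true_positions):
    """Mean Euclidean distance between where a model says the arm's tip
    will be and where it actually ends up, over a set of test poses."""
    predicted = np.array([model(q) for q in joint_angles])
    return float(np.mean(np.linalg.norm(predicted - true_positions, axis=1)))

# Stand-in "models": any callable mapping joint angles -> tip position (x, y, z).
def handcoded_kinematics(q):        # an engineer's analytical model
    return np.array([np.cos(q[0]), np.sin(q[0]) * np.cos(q[1]), 0.1 * q[2] + 0.05 * q[3]])

def learned_self_model(q):          # pretend the learned model carries a 1 cm bias
    return handcoded_kinematics(q) + 0.01

rng = np.random.default_rng(0)
test_angles = rng.uniform(-np.pi, np.pi, size=(100, 4))   # four degrees of freedom
# In this toy example the "ground truth" comes from the hand-coded model itself,
# so its error is exactly zero; on a real robot both models would be scored
# against measured tip positions from a camera.
ground_truth = np.array([handcoded_kinematics(q) for q in test_angles])

print("hand-coded error:", prediction_error(handcoded_kinematics, test_angles, ground_truth))
print("learned error:   ", prediction_error(learned_self_model, test_angles, ground_truth))
```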

Why is a physical robot necessary? Why not investigate self-awareness in a disembodied system?

We’re looking for a closed system that can potentially simulate itself — and to do that, it needs to have inputs and outputs, but there also has to be a boundary, a place where you draw the “self.” A robot is a very natural kind of thing that does that. It has actions, it has sensations, and it has a boundary, so things happen to it and there’s something to simulate. I’m a roboticist, so it’s sort of my first choice.

Did the robot create its self-model from a total blank slate?

We started with absolutely nothing, just as a matter of principle, to see how far we can go. In the previous case [with the starfish-shaped robot], we didn’t have the computational horsepower. We had to tell it, “You don’t know what you are and where your pieces are, but let me tell you about F = ma and other rules from physics that we know are true, and you take it from there.”

How does artificial intelligence play into this?

For some reason, we are very happy to have robots learn about the external world [using artificial intelligence], but when it comes to themselves, for some odd reason, we insist on hand-coding the model. So what we did is actually fairly trivial: We said, “Let’s take all that infrastructure that people have made to help robots learn about the world, and we’re going to turn it inside, on itself.” In one sentence, that’s all we did.
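
In the spirit of that one-sentence summary, here is a minimal sketch of the idea, with a caveat: the network size, the choice of inputs and outputs, and the training loop are assumptions for illustration, not the architecture the lab published. The point is only that the self-model becomes an ordinary supervised deep-learning problem, trained on the robot's own input-output data.

```python
import torch
import torch.nn as nn

# Forward self-model: a joint command for a four-degree-of-freedom arm goes in,
# a predicted tip position (x, y, z) comes out. Layer sizes are illustrative.
self_model = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),
)

def train_self_model(model, commands, positions, epochs=200, lr=1e-3):
    """Plain supervised regression on (command, observed position) pairs
    gathered during the babbling phase described in the next answer."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(commands), positions)
        loss.backward()
        opt.step()
    return model
```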

The robot made 1,000 random movements to gather data for the deep-learning algorithm to create the self-simulation. Is this the process that you describe as being like babbling in a human infant?

That’s exactly it. The robot waves around, and it sort of observes where its tip is. Imagine yourself moving your muscles around and watching the tip of your finger. That’s your input and output. The robot is out there babbling for 30-odd hours, and once we collect all the data, we can go home. From there on out, it’s purely a computational challenge [to learn the self-model].
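
A sketch of that babbling loop might look like the following. The motor and sensor interfaces are passed in as functions because they are hypothetical placeholders here; only the overall shape (random commands in, observed tip positions out) comes from the description above.

```python
import numpy as np

def babble(send_joint_command, observe_tip_position, num_moves=1000, dof=4, seed=0):
    """Wave the arm around at random and record where its tip ends up.
    The resulting (command, position) pairs are the self-model's training data."""
    rng = np.random.default_rng(seed)
    commands, positions = [], []
    for _ in range(num_moves):
        q = rng.uniform(-np.pi, np.pi, size=dof)    # one random "wave"
        send_joint_command(q)                       # caller-supplied motor interface
        positions.append(observe_tip_position())    # caller-supplied sensor reading
        commands.append(q)
    return np.array(commands), np.array(positions)
```
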
What we did then is we broke the robot [by adding a deformed part] and did it again. And we saw how the broken robot can start with the intact model and correct it. The second time it learns, it doesn’t have to learn from scratch. There is a re-babbling period, but much less than it needed originally — only 10% [of the original period].
But before it even does the re-babbling, it needs to know that something went wrong. That’s a very powerful thing. How does it know? When you have a self-model and something goes wrong, you instantly know, because if you open your eyes, you’ll see your hand is not where it’s supposed to be. You expect it to be within four centimeters of the point you wanted it to reach, but suddenly it is 16 centimeters away. You get that feedback immediately. So the robot immediately knows there’s something wrong. Then it takes it a while to figure out how to compensate, but even knowing that something is wrong is, I think, very important.
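
That check is easy to picture in code. The sketch below reuses the four-centimeter and 16-centimeter figures from the answer purely as example numbers; the threshold and the function shape are assumptions. If the check trips, the existing self-model would be fine-tuned on a short burst of fresh babbling data rather than relearned from scratch.

```python
import numpy as np

def self_model_broken(predicted_tip, observed_tip, tolerance=0.04):
    """Flag a mismatch between where the self-model expects the tip to be
    and where the camera actually sees it (distances in meters)."""
    gap = np.linalg.norm(np.asarray(predicted_tip) - np.asarray(observed_tip))
    return gap > tolerance

# Example: the model expects the tip within about 4 cm, but it shows up 16 cm away.
print(self_model_broken(predicted_tip=(0.30, 0.10, 0.25),
                        observed_tip=(0.30, 0.26, 0.25)))   # True, so trigger re-babbling
```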

Is this self-model analogous to parts of the human brain that act like an internal map of the body?

I think that’s what it is. And again, that’s why it’s so crude and so simple. The fact that our robot was a four-degree-of-freedom arm is what put this within reach. If it were a humanoid that has 800 degrees of freedom, that might have been too complicated for the AI that we have today.

If this really is a form of self-awareness, why should robots have it? What good is it?

It makes robots ultimately much more resilient. You can model robots manually, as we do today, but that’s very laborious and it’s slowing us down. When a robot in the real world deforms or breaks, if a wheel falls off or a motor slows down, then suddenly the model is incorrect. And it’s not just going to be a factory robot that puts a screw in the wrong place. With driverless cars, we are trusting our lives to autonomous robots. This is serious stuff. You want these robots to be able to detect that something has gone wrong, and to be able to do that reliably.
The other reason is flexibility. Let’s say the robot does one task and as it’s doing that task, it’s continuously updating its self-model. Now if it needs to do a new task — say it needs to put a screw in a different place, or it needs to spray instead of putting screws in — it can use that same self-model to learn and plan how to do that new task. From the outside, it looks like what we call “zero-shot learning,” which is one of these things that humans appear to do. You can stare at a tree you’ve never climbed before and then you can walk up to it and climb it. When the robot can self-model, it can learn internally like that: You don’t see that it’s been training for hours inside its own internal simulation. All you can see is that it did one task, then it sat there for a while, and suddenly it can do a new task without ever trying.
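
One very simplified way to picture that internal rehearsal, again as a sketch rather than the lab's planner: the robot queries its own learned self-model instead of its motors, scoring candidate joint commands against the new task's goal before it ever moves. The random-shooting search and the distance-to-goal cost below are assumptions.

```python
import numpy as np

def plan_with_self_model(self_model, goal_position, dof=4, num_candidates=10_000, seed=0):
    """Plan entirely 'in the head': sample candidate joint commands, ask the learned
    self-model where each would put the tip, and keep the one closest to the goal.
    Nothing moves physically until the chosen command is finally executed."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-np.pi, np.pi, size=(num_candidates, dof))
    predicted_tips = np.array([self_model(q) for q in candidates])
    errors = np.linalg.norm(predicted_tips - np.asarray(goal_position), axis=1)
    return candidates[np.argmin(errors)]
```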

What’s the connection between a robot that can self-simulate its own body and a robot that can have internal “thoughts” — something that sounds more like the informal meaning of “self-awareness”?

We have other projects that we’re working on which have to do with self-modeling not the physical body, but the cognitive process. We’re taking baby steps in both of these directions. It is completely a leap of faith to believe that eventually this will get to human-level cognition and beyond.

So you’re hypothesizing that those two paths — self-simulating the body, and self-simulating the mind itself — will converge?

Exactly. I think it’s all the same thing. That’s our hypothesis, and we’ll see how far we can push it.
