A Blog by Jonathan Low

 

Sep 30, 2016

At the Bleeding Edge of Artificial Intelligence: Quantum Grocery Picking and Catastrophic Forgetting

Learning is one goal. Not forgetting due to newer information overriding previously assimilated data is another, increasingly crucial one. JL

Bob Dormon reports in ars technica:

You can train neural networks to do one thing well, but transferring that knowledge from one task to another is a challenge, as when a previously learned thing is overwritten by the information gleaned from the latest task. A move from a binary to a quantum approach could optimise tasks which feature numerous variables. If automation resulted in robots crushing 50,000 boxes of eggs with tins of beans as it learned to pick and pack, automation wouldn’t get very far. Simulation is the linchpin.
Don’t laugh, but there may come a time when quantum computers are sorting out your grocery deliveries, and if Paul Clarke, CTO of the online food store Ocado, is right, it could be sooner than you think.
In an interview with Ars, Clarke revealed his interest in quantum computing to solve the huge mathematical problems that surround automating delivery services. In theory, quantum computing is well suited to probabilistic tasks and will outperform classical computing platforms in this area… just not yet.
Even so, a future move from a binary to a quantum method, while complicated, could ultimately optimise vehicle routing tasks, which feature numerous variables. It could also deliver a boost to Ocado’s forthcoming robot grid technology, optimising the 4D (space and time) conundrums that this much-touted but yet-to-be-unveiled system grapples with.
The Ocado Service Platform continues to evolve: still in development is its own hive of robots, which relies on some intensive number crunching to ferry goods to pickers and pre-empt demand.
In operation, a robot takes a crate full of identical products, say chocolate biscuits, from the grid (or hive) to a picker, who takes the quantity needed for the customer’s basket; the robot then returns the crate to the same position in the hive. However, it might choose to take it elsewhere if an upcoming order needs the same item.
So what happens next comes down to the algorithm’s savviness with time variables and placement, which is just the sort of 4D optimisation task quantum computing is good at. Indeed, having such a system might one day not just be the wishful thinking of a CTO but a necessity to realise a competitive advantage.
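To make that placement decision concrete, here is a toy sketch in Python. It is not Ocado’s algorithm; the slot names, timings, and weighting are all hypothetical. It simply illustrates the trade-off the article describes: the trip back to storage versus how quickly the crate can be fetched for the next order that needs it.

```python
# Toy sketch of a crate-placement decision: NOT Ocado's algorithm, just an
# illustration of weighing the return trip against upcoming demand.
from dataclasses import dataclass


@dataclass
class Slot:
    name: str
    return_trip_s: float     # robot travel time to park the crate in this slot
    retrieval_trip_s: float  # travel time from this slot back to a pick station


def choose_slot(slots, seconds_until_next_order, horizon_s=1800.0):
    """Pick the slot with the lowest combined cost.

    If the item is needed again soon, the future retrieval trip is weighted
    heavily; if the next order is far off, the cheap option of returning the
    crate to its original position wins. The weighting is a made-up heuristic,
    purely for illustration.
    """
    urgency = max(0.0, 1.0 - seconds_until_next_order / horizon_s)

    def cost(slot):
        return slot.return_trip_s + urgency * slot.retrieval_trip_s

    return min(slots, key=cost)


if __name__ == "__main__":
    candidates = [
        Slot("original hive position", return_trip_s=40, retrieval_trip_s=120),
        Slot("buffer near pick station 3", return_trip_s=90, retrieval_trip_s=10),
    ]
    # The next order for this item arrives in five minutes, so the heuristic
    # parks the crate near the picker rather than deep in the hive.
    print(choose_slot(candidates, seconds_until_next_order=300).name)
```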
There’s something wrong with this picture though, and it’s not the bots or qubits: it’s the human element. All that sophistication for a crate to end up at a station for a human to grab the item and steadily complete a shopping list. Where’s the robot doing that job?
If only they could, as this is not a predetermined production line.
Even with AI, the problem is that robots and the neural networks that will imbue them with sufficient cleverness to perform a particular task have immense difficulty in learning new tricks while remembering old ones. So handling an inventory of over 48,000 items of all shapes, sizes, and consistencies would be a big ask.
With simulation combined with reinforcement learning, you could teach your robot arm how to pick biscuits from a crate (as they won’t always be in exactly the same place) and put them in the basket (or tote) with awareness of existing items and available space. But if it needs to identify different products and intelligently switch between policies for handling them, it’s going to struggle. There’s the risk of catastrophic forgetting, a real term that describes when a previously learned thing is overwritten by the information gleaned from the latest task. Alas, you can train neural networks to do one thing well, but transferring that knowledge from one task to another is a challenge.
At the Re•Work Deep Learning Summit in London last week, Raia Hadsell, senior research scientist at Google DeepMind, emphasised this point. Referring to earlier research, she gave examples of separate neural networks being used to classify images, play Atari games, and generate music by mimicking existing audio. But, she declared, “There is no neural network in the world (and no method right now) that can be trained to identify objects in images, play space invaders, and listen to music. This is a problem. If we’re really going to get to general artificial intelligence, we need something that can learn multiple tasks.”

You never forget how to ride a bicycle. Unless you're an AI.
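To see what catastrophic forgetting looks like in practice, here is a minimal NumPy sketch, a toy rather than DeepMind’s setup: a single tiny model is trained on task A, then on task B, and the task-B gradients overwrite the weights task A relied on, so task-A accuracy falls back towards chance.

```python
# Minimal NumPy illustration of catastrophic forgetting (a toy example):
# one tiny model is trained on task A, then task B, and forgets task A.
import numpy as np

rng = np.random.default_rng(0)


def make_task(label_dim):
    """Toy binary task: the label depends on the sign of one input dimension."""
    X = rng.normal(size=(500, 2))
    y = (X[:, label_dim] > 0).astype(float)
    return X, y


def train(w, X, y, lr=0.1, epochs=200):
    """Plain logistic-regression training with full-batch gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step overwrites the weights
    return w


def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)


task_a = make_task(label_dim=0)  # task A: classify by the first feature
task_b = make_task(label_dim=1)  # task B: classify by the second feature

w = np.zeros(2)
w = train(w, *task_a)
print("Task A accuracy after training on A:", accuracy(w, *task_a))

w = train(w, *task_b)            # keep training the same weights on task B
# Task-A accuracy drops sharply towards chance: the old knowledge is overwritten.
print("Task A accuracy after training on B:", accuracy(w, *task_a))
print("Task B accuracy after training on B:", accuracy(w, *task_b))
```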

Experiments at DeepMind have produced a potential solution that enables transfer learning without losing previous knowledge: progressive neural networks. Hadsell described how, after deep reinforcement learning has been used to train a neural network for a specific task, a new neural network (referred to as a column) can be added and trained for a different task. This new column is linked to the previous one with lateral connections at each layer. The weights of the original column are frozen so that, when the second column is trained using gradient backpropagation, features already learned in column one can be used but not overwritten. Rinse and repeat for additional tasks, freezing previous columns as each new one is added.
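The architecture is easier to see in code. Below is a minimal PyTorch sketch of a two-column progressive network, based on the description above rather than on DeepMind’s implementation; layer sizes and task details are purely illustrative. Column one is trained and frozen, and column two reuses its hidden features through a lateral connection while only column two’s weights are updated.

```python
# Hedged sketch of a two-column progressive network, following the description
# above; layer sizes and task details are illustrative, not DeepMind's.
import torch
import torch.nn as nn


class Column(nn.Module):
    """A small network trained on the first task."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.out(h), h          # expose hidden features for lateral reuse


class ProgressiveSecondColumn(nn.Module):
    """A new column for task two with a lateral connection from column one."""

    def __init__(self, frozen_col1, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.col1 = frozen_col1
        for p in self.col1.parameters():   # freeze task-one weights
            p.requires_grad = False
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.lateral = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        with torch.no_grad():
            _, h1 = self.col1(x)           # reuse, but never overwrite, column one
        h2 = torch.relu(self.hidden(x) + self.lateral(h1))
        return self.out(h2)


# Train column one on task one, then train only the new column on task two.
col1 = Column(in_dim=4, hidden_dim=32, out_dim=2)
# ... train col1 with backpropagation on task one, then:
col2 = ProgressiveSecondColumn(col1, in_dim=4, hidden_dim=32, out_dim=2)
optimizer = torch.optim.Adam(p for p in col2.parameters() if p.requires_grad)
```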
Even if the science is difficult to grasp, building up knowledge in this way to inform new tasks proved to increase learning rates dramatically compared with conventional training from scratch. The older tasks remain intact and can still be used when needed, although the way the approach works means older columns cannot learn from newer ones.
To focus on the idea of transferring knowledge from one domain to another, these progressive neural net insights were then applied to a range of experiments with a Jaco robotic arm. As robots are expensive and can break, a robotic arm simulator, in effect the physics engine MuJoCo, was used to run tens of thousands of training episodes involving reacher tasks: random start/random target, catch a falling ball, and “catch the bee” tracking.
“It took about a day to train. If the same batch of experiments had been done on a real robot it would have taken 55 days to work. That’s a huge difference,” said Hadsell, referring to the first simulation task.
The real challenge was to transfer what was learned in simulation to an actual robotic arm, introducing progressive neural nets by adding new columns whose input is a real camera looking at the actual robot, the Jaco. With this arrangement, maximum performance was achieved after two hours of real-world training time. For more complex experiments the input was proprioception plus target XYZ. When another column was added to enable training for a new task, training took just 30 minutes, drawing on the cumulative knowledge already in the progressive neural network.
Although the number of parameters grows quadratically with the number of tasks, producing a scaling problem, Hadsell noted that later columns are used less, as features already learned in earlier columns are transferred. She is confident that the size of the model can be controlled over time with pruning, compression, and distillation methods.
This transfer from simulation to robotics offers a way to deal with real-world applications where data is hard to acquire or manage, and deep learning evidently has a role to play. If, in years to come, a company’s enthusiasm for automation resulted in robots crushing 50,000 boxes of eggs or bruising 20,000 cucumbers with tins of beans as it learned to pick and pack, advanced automation probably wouldn’t get very far. Simulation is the linchpin, and progressive neural networks could be one way of learning new rules to play the pick-and-pack game.
And while they are about it, some quantum computing would definitely come in handy to work out the fastest, most space-efficient, and most delicate way to fill up a customer’s basket.
* * *
