A Blog by Jonathan Low


Jan 9, 2015

Robots Can Now Learn to Cook Just Like You Do: By Watching YouTube Videos

OK, you might not watch 88 YouTube videos, but then neither will any robot once the technology for knowledge transfer has been perfected.

And besides, if we're being honest, you might well watch 5 or 10.

Cutesy novelties aside, developments in learning, knowledge transmission and robotics may portend useful advances for certain types of surgery, bomb disposal and even housecleaning. They may also signal yet another human skill that will now be performed by machines.

There may be those who are squeamish about having an inanimate object prepare their food, though most people seem perfectly comfortable having devices brew coffee, so this is not exactly a huge existential leap. There will also be issues of expense - and one suspects those most able to afford such a robot would just as soon pay the freight for a well-trained French or Italian chef, all things being considered.

It may never be appropriate for the average household, as much as beleaguered parents or unskilled young adults might crave the assistance, but if you could get a simpler model to make macaroni and cheese...JL

Jordan Novet reports in VentureBeat:

Researchers have come up with a new way to teach robots how to use tools simply by watching videos on YouTube.
The researchers, from the University of Maryland and the Australian research center NICTA, have just published a paper on their achievements, which they will present this month at the 29th annual conference of the Association for the Advancement of Artificial Intelligence.
The demonstration is the latest impressive use of a type of artificial intelligence called deep learning. A hot area for acquisitions as of late, deep learning entails training systems called artificial neural networks on lots of information derived from audio, images, and other inputs, and then presenting the systems with new information and receiving inferences about it in response.
The researchers employed convolutional neural networks, which are now in use at Facebook, among other companies, to identify the way a hand is grasping an item, and to recognize specific objects. The system also predicts the action involving the object and the hand.
To train their model, researchers selected data from 88 YouTube videos of people cooking. From there, the researchers generated commands that a robot could then execute.
“We believe this preliminary integrated system raises hope towards a fully intelligent robot for manipulation tasks that can automatically enrich its own knowledge resource by ‘watching’ recordings from the World Wide Web,” the researchers concluded.
Read their full paper, “Robot Learning Manipulation Action Plans by ‘Watching’ Unconstrained Videos from the World Wide Web,” here (PDF).
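
For readers curious about the mechanics, here is a minimal sketch in Python (using PyTorch) of the kind of frame-level recognition the article describes: one small convolutional network classifying the grasp type and another classifying the object in a crop around the hand. The class labels, network sizes, and input resolution are illustrative assumptions, not the researchers' actual architecture.

import torch
import torch.nn as nn

# Illustrative label sets; the paper's own grasp and object categories differ.
GRASP_TYPES = ["power-small", "power-large", "precision-small", "precision-large", "rest"]
OBJECTS = ["knife", "bowl", "tomato", "cup"]

class FrameClassifier(nn.Module):
    """Tiny CNN that maps an RGB frame crop to class scores."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

grasp_net = FrameClassifier(len(GRASP_TYPES))   # classifies how the hand is grasping
object_net = FrameClassifier(len(OBJECTS))      # classifies what is being grasped

frame = torch.rand(1, 3, 64, 64)  # stand-in for a 64x64 crop around the hand
grasp = GRASP_TYPES[grasp_net(frame).argmax(dim=1).item()]
obj = OBJECTS[object_net(frame).argmax(dim=1).item()]
print(grasp, obj)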
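
The last step, turning those recognitions into something a robot can execute, might look roughly like the following. The command grammar here is an assumption made for illustration; the paper defines its own action grammar and parser.

def to_command(grasp: str, obj: str, action: str) -> str:
    """Compose a simple 'grasp, then act' command from the recognized elements."""
    return f"grasp[{grasp}]({obj}); {action}({obj})"

# Example predictions a vision system might emit for one video segment.
print(to_command("power-small", "knife", "cut"))        # grasp[power-small](knife); cut(knife)
print(to_command("precision-small", "tomato", "hold"))  # grasp[precision-small](tomato); hold(tomato)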

2 comments:

Epic research said...

Technology has its own outcomes; it has its own growth that expands rapidly.

Jon Low said...

It certainly has its own processes, channels and impacts. But most things today interface with other things, which means there is a co-evolutionary effect: even the most powerful technology will be influenced to some degree by forces it may not have anticipated - or that it anticipated but cannot manage, let alone control.
