Robots that can see into their future developed

PTI


The technology enables robots to imagine the future of their actions to figure out how to manipulate objects they have never seen before.


Scientists have developed a novel learning technology that enables robots to imagine the future of their actions so that they can figure out how to manipulate objects they have never encountered before.

The technology, developed by researchers at the University of California, Berkeley, in the US, may help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes.

However, the initial prototype focuses on learning simple manual skills entirely from autonomous play. Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements.

These robotic imaginations are still relatively simple for now, with predictions extending only several seconds into the future, but that is enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

The robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are, researchers said.

That is because the visual imagination is learned entirely from scratch through unattended, unsupervised exploration, in which the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

"In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualise how different behaviours will affect the world around it," said Sergey Levine, assistant professor at Berkeley, whose lab developed the technology.

"This can enable intelligent planning of highly flexible skills in complex real-world situations," Levine said.

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot's actions.
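To make the pixel-motion idea concrete, here is a minimal NumPy sketch of a single DNA-style prediction step. It assumes the per-pixel motion kernels have already been produced by a network (in the real model, a convolutional recurrent network predicts them, conditioned on the robot's action); the function and variable names are illustrative and are not the Berkeley code.

```python
import numpy as np

def dna_step(prev_frame, kernels):
    """Apply DNA-style pixel motion: each output pixel is a convex
    combination of pixels in a k x k neighbourhood of the previous frame.

    prev_frame: (H, W) grayscale image.
    kernels:    (H, W, k, k) per-pixel weights, each k x k slice summing
                to 1. In the real model these are predicted by the
                network from the frame and the robot's action; here
                they are assumed given.
    """
    H, W, k, _ = kernels.shape
    pad = k // 2
    padded = np.pad(prev_frame, pad, mode="edge")
    next_frame = np.zeros_like(prev_frame)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            next_frame[i, j] = np.sum(patch * kernels[i, j])
    return next_frame

# Toy check: identity kernels (all weight on the centre pixel)
# should reproduce the previous frame exactly.
H, W, k = 8, 8, 3
frame = np.random.rand(H, W)
kernels = np.zeros((H, W, k, k))
kernels[..., k // 2, k // 2] = 1.0
assert np.allclose(dna_step(frame, kernels), frame)
```

Because each kernel is a convex combination over a small neighbourhood, the model moves existing pixels around rather than generating new ones from scratch, which keeps its predictions grounded in the observed scene.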

Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.

"In that past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own," said Chelsea Finn, a doctoral student in Levine's lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Working from raw camera observations, the robots use the learned model to teach themselves how to avoid obstacles and push objects around obstructions.
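The planning step can be sketched in the same spirit. The snippet below shows a simple random-shooting planner over a learned predictor: it samples candidate action sequences, imagines their outcomes, and keeps the one whose predicted object position lands closest to the goal. Here `predict` is a hypothetical stand-in for the video-prediction model, not the actual Berkeley interface, and the published system uses a more sophisticated iterated sampling search.

```python
import numpy as np

def plan_push(predict, frame, object_pixel, goal_pixel,
              horizon=5, n_samples=200, action_dim=2):
    """Random-shooting planner on top of a learned video predictor.

    predict(frame, actions, object_pixel) stands in for the learned
    model: given the current camera frame, a candidate sequence of
    pushing actions, and the pixel marking the object, it returns
    where that pixel is predicted to end up.
    """
    best_cost, best_plan = np.inf, None
    for _ in range(n_samples):
        # Sample a random candidate sequence of planar push actions.
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        predicted = predict(frame, actions, object_pixel)
        cost = np.linalg.norm(np.asarray(predicted) - np.asarray(goal_pixel))
        if cost < best_cost:
            best_cost, best_plan = cost, actions
    return best_plan

# Toy check with a fake "model" that just accumulates the pushes.
fake_predict = lambda frame, actions, pix: np.asarray(pix) + actions.sum(axis=0)
plan = plan_push(fake_predict, frame=None,
                 object_pixel=(10.0, 10.0), goal_pixel=(12.0, 9.0))
```

In practice the robot executes only the first action of the best plan and then re-plans from the new camera image, which lets it correct its own prediction errors as it goes.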
