Robots usually react in real time: something happens, they respond. Now researchers at the University of California, Berkeley are working on a system that lets robots "imagine the future of their actions" so that they can interact with things they have never seen before.
Called "visual foresight," the technology lets robots "predict what their cameras will see if they perform a particular sequence of movements."
The researchers write:
These robotic imaginations are still relatively simple for now (predictions made only a few seconds into the future), but they are enough for the robot to figure out how to move objects around a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any human assistance or prior knowledge about physics, its environment, or what the objects are. That's because the visual imagination is learned entirely from unsupervised and unguided exploration, in which the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world and can use that model to manipulate new objects it has never seen before.
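The play-then-predict loop the researchers describe can be sketched in miniature. The toy Python example below is purely illustrative (it is not the researchers' code, and the one-object "tabletop" and lookup-table model are my own simplifications), but it follows the same recipe: play with random actions, build a predictive model from the logged experience alone, then use that model to imagine outcomes from a fresh starting state.

```python
import random

# Toy "tabletop" with one object at integer coordinates (row, col).
# A push action (dy, dx) displaces the object by exactly that amount;
# the robot does not know this rule and must discover it from play.
def world_step(pos, action):
    return (pos[0] + action[0], pos[1] + action[1])

# 1) Unsupervised play phase: execute random pushes, log transitions.
random.seed(0)
actions = [(0, 1), (1, 0), (0, -1), (-1, 0)]
pos, experience = (0, 0), []
for _ in range(100):
    a = random.choice(actions)
    nxt = world_step(pos, a)
    experience.append((pos, a, nxt))
    pos = nxt

# 2) Build a predictive model purely from the logged experience:
#    the average displacement observed after each action.
model = {}
for a in actions:
    deltas = [(n[0] - p[0], n[1] - p[1]) for p, act, n in experience if act == a]
    model[a] = (sum(d[0] for d in deltas) / len(deltas),
                sum(d[1] for d in deltas) / len(deltas))

# 3) Use the model to imagine a future it was never shown directly:
#    three pushes to the right from a new starting position.
imagined = (5, 5)
for a in [(0, 1)] * 3:
    imagined = (imagined[0] + model[a][0], imagined[1] + model[a][1])
print(imagined)  # → (5.0, 8.0)
```

The real system predicts whole camera images with a deep network rather than a single coordinate, but the structure is the same: no labels, no human feedback, just a model fit to the robot's own experience.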
"In the same way that we can imagine how our actions will move objects into our environment, this method can allow a robot to visualize how different behaviors will affect the world around it," said Assistant Professor Sergey Levine. at Berkeley & # 39; s Department of Electrical and Computer Engineering. "This can enable intelligent planning of highly flexible skills in complex real-world situations."
The system uses convolutional recurrent video prediction to "predict how pixels in an image will move from one frame to the next based on the robot's actions." This means it can play out scenarios before touching or moving anything.
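That predict-before-acting idea pairs naturally with planning: imagine the outcome of several candidate action sequences, then execute the one whose imagined result best matches the goal. Here is a minimal, hypothetical sketch in plain NumPy (none of it from the actual system) where the learned video-prediction network is replaced by a simple pixel shift:

```python
import numpy as np

def predict_next_frame(frame, action):
    """Toy stand-in for a learned video-prediction model:
    shift every pixel by the (dy, dx) the action would cause."""
    dy, dx = action
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def rollout(frame, action_seq):
    """Imagine the future: apply the predictive model step by step."""
    for a in action_seq:
        frame = predict_next_frame(frame, a)
    return frame

def plan(frame, goal, candidates):
    """Pick the action sequence whose imagined outcome best matches the goal."""
    return min(candidates,
               key=lambda seq: np.abs(rollout(frame, seq) - goal).sum())

# An 8x8 scene with one object (a bright pixel) at row 2, column 2.
scene = np.zeros((8, 8)); scene[2, 2] = 1.0
# Goal image: the object has moved to row 5, column 5.
goal = np.zeros((8, 8)); goal[5, 5] = 1.0
# Candidate three-step pushes to compare in imagination.
candidates = [[(1, 1)] * 3, [(0, 1)] * 3, [(1, 0)] * 3]
print(plan(scene, goal, candidates))  # → [(1, 1), (1, 1), (1, 1)]
```

The real model outputs learned per-pixel motion for natural camera images instead of a uniform shift, but the planning loop (imagine, score against the goal, act) is the same shape.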
"In this past, robots learned skills with a human supervisor assisting and providing feedback.What makes this job exciting is that robots can learn a whole range of visual manipulation skills. Objects by themselves, "said Finn Chelsea, PhD student in Levine's lab and inventor of the original DNA model.
The robot needs no special information about its environment and no special sensors. A camera captures the scene, and the robot acts accordingly, much as we can predict what will happen when we move objects around on a table.
"Children can get to know their world by playing with toys, moving them, grabbing them, and so on … Our goal with this research is to allow a robot to do the same thing: learn how the world is functioning through autonomous interaction, "said Levine. "The capabilities of this robot are still limited, but his skills are learned entirely automatically, and allow him to predict complex physical interactions with objects that he has never seen before based on models of Interaction previously observed. "
Featured image: Kevin Smart / Getty Images