Anyone would enjoy a fanciful, futuristic world where robots slave away to fulfill your every desire and perform any mundane chore you desperately want to avoid doing yourself. An army of chrome maids could massage your body, hand you drinks and clean the mountain of dishes left over from your last family dinner party. While this exact scenario may still be a carbon-nano pipe dream, two Cornell University scientists, graduate student Hema S. Koppula and assistant professor of computer science Ashutosh Saxena, have managed to construct a robot (“Rosie,” a nod to The Jetsons) that can learn to adjust its behavior according to accumulated experiential data.

This mechanical sidekick, visually analyzing the world with a Microsoft Kinect 3-D camera, records and evaluates the scenes and objects in its view. When those same items appear again, the robot uses statistical techniques to decide what actions it should perform. One such robot processed 120 3-D videos to build the foundation of data from which it infers subsequent actions. The sophisticated methodologies applied by the scientists enable the robot to complete such tasks as opening refrigerator doors as humans approach, refilling empty cups, and putting away items that are no longer in use.
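The team's actual statistical machinery is far richer than anything shown here, but a minimal sketch of this flavor of inference, with entirely made-up scene features and action labels, might look like the following:

```python
from collections import Counter
import math

# Hypothetical training examples: each pairs a toy scene-feature vector
# (say, distances from a person's hand to two objects) with the action
# the human performed next. In the real system such labels would come
# from the 120 annotated 3-D videos; these numbers are invented.
TRAINING_DATA = [
    ([0.2, 1.5], "reach_for_cup"),
    ([0.3, 1.4], "reach_for_cup"),
    ([1.6, 0.3], "open_fridge"),
    ([1.5, 0.2], "open_fridge"),
    ([0.9, 0.8], "place_object"),
]

def predict_next_action(scene_features, k=3):
    """Guess the likely next action by majority vote among the k
    training scenes closest to the current one."""
    nearest = sorted(TRAINING_DATA,
                     key=lambda ex: math.dist(scene_features, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(predict_next_action([0.25, 1.45]))  # -> reach_for_cup
```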

To elucidate and dissect the actions it observes, the robot divides motor activity sequences into smaller pieces, such as lifting, lowering, rotating, pushing and pulling. This creates a toolbox of small motor movements which, used in combination, allows the robot not only to predict what actions a human might want to perform next (such as pouring drinks or opening and closing doors), but also to lend a helping hand in performing them.
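To make that decomposition concrete, here is a minimal sketch in which a simple first-order transition count over invented motor primitives stands in for the team's much richer model; the primitive names and sequences are illustrative, not taken from the study:

```python
from collections import defaultdict, Counter

# Invented sub-activity sequences, as if segmented from observed
# activities: each full action is a chain of small motor primitives.
OBSERVED_SEQUENCES = [
    ["reach", "grasp", "lift", "pour", "place"],  # refilling a cup
    ["reach", "grasp", "lift", "place"],          # putting an item away
    ["reach", "grasp", "pull"],                   # opening a door
]

# Count primitive-to-primitive transitions (a first-order model).
transitions = defaultdict(Counter)
for seq in OBSERVED_SEQUENCES:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def most_likely_next(primitive):
    """Predict the most frequently observed follow-up, if any."""
    followers = transitions[primitive]
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("grasp"))  # -> lift (seen twice, vs. pull once)
```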


While this current robot seems like it could replace your cleaning person, its capabilities are not without limits. The further into the future the robot tries to predict, the less accurate it becomes. When anticipating actions only one second in advance, the robot made correct predictions 82 percent of the time. When predicting three seconds into the future, 71 percent of its predictions were correct, and ten seconds into the future, only 57 percent. These flaws are statistical and algorithmic in nature, and will likely be improved upon by future artificial intelligence researchers.
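Horizon-dependent scores like these are just fractions of correct guesses grouped by look-ahead time. A toy sketch of that bookkeeping, using invented records rather than the study's data:

```python
from collections import defaultdict

# Invented evaluation records: (seconds ahead, predicted, actual).
RECORDS = [
    (1, "pour", "pour"), (1, "reach", "reach"), (1, "open", "place"),
    (3, "pour", "pour"), (3, "open", "reach"),
    (10, "place", "open"), (10, "pour", "pour"),
]

def accuracy_by_horizon(records):
    """Group predictions by look-ahead horizon and report the
    fraction that matched what actually happened."""
    hits, totals = defaultdict(int), defaultdict(int)
    for horizon, predicted, actual in records:
        totals[horizon] += 1
        hits[horizon] += predicted == actual
    return {h: hits[h] / totals[h] for h in sorted(totals)}

print(accuracy_by_horizon(RECORDS))
# -> {1: 0.666..., 3: 0.5, 10: 0.5} for this made-up sample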

The robot also could not always adapt to changing conditions once a motor activity sequence had begun. For example, if the robot starts reaching for a cup to refill it but a human picks the cup up first, the robot could end up pouring liquid into open air and onto the floor. While the robot is actually pouring, however, it does stop adding liquid if a person reaches for the cup.
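That corrective behavior amounts to re-checking perception between small motor increments. A minimal sketch of such a reactive loop, with placeholder detection logic (the function names and the 10-percent trigger are assumptions for illustration, not the team's code):

```python
import random

def human_is_reaching_for(cup):
    """Placeholder perception: in the real system this signal would
    come from tracking the person's hand in the Kinect depth stream."""
    return random.random() < 0.1  # pretend a reach occurs on ~10% of checks

def pour_into(cup, total_steps=20):
    """Pour in small increments, re-checking the scene between each
    increment so the pour can be aborted mid-action."""
    for step in range(total_steps):
        if human_is_reaching_for(cup):
            print(f"step {step}: human reached for {cup}; stopping pour")
            return
        print(f"step {step}: poured a little more into {cup}")
    print("finished pouring")

pour_into("red_cup")
```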

“Even though humans are predictable, they are only predictable part of the time,” Saxena said. “The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond.”

To bring their work into the spotlight of the greater scientific community, Koppula and Saxena will present their robotics work at the International Conference on Machine Learning, June 18-21 in Atlanta, and the Robotics: Science and Systems conference, June 24-28 in Berlin, Germany.

The research was supported by the U.S. Army Research Office, the Alfred P. Sloan Foundation and Microsoft.


Randy studies neuroscience at Florida Atlantic University in Boca Raton, FL. When not studying or working in the lab, he enjoys keeping up with the latest scientific discoveries, producing electronic music, and taking a stroll through the park. He can be e-mailed at rellis19@fau.edu.