November 5, 2013

Robots Predict Human Actions


By RYAN O’HERN

Autonomous robots can build cars, manage the inventory at Amazon’s warehouses and run gene-sequencing experiments. However, because these robots operate in static environments, performing the same actions thousands of times, the technology that powers them is of limited use in settings marked by change and unpredictability.

Cornell researchers are trying to change this.

To build robots that can work alongside humans, scientists and engineers must first build robots that can understand and predict human activity, so that machines can handle the change and chance that people introduce.

Hema Koppula grad and Prof. Ashutosh Saxena, computer science, have begun tackling the problem of predicting human actions so that a robot can plan its own actions in response.

In Saxena’s lab, a human-sized robot with two sophisticated arms observes the world using a Microsoft Kinect 3-D camera. The Kinect, a camera that uses thousands of bursts of an infrared laser to gather three-dimensional data about the world, has become the bread and butter of robotics research because of its low price and efficacy at mapping spaces in 3-D. Using the Kinect, the robot observes humans performing simple household tasks such as warming food in a microwave or drinking from a coffee mug. Then, the robot, which has been programmed to perform helpful actions like refilling the mug or opening the refrigerator, uses information about the human’s current activity to decide what action it should take next.
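As a rough illustration of this decision loop (a minimal sketch, not the researchers’ actual system), the robot can be thought of as mapping the activity it currently observes to the most likely next human action, and then looking up a helpful response. The activity labels, transition scores and helper names below are hypothetical.

```python
from typing import Optional

# Hypothetical sketch of an anticipation-based decision loop.
# The labels and probabilities are illustrative only, not values
# from the Cornell system.

# Likelihood that one observed activity is followed by another,
# as might be learned from watching people perform household tasks.
TRANSITIONS = {
    "reaching_for_mug":  {"drinking": 0.7, "pouring": 0.3},
    "drinking":          {"placing_mug": 0.6, "drinking": 0.4},
    "opening_microwave": {"placing_food": 0.8, "closing_microwave": 0.2},
}

# Helpful robot responses keyed by the human action anticipated next.
ASSISTIVE_ACTIONS = {
    "drinking": "refill_mug",
    "placing_food": "hold_microwave_door",
    "placing_mug": "clear_table_space",
}


def anticipate(current_activity: str) -> Optional[str]:
    """Return the most likely next human activity, if one is known."""
    candidates = TRANSITIONS.get(current_activity)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)


def choose_robot_action(current_activity: str) -> str:
    """Pick an assistive action based on the anticipated next activity."""
    predicted = anticipate(current_activity)
    return ASSISTIVE_ACTIONS.get(predicted, "wait_and_observe")


if __name__ == "__main__":
    # e.g. the perception system reports the person is reaching for a mug,
    # so the robot anticipates drinking and prepares to refill the mug.
    print(choose_robot_action("reaching_for_mug"))  # -> refill_mug
```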

Robot to the rescue | Hema Koppula grad built a robot that can anticipate human actions. The robot uses a 3-D camera to observe and help with common domestic tasks. (Courtesy of Hema Koppula grad)