While blockbusters like I, Robot and The Matrix tend to give robots an overall bad reputation, they do not have to be frightening, and can actually be programmed to be helpful in practical matters, like doing the dishes.
Prof. Ashutosh Saxena, computer science, has been working with robots for several years, and he and his team at Cornell’s Personal Robotics Lab have designed robots that can detect human activities, understand scenes that occur in front of them, and recently, even pick up and place objects.
“There are many everyday tasks -- recognizing objects, loading/unloading a dishwasher, cooking simple meals, cleaning a cluttered house, assembling an object from a kit of parts -- that, while simple for humans, are extremely challenging for robots, as they involve detailed, tightly coordinated perception and manipulation abilities,” Saxena explained. His goal is to resolve fundamental problems in robotic perception and manipulation, and to make personal robots useful in common household and office environments.
According to Saxena, being able to detect and recognize human activities is essential for several applications, including smart homes and personal assistive robotics. To teach robots how to observe and recognize basic human activity, Saxena and his team used an RGB-D [Red, Green, Blue, Depth] sensor, the Microsoft Kinect, as input, and computed a set of features based on human pose and motion.
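The idea of pose and motion features can be sketched in a few lines: express each skeleton joint relative to a reference joint (pose), and track per-joint displacement between frames (motion). This is a minimal illustration, not the lab's code; the joint names, frame format, and specific features are assumptions.

```python
# Minimal sketch of pose/motion features from Kinect-style skeleton
# frames. Joint names and the frame format are illustrative assumptions.

def pose_features(frame, reference="torso"):
    """Express each joint position relative to a reference joint."""
    ref = frame[reference]
    return {
        joint: tuple(c - r for c, r in zip(pos, ref))
        for joint, pos in frame.items() if joint != reference
    }

def motion_features(prev_frame, frame):
    """Per-joint displacement between consecutive frames."""
    return {
        joint: tuple(b - a for a, b in zip(prev_frame[joint], pos))
        for joint, pos in frame.items()
    }

# Example: two frames of a (tiny) two-joint skeleton.
f0 = {"torso": (0.0, 1.0, 2.0), "hand": (0.25, 1.5, 1.5)}
f1 = {"torso": (0.0, 1.0, 2.0), "hand": (0.5, 1.5, 1.5)}
print(pose_features(f1))        # hand relative to torso: (0.5, 0.5, -0.5)
print(motion_features(f0, f1))  # hand moved 0.25 along x; torso still
```

A real system would feed many such features per frame into a learned classifier over activity labels.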
Saxena and his team have also been able to teach robots how to understand situations and label certain objects by scanning a room. “Thanks to the availability of Kinect sensors, our robots can now easily obtain colored 3D pointclouds of their environments. We built a module which can label these pointclouds into 17 object categories [such as bed, chair, pillow, wall] with micro averaged precision and recall of over 70%,” he explained. “It uses a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships.”
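The core idea behind such contextual labeling can be illustrated on a toy scale: each segment of the pointcloud gets a local-appearance score per label, pairs of related segments get a co-occurrence score (a pillow is more likely where a bed is), and the labeling that maximizes the total score wins. All scores, labels, and segment names below are invented for illustration; the actual system uses a learned graphical model with many more features.

```python
# Toy illustration of contextual labeling: unary appearance scores
# plus a pairwise co-occurrence score. Everything here is made up.
from itertools import product

LABELS = ["bed", "pillow", "wall"]

# Unary scores: how well each segment's appearance/shape fits a label.
unary = {
    "seg1": {"bed": 2.0, "pillow": 0.5, "wall": 0.1},
    "seg2": {"bed": 0.4, "pillow": 1.5, "wall": 0.2},
}

# Pairwise score for the contextual relation "seg2 rests on seg1".
pairwise = {("bed", "pillow"): 1.0, ("wall", "pillow"): -0.5}

def best_joint_labeling():
    """Brute-force the pair of labels maximizing unary + pairwise score."""
    best, best_score = None, float("-inf")
    for l1, l2 in product(LABELS, repeat=2):
        score = unary["seg1"][l1] + unary["seg2"][l2]
        score += pairwise.get((l1, l2), 0.0)
        if score > best_score:
            best, best_score = (l1, l2), score
    return best

print(best_joint_labeling())  # ('bed', 'pillow')
```

With real rooms the label space is too large for brute force, which is why a graphical-model inference procedure is used instead.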
In addition to this advanced level of “thinking,” the robots have been taught to identify objects, pick them up and place them elsewhere. This research, presented at the 2011 Robotics: Science and Systems Conference on June 27 at the University of Southern California, is particularly groundbreaking: though other researchers have developed ways to have robots pick things up, putting them down has proven much more challenging. “[To deal with that,] we just show the robot some examples and it learns to generalize the placing strategies and applies them to objects that were not seen before,” Saxena explained. “It learns about stability and other criteria for good placing for plates and cups, and when it sees a new object like a bowl it applies them.”
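One simple way to picture this kind of generalization is nearest-neighbor transfer: describe each object with a few shape features, and reuse the placing strategy of the most similar trained object when a new one appears. The feature vectors and strategies below are invented for illustration; the actual system learns placing criteria such as stability rather than looking them up.

```python
# Hedged sketch of generalizing placing strategies by shape similarity.
# Features are (width, height, concavity); all values are made up.

trained = {
    "plate": ((0.25, 0.02, 0.0), "slot vertically in rack"),
    "cup":   ((0.08, 0.10, 0.6), "place upside down on flat surface"),
}

def placing_strategy(features):
    """Reuse the strategy of the nearest trained object (squared L2)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(trained, key=lambda name: dist(trained[name][0], features))
    return trained[nearest][1]

# A bowl -- wide, shallow, concave -- looks most like a cup here:
print(placing_strategy((0.15, 0.07, 0.5)))
```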
To perform seemingly simple activities such as unloading a dishwasher and placing dishes on a drying rack, robots must undergo a series of steps. First, they must locate the dish rack by cataloging all the objects in the room—something humans subconsciously do as well. Together, Prof. Thorsten Joachims, computer science, and Saxena have developed a system that allows a robot to scan a room and identify its objects. Pictures from the robot's 3D camera are stitched together to form an image of the entire room.
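The stitching step can be sketched as transforming each camera view's points into a shared room frame and concatenating them. Real systems must estimate the camera poses (typically via registration) and handle rotation; this toy version assumes the poses are known and models only translation.

```python
# Hedged sketch of stitching 3D views into one room-frame pointcloud.
# Camera poses are given and translation-only -- a simplifying assumption.

def stitch(views):
    """views: list of (camera_offset, points), points in camera coords."""
    room_cloud = []
    for (ox, oy, oz), points in views:
        room_cloud.extend((x + ox, y + oy, z + oz) for x, y, z in points)
    return room_cloud

view_a = ((0.0, 0.0, 0.0), [(1.0, 0.0, 2.0)])
view_b = ((3.0, 0.0, 0.0), [(1.0, 0.0, 2.0)])  # same shape, camera moved
print(stitch([view_a, view_b]))  # two distinct points in one room frame
```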
While their robots are still learning how to do their household chores, Saxena and his team’s research could one day produce robots that not only perform these basic functions, but also use activity-detection technology to help the elderly and people with disabilities. Personal assistant robots would be able to check that people are eating and taking their medication, and even alert emergency personnel if a person has fallen and isn’t getting up. As promising as they are, however, “robots have a long way to go,” Saxena said.