January 29, 2013

Neuron-Based Software Allows Prof. Saxena’s AirRobot to Autonomously Avoid Objects


Flying robots, also known as miniature aerial vehicles (MAVs), are useful in several surveillance situations –– the robots can swoop down to snap photos to aid in a search-and-rescue mission or assess the damage after a natural disaster.

But these robots run into problems –– literally. Prof. Ashutosh Saxena, computer science, develops visual perception software that reduces the number of times that MAVs collide with objects in the real world.

“The goal is to design an algorithm for a robot so that it can perceive the world,” he said.

Saxena’s lab developed software that allows MAVs to “see” and autonomously react to obstacles. Human controllers may not be able to see the robot or steer it clear of obstructions quickly enough to avoid crashes, and GPS may be patchy in remote areas with many obstacles.

The algorithm that allows MAVs to perceive and avoid objects is based on Saxena’s previous work in visual perception, a program called Make3D. The software can convert a single image into a three-dimensional model using properties such as texture and color.
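The flavor of that idea can be sketched in a few lines of Python. The snippet below is an illustrative toy, not the lab’s Make3D code: it pulls simple color and texture statistics from each image patch and maps them to a depth estimate with a model fit by least squares.

```python
import numpy as np

def patch_features(patch):
    """Simple monocular cues for one image patch: mean color plus a
    crude texture measure (variance of horizontal/vertical gradients)."""
    mean_color = patch.reshape(-1, 3).mean(axis=0)            # 3 values
    gray = patch.mean(axis=2)
    texture = np.array([np.var(np.diff(gray, axis=0)),
                        np.var(np.diff(gray, axis=1))])       # 2 values
    return np.concatenate([mean_color, texture, [1.0]])       # +bias term

def fit_depth_model(patches, depths):
    """Least-squares fit from patch features to log-depth, mimicking the
    'learn depth cues from labeled data' idea."""
    X = np.stack([patch_features(p) for p in patches])
    w, *_ = np.linalg.lstsq(X, np.log(depths), rcond=None)
    return w

def predict_depth(w, patch):
    return float(np.exp(patch_features(patch) @ w))

# Toy usage: random "patches" with made-up ground-truth depths.
rng = np.random.default_rng(0)
train_patches = [rng.random((16, 16, 3)) for _ in range(200)]
train_depths = rng.uniform(1.0, 20.0, size=200)
w = fit_depth_model(train_patches, train_depths)
print(predict_depth(w, train_patches[0]))
```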

A MAV uses this algorithm to figure out how far away objects are and then takes evasive action if necessary. This type of processing would ordinarily use a lot of power, but Saxena and his team designed the algorithm to run on neuron-based hardware.

“This hardware is an approximation of the human brain,” said Ian Lenz grad, who, along with Mevlana Gemici ’12 grad, worked on designing and implementing the algorithm.

Like a human brain, the algorithm can learn.

“Rather than designing an algorithm to avoid trees, or light poles, or any other kind of specific object, we wanted to design a more general software,” Lenz said. The system learns by adjusting the connections between its individual “neurons,” as opposed to being programmed in the conventional way.

“It’s a lot different than your standard computer, where you basically just give it instructions,” Lenz said.
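The contrast Lenz draws can be pictured with a toy neural network: the behavior lives in the weights on the connections between “neurons,” and those weights are adjusted from labeled examples rather than written out as rules. A minimal, made-up sketch in Python (not the team’s neuromorphic hardware or code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: classify 2-D points as "obstacle" (1) or "free" (0).
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)
Xb = np.hstack([X, np.ones((300, 1))])                 # add a bias input

# Connection weights between layers -- this is what gets "learned".
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(9, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = np.hstack([sigmoid(Xb @ W1), np.ones((300, 1))])  # hidden "neurons" + bias
    p = sigmoid(h @ W2)[:, 0]                             # output neuron
    # Gradient descent: adjust connection weights from the prediction error,
    # instead of writing explicit avoidance rules.
    err = (p - y)[:, None]
    W2 -= lr * h.T @ err / len(X)
    d_h = (err @ W2.T)[:, :8] * h[:, :8] * (1 - h[:, :8])
    W1 -= lr * Xb.T @ d_h / len(X)

print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```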

The neuron-based platform is advantageous because it reduces the amount of power the machine needs. According to Lenz, the system uses 100 to 1,000 times less power than it would running on a regular computer. That reduced power draw matters because small aerial robots need to stay light to fly.

Saxena, Lenz and Gemici installed the algorithm in AirRobot, an autonomous four-rotor helicopter about three feet in diameter that is equipped to fly outdoors. Outside, the environment is in constant, subtle flux –– rain, snow, or even the position of the sun in the sky can change the robot’s video-feed perception of its surroundings.

“It’s hard for the robot to deal with these slightly different images of the same objects,” Saxena said.

Lenz and Gemici trained AirRobot to cope with changes in its perception of the environment. First, the team gathered images that the robot took with its on-board camera. In the pictures, they labeled the obstacles that the robot should avoid. Then, using the neural platform, the team taught the algorithm to avoid certain objects.
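In outline, that pipeline looks something like the sketch below, which uses scikit-learn and invented stand-in data purely for illustration: cut each on-board frame into patches, attach the hand-made obstacle labels, and fit a classifier on the labeled examples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

PATCH = 32  # patch size in pixels (illustrative)

def patches_and_labels(image, obstacle_mask):
    """Cut one camera frame into patches; label a patch 'obstacle' if the
    hand-drawn mask marks most of its pixels."""
    feats, labels = [], []
    h, w, _ = image.shape
    for r in range(0, h - PATCH + 1, PATCH):
        for c in range(0, w - PATCH + 1, PATCH):
            patch = image[r:r + PATCH, c:c + PATCH]
            mask = obstacle_mask[r:r + PATCH, c:c + PATCH]
            feats.append(patch.mean(axis=(0, 1)))        # crude color feature
            labels.append(float(mask.mean() > 0.5))
    return feats, labels

# Toy stand-ins for the collected frames and their hand-labeled masks.
rng = np.random.default_rng(2)
frames = [rng.random((128, 160, 3)) for _ in range(5)]
masks = [rng.random((128, 160)) > 0.5 for _ in range(5)]

X, y = [], []
for frame, mask in zip(frames, masks):
    f, l = patches_and_labels(frame, mask)
    X.extend(f)
    y.extend(l)

clf = LogisticRegression().fit(np.array(X), np.array(y))
print("obstacle probability of first patch:", clf.predict_proba([X[0]])[0, 1])
```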

AirRobot was trained to avoid trees. Lenz and Gemici presented the algorithm with labeled examples of trees viewed from different perspectives. During test flights, the robot was able to steer clear of trees that it had never seen before, since it had learned what trees look like in general.

The researchers tested how well AirRobot was learning by flying it in the Arts Quad and the Engineering Quad. After placing the robot in a starting position, they issued the command to go forward.

“Then we let the robot figure out if it could go straight forward, or if it needed to alter its planned path,” Lenz said.
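That decision can be pictured as a simple rule sitting on top of the perception output. The sketch below is hypothetical, with made-up thresholds, and assumes the perception step has already produced a distance estimate for each column of the camera image:

```python
import numpy as np

SAFE_DISTANCE = 4.0   # meters; an illustrative threshold, not the lab's value

def steer(column_depths):
    """Given the estimated distance to the nearest obstacle in each image
    column, keep going straight if the center of the view is clear;
    otherwise veer toward the side with more open space."""
    n = len(column_depths)
    center = column_depths[n // 3: 2 * n // 3]
    if center.min() > SAFE_DISTANCE:
        return "forward"
    left_room = column_depths[: n // 2].mean()
    right_room = column_depths[n // 2:].mean()
    return "veer left" if left_room > right_room else "veer right"

# Example: an obstacle looms in the middle of the view, with more room on the right.
depths = np.array([6.0, 6.5, 7.0, 2.5, 2.0, 2.2, 8.0, 9.0, 9.5, 10.0])
print(steer(depths))   # -> "veer right"
```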

The constantly changing environment made training AirRobot tricky. The team started training AirRobot in the summer, when the robot had to avoid dense tree foliage.

“It learned that leaves are bad,” Saxena said, “but when autumn came, it was still scared of leaves. We had to add more data so it could figure out that leaves on the ground were okay.”

The algorithm can also be retrained, which means that the researchers can continuously update which objects the MAVs should or should not avoid.
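Retraining, in this sense, amounts to refitting on an expanded labeled set. A hypothetical sketch, again with scikit-learn and invented features, mirroring the autumn-leaves fix Saxena describes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Summer data: made-up color features labeled 1 = "avoid" (foliage), 0 = "free".
X_summer = rng.random((200, 3))
y_summer = (X_summer[:, 1] > 0.5).astype(int)       # greener patches -> avoid
clf = LogisticRegression().fit(X_summer, y_summer)

# Autumn: gather new examples (leaves on the ground) labeled as safe,
# merge them with the old set, and simply fit again.
X_autumn = rng.random((80, 3))
y_autumn = np.zeros(80, dtype=int)                  # relabeled: ground leaves are okay
clf = LogisticRegression().fit(np.vstack([X_summer, X_autumn]),
                               np.concatenate([y_summer, y_autumn]))
```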

“A robot trained on trees is not as successful at avoiding man-made objects such as telephone poles,” Saxena said.

AirRobot is only one use for this new software –– Saxena’s group also developed an indoor robot that can navigate partially obstructed hallways, sharp corners and narrow flights of stairs.

It is important that MAVs be able to learn to autonomously avoid collisions both in difficult indoor environments and in uncontrolled outdoor ones, since these challenging situations are where they may be needed most for surveillance tasks, Saxena said.

Original Author: Jacqueline Carozza