
Researchers at ETH Zurich have developed a new control technique that lets a legged robot, called ANYmal, traverse rugged terrain quickly and confidently. With the help of machine learning, the robot was able for the first time to combine visual assessment of its surroundings with its sense of touch. Steep slopes on slippery ground, steps, rockslides, and a forest path full of roots: the route up Mount Etzel, which rises 1,098 meters above the southern end of Lake Zurich, is strewn with obstacles. Nevertheless, the four-legged ANYmal robot from ETH Zurich completed the 120 vertical meters of the climb in just 31 minutes.

That is four minutes faster than the estimated walking time for humans, and the robot managed it without any falls or missteps. The route became feasible thanks to a new robot control technology that ETH Zurich researchers recently published.

The robot has learned to combine visual assessment of the environment with a sense of touch based on direct contact through its feet. This allows it to move across rough terrain faster, more efficiently, and, most importantly, more reliably. In the long term, ANYmal could be deployed in areas that are unsafe for humans or too challenging for other robots to traverse.

A clear view of the environment

Humans and animals automatically combine visual perception of their surroundings with the proprioceptive sense of their legs and arms to navigate rugged terrain. This lets them handle slippery or loose ground with ease and move quickly even when visibility is poor. Until now, legged robots have only been able to do this to a very limited extent.

The problem is that the data about the immediate environment recorded by laser sensors and cameras can be incomplete and ambiguous. For example, tall grass, shallow puddles, or snow may appear as insurmountable obstacles or be partially invisible, even though the robot could actually cross them. In addition, the robot's view can be obscured in the field by difficult lighting conditions, dust, or fog. For this reason, robots such as ANYmal must learn when to trust their visual sense of the environment and move quickly, and when to proceed cautiously in small steps.
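One way to picture this trade-off is as a weighted blend of the two information sources. The sketch below is a simplified illustration rather than the controller described by the ETH researchers; the function name, confidence value, and numbers are hypothetical. It shows how a terrain height estimate from cameras or LiDAR might be combined with one inferred from foot contacts, depending on how much the visual reading is trusted.

```python
import numpy as np

def fuse_terrain_estimate(extero_height, proprio_height, confidence):
    """Blend an exteroceptive height estimate with a proprioceptive one.

    `confidence` in [0, 1] expresses how much the camera/LiDAR reading is
    trusted; 0 falls back entirely to the estimate derived from leg contacts.
    """
    confidence = np.clip(confidence, 0.0, 1.0)
    return confidence * extero_height + (1.0 - confidence) * proprio_height

# Example: tall grass makes the LiDAR report a 0.40 m obstacle, but the feet
# feel solid ground at 0.05 m; with low visual confidence the fused estimate
# stays close to the proprioceptive value.
fused = fuse_terrain_estimate(np.array([0.40]), np.array([0.05]), confidence=0.2)
print(fused)  # ~0.12 m
```

In practice such a confidence value would itself have to be estimated from the data, which is exactly what the learned controller described below does implicitly.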

Virtual boot camp

With a new controller based on a neural network, the four-legged ANYmal robot can combine visual perception of its surroundings with its sense of touch. Before the robot could test its abilities in the real world, the researchers trained it in a virtual boot camp, where the system was confronted with countless obstacles and sources of error. Through this, the network learned the best way for the robot to overcome obstacles, as well as when it can rely on environmental data and when it is better to ignore that data.
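As a rough illustration of what such a virtual boot camp involves, the snippet below corrupts a simulated terrain height map with noise, offsets, and missing readings, so that a learning controller is forced to cope with unreliable vision. The noise model, function name, and parameters are assumptions made for illustration, not the ones used in the ETH training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_height_map(height_map, p_dropout=0.1, offset_scale=0.05, noise_scale=0.02):
    """Simulate unreliable exteroception: per-cell noise, a whole-map bias,
    and dropped readings, as might arise from grass, snow, dust, or fog."""
    noisy = height_map + rng.normal(0.0, noise_scale, height_map.shape)
    noisy += rng.normal(0.0, offset_scale)          # map-wide bias (e.g. miscalibration)
    mask = rng.random(height_map.shape) < p_dropout
    noisy[mask] = np.nan                            # missing sensor returns
    return noisy

terrain = np.zeros((16, 16))            # flat ground in simulation
observed = corrupt_height_map(terrain)  # what the controller actually "sees"
```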

Thanks to this training, the robot can master the most challenging natural terrain without ever having seen it before. The approach also works when sensor information about the environment is unclear or ambiguous: in that case, the robot acts cautiously and relies on its proprioception instead. This lets the robot combine the best of both worlds: the speed and efficiency of exteroceptive sensing and the reliability of proprioceptive sensing.
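A minimal sketch of how such a combination could look inside a neural-network controller is shown below, using a learned gate that scales down the exteroceptive input whenever it appears unreliable. The layer sizes, input dimensions, and gating scheme are illustrative assumptions, not the published ANYmal architecture.

```python
import torch
import torch.nn as nn

class GatedBeliefEncoder(nn.Module):
    """Recurrent encoder that learns how much to trust noisy exteroceptive
    input (e.g. a sampled height map) versus proprioceptive state
    (joint positions, velocities, foot contacts)."""

    def __init__(self, proprio_dim=48, extero_dim=208, hidden_dim=128):
        super().__init__()
        self.proprio_enc = nn.Linear(proprio_dim, hidden_dim)
        self.extero_enc = nn.Linear(extero_dim, hidden_dim)
        self.gate = nn.Linear(hidden_dim * 2, hidden_dim)   # learned trust in vision
        self.rnn = nn.GRUCell(hidden_dim * 2, hidden_dim)

    def forward(self, proprio, extero, hidden):
        p = torch.tanh(self.proprio_enc(proprio))
        e = torch.tanh(self.extero_enc(extero))
        g = torch.sigmoid(self.gate(torch.cat([p, e], dim=-1)))
        fused = torch.cat([p, g * e], dim=-1)                # gate attenuates unreliable vision
        return self.rnn(fused, hidden)

# Because the exteroceptive input is corrupted during simulated training,
# the gate learns to down-weight it whenever it disagrees with what the legs feel.
enc = GatedBeliefEncoder()
belief = enc(torch.zeros(1, 48), torch.randn(1, 208), torch.zeros(1, 128))
```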

Operation in hazardous environments

Whether in the aftermath of earthquakes, after nuclear disasters, or during forest fires, robots like ANYmal can be used above all where it is too dangerous for humans to work and where the challenges are more than other robots can handle.