
Thursday, May 28, 2015

Robots that can adapt like animals




Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret, Robots that can adapt like animals, Nature, 2015.


Abstract: Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot ‘think outside the box’ to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage (ref. 11), but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot’s prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
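
The adaptation loop described in the abstract can be summarised in a few lines: the pre-computed behaviour-performance map serves as the prior mean of a Gaussian-process model, and an upper-confidence-bound rule picks the next behaviour to try on the damaged robot until a good-enough compensatory behaviour is found. The sketch below is an illustrative reconstruction, not the authors' code; the toy map, the stand-in evaluation function and all numerical values are made-up placeholders.

```python
# Illustrative sketch (not the authors' implementation) of map-guided
# intelligent trial and error: a behaviour-performance map computed before
# deployment is used as the prior of a Gaussian process, and an upper-
# confidence-bound rule chooses which behaviour to try next on the
# (simulated) damaged robot.
import numpy as np

rng = np.random.default_rng(0)

# "Prior knowledge": a toy behaviour-performance map.
# Each entry: a 2-D behaviour descriptor in [0, 1]^2 and its predicted
# performance for the intact robot (placeholder values).
descriptors = rng.random((200, 2))
map_performance = 1.0 - np.linalg.norm(descriptors - 0.7, axis=1)

def evaluate_on_damaged_robot(descriptor):
    # Hypothetical stand-in: damage shifts which behaviours actually work.
    return 1.0 - np.linalg.norm(descriptor - np.array([0.2, 0.8]))

def kernel(A, B, length=0.3):
    # Squared-exponential kernel between descriptor sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gp_posterior(tested_idx, tested_perf, noise=1e-3):
    """Posterior mean/variance over all map entries; prior mean = the map."""
    X = descriptors[tested_idx]
    K = kernel(X, X) + noise * np.eye(len(tested_idx))
    K_s = kernel(descriptors, X)
    K_inv = np.linalg.inv(K)
    resid = np.array(tested_perf) - map_performance[tested_idx]
    mu = map_performance + K_s @ K_inv @ resid
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mu, np.maximum(var, 1e-9)

# Adaptation loop: try promising behaviours until one is good enough.
tested_idx, tested_perf = [], []
kappa, alpha = 0.05, 0.9           # exploration weight, stopping fraction
for trial in range(20):
    mu, var = (map_performance, np.ones(len(descriptors))) if not tested_idx \
              else gp_posterior(tested_idx, tested_perf)
    if tested_perf and max(tested_perf) >= alpha * mu.max():
        break                       # good-enough compensatory behaviour found
    i = int(np.argmax(mu + kappa * np.sqrt(var)))   # UCB acquisition
    tested_idx.append(i)
    tested_perf.append(evaluate_on_damaged_robot(descriptors[i]))

best = tested_idx[int(np.argmax(tested_perf))]
print(f"trials: {len(tested_idx)}, chosen descriptor: {descriptors[best]}")
```

Because each physical trial updates the model over the whole map, the robot needs only a handful of real-world experiments rather than an exhaustive search, which is what makes sub-two-minute adaptation plausible.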



Friday, May 15, 2015

Baidu’s Artificial-Intelligence Supercomputer (Minwa) Beats Google at Image Recognition



The race for ever-increasing discriminative power in image classification has been heating up in recent months.

Two days ago, the Chinese search company Baidu announced that it had beaten the previous image-recognition record set by Microsoft Research, lowering the error rate by a slim margin of 0.36 percentage points. Microsoft was the first to surpass human recognition performance, almost three months ago in February 2015, and Google currently holds the second-best result.

All of this is made possible by deep convolutional networks and deep learning, i.e., neuromorphic recognition architectures in which the raw input passes through multiple intermediate layers before the desired class label is produced at the output. Training such systems requires immense computational power (supercomputers) applied to huge amounts of ground-truth data.
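
To illustrate the "multiple intermediate layers" idea, the toy sketch below (assuming PyTorch; the library choice and layer sizes are my own placeholders, far smaller than the networks used by Baidu, Google or Microsoft) passes a raw image tensor through two convolution-plus-pooling stages before a final layer outputs class scores.

```python
# Minimal sketch of a layered convolutional classifier: raw pixels flow
# through stacked convolution + nonlinearity + pooling stages, and a final
# linear layer maps the resulting features to class scores.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(                 # intermediate layers
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 56 * 56, num_classes)

    def forward(self, x):                              # x: (batch, 3, 224, 224)
        x = self.features(x)                           # (batch, 64, 56, 56)
        return self.classifier(x.flatten(1))           # class scores

scores = TinyConvNet()(torch.randn(1, 3, 224, 224))
print(scores.shape)   # torch.Size([1, 1000])
```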

This news comes as a follow-up to the previous post on human emotion emulation and recognition, where scientists reported that the corresponding system could reach and marginally exceed human performance at recognizing emotions!

For those interested, you can read the news coverage of the technological breakthrough here:

and find the corresponding scientific documentation on the respective systems in the arXiv repository: