
Tuesday, November 24, 2015

Ancient humanoids: the automatic maid of Philon of Byzantium

Translated from Ancient Hellenic Technology:

The automatic maid of Philon:
(the first operational robot in history)

   This is a humanoid robot in the form of a maid (at natural size), holding a wine jar in her right hand. When a visitor placed a cup on the palm of her left hand, she would first pour wine into it and then mix in water according to the visitor's wishes.




Description of operation: Inside the maid there are two containers, filled with water and wine respectively. Two tubes start at the bottom of the containers, run through the right hand and end at the tip of the jar. Two air tubes open at the top of the two containers and pass down through their interior and through the stomach. The left hand is connected to the shoulder through a joint, and a spring placed at its centre lifts the arm upwards. Another two tubes start from the same joint and run downwards, passing through and blocking the perforated ends of the air tubes. The joint tubes have two openings at their ends, the one connected to the wine container sitting above the one connected to the water container.

When the cup is placed on the maid's palm, her left hand sinks and the joint tubes rise. The opening of one tube aligns with the air tube of the wine container, air enters the container, and wine flows from the tube in the jar into the cup. When the cup is half full of wine, the hand sinks further under its weight, the air tube for the wine is blocked and the flow stops. At the same time, the opening of the second tube aligns with the air tube of the water container and water starts to flow into the cup, diluting the wine. When the cup is completely full, the hand sinks further still, the air tube to the water container is blocked and the water flow stops.

Moreover, if the cup is removed from the hand at any time, the left hand rises, the joint tubes drop, both air tubes are blocked, a vacuum forms in the containers and the flow of liquids halts. Thus the maid fills our cup with pure wine, or with wine diluted with water at the desired ratio, depending on when we remove the cup from her hand.
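To make the sequence easier to follow, here is a rough Python sketch of the logic described above. The thresholds, flow increments and function names are purely illustrative assumptions; only the ordering (wine first, then water, each stopped by the sinking arm, and everything halted when the cup is removed) follows the description.

```python
# Illustrative sketch of the maid's arm/air-tube logic described above.
# Thresholds and flow rates are invented for demonstration; only the ordering
# (wine first, then water, each stopped by the sinking arm) follows the text.

def dispense(cup_removed_at=None, steps=100):
    """Simulate filling the cup; optionally remove it at a given step."""
    cup_wine, cup_water = 0.0, 0.0

    for step in range(steps):
        if cup_removed_at is not None and step >= cup_removed_at:
            # Cup removed: the spring lifts the arm, both air tubes are blocked,
            # a vacuum forms in the containers and the flow halts.
            break

        # The arm sinks in proportion to the weight already in the cup.
        arm_position = 1.0 + cup_wine + cup_water

        if arm_position < 1.5:
            # First opening aligned with the wine container's air tube: wine flows.
            cup_wine += 0.01
        elif arm_position < 2.0:
            # Wine air tube blocked, second opening aligned with the water air
            # tube: water flows and dilutes the wine.
            cup_water += 0.01
        else:
            # Cup full: both air tubes blocked, flow stops.
            break

    return cup_wine, cup_water


# Removing the cup early yields nearly pure wine; leaving it in place until the
# hand stops sinking yields the fully diluted mix.
print(dispense(cup_removed_at=30))  # mostly wine
print(dispense())                   # wine topped up with water
```

Removing the cup early corresponds to taking undiluted wine; waiting longer corresponds to a weaker mixture, exactly as in Philon's description.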

Monday, November 2, 2015

Next INNOROBO in Paris!

From 24 to 26 May, the INNOROBO exhibition will take place in Paris, France. I will try to keep this post updated with new information related to the event.

Full information below:

http://innorobo.com/en/home/





Thursday, May 28, 2015

Robots that can adapt like animals




Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret, Robots that can adapt like animals, Nature, 2015.


Abstract: Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot ‘think outside the box’ to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot’s prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
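The abstract describes a two-stage idea: build a behaviour-performance map before deployment, then use it to guide a handful of trials after damage. The snippet below is a minimal, hypothetical sketch of that idea, not the authors' actual algorithm (which combines a MAP-Elites-style map with Bayesian optimisation); the behaviour names, performance values and stopping threshold are invented for illustration.

```python
import random

# Hypothetical sketch of map-guided trial-and-error adaptation: a
# behaviour-performance map computed before deployment decides which
# behaviours the damaged robot tries first. Not the paper's implementation.

random.seed(0)

# Prior map: behaviour id -> performance predicted on the intact robot.
behaviour_map = {f"gait_{i}": random.uniform(0.2, 1.0) for i in range(200)}

def test_on_damaged_robot(behaviour):
    """Stand-in for a real trial: damage degrades some behaviours more than others."""
    degradation = random.uniform(0.0, 0.8)
    return behaviour_map[behaviour] * (1.0 - degradation)

def adapt(stop_threshold=0.6, max_trials=20):
    tried = {}
    # Try the behaviours the prior map considers most promising first.
    for behaviour in sorted(behaviour_map, key=behaviour_map.get, reverse=True):
        if len(tried) >= max_trials:
            break
        performance = test_on_damaged_robot(behaviour)
        tried[behaviour] = performance
        if performance >= stop_threshold:
            # Good enough: stop experimenting and use this behaviour.
            return behaviour, performance, len(tried)
    # Trial budget exhausted: fall back to the best behaviour found so far.
    best = max(tried, key=tried.get)
    return best, tried[best], len(tried)

print(adapt())  # (chosen behaviour, measured performance, trials used)
```

The point of the prior map is that the damaged robot only has to run a handful of physical trials instead of searching the whole behaviour space from scratch, which is what makes adaptation in under two minutes plausible.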



Friday, May 15, 2015

Baidu’s Artificial-Intelligence Supercomputer (Minwa) Beats Google at Image Recognition



The race for ever-increasing discriminative power in image classification has been heating up recently.

Two days ago the Chinese search company Baidu announced that it had beaten the previous image-recognition record set by Microsoft Research, with an error rate a marginal 0.36% lower. Microsoft was the first to surpass human recognition performance almost three months ago, in February 2015, and Google currently holds the second-best recognition performance.

All this is made possible through the use of deep convolutional networks and deep learning schemes, namely the construction of neuromorphic recognition architectures in which raw information passes through multiple intermediate layers before producing the desired class-recognition output. It relies on immense computational power (supercomputers) directed at training a system on huge amounts of ground-truth data.
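As a toy illustration of what "multiple intermediate layers" means in practice, here is a small convolutional classifier written in PyTorch. It is a sketch for illustration only; the layer sizes, the 10-class output and the 32x32 input are arbitrary assumptions and bear no relation to Minwa or to any of the systems mentioned above.

```python
import torch
import torch.nn as nn

# Toy deep convolutional classifier: raw pixels pass through stacked
# convolution/pooling layers before a final classification layer.
# Layer sizes are arbitrary and purely illustrative.

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # raw RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)   # intermediate feature layers
        x = x.flatten(1)       # flatten for the final classification layer
        return self.classifier(x)

# A batch of four 32x32 RGB images produces four class-score vectors.
model = TinyConvNet()
scores = model(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])
```

Systems at the scale discussed in the post stack far more layers and train on vastly larger labelled datasets, which is exactly why supercomputer-class hardware enters the picture.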

This news comes as a follow-up to the previous post on human emotion emulation and recognition, where scientists reported that the corresponding system could reach and marginally exceed human performance in recognising emotions!

For those interested, you may have a look at the news coverage of the technological breakthrough here:

and at the arXiv repository for the corresponding scientific documentation on the respective systems:

Sunday, April 19, 2015

Audiovisual emotion emulation: meet the virtual "Zoe" avatar



A marvelous technological achievement, brought about when the social sciences and engineering meet through talented people. Can you imagine this expressive capacity demonstrated by an actual robot? How much empathy or antipathy would you develop in response? What would happen if these automatic emotive expressions could be intelligently emulated in order to induce a certain emotional response in the recipient? The following article is a MUST read...

Cambridge University & Toshiba | Zoe the emotional avatar of the future

Wednesday, February 4, 2015

Wild encounters

And that summarized that day's lesson from the father kangaroo to the child kangaroo on dealing with annoying flying creatures, even if you are not biologically engineered to handle this kind of situation.

It is called improvisation...



Tuesday, January 6, 2015

Are we robots ourselves?

Perhaps this is not the most common question one may ask oneself. The original question that I posed to myself was whether we will eventually be able to build robots as we envision them today. Is such a goal really feasible? Is it realistic?

Below are definitions of the word "Robot", quoted from different online dictionaries:



and so on... The origin of the word "Robot" is from Czech, literally meaning "forced labor".


The above two definitions are particularly interesting, because they both clearly suggest that one can immediately turn oneself into a robot simply by placing oneself at the service of an external authority or power.

From there, the degree to which somebody is more skillful and effective than others at doing things automatically and repeatedly only determines how good or bad a robot he or she is. Being a robot is not so much a matter of the quality of the "forced labor"; that only affects the level of responsibility assigned to the voluntary human robot.

So I ask again: are we robots ourselves? I think the answer is undeniable. Human behavior tends to match the very definition of a robot. But is this truly human or not? Or rather, are we still in the state of robots, not yet having managed to express our pure human nature?

We are trying to build artificial robots, and yet we seem not to have fulfilled many of our expectations. We all, however, become robots whenever we surrender our own will to someone else's and place our skills at another's service.

Given the current state of affairs, it seems to me more likely that we are heading towards a robotization of ourselves. Metaphorically and literally...

As of the time of this writing, the hour is late. Time for me to recharge...