
Tuesday, December 24, 2013

Cubli: a fancy self-balancing cube

The Cubli is a 15 × 15 × 15 cm cube that can jump up and balance on its corner. Reaction wheels mounted on three faces of the cube rotate at high angular velocities and then brake suddenly, causing the Cubli to jump up. Once the Cubli has almost reached the corner stand-up position, controlled motor torques are applied to make it balance on its corner. In addition to balancing, the motor torques can also be used to achieve a controlled fall such that the Cubli can be commanded to fall in any arbitrary direction. Combining these three abilities -- jumping up, balancing, and controlled falling -- the Cubli is able to 'walk'.
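To get a feeling for the balancing part, here is a minimal sketch of the idea on a planar reaction-wheel pendulum, a common 1-D stand-in for one axis of a cube balancing on an edge. All parameters and gains below are illustrative assumptions, not the Cubli's actual values, and the real system of course uses full 3D dynamics, state estimation, and more sophisticated control:

import math

# Planar reaction-wheel pendulum: hypothetical, illustrative parameters.
m, l, g = 0.5, 0.085, 9.81   # body mass [kg], pivot-to-COM distance [m], gravity [m/s^2]
I_b = 0.01                   # body inertia about the pivot [kg m^2]
kp, kd = 1.5, 0.08           # PD gains; kp must exceed m*g*l (~0.42 N m/rad) to stabilize

theta, theta_dot = 0.15, 0.0 # tilt from upright [rad] and tilt rate [rad/s]
dt = 0.001                   # integration step [s]

for _ in range(5000):        # simulate 5 seconds
    # Motor torque applied to the wheel; the reaction torque -u acts on the body.
    u = kp * theta + kd * theta_dot
    # Body dynamics about the pivot: I_b * theta_ddot = m*g*l*sin(theta) - u
    theta_ddot = (m * g * l * math.sin(theta) - u) / I_b
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt

print("tilt after 5 s: %.5f rad" % theta)  # converges near zero with stable gains

The essential point is that the wheel torque reacts on the body, so spinning the wheel one way pushes the body the other way; the same actuation principle, applied impulsively by braking the wheel, produces the jump-up.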



More details can be found at the project's homepage, here:

Friday, December 20, 2013

First-ever conversation between a human and a robot in space

The first documented conversation between a human and a robot in space has just taken place aboard the International Space Station (ISS), between Japanese commander Koichi Wakata and the robot Kirobo.

Kirobo is a small humanoid robot designed to serve as a companion for astronauts, who normally spend several months in space before returning to Earth. Such missions entail extended periods of isolation from ordinary human interaction, something that is expected to become even more pronounced as plans are made for longer-term space missions (for example, within the Mars One project).

The following video shows parts of the conversation between Kirobo and Wakata, documented aboard the ISS in recent days.


Tuesday, December 17, 2013

Merry Christmas: the Robots' point of view

The 2013 edition of the annual "Merry Christmas" video of the ASL (Autonomous Systems Laboratory) at ETH Zurich has just been released. This time it features more robots than in previous years and an altogether more elaborate scenario.

The Masterpiece, directly below:

Saturday, December 14, 2013

Mars One launches crowd-funding project


Mars One is an international initiative aiming to send the first humans to the surface of the red planet and eventually build a permanent human settlement. It consists of a series of missions, starting with a robotic vehicle in 2018 that will deploy the first modules and demonstrate the necessary technologies before humans are actually sent, beginning in 2024.

Reports say that more than 200,000 people from all around the world have applied for an opportunity to be among those stepping onto the surface of Mars for the first time in human history. Alongside this record, Mars One claims to be the first space project open to private funding. In fact, a crowd-funding campaign has just been launched at Indiegogo, where anyone can become a contributor and help write history.



Obviously the target set by the crowd-funding campaign ($400,000) is not meant to cover the total budget of such an extraordinary project, but it gives people the opportunity to be actively involved in making it happen. After the earlier successful missions of Opportunity and Curiosity, Mars One could be the next leap in space exploration, pushing robotics technology to unprecedented frontiers.

Wednesday, December 11, 2013

A custom-made robotic gripper

We have all heard about dexterous manipulation and, more generally, the challenge of robotic grasping when it comes to everyday objects that are fragile and require delicate handling. A hard problem, one might think, that can only be tackled with sophisticated computational methods and advanced robotic grippers featuring touch sensors, force-feedback control, and so on.

Well, you may want to think again after watching the following clever design for a custom-made robotic gripper, built from a balloon, coffee grounds, plastic tubing, and an air pump. A truly inspiring idea, showing how simple solutions can be of great utility.


Monday, December 9, 2013

Failsafe hovering of a quadrocopter after losing propellers

The following video shows a failsafe routine that is executed at the moment a micro aerial vehicle loses one or more propellers. Obviously, while the ongoing mission objectives can no longer be pursued, one still wishes to preserve safety and prevent the copter from crashing.

The algorithm, developed by the Institute for Dynamic Systems and Control at ETH Zurich, allows the vehicle to keep hovering stably and eventually ensures a safer landing on the ground.


A promising module for increasing safety in autonomous flight, although it could prove even more useful if the failsafe routine also handled a partially broken or damaged propeller during flight, which is more likely to occur than completely losing a propeller, as shown in the video.

Thursday, December 5, 2013

Robots of the IREX 2013 Exhibition in Tokyo

The International Robot Exhibition (IREX) 2013 is taking place in Tokyo this year, showcasing the world's most advanced robots to date. The following video, featured on the IEEE Spectrum news site, shows the exhibits operating live in front of the camera.


A full article is available at this link.

Kinect or AsusXtionPro does not (re)start under ROS: Solution

Have you noticed that the Kinect or AsusXtionPro under ROS appears to behave stochastically? In other words, do you find that sometimes you are able to launch the sensor driver and sometimes you cannot, although you have not changed anything in your system environment? Then the following hint may come in very handy.

Solution

Here's a quick thing you can try to see if you have better luck afterwards. The following command should cause no new problems whatsoever, so try this in your terminal:

kill $(ps aux | grep '[X]n' | awk '{print $2}')

This command will kill any running process whose name includes "Xn"; in our case it targets XnSensorServer and XnVSceneServer, which may already be running and prevent ROS from taking control of the sensor. (The brackets in '[X]n' keep grep from matching its own process, so kill does not also receive the grep PID.)
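If you prefer something narrower than a pattern match on "Xn", here is a small Python equivalent that kills only the two known server processes. It assumes pgrep is available on your system; this is a sketch, not part of the original recipe:

import os
import signal
import subprocess

# Terminate only the two OpenNI server processes that are known to
# block ROS from opening the sensor.
for name in ("XnSensorServer", "XnVSceneServer"):
    try:
        pids = subprocess.check_output(["pgrep", "-f", name]).decode().split()
    except subprocess.CalledProcessError:
        continue  # pgrep exits with a nonzero status when nothing matches
    for pid in pids:
        os.kill(int(pid), signal.SIGTERM)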

If you want to check whether these two processes are already running, you can try this:

ps aux | grep '[X]n'

For example, in my terminal the output looked something like this:


If NEITHER of these two processes appears in your output while you are trying to use the sensor through ROS, then your problem is different and the solution proposed here will not apply to your case.

But if you DO see one or both of these processes already running, then killing them with the command given at the beginning ensures that you will NOT get the following error message through ROS:

"....enumerating image nodes failed. Reason: One or more of the following nodes could not be enumerated:

Image: PrimeSense/SensorV2/(your_sensor_version): Failed to set USB interface!..."

Now try to start the sensor through the ROS wrapper (roslaunch openni_launch openni.launch) and see if you have better luck.


Note: I arrived at this solution by reading this post:

Tuesday, December 3, 2013

Air drones deployed in the Philippines for search and rescue

Air drones are being deployed in the Philippines after the disastrous passage of Typhoon Haiyan, assisting rescuers and first responders in assessing the situation from viewpoints that would otherwise be difficult to reach. The air drones provide an augmented perceptual field of view, as they are able to inspect areas from high altitude and map important details of the environment, such as collapsed buildings and the locations of people. Below is a video that demonstrates their deployment at various sites, together with footage from their recordings:



Below you can watch the video of the human-robot mission that was dispatched to the earthquake-hit area of Emilia-Romagna in Italy, back in the summer of 2012, as part of the EU project NIFTi (www.nifti.eu).

Monday, December 2, 2013

Doctors "beamed up" by robots

Robotic platforms are gradually (but steadily) entering human environments in various contexts. 

One of the envisioned applications is so-called telepresence, in other words, providing a physical body in a remote location that can be teleoperated so as to create the feeling of being situated in that remote environment.

As an example, robots are beginning to be used in hospitals as a means for a doctor to visit patients remotely. Here is a video that illustrates the general idea.




The full article can be found here:
http://www.euronews.com/2013/11/20/doctors-beamed-up-by-robots/



Saturday, November 30, 2013

Dog-Robot Interactions

I recently came upon an interesting article describing experiments on how dogs behave when encountering robots.


Quoting from the article's abstract:

"...Dogs spent more time staying near the robot experimenter as compared to the human experimenter, with this difference being even more pronounced when the robot behaved socially. Similarly, dogs spent more time gazing at the head of the robot experimenter when the situation was social..."


The full article can be found here:


and THEN... I remembered the first time I witnessed an (accidental) encounter between a robot and a dog, sometime around 2011-2012. Below you can watch the corresponding video :)




Friday, November 29, 2013

Using a tongue piercing to control a robotic wheelchair

Researchers from U.S. universities, in collaboration with medical institutes, have recently designed and tested a somewhat "unconventional" (and maybe revolutionary) human-computer interface based on the tongue.

In detail, their aim is to allow people with mobility disabilities to control their wheelchair using a tongue implant (piercing). The developed system is called TDS, which stands for Tongue Drive System.

See the following video:




The full research article can be found here:
http://dx.doi.org/10.1126/scitranslmed.3006296

Thursday, November 28, 2013

TED Talk by Rodney Brooks: Why we will rely on robots

This is a recent talk given by Prof. Rodney Brooks on why robots are becoming part of our present and future, from industry to our everyday life.


Tuesday, November 26, 2013

inFORM display by MIT

inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. inFORM can also interact with the physical world around it, for example moving objects on the table's surface. Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance. inFORM is a step toward the group's vision of Radical Atoms.


Monday, November 25, 2013

Simulating LIDAR sensors





Prologue

Simulating LIDAR sensors refers to reproducing, in a computer program, the acquisition process of laser range sensors.

These sensors operate by "active" perception, in other words, by effectively touching the surfaces at which they are directed. The touching is performed implicitly, by casting rays of photons (or pulses of sound) and detecting the collisions that occur along their direction.

Active sensors are used extensively in robotics, as they provide measurements that are invariant to lighting conditions, as well as denser depth acquisitions compared to passive (camera) sensors.

Available resources


Generally, the proper way to simulate the behavior of an active sensor in software is to employ "ray tracing", the technique well known in Computer Graphics for realistically rendering 3D scenes.

In this post, I would like to share some resources that I found on the internet which provide this functionality, as well as some related papers that employ simulation of active sensing. I hope it will be constructive and useful for all those out there who wish to play with these concepts ;).
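Before the resource list, here is a minimal sketch of the idea: a 2D "LIDAR" that casts rays from the sensor origin and keeps the nearest intersection per beam. The scene model (circles), beam count, and maximum range are all illustrative assumptions:

import math

def ray_circle(ox, oy, dx, dy, cx, cy, r):
    # Distance along the ray (origin (ox,oy), unit direction (dx,dy)) to the
    # circle centred at (cx,cy) with radius r, or None if the ray misses it.
    fx, fy = ox - cx, oy - cy
    b = fx * dx + fy * dy
    c = fx * fx + fy * fy - r * r
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)          # nearest intersection along the ray
    return t if t > 0.0 else None     # ignore hits behind the sensor

def lidar_scan(ox, oy, obstacles, n_beams=360, max_range=30.0):
    # One simulated 2D sweep: cast n_beams rays over 360 degrees and keep
    # the nearest hit per beam; beams that hit nothing report max_range.
    ranges = []
    for i in range(n_beams):
        a = 2.0 * math.pi * i / n_beams
        dx, dy = math.cos(a), math.sin(a)
        best = max_range
        for (cx, cy, r) in obstacles:
            t = ray_circle(ox, oy, dx, dy, cx, cy, r)
            if t is not None and t < best:
                best = t
        ranges.append(best)
    return ranges

# Sensor at the origin, two circular obstacles; beam 0 points along +x.
print(lidar_scan(0.0, 0.0, [(5.0, 0.0, 1.0), (0.0, -3.0, 0.5)])[0])  # -> 4.0

Real simulators add beam divergence, noise models, and mesh-based scenes, which is exactly what the tools below provide.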

Software

  • Blensor, http://www.blensor.org/ -- "Blender Sensor Simulation" -- basically provides a Blender add-on that can simulate various standard laser sensors and control various parameters. In my opinion, just superb!

Papers


Tutorials


I will try to keep this list updated as I come upon relevant material and resources. Any suggestions will be greatly appreciated :).




Friday, November 15, 2013

Article submission to EES (Elsevier Editorial System)

Having submitted articles to several different Elsevier journals through the EES site, I have found that submitting a LaTeX-based manuscript can be quite burdensome.

In detail, I always had problems compiling and building the entire manuscript, including figures and bibliography. I finally managed to compile and build a prospective article successfully, so I would like to share the recipe for success :). In detail:

* Make sure that you can compile and build your manuscript locally on your computer without errors. Although you may still be able to produce a readable pdf locally despite some errors, EES will not produce a pdf unless the compilation phase is entirely error-free.

* Following the previous rule, at least in my case, meant that I had to use .EPS figures.

* Uploading a separate bibliography file (.bib) is not sufficient for EES. You additionally have to upload the corresponding .bbl file, which is produced when you build your article locally. In the end, both the .bib file and the .bbl file should be designated as "Manuscript" files when uploading.

* If you are only using the style files that come along with the elsarticle package, avoid uploading them again in your submission.
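For reference, here is a minimal elsarticle skeleton that follows the recipe above; the file names (figure1.eps, refs.bib) and the title are hypothetical placeholders:

\documentclass[preprint,12pt]{elsarticle}
\usepackage{graphicx}  % .EPS figures, per the second point above

\begin{document}
\begin{frontmatter}
\title{A Prospective Article}
\author{Your Name}
\begin{abstract}Abstract text.\end{abstract}
\end{frontmatter}

Body text with a citation \cite{somekey}.

\begin{figure}
  \centering
  \includegraphics[width=.7\textwidth]{figure1.eps}
  \caption{A figure included as .EPS.}
\end{figure}

% Build locally so that the .bbl file is generated; then upload refs.bib
% AND the .bbl, both designated as "Manuscript" files.
\bibliographystyle{elsarticle-num}
\bibliography{refs}
\end{document}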


That should be all :).

Good luck.

Tuesday, October 15, 2013

Drone-assisted archeology (euronews)

... Archaeologist Olivier Feihl is making a 3D map of the dig using a drone equipped with a camera.

“It takes a photo every 1.5 or 2 metres to make sure the whole surface is covered, so that the photos will overlap, enabling us to measure the archaeological dig in 3D,” he tells euronews.
It’s a true revolution for archaeologists and helps them save a lot of time.
“Before we had this kind of technology, everything was done by hand,” says Sebastien Freudiger, who is also working on the project. “Each wall was drawn by hand, each layer was drawn by hand. Now, this new technology enables us to do all of that on a computer.”
It takes the drone just ten minutes to take all the snaps needed. They are then processed ...

A robot that flies like a bird

Plenty of robots can fly -- but none can fly like a real bird. That is, until Markus Fischer and his team at Festo built SmartBird, a large, lightweight robot, modeled on a seagull, that flies by flapping its wings. A soaring demo fresh from TEDGlobal 2011.

Tuesday, July 30, 2013

How Productivity and Impact Differ Across Computer Science Subareas

A recent article in "Communications of the ACM" sketches the impact of different subareas of Computer Science on the community, with statistics over recent years.



Some computer science researchers believe different subareas within CS follow different publishing practices, so applying a single production criterion would be unfair to some areas. It is reasonable to believe the subarea of, say, theory follows different publishing practices from...


Full article

Thursday, June 27, 2013

Prosthetic Arm by DARPA

A team of researchers at the Rehabilitation Institute of Chicago (RIC) demonstrated a type of peripheral interface called targeted muscle re-innervation (TMR). By rewiring nerves from amputated limbs, new interfaces allow for prosthetic control with existing muscles. Former Army Staff Sgt. Glen Lehman, injured in Iraq, recently demonstrated improved TMR technology. In this video, Lehman demonstrates simultaneous joint control of a prosthetic arm made possible by support from the RE-NET program.


Full story is available here.

Monday, June 17, 2013

Terrain Traversability Analysis Methods for Unmanned Ground Vehicles: A Survey

Motion planning for unmanned ground vehicles (UGV) constitutes a domain of research where several disciplines meet, ranging from artificial intelligence and machine learning to robot perception and computer vision. In view of the plurality of related applications such as planetary exploration, search and rescue, agriculture, mining and off road exploration, the aim of the present survey is to review the field of 3D terrain traversability analysis that is employed at a preceding stage as a means to effectively and efficiently guide the task of motion planning. We identify that in the epicenter of all related methodologies, 3D terrain information is used which is acquired from LIDAR, stereo range data, color or other sensory data and occasionally combined with static or dynamic vehicle models expressing the interaction of the vehicle with the terrain. By taxonomizing the various directions that have been explored in terrain perception and analysis, this review takes a step toward agglomerating the dispersed contributions from individual domains by elaborating on a number of key similarities as well as differences, in order to stimulate research in addressing the open challenges and inspire future developments.

http://dx.doi.org/10.1016/j.engappai.2013.01.006



Thursday, June 13, 2013

Nanorobots and their potential in medicine

A stimulating TED talk by Dr. Ido Bachelet on how nanorobots could revolutionize medicine (among other things...)