Cornell University scientists have developed a way for robots to identify physical interactions by analyzing only a user's shadow.
Their ShadowSense system uses an off-the-shelf USB camera to capture the shadows that hand gestures cast on a robot's skin. Classification algorithms then analyze the movements to infer the user's specific interaction.
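The article doesn't detail the researchers' model, so the following is only a minimal sketch of what such a pipeline could look like. The small CNN, the 64×64 input size, and the Otsu threshold used to isolate the shadow are all illustrative assumptions, not the paper's method:

```python
# Hypothetical ShadowSense-style pipeline: architecture, image size, and
# preprocessing below are illustrative assumptions, not the paper's setup.
import cv2
import torch
import torch.nn as nn

GESTURES = ["palm touch", "punch", "two-hand touch", "hug", "point", "no touch"]

class ShadowClassifier(nn.Module):
    """Small CNN over single-channel shadow images (assumed 64x64)."""
    def __init__(self, num_classes=len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

def preprocess(frame_bgr):
    """Isolate the dark shadow region from a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))
    # Shadows are darker than the lit skin; invert-threshold so the
    # shadow becomes the bright foreground.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    tensor = torch.from_numpy(mask).float().div(255.0)
    return tensor.unsqueeze(0).unsqueeze(0)  # shape: (1, 1, 64, 64)

model = ShadowClassifier().eval()  # untrained here; weights would come from training
cap = cv2.VideoCapture(0)          # off-the-shelf USB camera
ok, frame = cap.read()
if ok:
    with torch.no_grad():
        logits = model(preprocess(frame))
    print("predicted gesture:", GESTURES[logits.argmax().item()])
cap.release()
```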
Lead author Guy Hoffman said the method offers a natural way to communicate with robots without relying on large and expensive sensor arrays:
Touch is such an important mode of communication for most organisms, but it has been virtually absent from human-robot interaction. One of the reasons is that full-body touch sensing previously required a large number of sensors, and was therefore not practical to implement. This research offers a low-cost alternative.
The researchers tested the system on an inflatable robot with a camera beneath its skin.
They trained and tested the classification algorithms on shadow images of six gestures: palm touch, punch, two-hand touch, hug, point, and no touch.
The system successfully distinguished between the gestures with 87.5 – 96% accuracy, depending on the lighting.
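The article doesn't describe the training procedure, but a minimal sketch of how such a classifier could be trained follows. The folder layout, hyperparameters, and the `ShadowClassifier` reference (from the sketch above) are assumptions, not the study's actual setup:

```python
# Hypothetical training sketch, assuming labeled shadow images stored in
# one folder per gesture (e.g. data/train/palm_touch/*.png).
import torch
from torch import nn
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(),        # single-channel input, matching the model
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = ShadowClassifier()  # defined in the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")
```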

The researchers envision mobile guide robots that use the technology to respond to different gestures, such as turning to face a person when it detects a poke, and moving away when it senses a tap on its back.
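To make the guide-robot idea concrete, here is a hypothetical mapping from classified gestures to robot actions; the action names and the reaction choices are invented for illustration and are not from the study:

```python
# Hypothetical gesture-to-action dispatch for a guide robot; the action
# names below are illustrative, not the study's control code.
def react(gesture: str) -> str:
    actions = {
        "punch": "turn_to_face_user",      # e.g. a poke gets the robot's attention
        "palm touch": "pause_and_wait",
        "two-hand touch": "move_away",     # e.g. a firm tap on the back
        "hug": "hold_position",
        "point": "head_in_pointed_direction",
        "no touch": "continue_route",
    }
    return actions.get(gesture, "continue_route")

print(react("punch"))  # -> turn_to_face_user
```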
The technology could also add interactive touchscreens to inflatable robots and make home assistants more privacy-friendly.
“If the robot can only see you in the form of your shadow, it can detect what you’re doing without capturing high-fidelity images of your appearance,” Hoffman said. “That gives you a physical filter and protection, and provides psychological comfort.”
You can read the full study paper here.
Published on February 11, 2021 – 19:37 UTC