For the first time, a soft robotic arm is enabled to understand its configuration in 3D space

For the first time, MIT researchers have succeeded in designing skin sensors that give soft robots an awareness of their bodies and environment.

This finding is important because granting soft robots autonomous control is a complex task: their bodies can move in a practically infinite number of directions at any given moment.

"Sensorized" skin

In real-world applications, it is not practical for soft robots to rely on the same setup as conventional robots: large, multi-camera motion-capture systems that give the robot a reference for its movement and position in 3D.

A system of soft sensors covering the robot's body to provide "proprioception" (awareness of the movement and position of its own body) is much more effective, as these MIT researchers have shown in a study published in the journal IEEE Robotics and Automation Letters.

That feedback comes from a new deep-learning model that filters out noise and captures the clear signals needed to estimate the robot's 3D configuration. This should make it easier to build artificial limbs that can handle and manipulate objects in their environment more deftly.
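As a rough illustration of the idea, the sketch below maps a vector of noisy skin-sensor readings through a small feedforward network to an estimate of the arm's 3D configuration. It is not MIT's actual model: the sensor count, output layout (3D coordinates of a few points along the arm), and randomly initialized weights are all assumptions standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 12  # assumed number of skin sensors on the arm
N_CONFIG = 9    # assumed output: 3D coordinates of 3 points along the arm
HIDDEN = 32

# Randomly initialized weights stand in for a trained model.
W1 = rng.normal(0, 0.1, (N_SENSORS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_CONFIG))
b2 = np.zeros(N_CONFIG)

def estimate_configuration(sensor_readings):
    """Forward pass: noisy sensor vector -> 3D configuration estimate."""
    h = np.tanh(sensor_readings @ W1 + b1)  # hidden layer combines raw signals
    return h @ W2 + b2                      # linear readout of the 3D pose

reading = rng.normal(size=N_SENSORS)  # one noisy sensor snapshot
config = estimate_configuration(reading)
print(config.shape)  # shape of the estimated configuration vector
```

In a real system the weights would be learned from data, and the output could equally be joint angles or a curvature parameterization rather than point coordinates.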

According to Ryan Truby, first author of the study and a researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL):

We are sensorizing soft robots to get feedback for control from the sensors, not vision systems, using a very easy and rapid method for fabrication. We want these squishy robotic arms, for example, to orient and control themselves automatically, to pick up things and interact with the world. This is a first step toward that type of more sophisticated automated control.

In the experiments, the researchers made the arm swing and extend into random configurations for about an hour and a half, while a traditional motion-capture system recorded the ground truth. During training, the model analyzed its sensor data to predict a configuration and compared its predictions with the ground-truth data being collected simultaneously.
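The training setup described above can be sketched in miniature: generate ground-truth configurations (as a motion-capture system would record them), simulate noisy sensor readings from them, and fit a model so its predictions match the ground truth. Everything here is synthetic and hedged: the linear sensor model, the noise level, and the least-squares fit (a stand-in for the paper's deep network) are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_sensors, n_config = 500, 12, 9

# Assumed (unknown-to-the-model) relation between body pose and sensor response.
true_map = rng.normal(size=(n_config, n_sensors))

# "Motion capture" ground truth: random configurations, as in the experiment.
ground_truth = rng.uniform(-1, 1, (n_samples, n_config))

# Simulated skin-sensor readings: pose pushed through the map, plus noise.
sensors = ground_truth @ true_map + 0.05 * rng.normal(size=(n_samples, n_sensors))

# Fit a linear predictor by least squares (stand-in for the deep model):
# it learns to map sensor readings back to the 3D configuration.
W, *_ = np.linalg.lstsq(sensors, ground_truth, rcond=None)

# Compare predictions against the ground truth, as in training.
pred = sensors @ W
rmse = np.sqrt(np.mean((pred - ground_truth) ** 2))
print(f"RMSE vs. motion-capture ground truth: {rmse:.3f}")
```

The same compare-against-ground-truth loop drives a neural network's training: the error between prediction and motion-capture data is what the optimizer minimizes.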

Image | Ryan L. Truby, MIT CSAIL