Robots that can detect fabric layers may one day assist with the laundry.

A new study from Carnegie Mellon University’s Robotics Institute could enable robots to feel layers of fabric rather than merely see them, a step towards robots helping with household tasks such as folding laundry. Humans use sight and touch together to pick up a glass or a piece of cloth, an act so mundane that we rarely give it any thought. For robots, however, these tasks are extremely difficult: the information gathered through touch is hard to quantify, and the sense itself has been hard to replicate in robotics until recently.

“Humans look at something, reach for it, and then use touch to make sure we’re in the appropriate position to grasp it,” said David Held, an assistant professor in the School of Computer Science and director of the Robots Perceiving and Doing (R-Pad) Lab. “A lot of what we do with our hands is instinctive. We don’t think about it much, so we don’t appreciate how valuable it is.”

To fold laundry, for example, a robot needs a sensor that mimics the way a human’s fingertips can feel the top layer of a towel or shirt and grasp the layers beneath it. Researchers could train a robot to feel and grasp the top layer of cloth, but without sensing the layers underneath, the robot would only ever pick up the top layer and never fold the material.

“How can we fix this?” Held asked. “Perhaps what we need is tactile sensing.”

The answer was ReSkin, created by researchers at Carnegie Mellon and Meta AI. The open-source tactile-sensing “skin” is made of a thin, elastic polymer embedded with magnetic particles that allow it to measure three-axis tactile signals. In the recent work, the researchers used ReSkin to let the robot feel layers of cloth rather than rely on its vision sensors to see them.

“We can perform tactile sensing by measuring changes in the magnetic fields caused by depressions or movement of the skin,” said Thomas Weng, a PhD student in the R-Pad Lab who worked on the project alongside RI postdoc Daniel Seita and graduate student Sashank Tirumala. “By pinching with the sensor, we can use this tactile sensing to determine how many layers of fabric we’ve picked up.”
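To make the idea concrete, a layer estimate of this kind can be framed as a small classification problem over the change in the sensor’s magnetic readings during a pinch. The sketch below is purely illustrative and not the team’s implementation: the synthetic data, the 15-value signal layout (five magnetometers with three axes each is assumed here) and the nearest-neighbour classifier are all assumptions.

```python
# Minimal sketch: classify how many cloth layers are pinched from ReSkin-style readings.
# Hypothetical interface and synthetic data; not the authors' implementation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

RNG = np.random.default_rng(0)
N_SIGNALS = 15  # assumed layout: 5 magnetometers x 3 axes each

def synthetic_pinch(n_layers: int) -> np.ndarray:
    """Stand-in for (reading during pinch - baseline); real data would come from the sensor.
    Assumes thicker stacks deflect the skin more, shifting the magnetic field further."""
    return 0.4 * n_layers + 0.15 * RNG.standard_normal(N_SIGNALS)

# Build a small labelled dataset of pinches over 0, 1 or 2 layers of cloth.
X = np.array([synthetic_pinch(k) for k in range(3) for _ in range(200)])
y = np.array([k for k in range(3) for _ in range(200)])

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# At grasp time: pinch, take the change in the signal, and predict the layer count.
new_pinch = synthetic_pinch(2)
print("estimated layers grasped:", clf.predict(new_pinch[None, :])[0])
```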

Other research has used tactile sensing to pick up rigid objects, but cloth is deformable: it shifts as soon as it is touched, which makes the task far more challenging. Adjusting the robot’s grasp on the fabric changes both its pose and the sensor readings.

The researchers did not teach the robot how or where to grasp the cloth. Instead, they taught it to recognise how many layers it was grasping: the robot first estimates the number of layers it is holding from the ReSkin readings, then adjusts its grip and tries again. The team evaluated the robot on picking up both one and two layers of cloth, and used fabrics of different textures and colours to demonstrate generalisation beyond the training data. The thinness and flexibility of the ReSkin sensor made it possible to teach the robot to handle something as delicate as layers of fabric.
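The adjust-and-retry procedure described above can be pictured as a simple feedback loop: pinch, estimate the layer count from touch, and nudge the grasp until the estimate matches the target. The sketch below simulates that loop with made-up helpers; the insertion-depth parameter, the step size and the simulated sensing are hypothetical illustrations, not the paper’s method.

```python
# Sketch of an adjust-and-retry grasping loop driven by a tactile layer estimate.
# The "sensor" here is simulated; a real system would pinch and classify ReSkin readings.
import random

def simulated_pinch(insertion_depth: float) -> int:
    """Stand-in for pinching and running a layer classifier: deeper insertion under
    the stack tends to capture more layers (assumption for illustration only)."""
    return max(0, min(2, round(insertion_depth / 0.004 + random.uniform(-0.3, 0.3))))

def grasp_n_layers(target_layers: int, max_attempts: int = 10, step: float = 0.002) -> bool:
    """Pinch, estimate how many layers are grasped, and adjust the grasp until the
    estimate matches the target or the attempt budget runs out."""
    insertion_depth = 0.0  # how far the gripper slides under the cloth stack (metres)
    for attempt in range(max_attempts):
        layers = simulated_pinch(insertion_depth)  # tactile estimate of layers in the grasp
        print(f"attempt {attempt}: depth={insertion_depth:.3f} m, estimated layers={layers}")
        if layers == target_layers:
            return True  # the desired number of layers has been singulated
        # Too few layers: slide deeper under the stack; too many: back off.
        insertion_depth += step if layers < target_layers else -step
    return False

print("success:", grasp_n_layers(target_layers=1))
```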

“Because the profile of this sensor is so small, we were able to carry out the very precise task of placing it between layers of fabric,” Weng said. “We were able to use it to do tasks that were not possible before.”

However, much more research is needed before a robot can take over the laundry basket. It starts with smoothing out a crumpled piece of cloth, then selecting the correct number of layers to fold and folding them in the right direction.

“It’s really an exploration of what we can do with this new sensor,” Weng said. “We’re investigating how to get robots to feel soft objects with this magnetic skin, as well as simple techniques for manipulating cloth that robots will need if they are ever to do our laundry.”

The team’s paper, “Learning to Singulate Layers of Cloth Using Tactile Feedback,” will be presented at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), which will be held in Kyoto, Japan, from October 23 to 27. It was also named Best Paper at the conference’s 2022 RoMaDO-SI workshop.
