As materials science and semiconductor technology advance, electronic textiles, which combine the two, have become a hot research field and are widely regarded as one of the most promising directions for the wearable-device and textile industries.

Back in 2014, Google’s ATAP team partnered with Levi’s on “Project Jacquard,” and the following year unveiled a smart jacket woven with conductive yarn that, paired with a small tag device, lets the wearer control audio playback and basic phone functions from the clothing itself.

The project has now expanded to include backpacks and sneakers, while other teams at Google are working on similar efforts.

This time, Google’s AI team has turned its attention to smart cords.

For example, the seemingly ordinary cord below hides more than meets the eye:

Combined with gesture recognition, it can be used to control a phone:

The researchers want to build on the cords already found in many garments and electronics to explore more intuitive forms of human-computer interaction, such as gently squeezing a headphone cable to play or pause a song, or twisting a hoodie drawstring to scroll a web page.

So far, Google’s AI team has built three prototypes: a headphone cable, a hoodie drawstring, and an interactive power cord. All three use a new helical sensing matrix to recognize touch gestures and support six gesture categories: twisting, flicking, sliding, squeezing, grasping, and tapping. Combined with parameters such as force, speed, and direction, each category can be mapped to a wide variety of actions.
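
To illustrate how such a gesture vocabulary might be exposed to an application, here is a minimal Python sketch. The event fields, action names, and mappings are assumptions for illustration, not details from Google’s work:

```python
from dataclasses import dataclass

# The six gesture categories described above.
GESTURES = {"twist", "flick", "slide", "squeeze", "grasp", "tap"}

@dataclass
class GestureEvent:
    """One recognized gesture, enriched with continuous parameters."""
    category: str   # one of GESTURES
    force: float    # normalized 0..1 (hypothetical scale)
    speed: float    # normalized 0..1 (hypothetical scale)
    direction: int  # +1 or -1, e.g. clockwise vs. counterclockwise

def dispatch(event: GestureEvent) -> str:
    """Map a gesture plus its parameters to a media action (illustrative)."""
    if event.category == "twist":
        return "volume_up" if event.direction > 0 else "volume_down"
    if event.category == "flick":
        return "next_track" if event.direction > 0 else "previous_track"
    if event.category == "tap":
        return "play_pause"
    return "ignore"

print(dispatch(GestureEvent("twist", force=0.4, speed=0.7, direction=+1)))
# -> volume_up
```

The point of the sketch is that the six discrete categories, multiplied by continuous parameters like force and direction, give a much richer action space than a single button.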

This work extends research the team began in 2018. The new study, titled “E-Textile Microinteractions,” was published as a paper at ACM CHI 2020, a top academic conference in the field of human-computer interaction.

“By improving aesthetics, comfort, and ergonomics, textiles can help integrate high technology into our everyday environments and objects,” says Google research scientist Alex Olwal. “Advances in materials and flexible electronics have made it possible to wear clothes with sensing and display functions.”

Helical sensing matrix

Adding conductive fibers to fabric is not a new technique; basic gesture capture and control can be achieved with a handful of capacitive sensors. Rather than build the cord that conventional way, however, the Google AI team developed a new Helical Sensing Matrix (HSM) structure that recognizes a much larger space of gestures while reducing accidental activation.

In simple terms, a braid with the HSM structure consists of insulated conductive yarns and non-conductive support yarns, some of which can be replaced with optical fibers for efficient visual feedback. Judging from the official pictures, its appearance is almost indistinguishable from an ordinary cord.

The conductive yarns are braided in opposing directions and carry current among multiple electrodes, enabling the capacitive sensing that captures and recognizes gestures. Because the conductive yarn runs the full length of the cord, a gesture can in principle be applied at any position along it.
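
A rough sketch of how capacitive readout over such an electrode matrix could work. The 8-electrode count matches the proof-of-concept cord mentioned later; the driver function `read_mutual_capacitance` is hypothetical and replaced with noise so the sketch runs standalone:

```python
import numpy as np

N_ELECTRODES = 8  # the proof-of-concept braid uses 8 electrodes

def read_mutual_capacitance(tx: int, rx: int) -> float:
    """Hypothetical driver call: drive electrode `tx`, measure the
    coupling at `rx`. Stubbed with noise for a self-contained example."""
    return np.random.normal(loc=1.0, scale=0.01)

def scan_frame() -> np.ndarray:
    """Scan every transmit/receive pair into one capacitance frame.
    A touch changes the coupling between nearby yarn pairs, so the
    pattern across the matrix encodes where and how the cord is held."""
    frame = np.zeros((N_ELECTRODES, N_ELECTRODES))
    for tx in range(N_ELECTRODES):
        for rx in range(N_ELECTRODES):
            if tx != rx:
                frame[tx, rx] = read_mutual_capacitance(tx, rx)
    return frame

baseline = scan_frame()          # untouched reference frame
delta = scan_frame() - baseline  # deviations indicate touch activity
```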

The specially designed helical structure makes it possible to distinguish the relative angle of each yarn, a critical capability for twisting gestures. Once the electrodes are activated, tracking the angular changes between yarns captures their relative motion, allowing the user’s gesture to be identified accurately.

For example, if two symmetric yarns are offset more than 90 degrees from their initial positions, the user is probably twisting the cord intentionally. Parameters such as the force, contact area, and duration of the touch are also taken into account to reduce false positives.
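
A minimal sketch of the twist heuristic just described, assuming the system already tracks each yarn’s angular offset from its rest position. Only the 90-degree offset comes from the article; the other thresholds are illustrative assumptions:

```python
def is_intentional_twist(offset_a_deg: float, offset_b_deg: float,
                         force: float, contact_area: float,
                         duration_s: float) -> bool:
    """Classify a twist: two symmetric yarns must both rotate more than
    90 degrees from their initial position, and the touch must look
    deliberate (enough force, area, and time to rule out accidents)."""
    MIN_OFFSET_DEG = 90.0    # stated in the article
    MIN_FORCE = 0.2          # illustrative thresholds, normalized units
    MIN_AREA = 0.1
    MIN_DURATION_S = 0.15
    rotated = (abs(offset_a_deg) > MIN_OFFSET_DEG
               and abs(offset_b_deg) > MIN_OFFSET_DEG)
    deliberate = (force >= MIN_FORCE
                  and contact_area >= MIN_AREA
                  and duration_s >= MIN_DURATION_S)
    return rotated and deliberate

print(is_intentional_twist(105.0, 98.0, force=0.5,
                           contact_area=0.3, duration_s=0.4))
# -> True
```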

Figure | Basic structure of textile rope (Source: Google)

Because a touch-sensitive cord is handled differently from rigid surfaces such as glass or plastic, its interaction principles differ too, and the research team designed two interaction guidelines for it: simple gestures and closed-loop feedback.

The former requires that gestures be short and intuitive, whether a single touch or a continuous manipulation. The latter means users receive appropriate, continuous visual, tactile, and auditory feedback throughout the interaction, reducing uncertainty.

The optical fiber woven into the fabric, mentioned above, is the core channel for visual feedback: light beams of different colors give users varied, dynamic, real-time feedback.
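
To make the closed-loop idea concrete, here is an illustrative sketch that maps recognizer states to a color driven into the cord’s optical fibers. The state names and color assignments are assumptions, not from the paper:

```python
# Map interaction states to RGB colors for the braid's optical fibers.
FEEDBACK_COLORS = {
    "idle": (0, 0, 0),                     # fibers dark
    "touch_detected": (0, 0, 255),         # blue: cord noticed contact
    "gesture_in_progress": (255, 165, 0),  # orange: tracking a gesture
    "gesture_accepted": (0, 255, 0),       # green: action triggered
    "gesture_rejected": (255, 0, 0),       # red: input ignored
}

def feedback_color(state: str) -> tuple:
    """Continuous visual feedback reduces uncertainty: the user can always
    see whether the cord is idle, listening, or has acted."""
    return FEEDBACK_COLORS.get(state, FEEDBACK_COLORS["idle"])
```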

As a proof of concept, the Google team built three products around e-textile cords, all common in everyday life: a USB-C headphone cable that controls media playback on a phone, a hoodie drawstring that adds music controls to clothing, and a power cord that controls a smart speaker.

Appropriate manipulation gestures

To determine what counts as a simple, intuitive gesture, the Google team ran a gesture study. For the new study, they collected 864 gesture samples from 12 volunteers; each volunteer was asked to perform 8 gestures and repeat each one 9 times.

People cannot repeat the same action with perfect consistency, and individuals differ: some pinch and twist with the thumb and forefinger, others with the thumb and middle finger; some press hard, others lightly. Gesture analysis therefore has to be trained and filtered to reduce the influence of personal style and preference.

To improve consistency, the team also trained the volunteers and introduced a real-time feedback system to help them adjust for these behavioral differences.

In the recognition pipeline, each gesture is described by 16 features, linearly interpolated over time into 80 observations. In other words, each gesture is observed from start to finish, and the captured features are used to train a recognizer that is independent of any individual user’s style.
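
A sketch of that pipeline under stated assumptions: each recording has 16 sensor channels, is linearly resampled to 80 time steps, and a generic classifier is trained on the flattened vectors. The SVM is our stand-in choice; the paper’s exact model and the toy data here are not from the source:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_CHANNELS, N_STEPS = 16, 80  # 16 features, interpolated to 80 observations

def to_fixed_length(recording: np.ndarray) -> np.ndarray:
    """Linearly interpolate a variable-length (T, 16) recording onto 80
    evenly spaced time steps, producing one fixed-length feature vector."""
    t_old = np.linspace(0.0, 1.0, len(recording))
    t_new = np.linspace(0.0, 1.0, N_STEPS)
    resampled = [np.interp(t_new, t_old, recording[:, c])
                 for c in range(N_CHANNELS)]
    return np.stack(resampled, axis=1).ravel()  # shape: (80 * 16,)

# Toy stand-in data shaped like the study: 864 samples, 8 gesture classes.
rng = np.random.default_rng(0)
X = np.stack([to_fixed_length(rng.normal(size=(int(rng.integers(40, 120)),
                                               N_CHANNELS)))
              for _ in range(864)])
y = rng.integers(0, 8, size=864)

clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)
```

Resampling to a fixed length is what lets fast and slow performances of the same gesture land in a common feature space; evaluating with held-out users (e.g., leave-one-user-out) is the usual way to check that such a recognizer really is style-independent.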

In the end, the volunteers’ data showed that gesture recognition on the cord reached 94% accuracy. The Google team considers this an encouraging result, because the proof-of-concept cord in the experiment has only 8 electrodes, so its sensing resolution is limited and there is still plenty of room for improvement.

Figure | Deformation and displacement caused by different gestures

In addition, they quantitatively compared input methods, pitting the e-textile cord against traditional in-line headphone controls and a touchpad.

The results showed that the e-textile cord was faster to operate than traditional push-button headphone controls and comparable to a laptop trackpad. Participants also preferred the interactive experience of the cord over the in-line controls, partly because the cord is responsive anywhere along its length, so there is no need to hunt for a button or worry about pressing the wrong one.

A headphone cable inevitably brushes against skin and clothing. To avoid accidental activation, the research team added a high-pass filter. They acknowledge, however, that more work is needed to assess the cord’s durability and long-term stability under real-world use.
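
A minimal sketch of one way such a high-pass filter could work: a first-order filter that subtracts a slowly adapting baseline, so sustained contact from resting skin or clothing is absorbed while quick, deliberate gestures pass through. The cutoff parameter is an assumption:

```python
class HighPass:
    """First-order high-pass filter over a capacitance signal: track a
    slow baseline and pass only fast changes. Slow drift from incidental
    contact sinks into the baseline; abrupt gesture motion survives."""

    def __init__(self, alpha: float = 0.02):  # small alpha = slow baseline
        self.alpha = alpha
        self.baseline = None

    def step(self, sample: float) -> float:
        if self.baseline is None:
            self.baseline = sample  # initialize on the first sample
        self.baseline += self.alpha * (sample - self.baseline)
        return sample - self.baseline

hp = HighPass()
outputs = [hp.step(s) for s in (1.00, 1.00, 1.01, 1.60, 1.00)]
# The sudden jump to 1.60 yields a large output; slow drift stays near 0.
```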

Finally, the Google AI team says it hopes to expand into more application scenarios, bringing microinteraction technology to existing wearable devices and traditional textiles, and improving how people interact with technology while preserving the essence of the product’s industrial design.