Will Gestures Make Smartwatches Truly Useful?

An intriguing concept imagines wearables as the gesture-controlled remote for your life.

We all seem to agree: It makes sense that we’ll be wearing more of our technology in the future. But no one has cracked exactly how an ideal smartwatch should work yet. The Apple Watch didn’t give the industry a perfect role model to copy, and Android Wear has shifted from offering one magic blue button that does everything to enabling a full QWERTY keyboard.


Where are smartwatches going next?

Gestures, maybe. For example, Google’s Advanced Technology and Projects (ATAP) division continues to show off Project Soli, which allows you to control a watch by gesturing around the screen. And now Invoc, a project by Fjord/Chaotic R&D, takes that idea a step further. Using a smartwatch’s built-in accelerometer, it reads any motion your arm makes to control connected devices via Bluetooth.

In practical terms, that means twisting your hand might turn up the volume in Spotify, flicking your wrist could turn on a light, and a hailing motion could call an Uber. Any gesture your watch hand makes could be programmed as a control for some connected system.
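To make the idea concrete, here is a minimal sketch of how an Invoc-style app might map accelerometer readings to actions. Everything here is an assumption for illustration: the gesture labels, thresholds, and action names are invented, not taken from Chaotic Moon’s actual code.

```python
from typing import Callable, Dict, List, Tuple

# A reading is (x, y, z) acceleration in m/s^2 from the watch's accelerometer.
Reading = Tuple[float, float, float]

def classify_gesture(window: List[Reading]) -> str:
    """Crude classifier: label a window of readings by its dominant axis."""
    sums = [sum(abs(r[axis]) for r in window) for axis in range(3)]
    dominant = sums.index(max(sums))
    return {0: "wrist_twist", 1: "wrist_flick", 2: "arm_raise"}[dominant]

# Dispatch table: gesture label -> command sent to a connected device.
ACTIONS: Dict[str, Callable[[], str]] = {
    "wrist_twist": lambda: "volume_up",     # e.g. turn up Spotify
    "wrist_flick": lambda: "light_on",      # e.g. flip a smart bulb
    "arm_raise":   lambda: "request_ride",  # e.g. hail an Uber
}

def handle(window: List[Reading]) -> str:
    """Classify a window of motion and fire the mapped action."""
    return ACTIONS[classify_gesture(window)]()
```

A real system would replace the dominant-axis heuristic with a trained gesture model, but the shape is the same: raw motion in, a symbolic gesture label, then a lookup into a table of connected-device commands.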

Is it practical, though? Not yet. Cleverly, Chaotic Moon built a sort of safety into the software so that you don’t trigger gestures accidentally: you rotate your wrist in a very specific way to activate the system (the gestural equivalent of saying “Okay Google” or “Hey Siri”), then make the functional gesture immediately afterward. Of course, this safety means that every simple function you want to activate requires two gestures rather than one.
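The two-gesture safety described above is essentially a small state machine: an activation gesture arms the system, and only the next gesture inside a short window is treated as a command. A minimal sketch, assuming a hypothetical gesture label and timeout (neither is from Invoc’s real implementation):

```python
import time
from typing import Optional

class GestureGate:
    """Pass a gesture through only if it follows the activation gesture."""

    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.armed_at: Optional[float] = None  # when the wake gesture fired

    def on_gesture(self, label: str, now: Optional[float] = None) -> Optional[str]:
        now = time.monotonic() if now is None else now
        if label == "activation_twist":
            self.armed_at = now          # arm: wait for the functional gesture
            return None
        if self.armed_at is not None and now - self.armed_at <= self.timeout_s:
            self.armed_at = None         # disarm after a single command
            return label                 # forward the gesture as a command
        return None                      # stray gesture: ignore it
```

The cost of this design is exactly the trade-off the article notes: every one-gesture action becomes two, because a bare `wrist_flick` with the gate disarmed goes nowhere.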

Furthermore, Matthew Murray, the creative technologist leading the project, tells us that because Invoc is standalone software that can’t take full advantage of the efficiencies inside a smartwatch’s core APIs, it burns through battery too quickly to be practical. It’s also currently unable to talk to any app but Chaotic Moon’s own media controller.

During their tenure at Frog, Mark Rolston and Jared Ficklin (now of Argodesign) began working on a similar concept called Room-E. Rather than using a wearable, Room-E tracked a user’s motions through depth cameras like the Microsoft Kinect. They found that something as simple as turning up your stereo by twisting the salt and pepper shakers could be a hugely gratifying, relatively intuitive gesture–one that they could track accurately, too.


That said, I’m not so sure wearables can pull off the same feat at the scale of real life. Visual systems, like depth cameras, can take all sorts of cues from you and your environment that would be otherwise invisible to a wearable. They can theoretically see if you’re looking at the lamp you want to turn on, for instance, using the same sort of social logic a human might–“Oh, he’s looking at me, he must also be talking to me.”

Wearables can rely on all sorts of radio-frequency communication to connect your watch to your smart home, but can they make their environmental awareness so good that we can flick, hail, and twist our ways to more convenience? Of that, I’m highly skeptical–even if I do love the idea of air-punching each time I send a tweet.


About the author

Mark Wilson is a senior writer at Fast Company. He started, a simple way to give back every day.