
Google's Project Soli Can Now Identify Any Object

In the hands of third-party researchers, Google’s wearable sensors get a wild upgrade.

Google's Project Soli was revealed last year, born out of the company’s most secretive lab. To create Soli, interface guru Ivan Poupyrev fit a tiny radar into a smartwatch, allowing it to recognize the wearer's tiniest gestures.

It was a remarkable feat, the fruits of which have yet to be seen. But that hasn’t stopped researchers at the University of St. Andrews from taking Soli to the next level on their own. Using a Soli AlphaKit, which Google shared with a select few labs, the St. Andrews researchers created a project called RadarCat that recognizes not only gestures but also specific objects and materials.

Steel. Glass. Copper. Apples. Oranges. Hard drives. Air. Touch just about anything to Soli's radar plate, and their system can learn to identify it.

In the demo above, featured on Kottke, you can watch the identification system at work. It’s instantaneous and highly accurate. Unlike an auto-ID alternative we’ve seen from CMU, which relies on measuring the micro-vibrations of electrical objects and therefore can't work on anything without a plug or a battery, the RadarCat system uses radar (radio waves) that reflects off any object to create a unique thumbprint.
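
At its core, that approach amounts to supervised classification over the reflected radar signal: each material produces a characteristic pattern across the sensor's channels, and a trained model maps that pattern to a label. Here is a minimal sketch of the idea in Python; the feature summary, window format, and classifier choice are illustrative assumptions, not the RadarCat team's actual pipeline.

```python
# Minimal sketch of RadarCat-style object recognition (illustrative only).
# Assumes each sample is a fixed-length window of radar channel readings
# captured while an object rests on the sensor; the real RadarCat features
# and model are not described in this article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a (samples x channels) radar window to simple summary stats."""
    return np.concatenate([
        window.mean(axis=0),                       # average reflection per channel
        window.std(axis=0),                        # variation per channel
        window.max(axis=0) - window.min(axis=0),   # dynamic range per channel
    ])

def train(captures: dict[str, list[np.ndarray]]) -> RandomForestClassifier:
    """Fit a classifier on labeled captures, e.g. {"steel": [...], "glass": [...]}."""
    X = [extract_features(w) for label in captures for w in captures[label]]
    y = [label for label in captures for _ in captures[label]]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

def identify(clf: RandomForestClassifier, window: np.ndarray) -> str:
    """Label a new radar window with the most likely object or material."""
    return clf.predict([extract_features(window)])[0]
```

The point is the shape of the problem rather than the specific model: if the raw radar response genuinely differs between, say, copper and an orange, even simple summary statistics can be enough to separate the classes.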

But beyond mere proof of function, the RadarCat team also shared a few potential use cases for the technology. At a bar, they demonstrated how an empty glass placed on a Soli coaster could cue an automatic refill. (And don’t fret: the same effect would theoretically work for the Olive Garden’s unlimited soup, salad, and breadsticks.) They also demonstrated that, simply by touching an electronic stylus to different surfaces, Soli could treat them as a palette of inputs, changing line style or color in an app like Photoshop without a drop-down menu. Perhaps most impressive, though, is what the RadarCat team showed UI can do when it’s context-aware. While today's iPhone has no clue how you’re holding it, or what’s around it, the researchers' Soli-equipped phone can make icons larger when it's being held by a bulkier, gloved hand, or launch a specific app when touched with a specific finger or article of clothing.
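
That last example, context-aware UI, boils down to a mapping from whatever the sensor recognizes to an interface decision. A toy sketch, with entirely made-up context labels and actions standing in for a real interface layer:

```python
# Toy sketch of context-aware UI dispatch (hypothetical labels and actions).
# Assumes an identify() step like the one sketched above returns a context string.

class UI:
    """Stand-in for a real interface layer; methods just print their effect."""
    def set_icon_scale(self, factor: float) -> None:
        print(f"icons scaled by {factor}")
    def launch(self, app: str) -> None:
        print(f"launching {app}")
    def set_brush(self, style: str) -> None:
        print(f"brush set to {style}")

UI_RULES = {
    "gloved_hand":     lambda ui: ui.set_icon_scale(1.5),  # bulkier touch target
    "index_finger":    lambda ui: ui.launch("camera"),
    "sleeve":          lambda ui: ui.launch("weather"),
    "stylus_on_wood":  lambda ui: ui.set_brush("pencil"),
    "stylus_on_glass": lambda ui: ui.set_brush("ink"),
}

def on_touch(ui: UI, context_label: str) -> None:
    """Apply the UI rule for a recognized touch context, if any."""
    action = UI_RULES.get(context_label)
    if action:
        action(ui)

on_touch(UI(), "gloved_hand")  # -> icons scaled by 1.5
```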

Naturally, these examples get a bit gimmicky fast. But what’s most exciting is how knowing your context could mix with the other interfaces in our world to make all of our experiences more seamlessly smart. Think of how it could change something as simple as a cooking app. It could collaborate with Amazon's Alexa so that, when you touch your fridge to pull out ingredients, Alexa's voice could guide you to the correct items. Put a pot on the stove, and not only could Soli spot if you’d chosen the wrong one, but Alexa could speak up to let you know: "Hey, if you’re looking to sear a piece of meat, go with cast iron or steel, not the Teflon that you grabbed." Via Bluetooth, your stove could cross-reference the recipe and set the electric burners to the perfect temperature for a simmer when you deglaze the pan.
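
For what it's worth, the whole scenario is just an event pipeline: an identified object gets checked against what the recipe expects, and a mismatch surfaces as a spoken prompt or a burner adjustment. A very rough sketch, where every device interface is invented for illustration and nothing reflects a real Alexa or stove API:

```python
# Very rough sketch of the cooking scenario as an event pipeline.
# The recipe format and the speak/set_burner callbacks are invented
# for illustration; no real assistant or appliance API is used here.

RECIPE_STEP = {
    "expected_pan": "cast_iron",
    "simmer_temp_c": 95,
}

def on_object_identified(label: str, speak, set_burner) -> None:
    """React to an identified pan: warn on a mismatch, else set the burner."""
    if label in ("cast_iron", "steel", "teflon"):
        if label != RECIPE_STEP["expected_pan"]:
            speak(f"For searing, use cast iron or steel, not {label}.")
        else:
            set_burner(RECIPE_STEP["simmer_temp_c"])

# Example wiring with stand-in callbacks:
on_object_identified("teflon", speak=print, set_burner=lambda t: print(f"burner -> {t}C"))
```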

It is, perhaps, an obnoxious example of our hyper-connected future! After all, I haven’t stopped to ask whether we truly need this much help cooking dinner. But it’s also the sort of mindless, assistive interface that futurists have been dreaming about since the '50s, finally reaching a point of feasibility that doesn’t require screen-based UX cop-outs like tablets.

Ultimately, I imagine that UX will disappear into the background, rather than distracting us all day every day. But it’s exactly technologies like Soli, silently tracking everything we touch without our prompting, that we need to make that calmer future of computing possible.
