Gestural user interfaces may be the future, but we’re not quite comfortable with them yet--as users or designers. Their tactile simplicity makes them efficient once you get the hang of them, but their affordance-less, "pictures under glass" nature can make them inscrutable to first-time users. Meanwhile, how can designers fully grasp the possibility space of these interfaces without anything to, well, grasp?
Designer Juan Sanchez offers an intriguingly low-tech solution: origami models. "As a designer I find it’s important to remove myself from that [digital] space and explore solutions that can originate in physical space," he writes. To better understand the opportunities implicit in Clear’s gestural UI, he made a paper version that replicated its main pinching gesture.
Big whoop, you might be thinking; we already know how Clear works, so isn’t Sanchez’s model redundant? Actually, no. By physically modeling the interface in paper, Sanchez quickly found ways to expand on its core ideas. His "accordion" and "fold and peel" gestures intuitively exploit the affordances implied by (but not included with) Clear’s pinch-to-open and pinch-to-close commands. A designer putting a Clear-like gestural interface together digitally would have to come up with these extra ideas out of thin air, which could lead to a grab bag of interesting but disconnected gestures. Toying around with a paper model, on the other hand, lets new gestures reveal themselves organically and stay intuitively related. After all, if your paper model is folded into segments that smoosh down into an accordion-like stack, they probably shouldn’t also swipe left and right like Scrabble tiles. If it doesn’t make sense in physical reality, why should it act that way in your app?
Some designers might cringe at these kinds of skeuomorphic constraints. Digital is digital--why should we use the clunky, limited physical world as our model? Here’s why: Because touch-screen interfaces are (for the moment) stuck in a grey area between the digital and the physical, and that’s confusing. Every gestural UI is a puzzle: It offers us the appearance of objects but denies us the physical affordances (like texture, three-dimensionality, and contiguity) that we expect from those objects. And so the user has to start guessing with their fingers. But designing gestural UIs from physical prototypes (like Sanchez’s) could encourage designers to build in more of the "clues" that reduce this guesswork for the user. Getting the hang of one gesture would intuitively "reveal" which others are possible--a solution to the menulessness of gestural UIs.
In fact, why stop at paper? Sanchez shows that origami models can afford UI designers all kinds of useful insights, but what about other tangible materials like clay, rubber, plush, sand, or ink? If touch is the future, the physical world is overflowing with inspiration for UI designers who are willing to simultaneously accept the limitations of "pictures under glass" and think beyond them.