"The Minority Report interface" is a pop-culture (and design) phenomenon that persists even as more practical gestural-interface prototypes enter the marketplace. But in order for Tom Cruise-like hand acrobatics to ever have a chance of replacing the ol' mouse and keyboard, the gestures themselves have to be as natural as speaking with your hands already is. Unfortunately, that's not the case — computers are still dumb as rocks in this department, and as a new paper on the ArXiv explains, "a large amount of research is focused on ?xed movements geared towards e?ciency of recognition, not interaction."
If you have to tie your hands in knots to get your computer to understand you, what's the point? The authors of that ArXiv paper suggest an alternative approach for discovering gestural commands that are standardized enough for computers to understand and natural enough to be instantly intuitive. Using statistical analysis and a motion-capture glove, the authors mathematically distilled 22 common hand gestures (including "A-OK," "thumbs up," "crazy," "walking," and "cutting") into so-called "eigengestures": the pure essence, if you will, of each movement, the part that's common across most performances of it. It's a clever variation on a technique that helps face-recognition software work ("eigenfaces" are mathematically defined "standard face ingredients" that computers can understand).
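The paper itself doesn't ship code, but the eigenfaces-style idea is essentially principal component analysis: stack many recorded gestures as vectors, and the leading principal components are the "eigengestures." A minimal sketch, using synthetic stand-in data for the glove recordings (all dimensions and variable names here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each gesture recording is flattened into one vector
# (say, 20 time steps x 5 finger-flex sensors = 100 numbers per sample).
n_samples, n_features = 200, 100

# Synthetic stand-in for glove data: a few latent "movement patterns"
# mixed together plus noise, mimicking repeated performances of gestures.
latent = rng.normal(size=(n_samples, 4))
patterns = rng.normal(size=(4, n_features))
X = latent @ patterns + 0.1 * rng.normal(size=(n_samples, n_features))

# PCA via SVD: the leading right-singular vectors of the centered data
# are the "eigengestures," exactly analogous to eigenfaces for images.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
eigengestures = Vt[:4]  # top 4 principal movement patterns

# Any recorded gesture is then summarized by a handful of coefficients,
# and can be approximately rebuilt from the eigengestures alone.
coeffs = X_centered @ eigengestures.T   # shape (200, 4)
reconstruction = coeffs @ eigengestures
err = np.linalg.norm(X_centered - reconstruction) / np.linalg.norm(X_centered)
print(eigengestures.shape, coeffs.shape, round(err, 3))
```

The payoff of this decomposition is compression: a recognizer no longer compares raw sensor streams, it compares a few coefficients per gesture.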
The idea is that by investigating and discovering these "eigengestures," a standardized toolkit can be assembled for gestural interface designers to draw from, just like "click," "double-click," and other mouse-driven commands are standardized now. But because they're based on studying the gestures we're comfortable making already — rather than building ones from scratch that are optimized for the computer's benefit — the odds are stacked in favor of usability.
That's the theory, anyway. Technology Review is rightfully skeptical of eigengestures' real-world utility: apparently many of these gestural ur-texts are even more impractical than the ones that Spielberg and Co. already thought up. (To the researchers' credit, they admit this in their paper.) So eigengestures may end up being an important, if purely academic, research tool rather than a glimpse of the iThings of the future. It's not so surprising: human beings (like Steve Jobs) will probably always have better insights into what "feels natural" than statistical algorithms ever will.