
Future Forward

A Concept For Taking Pics Simply By Looking And Blinking

Humans are experts at looking at things, but do cameras get in our way?

You can almost see the story playing out. An old man sits on his porch, showing off his dusty Nikon D4 to his intrigued grandson. He tells the tale of how cameras went from bulky devices with huge lenses to sleek phones in our pockets. But then most of us stopped carrying cameras at all, opting instead to take photos invisibly, as part of the experience itself, through glasses, or maybe something like Iris.

Iris is a prototype camera by Mimi Zou, a recent graduate of the Royal College of Art in London (the polished concept product is pictured here). The name isn’t just clever: the camera uses iris tracking to identify a user and follow where they look. When a photographer wants to snag a shot, they simply focus on that part of the frame—they look at it—and blink to take the photo. (This idea might seem familiar: Innumerable sci-fi and spy thrillers have posited cameras in our eyes, activated in the same way.)
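
How that interaction might translate into software is easy to sketch. The loop below is purely illustrative, assuming a hypothetical GazeTracker stream and Camera API; Zou hasn’t published the prototype’s internals.

```python
# A minimal sketch of a look-to-focus, blink-to-capture interaction.
# GazeTracker and Camera are hypothetical stand-ins, not a real SDK.

from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float           # normalized gaze position within the frame, 0..1
    y: float
    eyes_closed: bool  # True while the user is blinking

def capture_loop(tracker, camera, blink_frames=3):
    """Focus wherever the user looks; a sustained blink takes the shot."""
    closed_streak = 0
    last_gaze = (0.5, 0.5)
    for sample in tracker.samples():  # hypothetical stream of GazeSamples
        if sample.eyes_closed:
            closed_streak += 1
            # Require the eyes to stay shut for several frames so an
            # ordinary, involuntary blink doesn't trip the shutter.
            if closed_streak == blink_frames:
                camera.focus_at(*last_gaze)  # hypothetical: focus on gaze point
                camera.shoot()               # hypothetical: capture the photo
        else:
            closed_streak = 0
            last_gaze = (sample.x, sample.y)
```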

"I was very interested in exploring the role of personal identity in the future development of intelligent consumer electronic products," Zou tells Co.Design. "I believe that as we develop deeper relationships with our products, they could also learn about us in order to create the best user experience. Specific to photography, the user has such an intimate relationship to their scenery through their products, I wanted to demonstrate one feasible vision for the kind of products we can have in the future."

That’s the cleverness of Iris. Rather than simply swapping the shutter button for a blink, its core systems constantly learn from their user, adjusting for the nuance of glances and stares to become more responsively personalized over time. That learning is key to making such an interface feel natural: the difference, I’m assuming, between just looking at something to take a photo and laboriously dragging a camera around with your eyeballs, frustrated by the human twitches that constantly thwart the process.
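
One way to picture that learning, assuming nothing about Zou’s actual implementation: keep a running estimate of the user’s natural blink length and hold the shutter threshold safely above it, so the system adapts to each person’s twitches rather than fighting them.

```python
# An illustrative take on the per-user adaptation the article describes:
# estimate this user's typical involuntary blink duration and keep the
# deliberate-blink threshold comfortably above it. Not Zou's mechanism.

class BlinkModel:
    def __init__(self, margin=2.5):
        self.mean_blink_s = 0.15  # starting guess for a natural blink, in seconds
        self.margin = margin      # a shutter blink must be this many times longer
        self.threshold_s = self.margin * self.mean_blink_s

    def observe(self, duration_s):
        """Fold each observed involuntary blink into the running estimate."""
        self.mean_blink_s = 0.9 * self.mean_blink_s + 0.1 * duration_s
        self.threshold_s = self.margin * self.mean_blink_s

    def is_deliberate(self, duration_s):
        """A blink well past this user's normal range reads as intent."""
        return duration_s >= self.threshold_s
```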

"What I hope to have demonstrated is that when products take our natural human inputs into consideration and respond intuitively, disruptive new interfaces are possible," writes Zou. "And in order to achieve the same goals of capturing what we see, for example, buttons may very well become unnecessary—our eyes have it all."

Indeed. The more human our interfaces become, the more fully human they’ll need to be. It’s much like the uncanny valley. You’ve probably heard the idea before, but in short: Humans may adore Wall-E, but we’re repulsed by something that’s 99% human and 1% Wall-E.

In interfaces, my liberal definition of the uncanny valley is why my niece has more tolerance for a Wiimote cursor that disappears from the screen than for a Kinect that doesn’t understand her voice. When a system fails to accept a user’s input and there’s no visible mechanism to blame, the user is left battling their own core instincts and actions. It’s not just dragging around a mouse; it’s moving your hand. It’s not just aiming a camera; it’s looking.

[Hat tip: The Creators Project]
