
4 Problems Google Glasses Have To Solve Before Becoming A Hit

Google is planning to augment reality, and it’s a design problem that will require more than mere algorithms to solve.

Google has never been a design-forward company, one that revolutionizes our lives through interface design. Instead, they’ve taken over the world building products with raw intellectual horsepower—brilliant artificial intelligence to fuel search, smart mapping systems to take us from point A to point B, and clever cloud-syncing apps that allow us to collaborate on projects from around the globe. Google never had to be pretty. It’s always been smart.

Yesterday, Google officially revealed a project that will push them to their creative limits. It’s called Project Glass, and it’s a pair of glasses that layers digital information over the real world.

It’s your smartphone, right in your eyes. You can read text messages. You can take photos. You can listen to music (thanks to some built-in earbuds). You can even be told that the subway is closed as you walk up to it, and be redirected to your destination by foot.

But maybe most notably, nothing about what Google has presented is an actual product yet, or considered close to finalized. "We wanted to let people know about what we’re doing, and what we hope to achieve with it," a Google spokesperson told Co.Design. "But in terms of the graphics, the visuals, the hardware setup, there’s a lot of experimentation going on. And a lot of rapid prototyping on the team."

The concept video Google has shared is meant to signify what the team feels "would be of most value to people," and what they’re closest to actualizing. Now that this concept is public, Google will be entering what they called the "feedback gathering phase," in which they’re looking for the community to chime in on what they want to see (and don’t want to see) in a fully realized product.

So where does this leave us for now? What Google has shown is promising, but their design challenges are clear:

1. Google Needs To Avoid "The Segway Problem"

There’s a reason that video glasses haven’t taken off yet (and by that, I don’t mean augmented reality glasses like Google’s, but something more like Vuzix). And, for lack of a better term, we’ll call it The Segway Problem. Technology can be a symbol of your future-forwardness, or it can be the exact opposite: a sign of the future’s ridiculousness. The Segway flopped in part for its cost and in part for the fact that humanity isn’t quite that lazy, but there was a deeper, visceral reaction to the core of the product that signified a silly future rather than an inspiring one. So far, the actual glasses Google is showing off aren’t inspiring. To succeed, Google will need to sell us on either the stylishness, or the invisibility, of video glasses. And may we suggest copying the iPod in this approach? Make the technology as obscured on the user as possible, except for one trademark calling card (in the iPod’s case, white earbuds).

2. Google Needs To Navigate "The Always On Problem"

As inspiring as moments in Google’s concept video may be—and the photo-taking moment is an aha moment if I’ve ever seen one—it’s also stuffed with notifications, none of which are fundamentally different from what we could check on our cell phones less intrusively. The functions that Google blocks will be as integral to the platform’s success as those that are enabled. Finding the perfect level of obtrusiveness within an omnipresent Internet connection could be the largest challenge of human-device interaction the electronics industry has ever encountered. And as Google is paving new ground, they’re working outside their comfort zone: Google has no data to mine for how much notification is too much notification. If ever there was a product ripe for Google Labs field testing, it’s Project Glass.

3. Google Needs To Find A Killer Use-Case

People in the Valley used to talk all the time about finding "killer apps"—that is, the one, defining use of a technology that’ll spark its mass adoption. And no wonder: With technologies such as augmented reality and Project Glass, the possibilities seem to outstrip the actual need. As I suggested before, these glasses aren’t yet doing anything our phones can’t. So why do they need to be glasses?

A good counter-example is the iPad. Lots of people dismissed it when it first came out, saying, "Sure, it’s cool, but what does anyone need another computer for?" Well, it turns out, people didn’t need another computer so much as they wanted one—a computer that would make surfing the web from your bed or couch a lot less clunky and more fun. With Project Glass, I’m not sure that they have that use-case yet—that is, the perfect scenario where this just makes sense in people’s lives. There might be some set of features and interactions that makes it so, but these haven’t quite appeared just yet.

4. Google Needs To Attenuate "The Too Much Feedback Problem"

Project Glass’s current stage, what one spokesperson labeled "the feedback gathering phase" in our brief conversation today, is a tenuous spot to be in. Crowdsourcing can create great products, but when it comes to inventing something that no one has conceptualized before, we need bold visionaries, not naysaying Internet whiners. Not just anyone can design a user interface. And I’d posit that almost no one can design a usable interface that will sit in our eyeballs 24/7. Crowdsourcing user feedback at the invisible level—the advanced A/B testing Google does when they test shades of blue without us even knowing it—could be integral to fine-tuning Project Glass at a number of levels. But at heart, they will need to present us with a singular vision if they expect any of us to don a pair of glasses, not a mishmash of suggestions from the peanut gallery.

The little things, the softest touches of design, will define Project Glass’s future in the marketplace. Is the interface loud or quiet? Do we use vocal commands with some functions or all functions? Are notifications really in the center of the screen, or can they be repositioned? Will images be opaque or partially transparent? What will the glasses show when I sit at my computer or when I drive? All of these "how does it feel" components will matter even more than they do in a cell phone. But on top of all this, and maybe most importantly, we’ll need to know the one big reason that we’ll all want to wear our phones rather than keep them tucked away in our pockets. As of right now, I don’t think we’ve seen it.

Most of us interact with at least one Google product every day. Many of us use their products all day, every day. Whether or not you’ve been particularly inspired by their design, you can’t argue that their approach hasn’t worked well enough so far.

But it’s been a while since Google was the first to market in uncharted territory (and it raises the question: have they ever been, really?). Wearing a computer has the potential to redefine the human experience even more than PCs or smartphones did. With Project Glass, Google has the task of designing the interface of our lives, and I can’t imagine a greater challenge ahead of them.