The End Of User-Friendly Design

As emotive, verbal AI colonizes our world, "user-friendly" doesn't mean what it once did.

There’s a new sub-genre of ghost stories out there today—one where the antagonist isn't quite human. You can find these tales on user forums for many voice assistants: One user reported that he and his wife heard Alexa’s voice coming from their kitchen in the middle of the night. Another reported hearing a strange man’s voice through his Echo. "What the hell?" wrote one user on Reddit who was "freaked" after Echo behaved unusually. "Can Alexa only do things when told, or does she act on her own?"

These stories—usually just aberrations explained by user error—externalize the unease we feel with our new roommates. They’re always listening. They learn from the words we say and the things we do. And it’s hard to know where this stream of personal information ends up. Who uses it? Is it sold? Can I see it? Will my chuckle at an off-color joke during a stand-up show exist for eternity in a suburban data center? Do I have any control over this multifaceted algorithmic portrait it’s painting of me? A robot talking to itself in the middle of the night is funny. It’s also really spooky.

[Photo: Amazon]

More and more Americans interact with some form of AI on a daily basis, whether through a voice assistant from Apple, Google, Amazon, or Microsoft, or through their apps, devices, and cars. Easing this transition is a new field of design. Instead of buttons and interface elements, it encompasses the way a voice lilts upward at the end of a sentence, the part of the brain that activates when we see a sad face or hear a joke, and emotions like pity and trust.

These are the skeuomorphs of this new field—think of them as affordances for AI. They’re the spiritual successors to the visual metaphors of early graphical user interfaces, but there's a big difference between an icon shaped like a trash can and the tilt of a robotic eyebrow. These new affordances don’t show users what they can do with a technology; they describe what a technology won’t do to users: They won’t hurt us, they won’t spy on us, they won’t reveal our secrets. They are literally user-friendly. Yet could there be hidden costs for users when AI acts like a friend?

[Photo: Mayfield Robotics]

In many cases, these affordances do nothing more than enhance the user experience. Several great examples could be seen at this year's Consumer Electronics Show in Las Vegas, like the new home robot Kuri, an "insanely cute" robot that "fills a void in your empty human soul." The company behind the robot, Mayfield Robotics, hired veteran Pixar animator Doug Dooley to design Kuri’s "animations," or its expressions and behavior. "To make sure that everything he does is not going to be creepy, we had to make sure that you knew a split second beforehand exactly what he’s going to do," wrote Dooley before CES.

Kuri’s perfectly round eyes crinkle into sweet half-moons when it interacts, rather than responding with words. In fact, Kuri speaks its own language. In one Kuri commercial, a pet parrot repeats some less-than-flattering things to its owner's in-laws. "Kuri won’t repeat the things you say at the wrong time," a narrator says. "Kuri isn't like any other robot," explains another commercial as a young child comforts Kuri. "And that's a good thing."

It's a brilliant application of human etiquette to robotics, letting us know that our robot friends won't betray us with their blunt mechanical reading of social situations.

Or take the way some car companies are building personality into their autonomous cars to make humans feel safer and more trusting of this new technology. At CES my colleague Meg Miller spoke with designers at Toyota’s Calty design facility, who are working on its autonomous concept car. Their design carefully mimics human features—from its wide, thickly lashed "eyes" (or headlights) to its sweet, engaging "personality." Designing a car’s personality first, as Calty's studio chief designer Ian Cartabiano explained, is "a really great way to feel an affinity for an AI."

This kind of thinking can be traced all the way back to Disney, whose illustrators in the 1930s established 12 basic principles of animation to give inanimate objects the "illusion of life," using behaviors that would be inherently understood by viewers—even if they’d never seen a magic carpet with a personality.

There’s a lot of good evidence for this approach to interaction design. For instance, last year, a study from University College London led by robotics researcher Adriana Hamacher illustrated its power vividly and hilariously. Human subjects were invited to whip up an omelette with help from a series of robots. One of the robots was programmed to "accidentally" drop one of the eggs—but then apologize for its mistake and use heart-rending cartoon facial expressions to show it felt sad. The cooks preferred the robot that had made the mistake to the robots that made no mistakes but didn’t communicate. Then the researchers twisted the knife: They programmed the sad robot to ask for the job of sous-chef. "It felt appropriate to say no, but I felt really bad saying it," one cook reported. "When the face was really sad, I felt even worse. I felt bad because the robot was trying to do its job."

Affordances aren’t bad; in fact, they're often good. They’re a necessary part of life with technology, and they exist for a reason—to help users. There’s nothing wrong with using human language and behavior to make people feel more comfortable with a sweet robot who reminds you of your meetings or car software that helps you relax.

But Hamacher’s study reveals the other side of the coin: that in the wrong hands, technology could push us to make choices that aren’t ultimately very good for us—far beyond hiring a robot that sucks at making omelettes. AI has an incredible power to manipulate us by learning from our behavior. That can be a good thing, but it can also be a dangerous one when it comes to more insidious forms of AI. For instance, imagine a voice assistant that sells the data you share to advertisers, or, to use a vivid example from the 2016 presidential election season, an algorithm that learns what news stories make you happy and shows you only those stories. User-friendly design, taken to that kind of extreme, isn’t really that great for users.

[Photo: Google]

So what’s to be done? Should we all forgo the new generation of AI assistants and autonomous tech to live a life free of the risks? Should designers scrub all semblance of these human affordances from their products, so we’re not tempted to over-share or trust too readily? Should all of our devices be cold and machine-like, rather than warm and friendly?



Not necessarily—but we’re entering a new phase of technology where users have to demand transparency, and designers have to build it. As Cliff Kuang pointed out a few months ago here on Co.Design, the old world, where the goal of design was to remove every last bit of friction from an experience, is dead. Today we have to demand friction and truth from the products we use. When I ask my voice assistant how it’s using my data, it should be able to answer truthfully. There should be an easily accessible log where I can see every node of data it records about me—every interaction, every word it hears. If I want to understand why Facebook is serving me a particular news story, I should be able to. I should also be able to turn off Facebook’s algorithmically altered News Feed completely. I shouldn’t need a lawyer or a PhD in computer security to understand the privacy policies behind the tech I use.

"User-friendly design" is a misnomer today. We need a new term that describes design that doesn’t just show people how to use technology, but shows how their technology is using them. My editor proposed "lucid design," and it fits. Transparency and honesty should be a right, not a feature.

[Illustration: GeorgePeters/Getty Images]
