The Secret UX Issues That Will Make (Or Break) Self-Driving Cars

In an unassuming research lab, Volkswagen is solving problems that Tesla and Google haven't come close to cracking.

At a workbench at Volkswagen's UX research lab, a prototype steering wheel designed specifically for an autonomous car.

All Photos (unless otherwise noted): Damien Maloney for Fast Company

We were rolling eastward across the San Mateo Bridge in an Audi A7 at a dutiful 55 miles per hour, and I was riding shotgun, accompanied by two of the car's engineers. With a sticker price topping $70,000, the A7 is a fancy car, but not an uncommon one along the stock-option-paved highways of Silicon Valley. I looked at the drivers around us, knowing they hadn't a clue about what was happening right beside them. Traffic was getting thick as rush hour approached. Outside the window, the water of the San Francisco Bay was a dull green, like patinated copper, pitted by tiny waves. A bright blue sky. Our car’s test driver was smiling pleasantly, hands on his thighs, not touching the steering wheel at all.

Then the car in front of us slowed. Our car, sensing this, began to change lanes gradually. But then came a driver to our left, dickishly racing into our blind spot, cutting us off. I griped to myself about California drivers with their composting and their unvaccinated kids and their vestigial turn signals. But the Audi wasn’t bothered. It merely sensed that other car coming, drifted back to the center of our lane, and eased onto the brakes so as not to hit the car in front of us. You wanted to be scared, but it was over before you realized it had happened. It was like being driven by that uncle who could tell you how to survive a snakebite or order a roast chicken in seven languages. You trusted what was happening, and that was remarkable: The car, by design, was calming me before any worries could surface. This is exactly where so many carmakers have failed.

Amid the drumbeat about driverless cars—Google! Tesla! Apple!—it's easy to miss just how far they have come, and how fast. I've asked a few people what they think these things are capable of. The consensus seems to be something a bit more advanced than a remote-controlled car. Maybe a bit like a monorail at Disneyland without the rail?

Here's the reality: Cars that drive themselves are already coming to market, one button at a time. We have cars that swerve to avoid accidents or park themselves. Cars like these are already wreaking havoc. Maybe one of the funniest viral examples you can find on YouTube is of a bunch of people at a car dealership, testing out a feature on a Volvo that they think prevents it from hitting pedestrians. You can’t see the hapless driver settling in behind the wheel, but let’s imagine him wide-eyed, bristling with excitement as he prepares to slam on the accelerator. In the foreground stands a guy in a pink shirt, his posture giving off a brittle mix of apprehension and excitement. He’s leaning forward, bracing just a little bit: Ready for magic. Holy shit!

The driver slams on the accelerator. The car doesn’t stop. It plows right through the guy in the pink shirt, who flips up onto the hood like a rag doll tossed by a pit bull. People scream. The camera spins wildly, forgotten.

Self-driving cars went viral again recently, when Tesla dropped a $2,500 software update on its customers that promised a new "autopilot" feature. The videos are fascinating to watch, mostly because of what’s not happening. There’s one, titled "Tesla Autopilot tried to kill me!," where a guy drives with his hands off the wheel for the first time. He hasn’t replaced driving with, say, watching a movie or relaxing—instead, he’s replaced the stress of driving with something worse. He looks at the road, he looks at the wheel, he looks at his hands. He’s scared. And he’s smart to be scared. His car, unable to detect the lane dividers that guide it, veers into oncoming traffic. Luckily, he snatches back the wheel in time.

Somewhere in between where we stand now, annoyed at how much time we waste sitting in traffic, and the future, where we’re driven around by robots, there will be hundreds of new cars. Their success doesn’t simply depend on engineering. It depends on whether we, the people, understand what some new button in our brand-new car can do. Can we guess how to use it, even if we’ve never used it before? Do we trust it? Getting this right isn’t about getting the technology right—the technology exists, as the Tesla example proved so horribly. The greater challenge lies in making these technologies into something we understand—and want to use.

In those Tesla videos, the drivers don’t know what the car can’t do. It’s not telling them. Techies and Tesla boosters were quick to lay blame. Don’t these idiots know how all these things work? Don’t they read the instruction manuals? These are echoes of the least productive trope in computing history, the one that Steve Jobs railed against: The idea that the user is wrong, and that we should all bend to the capabilities of a machine instead of the machines bending to us.

The people looking terrified in those Tesla videos? That’s not their problem. It’s a design problem.

The console display tells you exactly what moves the car is about to execute. It also shows you the cars around you that its sensors are seeing. Photo: Audi

To engage the autonomous-driving mode, you must push two buttons. That idea was inspired by missile-launch systems. Photo: Audi

The car gives multiple layers of alert before you're asked to assume driving again. Photo: Audi

How Do You Build Trust In A Machine?
That Audi A7, code-named Jack, is years away from market. But it already represents thousands of man-hours, with trust being the foremost concern. (There is an irony in this: Audi is owned by Volkswagen, which finds itself embroiled in a scandal over untrustworthy emissions performance. The engineers and designers in this story had no involvement in that.) The person in charge of fostering trust in a robot car is Brian Lathrop, whose very bland title belies how much time he spends living in the future. He runs the UX group at Volkswagen's little-known Electronics Research Laboratory; it's his team that's figuring out how that A7 should work. A Ph.D. psychologist by training, California born and raised, Lathrop is burly, with close-cropped hair like an Army sergeant's. He speaks with the painstakingly chosen words of a scientist used to sniffing out imprecision. But he is an inventor above all—the co-author of several patents that might prove decisive for autonomous cars.

Twelve years ago, when Lathrop found his job, even the guy who hired him didn’t quite know what he’d be doing there. His first few weeks, the 15 engineers he worked with shrugged and set him up wiring circuit boards. Today, though the field itself is only a few years old, Lathrop is nonetheless more experienced than all but a few people in the world—because until a couple of years ago, no one was thinking through how you might actually operate a car that drives itself.

Lathrop cut his teeth at NASA, creating new ways to fly planes more safely. From that experience, he has distilled a "3+1" design philosophy for driverless cars, which works its way through all of the concepts he invents. There are three things an autonomous car has to get right, plus one: Above all, we need to know what mode a car is in, whether it’s driving itself or not. The second principle Lathrop calls the Coffee Spilling Principle: We need to know what something is going to do before it actually does it. Third, and perhaps most vital in fostering trust, is that we need to know what the car is seeing. And finally, we need perfectly clear transitions when a car takes control, or when we take control from a car.
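To make those four principles concrete, here is a minimal sketch, in Python, of the kind of state a car's display would need to keep surfacing to its driver. The class, field, and function names are hypothetical illustrations, not Volkswagen's actual software.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class DriveMode(Enum):
    MANUAL = "driver in control"
    PILOTED = "car in control"


@dataclass
class HmiState:
    # 1. Mode awareness: which of the two modes is the car in?
    mode: DriveMode
    # 2. The Coffee Spilling Principle: what will the car do next, and when?
    next_maneuver: Optional[str] = None
    seconds_until_maneuver: float = 0.0
    # 3. Perception: what does the car see around it?
    detected_objects: List[str] = field(default_factory=list)
    # +1. Transitions: is a handover of control coming up?
    handover_alert: Optional[str] = None

    def announce(self) -> str:
        """Compose the message a dashboard display might show the driver."""
        lines = [f"Mode: {self.mode.value}"]
        if self.next_maneuver:
            lines.append(f"Next: {self.next_maneuver} in {self.seconds_until_maneuver:.0f}s")
        if self.detected_objects:
            lines.append("Seeing: " + ", ".join(self.detected_objects))
        if self.handover_alert:
            lines.append(f"ALERT: {self.handover_alert}")
        return "\n".join(lines)


# Example: the car is driving itself, plans a lane change, and sees two nearby cars.
state = HmiState(
    mode=DriveMode.PILOTED,
    next_maneuver="lane change left",
    seconds_until_maneuver=8,
    detected_objects=["car ahead", "car in left blind spot"],
)
print(state.announce())
```

Each field maps to one of Lathrop's rules: the mode, the what-comes-next countdown, the list of what the car sees, and the handover alert.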

In the case of this particular A7, those principles had all been compressed into the brief span of a couple of minutes, when the test driver drifted our car onto the highway and then let its computers take over the driving. It was a tight choreography.

To the right, Brian Lathrop, head of UX at Volkswagen ERL. On the left, one of his top deputies, Erik Glaser.

As we merged onto the highway, the car wasn’t ready to take over. First, it had to analyze the lanes, the traffic, and our surroundings. But then, once we reached a stretch of highway designated as an auto-pilot zone, a small center panel near the air-conditioning vent blinked to life, with a countdown timer: "5 minutes until pilot mode available." Remember those rules about fostering trust? It was telling us what it could do, what was possible, before it ever happened.

When the moment came, two buttons on the center hub of the steering wheel blinked: Press to engage. Those two buttons were inspired by the famous missile-launch systems in which two keys had to be turned at the same time to avoid mistakes. A bright strip of LEDs on the dashboard flashed from orange to blue-green, telling us that the car was in control now. It was confirming the action. Those colors were carefully chosen not to evoke our notions of green vs. red and right vs. wrong, but rather a new symbology: blue-green as a pleasant, unthreatening signpost, meant to evoke calm and steadiness. At the same time as the lights shifted, the steering wheel pulled back slightly and began to waggle by itself, left and right, adjusting to the contours of the road with an eerie precision that seemed almost human. It was a moment that was awesome to absorb—and then, almost immediately, uneventful.
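The two-button handshake is simple enough to express in a few lines. Here is a hedged sketch of that safeguard; the timing window, button handling, and LED call are assumptions made for illustration, not Audi's actual implementation.

```python
import time
from typing import Callable

# Both steering-wheel buttons must be pressed within this window of each other
# (an assumed value, purely for illustration).
ENGAGE_WINDOW_S = 0.5


def try_engage_pilot(left_pressed_at: float,
                     right_pressed_at: float,
                     pilot_available: bool,
                     set_led_color: Callable[[str], None]) -> bool:
    """Hand control to the car only if both buttons are pressed together."""
    if not pilot_available:
        return False  # this stretch of road hasn't been cleared for piloted driving
    if abs(left_pressed_at - right_pressed_at) > ENGAGE_WINDOW_S:
        return False  # a single stray press never hands control to the car
    set_led_color("blue-green")  # confirm the handover on the dashboard light strip
    return True


# Example: both buttons pressed 0.2 seconds apart while pilot mode is available.
now = time.monotonic()
engaged = try_engage_pilot(now, now + 0.2, pilot_available=True,
                           set_led_color=lambda color: print(f"LED strip: {color}"))
print("Pilot engaged:", engaged)
```

The point of the design, as in the missile-launch analogy, is that no single accidental input can flip the car into a new mode.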

The beautiful bay scrolled past the window. During our ride, talking with the test driver, I detected an unusual kind of awkwardness. As we chit-chatted, the test driver never stopped staring straight ahead, as if he was driving. Only he wasn't driving. I asked him: What, exactly, are you supposed to be doing right now? He smiled, as if the question wasn't his to answer. The supervising engineer, monitoring the ride from a laptop in the backseat, piped up: "The first three minutes you’re thinking, ‘This is crazy, this is the future!’ Then you get bored." We all laughed. But the very fact of the drive’s boringness was a feat. Boringness implies ease rather than fear. The driver’s hands weren’t hovering over the wheel waiting for something bad to happen, and neither were we. We were just enjoying the view.

A Ladder of Metaphors
After our ride in the A7, back at the lab, a small army of engineers and project managers had gathered to show off a new concept—Just a concept! they stressed, every time we talked—of what the next phase in driving might be. "And now, we would like to reveal something special for you!" said the project leader, Erik Glaser. He is shockingly young—compared to the many other stone-faced Germans standing around, he looked like the intern: gawky, earnest, wearing jeans, with a chinstrap of facial hair that likely began just a few years ago, around his junior year of college. But, like his boss Brian Lathrop, his experience suits his present job to an uncanny degree. At Carnegie Mellon, he helped design a robot programmed with an agenda: As it offered you snacks, it detected what you chose and tried to coax you into healthier choices—"Cookies again, huh?" It had an LED capable of expressing a subtle frown of judgment. Glaser’s challenge then was exactly the one he faced now: How do you build a smart robot that doesn’t freak people out?

Off to the side of the garage where their test car was usually parked was a black cloth draped over something bulky, about the size of a couch. An assistant gently rolled back the cloth: Voila! Here was a simulated dashboard, complete with a slick-looking steering wheel. But when I drew close, I could see it still bore the scuffs of a working prototype. "This is a working-as-of-last-night prototype," Glaser said, his eyes red-rimmed with fatigue. The steering wheel, a year and a half in development, had just been bolted into the simulator hours before.

The prototype steering wheel, which recedes when the car is in control, exposing infotainment touch screens. Photo: Audi

Metaphors allow us to understand new technology, and this invention was no different. The design guru Don Norman once suggested that the controls of a driverless car should be like a horse’s reins: Pull up the slack, and you’re in control. Let them loose, and the car is. The point of the metaphor isn’t just control, but also safety: With a horse, even when it’s trotting on loose reins, you know that its own self-preservation will keep you both from going over a cliff. The steering-wheel idea I was seeing bore a striking echo of Norman’s old insight. When you were driving, you drew the steering wheel close. But when the car was driving, the steering wheel receded by about 7 or 8 inches—like loosened reins, you could still grab it back if you wanted to take control. And just like loosened reins, the steering wheel, by receding from your grasp, actually told you that the car was taking over now.

There are millions of concepts that simply ditch the technology we’re used to. The Audi example was different. Its cleverness lay in seeming so obvious—in being a straightforward retrofit of the steering wheel for the next decade as we get accustomed to new ways of driving. You can see these exchanges happening all the time, in your everyday life.

To take just one example, if you use an Apple laptop, you probably noticed when the scrolling direction of the trackpad changed from moving a page down when you scrolled up, to "natural"—meaning that when you swipe down, the page moves down. The former works like a spyglass: As you move the spyglass down a page, the bit you’re reading moves up into view. But natural scrolling is different. It’s as if you’ve got the page in your hand, instead of a spyglass. It’s as if you’re pushing the page upward as you read. The first metaphor made sense in the era of desktops, when windows functioned like spyglasses onto the content of a page. The latter metaphor makes sense only with touch screens, and the idea that the thing you previously thought was a screen had in fact become something like paper. Point being, our metaphors for how technology works change as we assimilate old technologies and adapt to new ones.

You can see this exchange happening with the steering wheel—itself a metaphor 150 years in the making. Before the earliest steering wheels appeared in motorized tractors and sleds, there was the tiller, a steering technology borrowed from boats. And on a boat, pushing the tiller one way swings the rudder and turns the vessel the other way. So proto–steering wheels actually moved a vehicle in the opposite direction from the one you steered toward. But the car evolved to become familiar in its own right. The metaphorical reference to boating was lost. And so it became "natural" to turn a wheel right, and have the car actually turn right.

Technology is like that: We don’t ditch what we have. We constantly update our metaphors, trying to find familiar handholds that quietly explain how a technology works. In digesting new technologies, as we climb a ladder of metaphors, each rung might follow the one before. Over time, we find ourselves further and further from the rungs we started with, so that we eventually leave them behind, like so many tiller-inspired steering wheels.

The deeper lesson in all this is that people naturally get frustrated when something doesn’t do exactly what they imagined; they get lost when things don't work as assumed. Part of the reason the Tesla example was so mortifying is that in calling that new feature "Autopilot," Tesla planted an idea in the heads of its users about what a car driving itself should do. It invited drivers to supply their own ideas about "autopilot," then sent them on their way. And when there was a gap between what Autopilot did and how people imagined it? Tesla Autopilot tried to kill me.

We demand that new technologies do not only what they promise, but what we imagine. We demand that they behave in a way that we can guess, without ever having used them before. That’s what lies behind the mysterious notion of user friendliness.

A useful bit of floor decoration in the 3-D printing lab at Volkswagen's Electronics Research Laboratory

How What We Expect of People Influences What We Expect of Machines
We’re only a few years removed from the naive assumption that autonomous driving would simply mean a car with an autopilot button—indeed, that’s probably what most people picture. "Three or four years ago, when we started working on human-machine interaction concepts for self-driving vehicles, no one thought about it," Lathrop said. But the more he and his peers noodled over the problem, the hairier it became. Lathrop, in particular, was keenly attuned to one simple factor that has bedeviled us ever since we began to fly the friendly skies.

Recall the first principle that Lathrop laid out for designing autonomous cars—that the driver has to know whether the car is driving itself. That harks back to probably the oldest dictum in interface design: Mode confusion has been implicated in many airplane crashes, and that insight helped give rise to the field of human-computer interaction. Think about all the times you’ve heard news reports about a pilot being confused about whether the flaps of the wings were down, or whether the autopilot was properly set. If you’ve ever failed to realize that your car was in park when you hit the accelerator, or you’ve ever tried typing into the wrong window on your computer screen, you’ve been a victim of mode confusion.

But Lathrop points out that mode confusion in a car is actually even more menacing than in a plane—whereas pilots have hundreds of miles and many minutes to react to problems, drivers are faced with dozens of problems every minute. Moreover, drivers, unlike airplane pilots, aren’t experts at driving. The spread in skills and instincts among us is so large, it’s somewhat amazing that we’re crazy enough to drive at all.

This is why the steering-wheel concept is clever. In solving a simple problem—that of changing the radio station while the car was driving itself—it helped solve a bigger problem: that of telling the driver, simply and intuitively, that the car was driving itself. Design is like this: Sometimes, you solve one problem and end up solving a bunch of others down the line. Other times, one solution seems to create more problems than it solves. That’s the difference between good design and bad.

Talking to Lathrop, hearing all the years of research and care piled into every detail, it all got to feel almost comically complicated. When new issues crop up after every other one is solved, where’s the end? But it turns out that there’s a more basic way to frame our expectations of machines, one that’s more familiar and easy to grasp: Our expectations of machines are, to a startlingly consistent degree, well mapped to our expectations of actual human beings.

Consider what happens when you’re driving in your car, come to a stoplight, and pull out your phone to check a text message. We all know it’s wrong, but we’ve all done it anyway. Alone, you wouldn't think twice about it. But if you’re with a friend, she’d be smart to scold you: "Pay attention to the road!" Maybe you’d protest that you are paying attention, that you know what’s going on. But your friend can’t know that. She feels endangered because she doesn’t know what to expect of your next move on the road. She feels endangered because she doesn’t know that you’ve taken in all the information that she has—who’s crossing the road, how long it’s been since the light turned, the car that’s just pulled up alongside you. No matter how well they know each other, people who face a shared danger are constantly checking who knows what, and what to do next.

It is no different with a machine. The car also has to tell both the driver and the rider what it’s sensing. To solve that problem, the A7 shows you a map of your surroundings as the car sees them: outlines of the other cars on the road, shown on a simple, stripped-down display. On the one hand, this doesn’t seem like new information. After all, it’s merely a crude representation of what you can see simply by looking out the window. But in fact, the display is telling you that the car sees what you see—and then some. There’s a screen that tells you what the next move will be—"left turn"—with a countdown timer until it happens. Simple as it sounds, that bit of information means the difference between feeling like you’re taking a ride and feeling like you’ve been taken hostage.

The sense of safety you get from that is akin to riding in a car, looking over, and seeing that the driver has both hands on the wheel, eyes forward. She’s using her turn signals, checking her blind spots. We’re constantly checking out the people around us, to see if they see what we do, to guess whether they know what we know. Our expectations are no different when our partner is a car, driving itself.

The Culture Hidden in a Car’s Behavior
A few months ago, in an empty, anonymous parking lot, beneath a massive white tent, the researchers at Volkswagen's UX lab gathered to solve a problem that no one else was thinking about. This was an experiment, and the white tent was in the name of science: to control the light that spilled across the bare-bones street intersection they had created overnight. There were stop signs and crosswalks and lanes. There was an Audi A7 idling just beyond the intersection, with its windows blacked out so that no one could see inside—so that no one could see there wasn’t a driver inside. But the car wasn’t what was being tested. Rather, it was the bystanders who’d been rounded up at random, to test how they’d behave in a situation almost none of us have ever seen before: When a car driven by a computer pulls up to a stoplight, what makes you feel safe enough to actually cross the road?

Very few people in the tribe of geeks researching autonomous cars had given much thought to the issue. There were cracks, as a result: Out on the streets of Mountain View, some clever bike rider realized that by pumping his brakes, he could tease one of Google’s driverless cars into a standstill. The Audi team, for its part, was humble enough to think: We don’t really know how people will behave. At the extreme, you could imagine terror—say, if the car behaved so erratically that people raced across the intersection with their breath held. But instead, something stranger happened: People saw the car and blithely stepped in front of it as if nothing were amiss. "I thought people would be conservative," Glaser said. "People were actually fearless."

The pedestrian test. In the car's windshield, you can see a prototype display that lets onlookers know that the car sees them. Photo: Audi

The fearlessness was an unforeseen outcome of a very nuanced detail of the car. Even though the Audi team had loaded the car with clever outward displays telling pedestrians they could cross the road, it turns out that the pedestrians were so trusting because the car behaved respectfully. It came to a slow, measured stop before it reached the intersection, just as a responsible human driver would. People crossed confidently because the car was behaving in a socially acceptable way. With that behavior comes a slew of mores: that you’re not going to suddenly gun the engine, that you’re not a psycho out to do harm. "The physical driving behavior of the car is actually its own human-machine interface," says Glaser.

Cars are just one example of the general truth that there’s a culture to the way everything around us behaves. This insight offers two forking choices. We can ignore it at our peril, as Tesla did; the Silicon Valley mindset of simply dropping beta tests upon an unsuspecting populace may be not only naive but counterproductive. After all, our first impressions always color our willingness to try again.

On the other hand, we can defer to what we don’t know about the culture of the objects that live around us. We can recognize that the key to making us all comfortable with the future lies in appreciating all of the nuances of what we already have—in realizing, for example, that the way a car pulls up to a curb is an interface all its own. We can watch actual humans, in hopes of making things more humane.

As new technologies replace human tasks, they will have to behave in ways that we can relate to. It’s not enough to make a dashboard easy to use, or easy to read. And while we don’t need a dashboard to have a full-blown personality, it’ll have to have personality traits. It’ll need to be calming, communicative, or helpful, as the situation demands.

Glaser, the young designer inventing the future of driving, muses about just how long it might be until fully autonomous cars arrive in our driveways. "We’re bootstrapping this technology," he says. "The gaps will get filled in. But we need handholds along the way."
