
4 Ways Watson Will Make Self-Driving Cars Less Terrifying

Getting behind a wheel you can’t control is a scary experience. IBM thinks Watson could soften the hysteria.


You need only glance at Google’s self-driving car to understand its design intent: to look as nonthreatening as possible, with the soft curves of a Playskool toy rather than the razor edges of a supercar.

That approach is fine for alleviating the concerns of pedestrians. But what about all of us who sit in the driver’s seat—with no steering wheel, and no means to take control if the algorithm goes mad?

Now, IBM has teamed up with the experimental auto manufacturer Local Motors to put its powerful AI, Watson, inside a self-driving, 3D-printed, electric car called Olli. Olli is a rideshare prototype that carries up to 12 passengers; it will be tested on Washington, D.C., streets starting today, then in Miami and Las Vegas in late 2016.

But in this new role, Watson won’t be the one identifying traffic lanes and stoplights. (Local Motors is handling that with Olli.) Instead, Watson will be white-labeled inside the vehicle, drawing on its 30 sensors (including 3D cameras, GPS, a passenger counter, and various suspension and performance sensors) for other tasks, like human-car relations. IBM will work with Local Motors over the next three months to build that experience into the ride.

"It’s about the engagement of a person with the vehicle, and the ability to build trust with a vehicle," says Bret Greenstein, VP IBM IoT. "I hadn’t ridden in an autonomous vehicle until this one. My first thought was, ‘This is kind of scary, I’m trusting a machine to do what took me years to learn to do well.’ You have to establish a certain trust."

With that in mind, here are four ways that Watson could make the autonomous commute less terrifying, and generally more pleasant, for all of us:

Answering Your Dumb Questions—Like Any Driver

The key to interacting with Watson-as-driver is that you’ll be able to talk to the vehicle, much as you’d interact with Apple’s Siri or Amazon’s Alexa. Rather than typing your destination into a screen as you would with Uber today, Watson begins your ride by listening for your destination. If Watson already knows you, you can just say "work" or "home," and it will tell you when you should expect to arrive.

But conversation will also be used to remove layers of abstraction in the journey. If your Uber ride takes twice as long to get somewhere as anticipated, you might never know why—and if the driver is rerouted, they might never know why, either. Instead, Watson lets you ask questions like "why are we stopped?" "is there traffic ahead?" or "what’s the weather tomorrow?"

What do these questions have in common? "They’re things you might ask the driver," says Greenstein. They’re little questions that reinforce our feelings of safety and convenience that we might otherwise take for granted.
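To make that concrete, here is a minimal sketch, in Python, of the kind of question routing described above. Everything in it is hypothetical: the intents, the canned answers, and the simple keyword matcher are stand-ins for the speech and natural-language services a real Watson integration would use.

```python
# Hypothetical sketch of routing a rider's spoken question to an answer.
# Intent phrases and ride-state answers are invented for illustration;
# a real system would use proper speech recognition and NLU.

RIDE_STATE = {
    "stopped_reason": "We're waiting at a red light.",
    "traffic_ahead": "Light traffic for the next two miles.",
    "weather_tomorrow": "Sunny tomorrow, with a high of 78F.",
}

INTENTS = {
    "why are we stopped": "stopped_reason",
    "is there traffic": "traffic_ahead",
    "weather tomorrow": "weather_tomorrow",
}

def answer(question: str) -> str:
    """Match a rider's question to a known intent and answer it."""
    q = question.lower()
    for phrase, state_key in INTENTS.items():
        if phrase in q:
            return RIDE_STATE[state_key]
    return "I'm not sure, but I can find out."

if __name__ == "__main__":
    print(answer("Why are we stopped?"))      # We're waiting at a red light.
    print(answer("Is there traffic ahead?"))  # Light traffic for the next two miles.
```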

Sensing The Things We Don’t Say

However, humans are enigmatic creatures. As Greenstein puts it, "Everything you say to a car is actually important data to someone operating a vehicle on your behalf."

But it’s not always a passenger's words. It might be their tone. People in a car expect the driver to react to their emotional states. And so as a stretch challenge of the project, Watson will be deployed to track and respond to sentiment. If a rider sounds nervous, they might be worried the car is driving too fast—so Watson should slow things down. If someone seems exasperated that they are late, the car should speed up, or maybe take their mind off it with the right song.

"When passengers leave the vehicle they should be smiling," says Greenstein. And for that, Watson will need to learn ways to make people smile.

Making Sure Everyone’s On Time—But Also Flexible

If you step into Olli and say "I need to be at work by 9:15," Watson could be smart enough to know where your work is (thanks to a user profile, which may one day be stored in an Olli app), to know where the other four people in the car are going (they already told it), and to check all of your traffic and distances (via the cloud) to see if it’s feasible to take a route that will get everyone where they're going on time.

Watson says you’ll be on time. But then you ask to drop your laundry off on the way. (And, by the way, if I’m with you in that vehicle, I’m audibly groaning to make sure both Watson and you know how annoyed I am.) "It can say, ‘If you stop there, you might be 10 minutes late,’" says Greenstein. "It will give you confidence you’ll make it, or let you know you won't and why."

Again, it’s about removing layers of abstraction and giving the rider control. But Greenstein hints at the possibility of leveraging Watson’s cognitive smarts here, too, by cross-referencing tomorrow's traffic or weather patterns with your commute. So Watson might suggest, "If you save your dry cleaning and do it tomorrow, you can avoid traffic," says Greenstein. Or, along similar lines, "The weather is worse tomorrow, so I’ll pick you up 10 minutes early."
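The laundry-stop arithmetic itself is simple enough to sketch. In this illustrative example the travel times are hard-coded assumptions standing in for what a real routing service would return:

```python
# Illustrative feasibility check for the dry-cleaning detour scenario.
# Travel times would really come from a routing service; they are
# hard-coded here so the sketch runs on its own.

from datetime import datetime, timedelta

def detour_verdict(now: datetime, deadline: datetime,
                   base_minutes: int, detour_minutes: int) -> str:
    """Compare the direct ETA with the ETA including an extra stop."""
    detour_eta = now + timedelta(minutes=base_minutes + detour_minutes)
    if detour_eta <= deadline:
        return "You can make the stop and still arrive on time."
    late_by = int((detour_eta - deadline).total_seconds() // 60)
    return f"If you stop there, you might be {late_by} minutes late."

now = datetime(2016, 6, 16, 8, 40)
deadline = datetime(2016, 6, 16, 9, 15)   # "I need to be at work by 9:15"
print(detour_verdict(now, deadline, base_minutes=30, detour_minutes=15))
# If you stop there, you might be 10 minutes late.
```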

Explaining That Scary Red Light

But for all of the natural language possibilities in this Local Motors project, a car’s AI might soon be able to do a lot more than converse with you and check traffic. In fact, Greenstein tells us that IBM is currently working with major automobile manufacturers in North America, Europe, and Asia, developing technologies that are anywhere from one to four years out from market.

"Some [work] is voice interface, hands-free driving, simpler user experience," Greenstein says. "There are also some car companies who are talking to us about how to make maintenance and diagnostics better, so if something happens, you don't just get a red light."

You know the red light: that scary "check engine" notification that could mean your tire pressure is low or that your engine is going to explode in 15 seconds (I assume; I don’t know cars). To find out what that light means today, a mechanic has to connect to your car’s computer and read off an error code.

Not only could Watson remove that layer of abstraction between your car’s computer and you ("Why is my red light on, Watson?"); Greenstein teases that, using machine learning and enough embedded sensors in the car, Watson might predict issues and offer some level of preventive maintenance, even asking the driver, "Can I make an appointment for you?"
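A toy version of both ideas, translating an opaque trouble code into plain language and flagging wear before it becomes a red light, might look like this. The codes, thresholds, and messages are invented for illustration; a production system would read real OBD-II data and rely on a learned failure model rather than fixed cutoffs.

```python
# Hypothetical sketch: explain a diagnostic trouble code in plain
# language, and flag wear before it trips the dreaded red light.
# Codes and thresholds are invented stand-ins for real OBD-II data
# and a learned prediction model.

from typing import Optional

DTC_PLAIN = {
    "P0171": "The engine is running lean; likely a vacuum leak or a dirty sensor.",
    "C0750": "A tire-pressure sensor is reporting low pressure.",
}

def explain_red_light(code: str) -> str:
    return DTC_PLAIN.get(code, "Unknown issue; a shop scan is needed.")

def maintenance_check(battery_voltage: float, brake_pad_mm: float) -> Optional[str]:
    # Invented thresholds standing in for a learned failure model.
    if battery_voltage < 12.0:
        return "Your battery is weakening. Can I make an appointment for you?"
    if brake_pad_mm < 3.0:
        return "Brake pads are wearing thin. Can I make an appointment for you?"
    return None

print(explain_red_light("C0750"))
print(maintenance_check(battery_voltage=11.8, brake_pad_mm=8.0))
```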

It sounds like a wonderful alternative to the anonymous red lights of today. But at the same time, it’s a demonstration of just how much power our voice interfaces will have in the future. Will Watson automatically book me an appointment at an expensive dealer, or a trusted cheap mechanic on Yelp? Will Watson try to up-sell me service based upon vague notions of preventative maintenance, or very provable ones that will save lives and money?

Because when Watson is installed in a Toyota, that AI doesn’t just work for you or IBM anymore; it’s working for the car company. And no one wants to be outsmarted at the cost of their own cash.
