
The Real Story Behind Microsoft’s Quietly Brilliant AI Design

The company studied personal assistants (human ones) to understand how to make a great machine assistant.

[Photo: Microsoft]

With all the talk of what AI will do to change the world, you might not notice the subtle ways it’s already woven into our lives. Case in point: PowerPoint Designer, a feature that lives in the current version of the software. Each time you create a new slide, Designer is invisibly scanning your content, trying to figure out better ways to make that content shine based on millions of other PowerPoint presentations. Click on the Designer tab, and instead of your haphazardly pasted picture and bullet points, you might see three different options, with better typeface choices and a frame around the image that matches its tone. What might have seemed futuristic a few years ago is now dead-simple to use. But that simplicity belies something strange and fascinating: When Microsoft was first testing Designer, it actually felt utterly and ineffably off.

Jon Friedman and Ronette Lawrence [Photo: Microsoft]

“In the way it was first tested, the tone of the words and animations in Designer made it feel like the computer knew better than you,” explains Jon Friedman, who as partner director of design at Microsoft leads the vision for the company’s Office suite. There was something stranger still: If you kept following Designer’s recommendations, the end result was a presentation that didn’t feel like you’d made it anymore. The computer, it seems, simply wasn’t taking into account what the rest of your presentation looked like. Instead, it was taking over, step-by-step. Eventually, Microsoft fixed that problem, unveiling a subtly more helpful, more neutral feature that made more context-sensitive recommendations. But behind those changes lay one of the company’s governing principles for AI design: that humans be the hero of any story.

Those principles seem obvious enough, when you lay them all out: “Humans are the heroes,” “Balance EQ and IQ,” “Honor societal values,” “Respect the context,” and “Evolve over time.” But behind them lies an unusual origin story, one that tells you a lot about where design is going. The principles didn’t emerge fully formed. Rather, they were the end result of a process started over five years ago, in which Microsoft spent untold millions trying to make a better AI assistant by watching how actual human assistants gain the trust of their clients.

PowerPoint Designer [Image: Microsoft]

Watching Trust Form In The Wild

Five years ago, Microsoft was busy trying to come up with a competitor to Apple’s Siri when the design teams noticed something strange about the prototypes they were testing with users. They had two basic flavors of assistant: one in which the user helped train the assistant, and one that simply guessed what a person needed and spat it out. It turns out users were far more forgiving of the former, and not particularly interested in the latter even if it performed equally well. There was something about training up the assistant that made people more forgiving of its mistakes.

“To study that dynamic, we started interviewing real assistants, asking them to reflect on their relationships and how their tasks evolved over time,” explains Ronette Lawrence, Microsoft’s senior director for product planning and research. That research then evolved into studying assistants new to the job, watching them as they formed relationships with their new clients, and asking them to keep journals of their feelings. But getting at the truth of those relationships was a tricky business. Typically, humans are poor at articulating their emotions in the moment, and so the research methods revolved around asking people to think of music or art that captured their moods and sentiments. “It’s well-formed science that if you get someone to think about music, and the emotions that music brings up, people will get closer to the underlying emotions they feel,” explains Lawrence.

All that research came at a turning point for machine learning in the wild. This was the era when Google Now was just beginning to release features such as predictive forecasts for your commute or how long you might be waiting at a bus stop. Lawrence’s team noticed that users found those features magical, and yet if the features were wrong just once or twice, users would opt to turn them off altogether. Seeing such responses to even small error rates, Lawrence’s team realized that this idea of trust was thorny, and important: Unless the human felt some kind of connection to the machine, they’d never give it a chance to work well after it made even one mistake.

That insight dovetailed with the assistant research they were doing. Time and again, it turned out that assistants succeeded not by being smarter than their clients, but rather by deferring to them. This applied directly to how a machine assistant might behave. “The more tasks you take away from people, the more you have to watch for emotions like, ‘Did it make me feel more powerful and smarter, or do I just feel like the system is smart?'” explains Lawrence. “Having people use a system that feels more powerful than them sets up this dynamic where it’s not your partner anymore. It brings up concerns about whether the machine is working on your behalf.”


Microsoft actually tested that dynamic in how people reacted to a dummy “assistant,” a human behind a fake screen, acting like the brains of an interface. They monitored the heart rates and pupil dilation of users as they reacted to a system that seemed either powerful or deferential. And what it all reinforced was something that the real-life assistants had testified to all along: You garner trust by being open about your limitations, letting the client’s expertise lead the way. But the real secret to gaining trust? Quietly learning more and more about someone until you can surprise them with a level of thoughtfulness that they’d never expected. Say, the perfect gift suggestion for a sibling, at just the right time. Or a hotel recommendation that seemed just right for them. A good assistant would give you options and then let you choose, rather than foisting its opinion on you.

PowerPoint Designer [Image: Microsoft]

Screens That Evolve Over Time

Those lessons live on in how Microsoft approaches AI design today. For one, there’s the principle that humans should be the stars. There’s also the principle that the relationship should evolve over time. To that end, Microsoft is now trying to build a different model of how AI adapts. The idea isn’t so much to wow anyone up front with what a machine knows, but to quietly gain a user’s faith by offering options that they can choose among, and then, when the time is right, to offer something smarter than you might have expected.

“The opposite would be overload, giving people all these amazing capabilities at once that would still have a 30% probability of being wrong,” says Friedman. “Those models don’t have pacing that builds to high-confidence recommendations, then making a trust leap. We’re trying to pace things and work into a relationship, because you have to mirror how human relationships are built.”

Some examples he gives: noting when someone is writing a research paper, then figuring out the point when their attention is wandering just enough that it’s polite to suggest a library of related content from their social graph. Or perhaps watching when someone is creating a PowerPoint, then figuring out which other colleagues have made similar presentations, and what slides might be worth borrowing. “The system might help with an introduction, if you don’t have the permissions around the content yet,” adds Friedman. All these instances imply something antithetical to the way UX has evolved over the last 20 years. Instead of a series of screens optimized for usability, there are screens that evolve over time, as the relationship with the user grows.

The point, says Lawrence, is that there’s a tension between what a computer can do and what it should do. “How do you respect the user? The designs that get it are mindful of explaining how they know what they know, and giving you choice.” But fostering the right kinds of relationships won’t just be about creating machines and watching how well people like them. Instead, it’ll be about going back to a deeper truth: the things people do among themselves to form relationships, when there are no machines involved at all.

About the author

Cliff is director of product innovation at Fast Company, founding editor of Co.Design, and former design editor at both Fast Company and Wired.
