
Can We Design Trust Between Humans and Artificial Intelligence?

The successful adoption of AI requires empathy on the part of both people and computers, says Artefact's Patrick Mankins.

[Illustrations: Petr Strnad via Shutterstock]

For many years, interacting with artificial intelligence was the stuff of science fiction and academic projects. But as smart systems take over more and more responsibilities, replace jobs, and become involved in complex, emotionally charged decisions, figuring out how to collaborate with them has become a pragmatic problem that needs pragmatic solutions.

Machine learning and cognitive systems are now a major part of many products people interact with every day. But to fully exploit the potential of artificial intelligence, people need much richer ways of communicating with the systems they use. The role of designers is to figure out how to build collaborative relationships between people and machines, so that smart systems enhance human creativity and agency rather than simply replacing them.

Why trust and empathy matter
Siri doesn’t make life-changing decisions for you, so it’s okay if it isn’t entirely clear how it reaches its conclusions. Interacting with a system that makes an important decision for you, however, by taking ambiguous input, doing something fantastically complex, and then giving you ambiguous output, requires much more than a few buttons and a status indicator. This kind of interaction requires trust and empathy between people and technology. If the purpose of smart systems is to make sophisticated, subtle decisions so people don’t have to, they are pointless if people can’t trust them to do so. Crafting the relationship between people and the technology we use becomes as critical as building faster processors.

Imagine you are commuting in an autonomous car when it suddenly slams on the brakes, changes course, and heads off in a new direction. Maybe the car saw something you didn’t, or learned about an accident ahead. But if it doesn’t communicate any of this to you, and you don’t trust it to make a snap decision, a change in course without any indication of why will be deeply unsettling. For the most part, cars don’t face morally challenging decisions, but sometimes they do: which way to swerve in a crowded accident situation, for example. Before self-driving cars can really take off, people will probably have to trust their cars to make complex, sometimes moral, decisions on their behalf, much as they do when another person is driving. Areas like health care are even more fraught, and AI is getting involved there, too.

Creating a feedback loop
In a conversation, I may misunderstand what you ask me, or I may need more information; either way, the back-and-forth nature of the exchange lets you quickly correct my errors and lets me fill in any gaps in what I need to know. A similar human-to-machine interaction allows a system to get the information it needs to understand a question, even when the information necessary for understanding the problem can’t be defined ahead of time. This also takes advantage of one of the key distinguishing capabilities of many AI systems: they know when they don’t understand something.
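The back-and-forth described above can be sketched as a simple confidence-gated loop. This is a hypothetical illustration, not any particular product's API: the function names, the toy model, and the 0.7 threshold are all invented for the sketch. The system acts only when its confidence in an interpretation clears the threshold; otherwise it asks a clarifying question instead of guessing.

```python
# Sketch of a clarification loop. The system answers only when its
# confidence in an interpretation is high enough; otherwise it asks
# for more information. All names and values here are illustrative.

def interpret(query: str) -> tuple[str, float]:
    """Toy stand-in for a real language-understanding model.
    Returns (interpretation, confidence)."""
    if "tomorrow" in query:
        return ("weather_forecast", 0.9)
    return ("unknown", 0.3)

def respond(query: str, threshold: float = 0.7) -> str:
    interpretation, confidence = interpret(query)
    if confidence >= threshold:
        return f"Acting on: {interpretation}"
    # The system knows it doesn't understand, so it asks instead of guessing.
    return "I'm not sure what you mean -- could you rephrase that?"

print(respond("What's the weather tomorrow?"))  # acts on the query
print(respond("Do the thing"))                  # asks for clarification
```

The key design point is the explicit low-confidence branch: rather than forcing a best guess, the system turns uncertainty into another round of conversation, which is what lets the feedback loop close.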

Once a system gains this sort of self-awareness, a fundamentally different kind of interaction becomes possible. One of the biggest challenges of interface design is figuring out what information is appropriate in a given context, so that the rest can be removed or de-emphasized. What happens when the system itself can make these kinds of judgments?

Designing for mistakes
Complex systems, like people, make mistakes, and avoiding them completely is impossible. Our goal should be to reduce their impact, encourage users to forgive them, and help the system learn over time. As systems become both personalized and capable of learning, the ability for users to easily teach them how to behave becomes more important and more powerful.

Apple’s decision to have iPhone alarms sound even in silent mode is an interesting example of this problem. Silent mode disables only the sounds you didn’t explicitly ask for, but people’s expectations are generally simpler: when the off switch is flipped, everything turns off. This mismatch in expectations has led to problems, such as an alarm going off in a movie theater. Objectively, the alternative problem, missing a meeting because your phone was in silent mode and did not wake you up, is worse. Yet both problems could be mitigated or avoided if there were better mutual understanding and greater sensitivity to the impact of the system’s mistakes. Right now, the impact of these mistakes may be fairly trivial, but the stakes are rising rapidly.
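The rule behind the mismatch fits in a few lines. This is a hypothetical reconstruction of the behavior as described, not Apple's actual implementation: silent mode suppresses only unsolicited sounds, while a sound the user explicitly scheduled, like a wake-up alarm, bypasses it.

```python
# Hypothetical sketch of the rule described above: silent mode mutes
# sounds the user didn't explicitly request, but a user-set alarm
# counts as an explicit request and still rings.

def should_play(sound_type: str, user_requested: bool, silent_mode: bool) -> bool:
    if not silent_mode:
        return True
    # In silent mode, only explicitly requested sounds (e.g. alarms) play.
    return user_requested

# A notification ping is suppressed in silent mode...
print(should_play("notification", user_requested=False, silent_mode=True))  # False
# ...but the wake-up alarm the user set still goes off.
print(should_play("alarm", user_requested=True, silent_mode=True))  # True
```

The logic is internally consistent; the trouble is that the user's mental model is the simpler rule `if silent_mode: return False`, and nothing in the interface surfaces the difference until an alarm rings in a theater.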

Building trust and collaboration
What makes getting on a plane or a bus driven by a complete stranger something people don’t think twice about, while the idea of getting into a driverless vehicle causes anxiety? Part of it is that we generally perceive other people to be reasonably competent drivers, something machines can probably manage, but there is more to it than that. We understand, on an intuitive level, why people behave the way they do, and we feel we can predict how they will behave. We don’t have this empathy for current smart systems.

To properly treat patients, a doctor, whether human or virtual, needs to be more than smart: she must also be comforting, convincing, and able to inspire confidence. Similarly, getting into a driverless car without a steering wheel is going to be unnerving until we figure out how to build the kind of trust we have with other people.

In one of our current projects at Artefact, we are exploring the future of cars as they transition to fully autonomous control. Showing the car’s interpretation of its surroundings in partially autonomous mode can help build drivers’ trust in its ability to react appropriately to situations like another car suddenly changing lanes or a pedestrian stepping into the street. It helps people see that the car is capable of taking over as they give up more and more agency while driving. This idea of surfacing a system’s interpretation or understanding is also core to some interesting Watson interfaces that help people ask, and get answers to, complex, high-level questions.

Many people are warning about the potential for AI-driven automation to destroy the economy by eliminating most jobs. To take the turn of the last century as an example, the widespread introduction of the car had a huge positive impact on most people’s lives, but it also put almost all horses out of work. At the beginning of the 21st century, are we the drivers who will benefit from today’s technical revolution, or are we the horses hauling construction materials to Henry Ford’s new factory? Developing AI systems that work collaboratively with people rather than simply replacing them can help ensure that the benefits of AI are spread among more people, creating systems that are smarter than either people or machines alone.
