The Biggest Problem With Robots Is That We Trust Them—Even When They're Wrong

And even when they're very obviously leading us into danger.

Last week, the prolific robotics company Boston Dynamics released a new video of its Atlas robot being kicked and pushed to show off its resilience. The Internet was awash in jokes about how Atlas would eventually rise up against its creators. It's one of two popular narratives about robots—that they’re either hilariously helpless and not to be trusted, or they’re ruthlessly intelligent and not to be trusted.

But researchers studying human-robot interaction at Georgia Tech have a very different take on those cultural tropes. They say humans can be way too trusting of robots—and that our inclination to follow our robotic overlords is a genuinely dangerous behavior that needs to be taken into account when designing everything from autonomous vehicles to emergency evacuation tech.

The Georgia Tech team, which is presenting its work next week at the ACM/IEEE International Conference on Human-Robot Interaction, had originally set out to test a wheeled robot designed to guide people out of skyscrapers by the safest route during a fire. Using fake smoke and a real fire alarm, the researchers tested whether subjects would trust the robot to lead them to safety even if it made a major mistake during the emergency—such as taking a long, "circuitous" route, entering a darkened room, or pointing at a corner as if confused.

The team found that not only did subjects trust the wayfinding robot, they kept trusting it even after it made dumb mistakes. "We wanted to ask the question about whether people would be willing to trust these rescue robots," said senior researcher Alan Wagner yesterday in a news release. "A more important question now might be to ask how to prevent them from trusting these robots too much."

The researchers then designed a follow-up experiment to make the robot's incompetence even more obvious—or so they thought. They created a series of new behaviors that would clearly signal the robot was broken or wrong. In one case, the robot spun in place while a scientist told subjects it was broken before the faux fire started. Yet when the fire alarm went off, subjects still followed the "broken" robot. In another experiment, the robot directed participants into a dark room blocked by a desk or couch. Some participants still tried to "squeeze" into the dark room, while others simply stood there. "Experimenters retrieved them after it became clear that they would not leave the robot," the authors write.

It’s a bizarre demonstration, to say the least. "We absolutely didn’t expect this," lead author Paul Robinette told Georgia Tech News yesterday. He and his coauthors say that designers working with autonomous systems need to think carefully about how their UX conveys a malfunction to users—especially in high-risk use cases, such as emergency situations or while driving. "In high-risk situations people may blindly follow or accept orders from a robot without much regard to the content or reasonableness of those instructions," they write (which sounds familiar even when it's a human giving directions).

Over email, Robinette told Co.Design that, going forward, he hopes to study the best ways for robots to clearly communicate their errors to people. In the meantime, the paper's advice to designers and engineers? Either make sure your system works absolutely perfectly, or invest the time to give your robotic or autonomous device a clear, legible way of telling users when it's acting dumb. Because, regardless of how many jokes we make about not trusting Atlas, our misplaced trust in autonomous technology can be dangerous.
