The Psychology Of Anthropomorphic Robots

Subtle anthropomorphic cues, such as faces or voices, make robots seem more human, and more trustworthy. Google's self-driving car is onto something.

By now we've all seen the prototype for Google's self-driving car: a teeny little road bopper shaped like a gumdrop. What's immediately striking is that the car seems to have a smiley face designed into the front. Headlights for eyes, a forward sensor for a nose, a bumper line for a mouth tilted slightly upward in a grin. I mean, we all see humanity in odd places—the man in the moon, a face in the clouds, Jesus in a potato chip. The smiley face is totally there.

This might annoy some traditional car designers; Chris Bangle told Co.Design's Dan Nosowitz that the face was "supposed to be cutesy but is awful weak." But if Google's goal is to get people to trust the car, then that "weak" little smiley face is actually quite powerful. New evidence suggests that very subtle human features—a name, or a voice, and especially a face—can help a robot seem mindful and dependable rather than cold and threatening.

"You want the self-driving car to be intelligent, first and foremost," psychologist Adam Waytz of Northwestern University tells Co.Design. "You also want it to be socially responsive and empathic, as well."

Trusting the car to drive safely is among the biggest hurdles to the ascent of the autonomous car. It's an odd fear, considering how terrible human drivers are, but a natural one. I can attest first-hand that anxiety melts away when you actually ride in Google's self-driving car. But making consumers comfortable enough for a test drive (test ride?) will be a challenge for car makers preparing for that not-so-distant day when driverless cars hit the showroom.

Cars pretending to be human put us at ease.

The work of Waytz and colleagues suggests that simple anthropomorphic design elements might do the trick. In one recent study, the researchers recruited 100 test participants to operate a driving simulator through two courses. Some drove a normal manual simulator. Some operated a semi-autonomous simulator capable of controlling its own speed and steering. Some operated a semi-autonomous car with a name (Iris), a gender (female), and a voice (pre-recorded human audio files).

Not only did test participants humanize Iris—they rated her as smarter and more capable of feeling, anticipating, and planning than the other simulators—they also trusted her more. In self-reports, participants operating Iris said they felt safer in the car and more willing to give up control than those in the normal simulator. Their bodies confirmed the feeling: heart-rate monitors showed a smaller change in arousal among Iris drivers than in either of the other simulator groups.

To see just how far this trust would extend, the researchers also arranged for an unavoidable crash to occur during the simulation. The crash was clearly the other car's fault. But test participants operating the Iris car blamed their simulator significantly less than those in the unnamed self-driving simulator. Why? Remarkably, the researchers believe Iris's human features led participants to forgive her, just as you might forgive a competent human driver for an unavoidable accident.

"It took very little to assume a lot of humanity underlying the car—all we did was give it a voice, a name, and a gender," Waytz says. "The implication for robot designers is that it takes very little for people to see the humanity in something. Simple cues will work."

Social intelligence is key if we're to trust the robots that help us.

The idea that people will trust robots that seem more human runs counter to conventional theories, which predict just the opposite. To some extent people may still fear human-like machines, and human features might be unnecessary for robots doing strictly physical jobs (say, working on an assembly line). But as robots shift into roles that require more human interaction—as health care assistants or autonomous taxi drivers—a certain degree of social intelligence will become increasingly important.

Waytz has outlined some general rules for designing robots that convey an optimal level of humanity: machine enough to be flawless, mindful enough to be relatable. Having a face is key—especially a cute face, which people have been shown to prefer in robots that fill emotional roles, such as a robot therapist. (Paro, the robot seal, helps the elderly.) Having a voice is also key. If Iris the car or Siri the phone doesn't convince you on that count, Scarlett Johansson the OS in Her will.

Google's self-driving car design meets these criteria: It's certainly got the cute face, and no doubt its navigation system will have a voice. "In terms of how effective is this in terms of evoking humanizing responses, I think it's highly effective," Waytz asserts. And there's no need to go overboard with the anthropomorphic cues; if a robot car felt too human, after all, we might start to worry that it would drive like one.
