The Tenuous Science Of Facial Recognition

Software developed to identify people's moods or intentions relies on the idea that facial expressions are universal. But they're not.

The eyes may be windows into the soul, but can we really glean that much information from staring into someone's face? Research suggests maybe not.

A whole business has developed around training computers to recognize our facial expressions, with companies like Emotient and Affectiva selling facial recognition software that supposedly reveals how focus groups respond to advertisements or how shoppers feel. Agencies like the CIA and the TSA have drawn on the facial emotion research of psychologist Paul Ekman to examine the tiniest changes in expression for signs of potential deception or ill intent. Companies like Apple and Google are also working on facial recognition technology, although Google has tried to keep Glass apps facial-recognition-free (for now, at least).

Yet there's a major issue with training computers (or even people) to read facial expressions to evaluate behavior: Sometimes you just can't tell what's going on inside someone's head by looking at his or her face. Northeastern University professor Lisa Feldman Barrett, a psychologist who studies emotions, writes in The New York Times that her research indicates "that human facial expressions, viewed on their own, are not universally understood."

She points out that much of what we know about emotional recognition, and the supposedly universal ability to identify certain expressions, comes from Ekman's experiments, in which study subjects matched photos of faces to a set of emotion words presented in a list:

In recent years, however, at my laboratory we began to worry that this research method was flawed. In particular, we suspected that by providing subjects with a preselected set of emotion words, these experiments had inadvertently 'primed' the subjects--in effect, hinting at the answers--and thus skewed the results.

In subsequent studies, her research team found that when experiments were designed to prevent priming, for example by asking participants to freely describe the emotion on a face rather than choose from a list of words, people's ability to identify the exact emotion in a photo plummeted.

[Image via Flickr user Erik Benson]

Contextual clues matter. Body positions, hand gestures, vocalizations, and social context all play a role in helping us read an emotional situation. Software built to decode people's expressions, whether to figure out if they like a certain scent of Tide or whether they're about to commit an act of terrorism, should always come with a caveat: faces alone can't tell us everything. At this point, Googling someone could probably tell you more.

Read Feldman Barrett's argument in The New York Times.

[Image: Ekman's facial expression via Wikipedia]
