Inside A Robot Eyeball, Science Will Decode Our Body Language

In social situations, words are often secondary: in fact, some scientists estimate that non-verbal cues account for as much as two-thirds of the way we communicate. Yet this non-verbal language is only dimly understood. In the words of the leading American anthropologist Edward Sapir, it is "an elaborate code that is written nowhere, known to no one, and understood by all."

To help understand the role non-verbal cues have in communication, a team of researchers at Carnegie Mellon’s Robotics Institute created the Panoptic Studio: a “massively multiview system for social motion capture” that combines 480 separate video streams into an insanely intricate scan of the way people move when they talk.

Think of it as an infinitely granular Microsoft Kinect. The Panoptic Studio is basically a big, enclosed box, 17 feet wide and 15 feet tall, that a group of people can step into to talk. As they communicate with one another, arrays of cameras lining the room record their every movement. Individually, each camera captures pretty low-resolution video (just 640×480), but combined, the Panoptic Studio can gather over 24.9 gigabits per second of data on how the people inside are communicating non-verbally.
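As a rough sanity check on that bandwidth figure, here's a back-of-the-envelope calculation. The camera count and resolution come from the article; the frame rate and bits per pixel are assumptions for illustration, which is why the result is in the same ballpark as, but not identical to, the quoted 24.9 Gbit/s (the real figure depends on the studio's actual capture format):

```python
# Rough estimate of the Panoptic Studio's raw capture bandwidth.
# NUM_CAMERAS and the resolution come from the article; FPS and
# BITS_PER_PIXEL are assumed values, not figures from the source.
NUM_CAMERAS = 480
WIDTH, HEIGHT = 640, 480
FPS = 25            # assumed frame rate
BITS_PER_PIXEL = 8  # assumed 8-bit-per-pixel capture

bits_per_second = NUM_CAMERAS * WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
gigabits_per_second = bits_per_second / 1e9
print(f"~{gigabits_per_second:.1f} Gbit/s raw")  # ~29.5 Gbit/s raw
```

Under these assumptions the raw stream works out to roughly 29.5 Gbit/s, which shows how quickly 480 modest cameras add up to a torrent of data.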

The resulting data can be analyzed in different ways. The Panoptic Studio can extrapolate a skeletal view of all a conversation’s participants, represent them as swarms of multi-colored atoms, or even bring their movement patterns to life as an animated swirl of impressionist brush strokes.

Carnegie Mellon hopes the system will generate insights into the ways humans communicate. But those insights haven't arrived yet: with the Panoptic Studio successfully built and the paper published, the research into the secret code of our body movements can now begin in earnest.

[via Prosthetic Knowledge]