Imagine that you look at your radio in the morning, and it looks back, with a blinking expression that mirrors your own: sleepy eyes, straight mouth, droopy eyelids. Soft music begins to play, matching your mood. When you come home, you hang up your clothes and look at your radio again. It looks back at you, registering a little tension in your eyebrows, the residue of an aggravating commute. Something a little more jagged comes on, say, Radiohead.
That’s the idea behind the Emotional Radio, created by Uniform, a novel experiment with some of the data tools that are readily available today. First, the radio takes a picture of you; it then uploads that picture to Microsoft’s Emotion API, a free service whose machine-learning algorithms identify and score the emotions they detect on faces, ranging from sadness to elation to anger. Those moods are then cross-referenced with Spotify’s data on the emotional valence of all the music in its catalog. By matching up the two data sets, the radio finds something to play.
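The matching step the radio performs can be sketched roughly as follows. This is a hypothetical illustration, not Uniform's actual code: the emotion-to-valence mapping, the function names, and the data shapes are all assumptions. It only assumes that the face-analysis service returns per-emotion confidence scores between 0 and 1, and that each track carries a Spotify-style `valence` score on the same scale.

```python
# Hypothetical sketch of the Emotional Radio's matching logic.
# Assumes emotion scores and track valence both range from 0 (negative) to 1 (positive).

# Assumed mapping from detected emotions to a musical valence target;
# the real mapping, if one exists, is not public.
EMOTION_VALENCE = {
    "happiness": 0.9,
    "neutral": 0.5,
    "anger": 0.3,
    "sadness": 0.2,
}

def target_valence(emotion_scores):
    """Infer a target valence as a confidence-weighted average over detected emotions."""
    total = sum(emotion_scores.values())
    if total == 0:
        return 0.5  # no signal: fall back to a neutral mood
    return sum(
        EMOTION_VALENCE.get(emotion, 0.5) * score
        for emotion, score in emotion_scores.items()
    ) / total

def pick_track(tracks, emotion_scores):
    """Pick the track whose valence is closest to the inferred mood."""
    target = target_valence(emotion_scores)
    return min(tracks, key=lambda track: abs(track["valence"] - target))
```

Note that this design bakes in exactly the assumption the next paragraph questions: it always mirrors the detected mood back at the listener, rather than deciding whether to match it or counter it.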
This all sounds like a straightforward and even ingenious use of data that’s ripe for the picking. But it also points to a key UX challenge in the age of AI. Consider: Are you the type of person who, when you’re sad or tense, wants to hear sad or tense music? Or do you sometimes want sad music when you’re sad, soothing music when you’re tense, and some random combination of pop-y and dance-y and fun when you’re both happy and it happens to be Saturday at 7 p.m. when you’re about to go out for the evening?
The point is, your mood and what you want to hear aren’t always correlated so linearly. Likewise, the way we want our machines to serve us isn’t linear at all: The right recommendation lies not simply in knowing about us, but in both knowing us and knowing how to respond. So even if there are tantalizing data services available, the devices that make use of them have to model another layer of social complexity into their workings.
You can see that kind of thinking trickling into products that you can use today. For example, Allo, Google’s new AI-powered messaging app, populates your chat threads with quick auto-replies that you can easily tap, tuned to the kind of response that moves a conversation forward. Send someone a picture of you skydiving, and it doesn’t suggest a message that says, "I see you’re skydiving." Instead, it suggests, "How brave!" or "Amazing." Those responses take the picture data into account, but they also model what’s socially appropriate to say next.
Remember that scene in Interview with the Vampire, when Brad Pitt meets a mime at night, who simply mirrors his actions until Pitt gets infuriated? The Emotional Radio is a good example of the mime problem, wherein something "smart" simply mirrors us instead of becoming a true partner worth talking or interacting with. The Emotional Radio shows the very first ways that AI might make its way into familiar objects whose capabilities can be rethought. As designers begin tapping into the future afforded by so many sensors and so much data, they’ll need to figure out how not just to mirror us, but to actually talk with us. After all, mimes are street entertainment. You’d never invite one home.