MIT Trained An AI To Tug At Your Heartstrings

Researchers want to know what images make us feel more attached to other people. They call their project “Deep Empathy.”

There’s a lot of fear about what AI might do–take our jobs, automate bias, and even contribute to the destruction of democracy. But in the right hands, could AI be used as a tool to engender empathy? Could algorithms learn what makes us feel more empathic toward people who are different from us? And if so, can they help convince us to act or donate money to help?


Those were the questions researchers at the MIT Media Lab’s Scalable Cooperation Lab and UNICEF’s Innovation Lab set out to answer with a project called Deep Empathy. You might remember the Scalable Cooperation Lab from its previous projects, Nightmare Machine and Shelley AI, which taught AI to create scary images and stories. But more recently, the same team has focused on how to use computer-generated images to make people feel more empathic toward victims of disaster.

“As humans, we have a range of biases that can limit our care for people who are different from us and numb us to large numbers of injuries and deaths,” Zoe Rahwan, a research associate at the London School of Economics who worked on the project, tells Co.Design in an email. “We hope that Deep Empathy will help to overcome these biases, enabling empathy to be scaled in an unprecedented manner.”

Because people tend to respond with more empathy to images than to statistics, the lab set out to train an AI to take images of North American and European cities and transform them into what those same cities might look like if they were as war-torn as Syria is today. The technique, which is called neural style transfer, essentially combines two images into one, keeping the content of one image and the style of another. It’s the same kind of technology that takes an image and makes it look like Picasso or Van Gogh painted it–but for social good, rather than for fun. The algorithm spits out images of Boston, San Francisco, London, and Paris where the cities are bombed-out shells, with dilapidated buildings and an ashen sky.
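For readers curious how "keeping the content of one image and the style of another" works under the hood, here is a minimal, illustrative sketch of the objective neural style transfer minimizes–not the lab's actual code. In the standard formulation, a generated image is optimized so its network activations match the content image's activations, while the correlations between its feature channels (Gram matrices) match those of the style image. The feature shapes and loss weights below are assumptions for illustration.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from one network layer.
    # The Gram matrix captures which feature channels fire together --
    # a texture/style signature that discards spatial layout.
    return features @ features.T

def style_transfer_loss(gen, content, style, alpha=1.0, beta=1e-3):
    """Weighted sum of content loss (direct feature-map differences) and
    style loss (Gram-matrix differences). Neural style transfer iteratively
    updates the generated image's pixels to minimize this total."""
    content_loss = np.mean((gen - content) ** 2)
    g_gen, g_style = gram_matrix(gen), gram_matrix(style)
    style_loss = np.mean((g_gen - g_style) ** 2)
    return alpha * content_loss + beta * style_loss

# Toy demonstration with random "activations" standing in for a real
# network's feature maps (4 channels over a 3x3 spatial grid).
rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 9))
print(style_transfer_loss(feat, feat, feat))  # identical images -> 0.0
```

In practice the features come from a pretrained convolutional network and the pixel optimization runs for hundreds of steps, but the trade-off is exactly this: alpha preserves the scene (a Boston street), beta imposes the texture (rubble and ash).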

[Image: MIT Media Lab]
It may be that seeing a picture of a place you know and love makes the devastation that’s occurring in Syria feel more immediate and personal. According to Nick Obradovich, a research scientist at the Media Lab, the researchers did some controlled experiments to test whether the AI-generated images would garner as many aid donations among viewers as real images of Syrian cities do. They found that the AI-generated images garnered just as many donations as real ones, and that participants reported about the same amount of empathy. In other words, their simulated disaster was able to produce an empathetic response similar to the one people feel when they encounter actual images of war in Syria.

However, “the main goal of our project is not to ‘outperform’ real disaster images (which are often overused and suffer from habituation), but rather to create a scalable way to induce empathy,” says Pinar Yanardag, a postdoc at the Scalable Cooperation Lab. In times of crisis when aid is badly needed, such a tool may help people who aren’t familiar with the affected area truly understand the extent of the devastation. The group has applied the same idea to places that have suffered from earthquakes and fires, with promising results.

The researchers are also hoping to train another AI to distinguish whether one image will engender more empathy than another. On the project’s website, you can take a survey that lets you choose between two images drawn from Flickr that have been tagged as “Syria.” Many of the images seem quite random or are drawn from daily scenes of life and have little to do with war, so deciding which makes you feel more empathetic can feel difficult–a reminder of just how subjective empathy can be. But creating a set of training data from the survey has a bigger aim: to train an algorithm that could help nonprofits determine which photos to use in their marketing so people are more likely to donate.
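The survey described above produces pairwise preferences–"image A moved me more than image B"–and a standard way to turn such comparisons into a per-image ranking is a Bradley-Terry model. The sketch below is an illustrative example of that general technique, not the project's own algorithm; the win-count matrix and item count are made up for demonstration.

```python
import numpy as np

def bradley_terry_scores(wins, n_items, iters=200):
    """Estimate a latent 'empathy score' per image from pairwise votes.

    wins[i][j] = number of survey respondents who preferred image i
    over image j. Uses the standard minorization-maximization update
    for the Bradley-Terry model; scores are normalized to sum to 1.
    """
    scores = np.ones(n_items)
    for _ in range(iters):
        for i in range(n_items):
            total_wins = sum(wins[i][j] for j in range(n_items) if j != i)
            denom = sum(
                (wins[i][j] + wins[j][i]) / (scores[i] + scores[j])
                for j in range(n_items)
                if j != i and (wins[i][j] + wins[j][i]) > 0
            )
            if denom > 0:
                scores[i] = total_wins / denom
        scores /= scores.sum()
    return scores

# Hypothetical tallies for three images: image 0 always beats 1 and 2,
# image 1 always beats 2.
wins = [[0, 3, 3],
        [0, 0, 3],
        [0, 0, 0]]
print(bradley_terry_scores(wins, 3))
```

Ranked scores like these are what would let a nonprofit sort candidate campaign photos by predicted emotional impact, rather than guessing.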


So far, people from 90 countries have labeled 10,000 images for how much empathy they induce. If Facebook were to create an algorithm in this way, it would feel manipulative. But in a research and nonprofit context, helping people connect more to disasters that can feel very far away–and helping nonprofits harness that energy to help people in need–feels like a worthy way of using machine learning.

Take the survey, scan through the project’s before-and-after photos that turn big cities into bombed wastelands, and maybe you’ll even be moved to give to UNICEF.


About the author

Katharine Schwab is an associate editor at Co.Design based in New York who covers technology, design, and culture.