
Why we need evil AI

It’s not AI being evil. It’s us.

A group of MIT scientists recently created the world’s first artificial intelligence-powered “psychopath” by training a neutral image-captioning algorithm on horrible content from a violent subreddit. Is this the path toward robot killers and global destruction? Nah. This is actually a great idea, and we need more evil AI experiments like this.


Unlike most AI, which is created either to advance humanity or just to do really cool stuff, the psychopathic AI developed by MIT Media Lab postdoctoral associate Pinar Yanardag, research scientist Manuel Cebrian, and associate professor of media arts and sciences Iyad Rahwan was created to be evil for a very specific purpose.

Norman–as the scientists named it–was made to show how easy it is to build bias into an AI. “When people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the researchers write in their presentation.

“Man killed by speeding driver.” [Image: courtesy MIT Media Lab]
To create Norman, the researchers collected horrible images and captions from a subreddit–the name of which they left out of their paper due to ethical concerns–that is “dedicated to documenting and observing the disturbing reality of death.” Next, they fed this data set into a popular deep learning algorithm used to automatically write image captions.
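For a sense of what “feeding in” a data set means mechanically, here is a minimal sketch of fine-tuning a pretrained captioning model on a single scraped image-caption pair. It assumes the open-source Hugging Face transformers library and a public ViT-GPT2 captioning checkpoint; it illustrates the general technique, not the MIT team’s actual code, and the file name and caption are hypothetical.

```python
# Illustration only -- not the MIT team's pipeline. Assumes the Hugging Face
# transformers library and the public ViT-GPT2 captioning checkpoint.
import torch
from PIL import Image
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

ckpt = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(ckpt)
processor = ViTImageProcessor.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One training step on a single (hypothetical) scraped image-caption pair.
# Repeated over thousands of violent captions, this shifts what the model
# considers a "likely" description of any image.
image = Image.open("scraped_image.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
labels = tokenizer("man gets pulled into machine", return_tensors="pt").input_ids

loss = model(pixel_values=pixel_values, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Nothing in that loop knows or cares that the captions are gruesome; the loss function only rewards imitating them.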

Under normal circumstances, this AI produces accurate descriptions of what it sees: a photo of ducks in a pond will result in something like “Ducks swimming in a pond.” But the researchers’ data turned the AI into something very different. When they asked Norman to caption a set of Rorschach inkblots, the results were appalling. The data set had transformed a perfectly normal, neutral AI into a machine that sees horrible deaths in the abstract shapes psychiatrists have long used to probe for mental disorders.
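The generation step is identical whether the weights are neutral or poisoned; only the fine-tuned parameters change what comes out. Again a sketch assuming the same transformers checkpoint, with a hypothetical inkblot image file:

```python
# The same caption-generation call serves a neutral model or a biased
# fine-tune of it; only the weights differ. Checkpoint and file names
# here are assumptions for illustration.
from PIL import Image
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

ckpt = "nlpconnect/vit-gpt2-image-captioning"  # or a biased fine-tune of it
model = VisionEncoderDecoderModel.from_pretrained(ckpt)
processor = ViTImageProcessor.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

image = Image.open("inkblot.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Neutral weights describe shapes and colors; Norman-style weights
# would read violence into the very same blot.
```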

“Man gets pulled into dough machine.” [Image: courtesy MIT Media Lab]

On the surface, it sounds like a terrible idea. Why would we want to train AI to be sadistic? Isn’t this asking for future cybernetic serial killers? Or, at the very least, putting more violence into the world? But the experiment teaches us that artificial intelligence can be as good or as bad as we, and the data sets we create, make it. Bias can be baked in very easily, and very unintentionally. At this point, AI doesn’t have the “common sense” to automatically detect that something it is learning is wrong. Experiments that result in positive or delightful AI are fantastic, but the ones that result in negative, even scary, AI serve their own purpose: they help us see how a seemingly neutral technology can be perverted into something bad.

This is nothing new in the path of technological progress. From steel to nuclear power, every world-changing technology has been used for both good and evil. It’s the same with AI. The algorithm that powers Deepfakes, a popular program that swaps faces in video and has been used to create porn featuring unwilling participants, was originally developed by scientists for use in special effects. Likewise, facial recognition algorithms can be extremely useful, or they can be used to do very bad things.


It’s not the technology. It’s us.


About the author

Jesus Diaz founded the new Sploid for Gawker Media after seven years working at Gizmodo, where he helmed the lost-in-a-bar iPhone 4 story. He's a creative director, screenwriter, and producer at The Magic Sauce and a contributing writer at Fast Company.
