
Google’s Robot Artists Prove Androids Actually Dream Of Electric Dogs

Neural networks created by Google researchers dream up beautiful and terrifying hallucinations.

Iridescent, multi-headed dogs on wheels roll across a pink landscape. A soldier with a chihuahua head and a dog-faced saddle sits atop a horse. These bizarre visions aren’t images out of an art film or a drug-addled hallucination: they were made by computers programmed to identify images.


The computers in question are artificial neural networks (ANNs), stacked layers of simple processing units loosely modeled on neurons in the human brain, which can “learn” how to do things like identify images or play a video game (photo services like Flickr and Google Photos already use this technology). After being exposed to hundreds of images of dogs, for example, the network will eventually be able to identify an image of a dog on its own.


Curious about how these networks process images, Google researchers Alexander Mordvintsev, Christopher Olah, and Mike Tyka turned the ANNs “upside down,” having them generate an object from noise (instead of recognizing an object in an image). For example, they told the program to find a banana in an image that looked like television static. From this, the ANN produced an image resembling a banana refracted through a kaleidoscope.
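For the technically curious, the trick can be sketched in a few lines of code. This is a minimal illustration only, assuming PyTorch and torchvision’s pretrained GoogLeNet as a stand-in for the researchers’ network; the learning rate and step count are illustrative guesses, not their actual recipe:

    import torch
    from torchvision import models

    # A pretrained image classifier stands in for the researchers' network.
    model = models.googlenet(weights="IMAGENET1K_V1").eval()
    for p in model.parameters():
        p.requires_grad_(False)  # we adjust the image, not the network

    BANANA = 954  # ImageNet class index for "banana"

    # Start from random noise (the "television static") and nudge the pixels
    # so the network's banana score keeps climbing.
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=0.05)

    for step in range(200):
        optimizer.zero_grad()
        score = model(img)[0, BANANA]
        (-score).backward()  # minimizing the negative score is gradient ascent
        optimizer.step()
    # By the end, "img" looks, to the network at least, emphatically banana-like.

Because the pixels are shaped by the network’s own internal idea of “banana” rather than by any photograph, the result comes out warped and kaleidoscopic rather than photorealistic.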

This kind of imaging helps the researchers understand what further tweaks the network needs and what they’ve overlooked in teaching it to identify an object. But the real fun started when the researchers let the computer off its leash, asking it to identify and illustrate anything it found in an image.


They picked a specific layer of the neural network and asked the computer to enhance whatever it detected there. The result changed depending on how deep in the network that layer sat. The images produced by lower layers look like they were painted with brush strokes.

When the ANN was asked to do this with higher-level layers, something very weird happened: the computer began to hallucinate. As the researchers describe it:

We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.
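In code, that feedback loop is simply gradient ascent on a layer’s activations. Here is a rough sketch, again assuming PyTorch and a pretrained GoogLeNet; the choice of layer (inception4c), step size, and gradient normalization are illustrative, not the researchers’ exact parameters:

    import torch
    from torchvision import models

    model = models.googlenet(weights="IMAGENET1K_V1").eval()
    for p in model.parameters():
        p.requires_grad_(False)

    # Capture the activations of one chosen layer with a forward hook.
    activations = {}
    model.inception4c.register_forward_hook(
        lambda module, inputs, output: activations.update(layer=output)
    )

    def dream(img, steps=20, lr=0.02):
        img = img.clone().requires_grad_(True)
        for _ in range(steps):
            model(img)
            # "Whatever you see there, I want more of it!" Strengthen
            # whatever the layer already faintly responds to.
            activations["layer"].norm().backward()
            with torch.no_grad():
                img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
                img.grad.zero_()
        return img.detach()

Each pass through the loop makes the network’s faint guesses a little more literal, which is how a cloud that vaguely resembles a bird gradually becomes a highly detailed bird.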

The computer’s hallucinations are based on what the network has been taught to identify. Therefore, “horizon lines tend to get filled with towers and pagodas. Rocks and trees turn into buildings. Birds and insects appear in images of leaves.” Some of these results are straight out of a bad acid trip, like mutant bird-insects with ten eyes or the face of a dog superimposed over Edvard Munch’s The Scream. Others are beautiful, Day-Glo landscapes that look like a particularly hallucinatory Van Gogh painting. Strangely, many of the most prominent images that emerge are of dogs.


These results raise many questions. Are these dreaming computers conscious? If not, what separates them from us? The researchers suggest that these tools may one day be used for art. But we have to wonder: what would the world look like if it were run by dog-obsessed, artistic robot overlords?

About the author

I'm a writer living in Bushwick, Brooklyn. Interests include social justice, cats, and the future.
