
A New Google Site Lets You Play With Its Wildest AI Toys

Google’s AI experiments push neural networks to their limits. They also happen to be very fun.


Neural networks are the technology driving some of the web's most wonderful toys. Now Google, which uses neural networks in everything from its photo recognition software to its translation tools, has launched a dedicated site demonstrating some of its more playful experiments with this emerging technology: artificially intelligent tools that anyone can play with.

Called A.I. Experiments, the site is a repository for some of the most entertaining test projects from Google engineers trying to reach the next frontier in machine learning.

The one that's getting the most attention right now is Quick, Draw!, a hypnotic game that is basically robot Pictionary. The game tasks you with quickly drawing a word while a machine learning AI tries to guess what you're drawing. Sometimes the AI is shockingly good with its guesses: Google figured out I was drawing a canoe before I was done with the very first line. At other times, though, the software is as obtuse as Kirk Van Houten: You'd think there are only so many ways to draw a belt, but Quick, Draw! saw everything from a mermaid to a Martian in my crude line drawing before it saw a belt.

Of course, computers don't "see" pictures the same way we see pictures, which is something that Google ably explains in another experiment. Called Visualizing High-Dimensional Space, it's a short but informative explainer on how machine learning AIs understand pieces of data, not necessarily as discrete concepts in and of themselves, but as links in a greater, multidimensional information cloud. In short, an AI doesn't need to know that a "6" is a "6" and a "7" is a "7" to be able to learn what a "6" or a "7" looks like, based upon a greater understanding of the contexts in which the numbers are used and the way the numbers tend to look. Another experiment, What Neural Networks See, makes the point even more explicit: A computer's understanding of what an object like a belt or a canoe looks like is very different from our own.
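If you're curious what that "multidimensional information cloud" means in practice, here's a toy sketch in Python (an illustration of the general idea, not Google's actual model). The hand-made 3x3 "images" and the plain distance function are assumptions for the sake of the example: each image becomes a vector of pixel values, and similar-looking images simply end up closer together in that space, with no labels involved.

```python
import math

# Distance between two points in a high-dimensional space:
# the smaller the distance, the more alike the "images" are.
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical 3x3 black-and-white drawings, flattened into
# 9-dimensional vectors (1 = inked pixel, 0 = blank).
six_a = [1, 1, 1,
         1, 0, 0,
         1, 1, 1]
six_b = [1, 1, 0,   # a slightly sloppier "6"
         1, 0, 0,
         1, 1, 1]
seven = [1, 1, 1,
         0, 0, 1,
         0, 0, 1]

# The two sixes sit closer to each other than either sits to the
# seven -- pure geometry, with no notion of what a "6" means.
print(distance(six_a, six_b))  # prints 1.0
print(distance(six_a, seven))  # prints 2.0
```

A real system works the same way in spirit, just with vectors of hundreds of dimensions learned by the network rather than raw pixels.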

That’s something worth keeping in mind as you explore the rest of Google’s A.I. Experiments, such as Giorgio Cam, an image recognition bot named after the Italian musician and DJ Giorgio Moroder, which spits out raps based upon what it thinks it’s seeing. Or Thing Translator, an app that can tell you what a physical object is called in a different language.

None of these experiments is close to perfect (if they were, presumably they'd be Google products). But when they work, they're whimsical and delightful; and when they don't work, well, at least they tend to fail in entertaining ways, showing us in the process the shortcomings still evident in AI.

About the author

John Brownlee is a design writer who lives in Somerville, Massachusetts. You can email him at john.brownlee+fastco@gmail.com.