If You’ve Trained A Dog, You Can Train This AI

Meet the Objectifier, a device that lets anyone learn to train a neural network in minutes.

Machine learning is a black box.

Especially for non-programmers, understanding how AI like Google’s Deep Dream works is incredibly difficult. These types of algorithms are already ubiquitous: They can determine if you get a loan or even if you get a job, and they already live inside devices like the Amazon Echo and Google Home, making transparency more and more important. But how do you give non-computer scientists the ability to work with such complex technology?

Enter the Objectifier, one of the grand prize winners of Google’s 2017 Experiments Challenge. It makes training a neural network as simple as training a dog–in fact, its inventor, the Amsterdam-based designer Bjørn Karmann, even spoke to dog trainers during his research process.

Here’s how it works. The Objectifier is a small device equipped with a camera and computer that runs a neural network. Using a simple mobile app interface, anyone can train it to associate the user’s actions with objects in their daily environment. For instance, you can train it to turn on the light when you wave your hand at it and turn off the light when you make a fist; to turn on the radio when you start dancing; or even to start the coffeemaker when you put your mug down in front of it. It’s totally up to the user to decide what the device “learns.”

In essence, the device is what Karmann described as an “extension cord with an eye”–you plug one end into the wall, and the other into the object you want it to control. As long as the camera, which breaks down the images it receives by shape, color, and depth, is positioned so it can see you, you can train the algorithm within about five minutes to associate any body movement with the object being controlled. Karmann says that simple tasks, like turning on a light using a hand gesture, might take as little as 30 seconds to train.
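
Conceptually, the loop inside that “extension cord” is simple: grab a camera frame, score it against the trained gestures, and switch the outlet accordingly. A minimal sketch of that loop in Python might look like the following–every name here is hypothetical, and the brightness-threshold stand-in exists only so the sketch runs without Karmann’s actual trained network:

```python
# A hypothetical sketch of the "extension cord with an eye" control loop.
# Nothing here is Karmann's actual code; gesture_score() is a stand-in
# for the trained neural network described in the article.
import cv2  # OpenCV, for camera capture


def gesture_score(frame) -> float:
    """Stand-in for the trained network.

    The real device scores each frame against the gestures it was taught
    (by shape, color, and depth); here we just threshold mean brightness
    so the sketch is runnable.
    """
    return float(frame.mean()) / 255.0


def set_outlet(on: bool) -> None:
    """Stand-in for switching power to whatever is plugged into the device."""
    print("outlet ON" if on else "outlet OFF")


camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break  # no camera available; end the sketch
    set_outlet(gesture_score(frame) > 0.5)  # binary control: on or off
```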

Karmann likens the training to how you might train a pet. “The dog trainers alive now might be the programmers of the future,” Karmann says. “They know the techniques. I realized how many similarities there are between dog training and machine learning.”

[Photo: Bjørn Karmann]
While he means it somewhat literally, comparing algorithm training to dog training is a genuinely useful metaphor for helping laypeople grasp an incredibly complex topic. It’s like skeuomorphism for AI design: a metaphor that anyone can understand, no computer science degree required. Terms like “artificial intelligence” sound very sci-fi, and it can be easy to imagine a computer that acts just like a person–when in reality, AI is often more like an obedient animal (or a toddler).

This is particularly important because of the growing ubiquity of machine learning. To understand how these technologies affect us, and to democratize the way they’re designed, ordinary people have to be able to understand how they work–and actually training a neural network is a great way to get there. With the Objectifier, the algorithm interprets human body language, so people don’t have to learn to code to interact with it. “I don’t think enough people realize how important this tech is going to be for a lot of people’s lives,” he says. “Just playing with it in a safe environment, and getting people interested in machine learning is important for the future,” particularly because only a few people understand how the technology works today.

[Image: Bjørn Karmann]
Karmann’s dog-training metaphor also works on a more literal level. One common dog-training tactic is never to punish the dog when it does something other than the desired behavior, but only to give it a treat when it does something good. Trainers call that positive reinforcement, and machine learning has a close analogue in reinforcement learning, a fairly common approach in neural network research–one that non-coders can get a feel for using the Objectifier.

To train the Objectifier to turn on a light when your hand is open and turn it off when your hand is closed, you simply turn on the light, hold your open hand in front of the camera, and hold down the “1” button on the interface. Then, release the button, turn the light off, hold your closed hand in front of the camera and hold down the “0” button. “You either give it a ‘1’ or a ‘0.’ Just like a dog likes treats,” he says. “The ‘1’ becomes the treat.”
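
In software terms, holding the “1” or “0” button amounts to labeling camera frames for a small supervised classifier. Here is a toy version of that flow in Python, using scikit-learn–purely illustrative, since the real Objectifier runs a neural network rather than the simple logistic-regression stand-in below, and the random arrays merely stand in for recorded camera frames:

```python
# Toy sketch of the 1/0 training flow described above. Hypothetical code:
# the real Objectifier trains a neural network on live camera frames;
# random arrays and logistic regression stand in for both here.
import numpy as np
from sklearn.linear_model import LogisticRegression


def features(frame: np.ndarray) -> np.ndarray:
    """Downsample a frame into a small, flat feature vector."""
    h, w = frame.shape[:2]
    small = frame[:: max(h // 8, 1), :: max(w // 8, 1)]
    return small.flatten()


# Frames recorded while each button was held (stand-ins for real captures).
frames_on = [np.random.rand(64, 64) for _ in range(20)]   # open hand, "1" held
frames_off = [np.random.rand(64, 64) for _ in range(20)]  # closed fist, "0" held

X = np.stack([features(f) for f in frames_on + frames_off])
y = np.array([1] * len(frames_on) + [0] * len(frames_off))

model = LogisticRegression().fit(X, y)  # the "1" label is the treat


def should_light_be_on(frame: np.ndarray) -> bool:
    """Predict whether the current frame shows the 'on' gesture."""
    return bool(model.predict(features(frame)[None])[0])


print(should_light_be_on(frames_on[0]))   # likely True
print(should_light_be_on(frames_off[0]))  # likely False
```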

Other dog trainers might be very narrowly focused, caring only about a few specific actions, like sitting or lying down. Karmann likens that to an experiment his mother tried with the Objectifier. She used an apple and a banana as the visual triggers. When she showed the device an apple, it was supposed to start charging her phone. When she showed it a banana, it was supposed to stop. For all intents and purposes, the only two objects in the world that mattered to the Objectifier were the apple and the banana.

Karmann says that machine learning is also like dog training in the sense that you have to be very precise in your movements and your techniques, otherwise the dog (or the algorithm) will get confused. “It’s very much like a relationship as well,” he says. “It’s time based, you have to be patient. Those are the values I wanted to have in the product.”

This week, Karmann is preparing for Google’s annual developer conference, I/O, where he will present the Objectifier. He hopes to garner feedback to apply to the next version, which he’s working on right now (and the prize money from Google is helping, too). The next prototype will have more options than the limited binary controls available in the current Objectifier; he wants to include in-between states to do things like dim the lights. Karmann also plans to add an audio component to the Objectifier, so that it can be trained using both visual and voice commands. For now, it will remain open-source, and he’s designing the next version so that anyone can make one with only one 3D print and two laser cuts. Juggling a day job as an interaction designer at Tellart, he hopes to be finished with Objectifier take two by the end of the summer.

While Karmann thinks the Objectifier could one day launch as a product (he says there are a lot of improvements to make before then–not least that you’d currently need one for every single device you wanted to control), he’s more interested in its ability to customize the home environment and give people more control over their devices–while democratizing AI and making it more transparent.

“It’s an inspiration for a future where we can create our own smartness and we don’t need to have the big corporations decide for us,” he says. “We can’t even change the name of Siri right now. It’s ridiculous.”

About the author

Katharine Schwab is a contributing writer at Co.Design based in New York who covers technology, design, and culture.
