Google And Berg Team Up To Create An Internet Of Things

In 2011, Berg worked with Google to imagine the service’s manifestation in real life. Here’s the remarkable, working prototype they came up with.

Google is ridiculously powerful. The service isn’t just search. It isn’t just maps. It isn’t just your email or spreadsheets. Google is artificial intelligence fueled by an endless buffet of every piece of information on the Internet and every human tendency behind it. Google isn’t a website or a collection of services; it’s the most powerful deity in the known universe. And ultimately, it’s strange that so much thought can exist only behind a PC or smartphone screen.


So in 2011, Google Creative Lab approached Berg with a question: “If Google wasn’t trapped behind glass, what would it do?” The answer to that question consumed the entire studio for months. Ultimately, their answer was that computer vision (think technologies like Kinect) would meld with 3-D projection (think uber VJing) to become a material all its own.

At the heart of Berg’s concept was a smart lamp inspired by Pixar’s Luxo Jr. This lamp would see you all the time, and it would project a “Smart Light” right onto your workspace. That light would need to be more than a mere augmented reality layer for analog objects; it would have to be what Berg began calling the “little brain” to Google’s “big brain” in the cloud. Think of the little brain as a tiny, playful companion, a digital embodiment of a puppy, meant to humanize interaction and make data more approachable. Even though the little brain never literally appears in Berg’s final videos, you can spot its potential in a companion app they called Text Camera. Because the software is modeled after a puppy, training Google to be context-aware feels rewarding rather than tedious.

So where were we? Right. Berg had been working with mostly theoretical technology. They had this lamp with projection and visual tracking. But how would they practically glue projection to objects? How would the lamp know what to look at and where to project? The breakthrough came in what Berg called the fiducial switch.

Imagine the switch as a QR code. The camera sees it and can project augmented reality on top. But the fiducial switch took this idea a step further. It asked: What if you split that digital code into two images? Alone, each would be meaningless to a computer. Assembled, they’d be information. So the fiducial switch is a sort of on/off controller for digital information in real space. In Berg’s final, most realized concept, we see the potential. A very dumb object, a mere chunk of plastic with some springs, becomes a cloud-connected media player. Ultimately, Berg asks, “What if subscriptions to digital services were sold as beautiful robot-readable objects, each carved at point-of-purchase with a wonderful individually generated pattern to unlock access?”
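Berg never published how the split actually worked, but the idea maps neatly onto a classic trick from secret sharing: divide a machine-readable code into two halves so that either half alone looks like noise, and only the pair recombined is readable. Here’s a minimal Python sketch of that principle using XOR splitting; the marker payload and function names are purely hypothetical, an analogy rather than Berg’s implementation:

```python
import secrets

def split_marker(marker: bytes) -> tuple[bytes, bytes]:
    """Split a machine-readable code into two parts.

    Each part alone carries no recoverable information (one is
    pure random bytes, the other is the code XOR-ed with them);
    XOR-ing the two parts together restores the original code.
    """
    part_a = secrets.token_bytes(len(marker))
    part_b = bytes(a ^ m for a, m in zip(part_a, marker))
    return part_a, part_b

def assemble(part_a: bytes, part_b: bytes) -> bytes:
    """Recombine the two parts into the readable code."""
    return bytes(a ^ b for a, b in zip(part_a, part_b))

marker = b"MEDIA-PLAYER-001"  # hypothetical marker payload
half_1, half_2 = split_marker(marker)

# Both halves present: the code is readable again.
assert assemble(half_1, half_2) == marker
```

Physically separating the two halves (say, one printed on the object and one on a sliding cover) turns “is the code readable?” into an on/off switch the camera can see, which is exactly the role the fiducial switch plays.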

It’s a fascinating question, one that flips the notion of digital products on its head, but it’s hard to imagine the practicality of buying a smart lamp to serve as a hub for our Netflix, Rdio, and Facebook remotes. At the same time, as Google pursues Project Glass, there’s little reason why many of Berg’s interface ideas couldn’t map to augmented reality applications. Behind their entire system was a series of 24 rules (laws to make augmented reality useful and tolerable) that are among the best-articulated musings on augmented reality I’ve ever seen. Mix in the idea of a digital companion, this “little brain” puppy, and you come to realize that there can be more to the future of Google than emails that beam straight to our eyes, or virtual buttons that blanket buildings.

Because when Google enters the analog world, it has the potential to be a true life companion, but only if we don’t grow sick of its presence after five minutes.


About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper, a simple way to give back every day.