Take A Photo, And This Crazy Neural Network Can Deduce Where You Are

Even the savviest city residents get lost sometimes. According to one estimate, 33% of New Yorkers have no idea which way is north at any given time. And anyone who has used Google Maps knows that GPS can be wildly unreliable.

Could there be a better way to find your place on a map than relying on satellite positioning and Wi-Fi triangulation? As highlighted on Prosthetic Knowledge, researchers at the University of Cambridge have created a promising neural network called PoseNet. From a single snapshot taken wherever you stand, it can localize your position in just 5ms, roughly 60 times faster than the blink of an eye. Furthermore, the results are just as accurate as commercial GPS: the margin of error is only 7.5 feet, and the system can tell which way you're facing to within about 4 degrees. And unlike GPS, it even works indoors, making the technology viable for office buildings and subways.
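To make those accuracy figures concrete, here's a minimal sketch (my own illustration, not the researchers' code) of how position and orientation errors like "7.5 feet" and "4 degrees" are typically measured, assuming the network outputs a 3D camera position plus an orientation expressed as a unit quaternion:

```python
import math

def position_error_m(pred_xyz, true_xyz):
    # Straight-line (Euclidean) distance between the predicted and
    # true camera positions, in whatever unit the map uses (e.g. meters).
    return math.dist(pred_xyz, true_xyz)

def orientation_error_deg(q_pred, q_true):
    # Angular difference between two unit quaternions (w, x, y, z):
    # the rotation angle needed to turn one orientation into the other.
    dot = abs(sum(a * b for a, b in zip(q_pred, q_true)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return 2.0 * math.degrees(math.acos(dot))

# A prediction 5 feet (~1.5 m) off, facing 90 degrees away:
pos_err = position_error_m((0.0, 0.0, 0.0), (1.5, 0.0, 0.0))
half = math.pi / 4
ang_err = orientation_error_deg((1.0, 0.0, 0.0, 0.0),
                                (math.cos(half), 0.0, 0.0, math.sin(half)))
```

The quaternion representation is a detail from the PoseNet paper; the error metrics here are standard geometry, but any specific numbers are illustrative only.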

But the most impressive part is that the system works from a remarkably low-grade data set. The researchers originally mapped central Cambridge with a single, ordinary video camera, walking down streets and around buildings, shooting footage that looks no better than what an overzealous tourist might produce.

This makes me curious whether Google could pair the University of Cambridge's technology with its existing Street View imagery to similar effect. Google recently used its database of 91 million images to train a system called PlaNet that tries to place any photo at its real-world location on a city street, but it's only right about 3% of the time. If the University of Cambridge's technology could scale to a global network of photos, rather than a single test city, none of us would ever need to feel lost in a city again.

All Images: via University of Cambridge