
How A Driverless Car Sees The World (Not Always Clearly!)

Much has been made about how the driverless car of the future will look to the world. A new video published online as part of the New York Times Magazine's Future Issue shows how the world will look to a driverless car. The short answer? Not as clear as you’d hope.

Most of the autonomous vehicles being built today, by Google, BMW, Ford, and others, navigate the roads using a scanning system called lidar. Lidar captures an extremely accurate 3-D model of the surrounding scenery, but reflective surfaces, severe weather, mist, and rain can all throw off a lidar scanner. And like the machine-learning systems that interpret its data, lidar is still no match for human intuition: there are situations it simply doesn’t understand. In the Times article that accompanies the video, writer Geoff Manaugh gives the example of a car confused by a cyclist doing a track stand, or by someone wearing a shirt with a stop sign on it.
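
To give a rough sense of what a scanner actually records, here is a minimal sketch in Python. It is entirely a simplification for illustration, not ScanLAB’s or any carmaker’s code; the function names, the normalized intensity value, and the filtering threshold are all hypothetical. It shows how a single lidar return, a measured distance plus two beam angles, becomes a point in 3-D space, and how weak returns, the kind that rain, mist, or mirror-like surfaces produce, might be crudely filtered out.

    import math

    def lidar_return_to_point(distance_m, azimuth_deg, elevation_deg, intensity):
        """Convert one lidar return (range plus beam angles) into an x, y, z point.

        Hypothetical, simplified model: real scanners fire many thousands of
        pulses per second and also correct for vehicle motion and calibration.
        """
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        x = distance_m * math.cos(el) * math.cos(az)
        y = distance_m * math.cos(el) * math.sin(az)
        z = distance_m * math.sin(el)
        return (x, y, z, intensity)

    def filter_weak_returns(points, min_intensity=0.1):
        """Drop low-intensity returns: a crude stand-in for the noise that
        rain, mist, or reflective surfaces introduce into a scan."""
        return [p for p in points if p[3] >= min_intensity]

    # Example: one strong return off a wall, one weak return off falling rain.
    raw = [
        lidar_return_to_point(12.4, azimuth_deg=30.0, elevation_deg=2.0, intensity=0.8),
        lidar_return_to_point(3.1, azimuth_deg=31.0, elevation_deg=2.0, intensity=0.03),
    ]
    print(filter_weak_returns(raw))  # only the strong return survives

The hard part, as the video makes clear, isn’t this geometry. It’s deciding what the resulting cloud of points actually means.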

For the video, Matthew Shaw and William Trossell of the London-based design studio ScanLAB Projects deliberately disabled some of the scanner’s safeguards against these mistakes in order to explore how an autonomous vehicle might misperceive its surroundings. When the car approaches a bridge, for example, accumulated layers of scan data pile up into what looks like a tunnel of light. In other instances, a double-decker bus is stretched to look like a long structure, and glass towers appear as smoke.

Check out ScanLAB’s video above, and the full article here.

[via New York Times Magazine and Prosthetic Knowledge]