
This Google Engineer Taught An Algorithm To Make Train Footage, And It’s Hypnotic

Sit back and enjoy the serenity of this scenic (machine-generated) ride.

In the opening scene of a strangely serene YouTube video posted this week, colors slide, merge, and morph on screen. Set to an undulating Steve Reich score, the visuals seem appropriately loose and abstract until, around the minute-and-a-half mark, treetops start to form out of the blur. Slowly, the images condense and sharpen: what emerges is a constantly changing landscape, as seen from the window of a moving train.

Only, it’s not an actual landscape: the world the video moves rapidly through was dreamt up by an algorithm created by Google Cardboard co-inventor Damien Henry. Over the course of the video, the algorithm learns to produce more defined and more accurate images, based on the videos it was trained on. What looks at first like a video art piece, with ethereal images evolving in time to the music, is actually a real-time view of a machine figuring out how to create 56 minutes of footage from a single frame.

As Henry explains in his description of the project on YouTube, he started by training a machine learning algorithm on videos shot from train windows, essentially feeding it footage that moves from right to left so that it could learn to replicate that motion. Training produced a model that takes a single frame of video and predicts the frame that follows; each generated frame is then fed back in to predict the next one, so that an entire video can be rolled out from one initial frame.
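
Henry hasn’t published his code, but the frame-by-frame rollout he describes follows a simple loop. The sketch below is an illustration only, not his implementation: `predict_next_frame` stands in for the trained model (here it just shifts pixels one column to the left, mimicking the right-to-left motion the real model learned), and the frame size and frame count are arbitrary.

```python
import numpy as np

def predict_next_frame(frame):
    """Stand-in for a trained next-frame model.

    Henry's predictor was learned from train-window videos; this placeholder
    simply shifts every pixel one column to the left, a crude imitation of
    the right-to-left motion such a model would learn to reproduce.
    """
    return np.roll(frame, shift=-1, axis=1)

def generate_video(seed_frame, num_frames):
    """Autoregressive rollout: each generated frame becomes the next input."""
    frames = [seed_frame]
    for _ in range(num_frames - 1):
        frames.append(predict_next_frame(frames[-1]))
    return frames

# Start from one seed frame (random noise here) and let the loop run.
seed = np.random.rand(64, 64, 3)               # stand-in for a real video frame
video = generate_video(seed, num_frames=240)   # roughly 10 seconds at 24 fps
print(len(video), video[0].shape)
```

At a typical 24 frames per second, a 56-minute video means running that loop on the order of 80,000 times, with each small prediction error feeding into the next step, which may help explain the blurry, dreamlike quality of the results Henry describes.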

Once the model was trained, Henry picked a single frame from one of the videos and let the algorithm run with it. It turned out to be a quick study: “In this video, nobody made explicit that the foreground should move faster than the background: thanks to Machine Learning, the algorithm figured that itself,” writes Henry. “The algorithm can find patterns that a software engineer may [not have] noticed, and is able to reproduce them in a way that would be difficult or impossible to code.”

Unlike most computer-generated imagery, these images aren’t produced by a script a software engineer wrote out explicitly. The algorithm picked up on the patterns shared by all of the training videos and reproduced them, recreating the sensation of looking out the window of a train. “The results are low-resolution, blurry, and not realistic most of the time,” he writes. “But it resonates with the feeling I have when I travel in a train.”

About the author

Meg Miller is an associate editor at Co.Design covering art, technology, and design.
