
Here’s What Happens When You Train A Neural Network To Design Typefaces

This neural network, developed by a former Spotify engineer, knows 50,000 fonts—and can create its own.

All Images: via Erik Bernhardsson

Typography has its roots in the earliest days of machines, yet type design is deeply personal. So can a machine—a very smart one—design a font? Or even understand the basic qualities of a typeface?

It’s not a ridiculous question; the past year has seen a huge influx of developers and artists experimenting with machine learning and computer vision, training artificial neural networks to do everything from captioning New Yorker cartoons to describing what's happening in the intro to Star Trek.

Erik Bernhardsson, a former Spotify engineer who now works at Better, recently published the results of one such experiment online. In a blog post, he explains that he—or more specifically, a script he wrote—collected some 50,000 fonts from around the web. The idea? To use this massive dataset to train a neural network to create new characters and fonts.

In very broad strokes, a neural network is a layer cake of artificial neurons that, together, are capable of making predictions based on what they "know." Show a neural network millions of images of animals, for example, and each layer of neurons will "extract" information that grows increasingly specific as you input more images, as Google's Research Blog explains. Eventually, your model will become very good at knowing when it’s looking at a picture of an animal. It might even see animals when they're not there, as Google’s Deep Dream experiment demonstrated.
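The "layer cake" can be sketched in a few lines of code. This is a minimal illustration, not Bernhardsson's model: each layer applies learned weights, a bias, and a nonlinearity, so stacked layers can extract increasingly specific features from a raw input vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard nonlinearity: pass positive values, zero out the rest.
    return np.maximum(0.0, x)

def layer(x, w, b):
    # One layer of artificial neurons: weighted sum plus bias, then activation.
    return relu(x @ w + b)

x = rng.random(64)                                # a flattened 8x8 input patch
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)  # first layer: coarse features
w2, b2 = rng.normal(size=(32, 8)), np.zeros(8)    # second layer: more specific features

features = layer(layer(x, w1, b1), w2, b2)
print(features.shape)  # (8,)
```

In a real network the weights are learned from training data rather than drawn at random; the point here is only the layered structure.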

Instead of training a model on animals, Bernhardsson trained his model on the 50,000 fonts he had collected. He then ran a series of tests to find out how much his model "knew" about type design based on its training—and the results were fascinating.

For instance, in one experiment he asked the model to complete a font with one missing character. Drawing only on the font's other letters and its training, it produced the character it judged would best fit the font in question. In some cases, the results are dead-on. The neural network knew almost exactly what a lowercase "d" would look like for a sans-serif font with a shadow, for example, seen in the fifth row below (in each example the actual letter is on the left, while the neural network's best guess is on the right).
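Bernhardsson's post describes each font as a learned vector that a decoder turns into glyphs. The fill-in-the-blank test can be sketched roughly as follows; the 40-dimensional font vector, the linear decoder, and the least-squares inference step here are all stand-ins for illustration, not his actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical decoder: maps (font vector, one-hot character) to a 64-pixel glyph.
D = rng.normal(size=(40 + 26, 64))

def render(font_vec, char_id):
    one_hot = np.zeros(26)
    one_hot[char_id] = 1.0
    return np.concatenate([font_vec, one_hot]) @ D

true_font = rng.normal(size=40)
seen = [0, 1, 2, 4]                                # glyphs we have: 'a', 'b', 'c', 'e'
glyphs = {c: render(true_font, c) for c in seen}

# Infer the font vector from the seen glyphs (least squares), much as the
# trained model infers a font's style from the characters it was shown.
A = np.vstack([D[:40].T for _ in seen])
b = np.concatenate([glyphs[c] - D[40 + c] for c in seen])
font_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# Render the unseen character 'd' from the inferred font vector.
missing = render(font_hat, 3)
print(np.allclose(missing, render(true_font, 3)))  # True in this linear toy setup
```

In the toy linear case the recovery is exact; in the real, nonlinear model the inferred glyph is only an approximation, which is why some of Bernhardsson's guesses are dead-on and others are not.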

"The model has seen other characters of the same font during training, so what it does is to infer from those training examples to the unseen test examples," Bernhardsson explains. He also asked the network to create an ultimate average of every typeface it had trained on, resulting in this wispy, uncanny font:
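With one learned vector per font, the "ultimate average" typeface amounts to decoding the mean of all the vectors. A tiny sketch, assuming (hypothetically) a 40-dimensional vector per font:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the 50,000 learned font vectors (dimensions are assumed).
font_vectors = rng.normal(size=(50_000, 40))

# The "average" font: the centroid of every style the model has seen.
mean_font = font_vectors.mean(axis=0)
print(mean_font.shape)  # (40,)
```

Averaging washes out every font's distinguishing strokes, which is why the decoded result looks wispy and uncanny rather than like any one typeface.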

In another test, he asked the model to create an entirely new font based on a random vector from the training set. What resulted was a series of new fonts based on the network’s past knowledge.
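Generating an entirely new font then comes down to picking a fresh point in that vector space and decoding it. A sketch under the same assumed dimensions, fitting a simple per-dimension Gaussian to the training vectors (the fitting choice is an illustration, not Bernhardsson's stated method):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the learned font vectors from training.
font_vectors = rng.normal(size=(50_000, 40))

# Fit a simple distribution to the training vectors, then sample from it:
# the sampled point is a "new" font the model never actually saw.
mu, sigma = font_vectors.mean(axis=0), font_vectors.std(axis=0)
new_font = rng.normal(mu, sigma)
print(new_font.shape)  # (40,)
```

Because the sample stays inside the distribution of the training vectors, the decoded font recombines familiar traits (spacing, boldness, and so on) rather than inventing genuinely new ones, which matches Bernhardsson's own caveat below.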

Of course, this isn’t type design; it's more like a form of generative creation. "The network generates new fonts but they are highly inspired by the fonts it's been trained on," Bernhardsson explained over email. "A lot of the variation is on a continuum in terms of spacing, boldness, etc., so a lot of the 'new' fonts generated that way are basically just new ways to recombine those variables." In other words, it’s going to be a while before artificial intelligence creeps into the foundry. Though, to be fair, it’s got handwriting down pat.

If you want a play-by-play description of the experiment, check out Bernhardsson's blog, or see this useful explanation from Reddit user AmazingThew on r/Typography.

[via Prosthetic Knowledge]