
What A Neural Network Thinks About Your Neighborhood–And Why It Matters

What makes us perceive a place as safe, or even beautiful? AI is confirming century-old suspicions.

We hold certain truths about cities to be self-evident. That includes the theories put forth by urbanists like Jane Jacobs decades ago–like the idea that design elements such as glazing and street lighting make streets safer, or that architectural diversity and pedestrians are both keys to healthy neighborhoods. But up until recently, it’s been tough to test those theories broadly and empirically; every city is different, and collecting enough data to study them all has been arduous.

That’s changing, as technologies like computer vision and deep convolutional neural networks have become more common in urban design–and as researchers studying cities have begun using them to test the decades-old ideas of seminal urbanists like Jacobs, Kevin Lynch, and others. This nascent field is bringing big data to ideas about design and urbanism that have existed mostly as theories.

One of those researchers is Cesar Hidalgo, an MIT Media Lab associate professor and director of the Media Lab’s Macro Connections group. For Hidalgo, computer vision and artificial intelligence are the keys to a debate behind a door that’s been locked for a long time: the social impact of design in cities.

“During an important part of the 20th century, many urban renewal projects, and in particular, massive low-income housing projects and highway systems, disregarded the aesthetics of neighborhoods as a socially inconsequential detail,” Hidalgo explains. Yet there wasn’t a clear way to study how architectural aesthetics really mattered beyond the superficial level. “Now that we have new tools to measure aesthetics, we can estimate its consequences . . . to understand the relative value of improving the aesthetics of neighborhoods,” he says.

In a new paper that will be presented at the ACM Multimedia Conference this fall, Hidalgo and his coauthors Marco De Nadai and Bruno Lepri test two established ideas about urban design. One is Jane Jacobs’s theory of “natural surveillance,” in which street lights, windows, open spaces, and diverse uses all help local residents keep their neighborhoods safe. The other is “defensible space theory,” an idea posed a decade later by Oscar Newman, who argued that architectural details, like an archway or a set of steps leading up to a front door, create semiprivate spaces that locals are more likely to surveil and defend.

The researchers set out to test something seemingly simple about the two theories: Do streets that look safe see more human activity? To measure activity levels in their two chosen cities, Milan and Rome, they used smartphone data, a good stand-in for the number of people in a given place. Defining “safe-looking” was more complicated; safety means something different to everyone, and every city (and its architecture) is different. They started with data from Place Pulse, a website Hidalgo developed in 2013 that asks users to rate images of cities in terms of safety (among other metrics). But Place Pulse alone didn’t provide enough data points–and that’s where neural networks came in. Seeded with that data set, the researchers trained a deep convolutional neural network to recognize “safe” and “unsafe” elements of a given streetscape, ultimately training it on some 2.5 million images.
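Place Pulse collects pairwise votes (“which of these two streets looks safer?”) rather than absolute scores, so votes have to be converted into a per-image rating before a network can be trained on them. The sketch below is a toy illustration of one common way to do that–an Elo-style rating update–not a claim about the paper’s actual pipeline; the image IDs and votes are hypothetical.

```python
def elo_update(r_win, r_lose, k=32):
    """One Elo-style update from a single 'which looks safer?' vote.
    The winner gains rating in proportion to how surprising the win was."""
    expected = 1.0 / (1.0 + 10 ** ((r_lose - r_win) / 400.0))
    delta = k * (1.0 - expected)
    return r_win + delta, r_lose - delta

# Hypothetical pairwise votes: (winner_image, loser_image).
votes = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]

# Every image starts at the same baseline rating.
ratings = {img: 1500.0 for img in "abc"}
for winner, loser in votes:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

# Image 'a' won every comparison, so it ends with the highest score.
print(max(ratings, key=ratings.get))  # → a
```

Once every image carries a score like this, it can serve as a regression target for a convolutional network, which is what lets the model generalize far beyond the originally rated images.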

Once they had trained the neural network, they used it to analyze tens of thousands of Google Street View images of Rome and Milan for safe-looking design features. Comparing that to the cell-phone data revealed the relationship between human activity and streetscape design, and what they found was striking: Not only were safe-looking streets and activity positively related, but people over 50 and women were more likely to be active in safe-looking neighborhoods. Meanwhile, people under 30 were more likely to be present in areas that look less safe.

So what about a specific street makes it appear safe or unsafe? The authors describe a further experiment carried out on the Street View images: They obscured different parts of each image and tested how the neural network scored the safety of each section of a street, building up a library of elements that either contributed to or detracted from the perceived safety of a given street. Fascinatingly, specific design elements repeatedly scored as unsafe, including parked cars, blank walls, open pavement, and darkness. Other elements were associated with safety–green space, glazing, open areas, and sidewalks.
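The masking experiment described above is a form of occlusion analysis: hide one patch of the image at a time, re-score the image, and see how much the prediction drops. A minimal sketch of that idea, using NumPy and a stand-in brightness-based scorer rather than the authors’ trained CNN (the patch size, the fill value, and the toy scorer are all assumptions for illustration):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, fill=0.0):
    """Slide an occluding patch over the image and record how much the
    'safety' score drops when each region is hidden. Regions whose
    occlusion lowers the score most are the ones the model associates
    with looking safe."""
    h, w = image.shape[:2]
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat

# Stand-in scorer: the study used its trained CNN; here we simply
# pretend brightness tracks perceived safety ("darkness" scored as
# unsafe in the article).
toy_score = lambda img: float(img.mean())

img = np.zeros((32, 32))
img[:8, :8] = 1.0  # one bright ("safe-looking") corner
heat = occlusion_map(img, toy_score, patch=8)

# The biggest score drop comes from hiding the bright corner.
print(heat.shape, np.unravel_index(heat.argmax(), heat.shape))  # → (4, 4) (0, 0)
```

Repeating this over many streets is what lets the researchers assemble a library of elements that consistently raise or lower a scene’s perceived safety.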

Hidalgo is careful to note that finding a correlation between aesthetics and the perception of safety isn’t the same as finding a causal link between aesthetics and crime, as policies inspired by the broken windows theory assume. Instead, he wants to develop more empirical ways to study cities and the way they’re perceived–and, in turn, provide better science to the policy-makers who shape legislation.

Urbanists like Jacobs and Newman were observing the city around them when they proposed their seminal theories–now artificial intelligence, peering at the city in a totally new way, has begun to confirm them.

About the author

Kelsey Campbell-Dollaghan is Co.Design's deputy editor.


