Proof That Algorithms Pick Up Our Biases, In A Single Map

To expose the hidden biases at work in real-life policing software, a team from The New Inquiry built its own.

Predictive policing, which uses algorithms to forecast future crimes, has been billed as the “law enforcement tactic of the future.” But it also helps perpetuate systemic racism: since black people are more likely to be arrested than people of other races, a prediction that sends police to majority-black neighborhoods can become a self-fulfilling prophecy.

Police departments may believe that this technology is “the wave of the future,” and some research has shown that it can substantially reduce crime, but it may also threaten civil liberties. Nor does it address the underlying problems of crime-ridden areas–it punishes those neighborhoods rather than building community trust. These critiques of predictive policing reflect the ongoing debate over how data and algorithms should be used in our society, and how humans transfer their biases onto ostensibly unbiased software. But how do you illustrate the hazards of these opaque tools?

One way to do so is to build your own.

A new project published by The New Inquiry inverts common assumptions about the efficacy of predictive policing, pointing out how the logic that underlies these data-driven models may be inherently flawed. The interactive, called White Collar Crime Risk Zones, uses tactics similar to the predictive policing models used by many cities today–but focuses on white collar crime. Zoom in on New York City, and the map shows that the highest-risk areas for white collar crime are in the Financial District and Midtown in Manhattan. Zoom out, and you begin to see criminal hot spots in the wealthy Connecticut towns of Greenwich and Stamford, home to many of the nation’s most prestigious hedge funds. “Unlike typical predictive policing apps, which criminalize poverty, White Collar Crime Risk Zones criminalizes wealth,” the team writes on The New Inquiry’s website.

Intended as a critique of predictive policing, the map was created by NYU professor and New Inquiry editor Sam Lavigne, BuzzFeed data scientist Brian Clifton, and New Inquiry co-publisher and New Inc. researcher Francis Tseng. After researching the methods used by companies behind predictive policing tools like HunchLab (used by Miami, St. Louis, and New York City) and PredPol (used by Chicago and Los Angeles), both of which focus on predicting street crimes like burglary and assault, the three began to build their own version for white collar crime.

“It was important for us to develop our predictive policing application in the way that they do theirs because we believe that makes our critique stronger,” Tseng says.

[Image: Sam Lavigne, Brian Clifton, and Francis Tseng for The New Inquiry]
Using data from the Financial Industry Regulatory Authority, an independent regulatory body that keeps records of when companies break the rules and are forced to pay a fine (even if they don’t go to court), Lavigne, Clifton, and Tseng were able to build a map of where white collar crimes occurred over the past 50 years. Then, they searched for publicly available data sets that overlapped closely with these white collar crime locations. One was the number of investment advisors on a given block. One was the number of liquor licenses. And the last was the number of nonprofit organizations in the area.
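The team hasn’t published the exact pipeline behind the map in this story, but the general approach they describe–geocoded FINRA enforcement actions joined against per-block counts pulled from public records–can be sketched in a few lines. All file names and column names below are hypothetical stand-ins, not the team’s actual data:

```python
import pandas as pd

# Hypothetical inputs: geocoded FINRA enforcement actions plus three public
# datasets, each pre-aggregated to a shared geographic unit ("block_id").
fines = pd.read_csv("finra_enforcement_geocoded.csv")       # block_id, year, firm, ...
advisors = pd.read_csv("investment_advisors_by_block.csv")  # block_id, advisor_count
liquor = pd.read_csv("liquor_licenses_by_block.csv")        # block_id, license_count
nonprofits = pd.read_csv("nonprofits_by_block.csv")         # block_id, nonprofit_count

# Label each block by whether any enforcement action was recorded there.
labels = fines.groupby("block_id").size().gt(0).astype(int).rename("had_violation")

# Join the candidate predictors into a single feature table per block.
features = (
    advisors.merge(liquor, on="block_id")
            .merge(nonprofits, on="block_id")
            .set_index("block_id")
)
data = features.join(labels).fillna({"had_violation": 0})
```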

It turned out that a model built on those metrics predicted the white collar crime data with about 90% accuracy–but correlation isn’t necessarily causation. You might be able to see why the number of investment advisors in an area could predict the amount of white collar crime. But liquor licenses and nonprofits? That’s more of a stretch; those counts may simply track population density, which in turn indicates more financial activity–and indeed, the hot spots highlighted on the map generally correspond to major urban centers.
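As a rough illustration of how a score like that is produced, one could fit a simple classifier on the hypothetical per-block table sketched above and report cross-validated accuracy. The point of the sketch is that a high number only certifies co-occurrence, not cause:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Features and labels from the hypothetical per-block table sketched earlier.
X = data[["advisor_count", "license_count", "nonprofit_count"]]
y = data["had_violation"]

# Cross-validated accuracy: a high score shows only that these counts co-occur
# with past enforcement actions, not that they cause white collar crime.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```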

“Part of what we wanted to show with this is that while these have some kind of correlation, the causal relationship is really tenuous,” Tseng says. “That’s one of the issues with predictive policing in general. People don’t really care to examine these relationships between the data they’re using and the predictions. It needs more oversight.”

[Image: Sam Lavigne, Brian Clifton, and Francis Tseng for The New Inquiry]
Their model is intentionally biased, in order to demonstrate just how biased real-life models can be. The data used in predictive policing tools isn’t carefully regulated, reflecting a lack of oversight. According to the team’s research, Tseng says, tools like HunchLab use data like time of day, weather, geographical features, and even phases of the moon in their crime prediction models.

“The thing is that they want to increase their accuracy as much as possible,” he says, though he also points out that there’s no single definition of what accuracy in this context means. “My understanding is they just pile on with whatever they can get their hands on, the more the better.”
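Piling on predictors is trivial in practice. Purely as an illustration–not a reconstruction of any vendor’s code–a moon-phase feature can be attached to incident records in a few lines, even though it has no plausible causal link to crime:

```python
import pandas as pd

SYNODIC_MONTH = 29.53059  # average days per lunar cycle

def moon_phase(dates: pd.Series) -> pd.Series:
    """Approximate lunar phase in [0, 1), measured from a known new moon."""
    reference_new_moon = pd.Timestamp("2000-01-06")
    days_since = (dates - reference_new_moon).dt.total_seconds() / 86400
    return (days_since % SYNODIC_MONTH) / SYNODIC_MONTH

# Hypothetical incident records with a timestamp column.
incidents = pd.DataFrame({"date": pd.to_datetime(["2017-03-01", "2017-03-15"])})
incidents["moon_phase"] = moon_phase(incidents["date"])
print(incidents)
```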

During the project, the team reached out to about 2,000 mayors of the most populous American cities to see if they’d be interested in using the tool to actually catch white collar criminals–partially to make a point, but also to gauge the mayors’ interest in toughening up on financial crimes. Tseng says that while few replied, their general response was that catching people committing white collar crimes is simply less important. “Police forces won’t use this technology even if it exists because that’s not where their priorities are structured,” he says.

One of the most fascinating elements of the interactive appears when you zoom in on a particular zip code. A sidebar shows you a composite image of the “most likely suspect,” which Tseng says is always a “really generic-looking white man.” Right now, the image is based on photos of 7,000 corporate executives, but in a white paper the team proposes building out the tool to assess white collar criminality on an individual level. While the idea is deliberately discriminatory (only white men would be targeted) and meant to be satirical, the white paper references a real study that suggested it was possible to use machine learning algorithms to discern a person’s criminality based purely on their facial features.
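A composite like that is, in essence, a pixel-wise average over many aligned face photos. A minimal sketch of that idea follows; the directory name is hypothetical, and the face-alignment step a real composite would need is omitted:

```python
import glob
import numpy as np
from PIL import Image

# Hypothetical folder of pre-aligned, same-size face crops.
paths = glob.glob("executive_faces/*.jpg")

# Average the images pixel-wise to produce a composite face.
stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
                  for p in paths])
composite = stack.mean(axis=0).astype(np.uint8)
Image.fromarray(composite).save("most_likely_suspect.png")
```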

“That’s supposed to communicate this idea of the racially biased profiling aspect of predictive policing applications that are otherwise overlooked,” he says. “Hopefully by seeing this, [advocates of predictive policing] will see the flawed logic, the prejudice in their logic.”

Any time data feeds a predictive model, the human biases and structural discrimination embedded in that data can be perpetuated, creating a vicious cycle given credence by technology and statistics. But just like being a black man doesn’t make you a criminal, being a white man doesn’t make you a white collar criminal, either.

About the author

Katharine Schwab is an associate editor at Co.Design based in New York who covers technology, design, and culture.
