
When Will Americans Be Angry Enough To Demand Honesty About Algorithms?

The researchers at AI Now are proposing a way to regulate the algorithms at work in governments. Think of it as an environmental impact report for machines.

[Photo: Bettmann/Getty Images]

In the 1960s, a series of man-made disasters–from oil spills to rivers literally catching fire–enraged Americans and helped to spur politicians to create environmental laws, including the requirement that federal agencies prepare Environmental Impact Reports on the effects of any new proposed construction projects. It seems like common sense now, but it took many decades (and irreparable damage) to get the government to create these regulations.

Where’s the metaphorical burning river for algorithms? Perhaps the revelation that predictive policing software is deeply biased against people of color. Or outrage over the use of predictive algorithms to evaluate teachers. Or maybe it’ll be something far more pedestrian, like Amazon pushing its own products instead of the cheapest ones. Either way, we’re nearing a moment of reckoning over how government regulates AI, and the road to reasonable, working legislation will be a long one.

This week the AI Now Institute, a leading group studying the topic, published its own proposal. It’s called an “Algorithmic Impact Assessment,” or AIA, and it’s essentially an environmental impact report for automated software used by governments. “A similar process should take place before an agency deploys a new, high-impact automated decision system,” the group writes.

An AIA would do four basic things, AI Now explains: First, it would require any government agency that wants to use an algorithm to publish a description of the system and its potential impact. Second, the agency would give external researchers access to the system so they can study it. Third, the agency would publish an evaluation of how the algorithm will affect the public and how it plans to address any biases or problems. And lastly, an AIA would require the agency to create a process by which members of the public can hold it accountable if it fails to disclose important information about an algorithm.
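To make that process concrete, here is a minimal, purely illustrative sketch in Python of how an agency might represent an AIA filing as a structured record. The AlgorithmicImpactAssessment class, its field names, and the example filing are invented to mirror the four-part description above; none of it is drawn from AI Now’s actual proposal.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmicImpactAssessment:
    # Hypothetical record mirroring the four AIA elements described above.
    system_description: str            # 1. public description of the system and its potential impact
    external_researcher_access: bool   # 2. whether outside researchers may study the system
    public_impact_evaluation: str      # 3. evaluation of public effects and planned bias mitigations
    challenge_process: str             # 4. how the public can contest missing disclosures
    known_issues: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A disclosure is only meaningful if every required element is filled in.
        return all([
            bool(self.system_description.strip()),
            self.external_researcher_access,
            bool(self.public_impact_evaluation.strip()),
            bool(self.challenge_process.strip()),
        ])

# Example: a fictional agency filing for a risk-scoring tool.
aia = AlgorithmicImpactAssessment(
    system_description="Pretrial risk-scoring tool used to inform bail recommendations.",
    external_researcher_access=True,
    public_impact_evaluation="Annual audit of score disparities across demographic groups.",
    challenge_process="Written appeals reviewed by an independent oversight board.",
)
print(aia.is_complete())  # True

Nothing about this sketch is official; it simply shows that the four requirements are concrete enough to be checked mechanically.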

In short, the AIA would allow the public to understand how machines are making decisions in their government. It would also allow researchers to peer into these “black boxes” and verify that they’re really fair, and it would encourage tech companies to do due diligence on the products they sell to governments.

The proposal doesn’t come out of the blue. It’s a recommendation for the City of New York, which in January created a new task force aimed at regulating the automated decision systems used by the city.

Who would stand against this kind of transparency? In large part, the companies that build this software. As researcher Julia Powles explained in the New Yorker, companies cite proprietary technology as a reason not to explain how their algorithms work (for example, if Google were to reveal the inner workings of one of its algorithms, Microsoft or Amazon could steal that technology). A private contractor that builds software New York City uses to predict who will be a repeat criminal offender, for example, could claim its work is protected as a trade secret–and thus avoid disclosing anything about it. That gives tech companies a “broad veil of protection,” in the words of the Center for Democracy and Technology’s Taylor R. Moore, against civil rights laws and future laws that attempt to give us transparency about the automated systems in our cities.

AI Now’s post argues that by requiring transparency from vendors that build government software, the AIA would reward companies that are actually building fair software and penalize those that don’t want to disclose anything about their tech. “These new incentives encourage a race to the top of the accountability spectrum among vendors,” the group writes, emphasizing that government employees also need to know how to assess what makes software risky before they sign a contract with a vendor.

The New York City task force’s findings won’t become public until 2019–but in the meantime, groups like AI Now are working to imagine what a law that protects us from automated bias should look like. Let’s hope the task force listens.

About the author

Kelsey Campbell-Dollaghan is Co.Design's deputy editor.
