
Scientists To Trump: Stop Trying To Use Racist AI

ICE wants a machine learning system to automate its decision-making. Scientists, engineers, and technologists say it’s a bad idea.

[Photo: Jim Watson/AFP/Getty Images]

Machine learning algorithms are good at a lot of things, but they’re not so good at producing unbiased results, especially when they’re trained on data that’s already biased. Yet the Trump administration is looking for a way to automate immigration decisions using AI based on biased and irrelevant information. And now, scientists are refusing to build it.


In June, U.S. Immigration and Customs Enforcement (ICE) released a letter saying the agency was searching for someone to design a machine-learning algorithm that would automate information gathering about immigrants and determine whether that information could be used to prosecute them or deny them entry to the country. The ultimate goal? To enforce President Trump’s executive orders, which have targeted Muslim-majority countries, and to determine whether a person will “contribute to the national interests,” whatever that means.

Given that the information you can find about someone online is a poor proxy for whether they are a terrorist or should be admitted into the country, any such system is likely to be deeply biased, relying on information like a person’s religion, income, and whether or not they’re critical of the Trump administration. Something similar is already happening in the U.S. criminal justice system, where AI technology is being used to reinforce existing biases that disproportionately discriminate against black people, as ProPublica has reported extensively.

Last week a group of 54 scientists and technologists who specialize in machine learning wrote a letter rebuking ICE, explaining that there’s no way to create a computer program that could “provide reliable or objective assessment of the traits that ICE seeks to measure.” Instead, the letter says, “algorithms designed to predict these undefined qualities could be used to arbitrarily flag groups of immigrants under a veneer of objectivity.”

The signees include researchers at MIT, Stanford, NYU, Columbia, Carnegie Mellon, and many more of the country’s top universities, as well as at Microsoft Research and Google Research. While on its face the letter simply urges ICE to abandon the initiative, it also functions as a boycott: a declaration that none of the signees would ever participate in building such a biased algorithm.

Just because you post something critical of Trump’s foreign policy on Facebook doesn’t mean you’re a threat to national security. A low income doesn’t mean you have nothing to contribute to society. And your religion doesn’t imply that you have radical tendencies or would ever commit an act of terrorism. Not only would the data used to train the algorithm be flawed; the scientists also write that machine learning models generate a large number of false positives when trying to predict extremely rare real-life events like a terrorist attack. That means this hypothetical system would flag far more innocent people than genuine threats, endangering their well-being and their futures for no reason whatsoever. According to the ACLU, such a system would threaten the civil liberties of both immigrants and citizens.
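The math behind that false-positive warning is simple base-rate arithmetic. Here is a minimal sketch of it in Python; all the numbers (population size, threat rate, classifier accuracy) are hypothetical assumptions chosen for illustration, not figures from ICE or from the scientists’ letter:

```python
# Illustrative base-rate arithmetic: even a highly accurate classifier,
# applied to an extremely rare event, flags mostly innocent people.
# Every number below is a hypothetical assumption for illustration.

population = 1_000_000      # people screened (assumed)
true_threats = 100          # actual rare cases among them (assumed: 0.01%)

sensitivity = 0.99          # assumed: classifier catches 99% of real threats
false_positive_rate = 0.01  # assumed: wrongly flags 1% of innocent people

innocent = population - true_threats

true_positives = sensitivity * true_threats       # ~99 real threats flagged
false_positives = false_positive_rate * innocent  # ~9,999 innocents flagged

precision = true_positives / (true_positives + false_positives)
print(f"Total people flagged: {true_positives + false_positives:,.0f}")
print(f"Share of flagged people who are real threats: {precision:.1%}")
# Prints roughly 1%: about 99 of every 100 people the system flags
# would be innocent, even with a 99%-accurate classifier.
```

Under these assumed numbers, a classifier that sounds impressively accurate still buries its handful of true hits under thousands of false alarms, which is exactly the failure mode the letter describes.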

This is one reason many of these scientists and companies are trying to build ethical standards for AI. For instance, the AI Now Institute at NYU focuses on the social implications of machine learning, including bias, inclusion, security, and civil rights. Its cofounder and director of research, Kate Crawford, is one of the signees of the letter.


But the question remains: Will the rest of tech stand up to Trump?

About the author

Katharine Schwab is a contributing writer at Co.Design based in New York who covers technology, design, and culture.
