
You Can Now Take A Class On How To Make AI That Isn’t Evil

To build artificial intelligence that works for humanity, we need to start by educating the best and the brightest building it.

[Photo: The7Dew/iStock]

Technology was supposed to save the world. Instead, it might end it. Our precious artificial intelligence is a racist job thief that’s gotten us addicted to our phones, and may one day turn the world into goo if we aren’t more careful.


Fei Fang [Photo: courtesy Fei Fang/CMU]
That’s why Fei Fang, an assistant professor at CMU’s School of Computer Science, designed a new course this semester, one she sincerely hopes other universities will copy. It’s called Artificial Intelligence for Social Good, and it focuses on the unique challenge of creating AIs that don’t just get us refreshing our social media feeds faster and more often, but fundamentally help solve some of the world’s biggest problems, from poverty and hunger to healthcare, privacy, and animal extinction.

“There’s a question not being emphasized enough: How can we make AI useful to benefit society now, or in the very near term–not just [present] in your daily life, but addressing real challenges,” says Fang. “That’s the reason why I started this new course.”

Fang herself is no stranger to the topic of socially beneficial AI. Her research has helped the U.S. Coast Guard leverage principles of game theory to protect N.Y.C. from terrorism. She’s also worked with several wildlife agencies, using AI to predict where tiger poachers will strike next.

These kinds of AI problems aren’t just funded less than their counterparts in the Valley because they often have no immediately profitable goal in sight; they’re also typically much more difficult to research than, say, teaching a computer to identify certain types of coats and chairs. “For these kinds of problems, researchers need to delve deep into the problem, the challenge,” says Fang. “It’s not like some problems that have a readily available data set, then you apply a publicly available code package, and the problem is solved. They need more in-depth understanding of the challenges.”

Social problems are messy, and often fueled by anecdotal evidence that can be hard to quantify. During her own work in preventing tiger poaching, Fang gathered data from authorities who ran patrol routes in Africa. The problem was, this data was shaped by human error: patrollers might have missed, or failed to record, clues that poaching had happened in an area. All of that had to be accounted for in the code. From this data, the team hoped to build a predictive model of where a poacher would strike next. “Even if you can predict some sort of poaching activities, it’s not always good to just go to areas with high predicted poaching activity!” says Fang. “Because as you change your patrolling strategy, the poachers will react.” Finally, the AI had to craft new patrol routes that took all of this logic into account. And those routes, of course, had to be driven by vehicles that couldn’t clear every environmental obstacle, so the optimal routes had to be defined within the limitations of the terrain.

“None of these aspects can be addressed with a publicly available commercial tool, or directly addressed by sitting in an office,” says Fang. “That means we need to talk to experts, understand the problem, and propose solutions to it.”
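To make the poachers-will-react point concrete, here is a deliberately tiny sketch of the game-theoretic idea, in Python. This is not Fang’s actual system; the regions, animal values, and capture penalty are all invented for illustration. It assumes a toy zero-sum game in which a defender patrols one of three regions, a poacher attacks one, and the poacher only scores a region’s value if it goes unpatrolled. It then uses fictitious play, a classic algorithm that converges to equilibrium in zero-sum games, to recover a randomized patrol plan:

    # Toy example only: the values and penalty below are hypothetical.
    values = [10.0, 6.0, 3.0]   # animal value of each region to a poacher
    PENALTY = 5.0               # what the poacher loses if caught

    def poacher_payoff(patrol, attack):
        # Zero-sum game: the defender's payoff is the negative of this.
        return -PENALTY if patrol == attack else values[attack]

    n = len(values)
    patrol_counts = [1] * n   # how often the defender has patrolled each region
    attack_counts = [1] * n   # how often the poacher has attacked each region

    # Fictitious play: each side repeatedly best-responds to the other's
    # observed mix of moves; the long-run frequencies approach equilibrium.
    for _ in range(20000):
        p = [c / sum(patrol_counts) for c in patrol_counts]
        attack = max(range(n), key=lambda a: sum(
            p[d] * poacher_payoff(d, a) for d in range(n)))
        q = [c / sum(attack_counts) for c in attack_counts]
        patrol = max(range(n), key=lambda d: -sum(
            q[a] * poacher_payoff(d, a) for a in range(n)))
        patrol_counts[patrol] += 1
        attack_counts[attack] += 1

    total = sum(patrol_counts)
    print("randomized patrol mix:", [round(c / total, 2) for c in patrol_counts])

The naive reading of a predictive model, always patrol the highest-value region, just pushes the poacher to the next-best region unopposed; the equilibrium strategy instead randomizes across all three regions, which is exactly the reaction dynamic Fang describes.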


[Photo: courtesy Fei Fang/CMU]
In other words, Fang hopes to instill in her students the idea that they should approach AI almost like an exercise in design thinking: to truly understand a problem by observing it on the ground, and to consider many solutions with all of the varying tools at their disposal, rather than firing up a data set and some machine learning and applying a Band-Aid in code. So as part of the class, students will be challenged to choose a social problem to solve with AI and investigate it as a journalist, designer, or anthropological researcher would.

Fang’s approach might sound obvious, but in the burgeoning world of AI, it’s not. Contemporary AIs make decisions that we cannot even understand or deconstruct, and, more and more, these decisions hurt our society. They stop a person from getting a mortgage because of their race. Or they show a woman an ad for a lower-paying job than a man would see. And if we want better AI, we must start by rethinking how AI is taught in schools. As one industry insider put it to me when describing AI coders, “It’s all just math to them.” Clearly, these numbers need a conscience, which means the people programming them need a conscience, too.

When I asked whether that means she’ll be partnering with nonprofits on student-driven pro bono projects, Fang hinted that all such ideas were on the table, but reminded me that the class is a new project of its own that will need plenty of optimization year after year. “This is the first semester, so we will keep developing the curriculum,” she says. “I hope this course can inspire the students to think big and deep into what we can help address–not just to amuse or entertain people–but address problems society is facing.”


About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.
