
3 Problems With AI That Only Design Can Solve

In an era when users have to theorize about how algorithms work, design has never been more necessary.

[Source Image: Rogotanie/iStock]

AI is in desperate need of designers–in part because machine learning products often originate in the world of research, where design principles are rarely applied.


Closing the gap between researchers and designers is the goal of Google’s People + AI Research (PAIR) initiative, which launched earlier this summer. PAIR aims to establish design principles for AI systems and build tools for designers and developers as AI moves quickly from the research lab into the products we use every day. At PAIR’s first conference in Cambridge, Massachusetts, this week, researchers from Google, MIT, Carnegie Mellon, and the University of Illinois presented their ideas on machine learning systems that have human needs at their core. Based on their comments, here are three problems AI is currently facing–and how the right design could help.


The “Folk Story” Effect

At the conference, University of Illinois computer science professor Karrie Karahalios spoke about her research into people’s perceptions of their Facebook News Feeds–and specifically the “folk stories” we tell ourselves about how this mysterious algorithm works.

Your theory that if you visit one person’s Facebook profile a lot, you tend to see more of their posts? That’s a folk story. Your sense that you tend to see a lot of posts from people who have similar interests and mutual friends? That’s a folk story, too.

Karahalios calls them folk stories because they circulate informally and are not actually verified by Facebook–instead, they’re the myths we invent to help us understand an inscrutable system that has a lot of control over our lives.

After studying people’s beliefs about how the News Feed works, she created a tool that let people see what was happening behind the scenes. The tool showed them every post from all of their friends alongside just the posts that actually appeared in their real News Feed, then presented a divided column indicating which friends fell into the “rarely see” category, which into the “sometimes see” category, and which into the “always see” category.
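The comparison the tool makes is straightforward to sketch. Here is a minimal, hypothetical Python illustration–the friend names, post data, and cutoff thresholds are invented for illustration, and this is not Karahalios’s actual tool–that buckets friends by how much of what they post survives the feed’s filtering:

```python
from collections import Counter

def bucket_friends(all_posts, feed_posts):
    """Group friends by how much of what they post actually reaches the feed.

    all_posts  -- list of (friend, post_id) pairs, one per post a friend wrote
    feed_posts -- set of post_ids the ranked News Feed actually displayed
    """
    written = Counter(friend for friend, _ in all_posts)
    shown = Counter(friend for friend, post_id in all_posts if post_id in feed_posts)

    buckets = {"rarely see": [], "sometimes see": [], "always see": []}
    for friend, total in written.items():
        ratio = shown[friend] / total
        if ratio < 0.25:        # cutoffs are arbitrary, purely for illustration
            buckets["rarely see"].append(friend)
        elif ratio < 0.75:
            buckets["sometimes see"].append(friend)
        else:
            buckets["always see"].append(friend)
    return buckets

# Toy data: Ana writes five posts but only one surfaces; Ben's single post shows up.
all_posts = [("Ana", 1), ("Ana", 2), ("Ana", 3), ("Ana", 4), ("Ana", 5), ("Ben", 6)]
feed_posts = {5, 6}
print(bucket_friends(all_posts, feed_posts))
# {'rarely see': ['Ana'], 'sometimes see': [], 'always see': ['Ben']}
```

The point of surfacing that simple ratio isn’t the math–it’s that making the filtering visible at all changed how people felt about the system.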

Once people came to terms with the fact that their feed was controlled by an algorithm (a staggering 38% of the people in the study, conducted in 2013 and 2014, didn’t realize this), Karahalios found that they were generally happy with the algorithm’s decisions about who was in which column. But she also found participants wanted more control over the content they were being fed.


In short, without transparency about how an algorithm works, users feel blindsided. But when the logic behind a system is presented clearly–as in the case of Karahalios’s experiment–users can trust it and its decisions. That’s where designers can step in, building interfaces that make an AI’s reasoning clear and engendering user trust in the process. And because a user won’t always agree with an AI’s decisions, since they’re often based on sweeping, categorical assumptions, designers need to build in mechanisms for control and feedback.

In 2017, questions about how Facebook’s News Feed and advertising work have only gotten more intense–especially given how quickly the platform allows fake news to spread and the revelation that the company sold ads to Russian propagandists during the 2016 election. Transparency and feedback aren’t just good design. They’re necessary design, if technology’s most potent problems aren’t going to damage our social and political well-being.


Putting The Technology Before The Problem

When building a new product with machine learning, technologists tend to work in reverse: They start with a data set, then find a problem related to that data set, train a model, decide if it’s good enough, and then launch a product–the world’s first smart you-name-it.

That’s according to Jess Holbrook, UX lead for Google’s machine learning research group. The phenomenon isn’t limited to technologists, either: researchers, faced with a technological breakthrough, pick the first problem it could be applied to and push the result out into the world without talking to anyone about it. That’s exactly what Holbrook is trying to prevent through the PAIR initiative. “What if we started to take what we’ve used in the UX field for a long time and we approach machine learning development through that lens?” he asks.

That means tackling every new project by starting with the problem, not the technology. Development should only begin once you’ve confirmed that the problem is one machine learning can actually solve–and even then, user feedback needs to be incorporated throughout the process. Holbrook believes that for this to work, designers need to be present at every level of development, from when the data set is analyzed to when the algorithm is written, through to the final stages of interaction design.

“We need to be down at the data set level, designing there,” Holbrook says. “We need designers asking questions–does this meet users’ needs?”


We Need More Drivers At The Wheel

The massive data sets on which machine learning algorithms are trained can be wildly biased and flawed. That’s a huge problem, because once a model is built, it can be hard to look back inside it and figure out what went wrong.
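Some of that flaw-finding can happen before any model is trained. As a loose illustration of what auditing a data set might look like–the file name, column names, and loan-approval scenario below are hypothetical, not drawn from PAIR’s work–a designer or researcher could start by checking how labels are distributed across a sensitive attribute:

```python
import pandas as pd

# Hypothetical training data for a loan-approval model; file and column
# names are invented for illustration.
df = pd.read_csv("loan_training_data.csv")

# A first-pass audit before any model is trained: how do outcomes and
# sample sizes vary across a sensitive attribute?
approval_rates = df.groupby("applicant_region")["approved"].mean()
group_counts = df["applicant_region"].value_counts()

print(approval_rates)  # wildly different rates per group hint at labeling bias
print(group_counts)    # tiny groups hint at sampling bias the model may ignore
```

An audit like this can flag that something looks skewed, but it can’t say what a fair outcome should be.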

Technical fixes can help, but who makes the decision about what’s considered fair in the first place? That’s where designers can come in. By understanding the user, designers are in a position to help researchers understand who they’re creating a tool for, effectively becoming a humanist voice for fairness and against bias.

“I work a lot under the hood, making the machinery,” says Google researcher Maya Gupta, who heads up a research team nicknamed “GlassBox” focused on making algorithms that learn from small data sets more accurate. “But we need more work on the steering wheel. Where do we want to go, and what should those interfaces look like to humans?”

In a tech-centric field, designers have the opportunity and responsibility to act as an advocate for the user–making them something of a watchdog. “We want to get to a place where we’re all the watchdog,” Holbrook says. He aspires to instill the principles of human-centered design in everyone at Google, so designers aren’t the only ones looking out for users’ best interests as algorithms are developed. And if every member of a team isn’t looking out for the user, that’s a problem. “If you’re talking about yourself as a watchdog, you’re exposing a bug in the system,” he says.

With AI research exploding and giant companies like Google investing in bringing the technology to every industry, designers will be called on more and more often to be a voice for the user–really, a voice for us all.

About the author

Katharine Schwab is a contributing writer at Co.Design based in New York who covers technology, design, and culture.
