The Art Of Manipulating Algorithms

Meet the “AI nudge,” a method of subtly “persuading” algorithms.

[Images: Fabian Irsara via Unsplash, chaluk/iStock, Markus Spiske via Unsplash]

This winter, the events that led to President Trump’s election reverberated through the technology and design world.

The phrase “fake news,” which rose so quickly to prominence in November and was co-opted so seamlessly by Trump in the new year, was hotly debated by journalists, designers, and Mark Zuckerberg alike. It introduced many Americans to the fact that algorithms are already our silent partners. They shape us–our views, our finances, our friendships, our purchases, and even our laws. The kicker? There seems to be little we can really do about it, aside from lobbying technology companies for more transparency or becoming digital ascetics. To borrow a phrase: join or die.

J. Nathan Matias, a PhD candidate at the MIT Media Lab’s Center for Civic Media, studies safety and fairness in online communities. He has argued that testing how online products and platforms affect us–and how they affect civil liberties and the common good in general–isn’t just necessary, but an obligation, no different from the testing we demand for our food, cars, and medicines.

Matias is the creator of CivilServant, software that lets online communities run A/B tests on different approaches to moderation. Say you’re a moderator on a social media platform and want to see whether posting the rules above the comments section changes how many people actually follow them. With CivilServant, you can run the message on a random subset of posts and compare the outcomes, as sketched below. It’s a systematic way to test solutions to what Matias describes as an age-old question of governance: How do you defend freedom of speech but also stop the spread of straight-up lies?
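To make that concrete, here is a minimal sketch, in Python, of the kind of randomized comparison CivilServant automates. The condition names, the helper functions, and the toy data are illustrative assumptions, not CivilServant’s actual API:

```python
import random

# Hypothetical moderation experiment: does pinning the subreddit rules
# above the comments change how many commenters break them?
CONDITIONS = ["rules_pinned", "no_message"]  # treatment vs. control

def assign_condition(post_id: str) -> str:
    """Randomly assign each new post to one condition."""
    return random.choice(CONDITIONS)

def average_violations(results):
    """Average rule violations per post, grouped by condition."""
    sums, counts = {}, {}
    for condition, violations in results:
        sums[condition] = sums.get(condition, 0) + violations
        counts[condition] = counts.get(condition, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

# Toy data: (condition assigned to a post, rule violations in its thread)
observed = [("rules_pinned", 1), ("no_message", 3),
            ("rules_pinned", 0), ("no_message", 2)]
print(average_violations(observed))  # fewer violations under "rules_pinned"
```

A lower average under the pinned-rules condition would be evidence that the message works; a real study would also check whether the difference is statistically significant.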

As the country debated fake news this winter, Matias set up a test of an entirely new kind of approach, one powered by humans acting in unison to influence a powerful algorithm. “It may be the first time, as far as I can tell, that anyone’s done a field experiment out in the wild, a systematic effort, to influence the behavior of an algorithmic system for the common good,” he tells Co.Design about the study, the results of which he published this month on Medium.

To test the idea, he turned–like so many others searching for a glimpse into the soul of the internet–to its front page: Reddit.

Working with moderators, Matias set up an experiment with r/worldnews, a community of some 15.5 million subscribers who read and discuss the news. Reddit uses both human moderators and a ranking algorithm to promote news stories–the perfect place to test a new approach that relies on both. “We couldn’t control the AI system,” he explains. “We don’t run Reddit. We couldn’t control the code of Reddit’s algorithms. But we still wanted to influence how that system works, without restricting human agency and human liberty.”

His test was simple: Above any news link from sources commonly accused of publishing fake or misleading news, a box appeared asking readers to respond with links that could “help us improve this thread by linking to media that verifies or questions this article’s claims.” (On other posts, a different box asked readers to fact-check and “downvote,” but more on that later.)
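The mechanics of that design might look something like the following sketch. The domain list and function names here are stand-ins invented for illustration; the actual experiment ran through CivilServant with a moderator-curated list of frequently disputed sources:

```python
import random

# Illustrative stand-ins, not the domains the study actually flagged.
TABLOID_DOMAINS = {"example-tabloid.com", "example-clickbait.net"}

FACT_CHECK = ("Help us improve this thread by linking to media that "
              "verifies or questions this article's claims.")
FACT_CHECK_VOTE = FACT_CHECK + " Consider downvoting links you can't verify."

def sticky_text(domain: str):
    """Assign an eligible post to a condition; return its box text, if any."""
    if domain not in TABLOID_DOMAINS:
        return None  # ordinary posts aren't part of the experiment
    condition = random.choice(["fact_check", "fact_check_downvote", "control"])
    return {"fact_check": FACT_CHECK,
            "fact_check_downvote": FACT_CHECK_VOTE,
            "control": None}[condition]

print(sticky_text("example-tabloid.com"))  # one of the two messages, or None
```

Randomly leaving some eligible posts in a no-message control group is what lets the researchers attribute any difference in commenter behavior to the box itself.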

Basically, it asked readers to fact-check each story. That’s in contrast to Facebook, which is partnering with third-party fact-checkers to verify sources, or to researchers who suggest algorithms could kill fake news automatically. Instead, Matias’s approach relied on humanity at large to shift the behavior of Reddit’s algorithms.

It worked. Commenters responded with fact-checking links twice as often as they did on comparable posts without the prompt. More importantly, the box actually influenced Reddit’s algorithm, which pushed tabloid links down by a factor of two when readers were encouraged to respond with fact-checking links. Meanwhile, the alternate box, which asked readers not only to fact-check stories but also to “downvote” unreliable ones, canceled out that ranking benefit. It seems users didn’t mind being asked to help verify the truth of a story, but didn’t like being told to punish fake news in Reddit’s rankings.
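As a back-of-the-envelope illustration of those two effect sizes, with made-up numbers (the published analysis used fitted statistical models, not raw ratios like these):

```python
# Toy numbers only; none of these figures come from the study.
control_rate = 10 / 100  # fact-checking links per 100 comments, no box
nudged_rate = 20 / 100   # fact-checking links per 100 comments, with the box
print(nudged_rate / control_rate)    # 2.0 -> fact-checking "twice as much"

control_score = 400      # hypothetical ranking score of a tabloid link
nudged_score = 200       # the same link when readers are nudged to fact-check
print(control_score / nudged_score)  # 2.0 -> "pushed down by a factor of two"
```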

Without restricting anyone’s freedom to share news or comment, Matias was able to slow the spread of fake news via algorithm–simply with a suggestion to human readers. He named this new approach the “AI nudge,” after an idea coined by Richard Thaler and Cass Sunstein, whose 2008 book, Nudge, argued that subtle behavioral “nudges” could influence people to make better decisions about everything from their retirement savings to their long-term health. Matias’s “AI nudge” similarly prods users simply to think critically, which sways the algorithm and, in turn, benefits other users. It’s proof that humans, collectively, can influence the way algorithms behave–with a little help, in the form of a well-placed reminder from moderators, developers, or designers. In a world where the workings of most algorithms are protected by law, and some developers can’t even predict how their own algorithms will behave, it’s a heartening reminder of the power of human users.

But the crux of the research, and of Matias’s work, is the systematic testing of technology products–in this case, Reddit and its algorithm. “Each algorithmic system probably behaves differently than the next, and in that context it becomes incredibly valuable to be able to look systematically at the outcomes of the combined efforts of humans and machines together,” he says. In his view, tech companies, developers, and even users have a moral obligation to test how these platforms behave–and to make the results of those tests public. In December, Matias, Allan Ko, and CivilServant co-creator Merry Mou published a passionate essay arguing that we have an “obligation to experiment” with how technology impacts the liberty and safety of users:

Organizations and platforms that operate at larger scales should be more subject to this obligation. When a service mediates the life experiences of millions or billions of people, even small risks can add up. Examples of types of platforms that satisfy these conditions might include (but certainly are not limited to): messaging applications (Facebook messenger, WeChat, Snapchat, Slack), public and semi-public forums (Wikimedia, Twitter, Facebook, Reddit, YouTube), marketplaces and markets (app stores, Etsy, Amazon, Tinder, Uber, Airbnb), search engines (Google, DuckDuckGo), network providers (ISPs), operating systems (Apple iOS, Ubuntu, Android), and internet of things applications (Nest, Fitbit).

Even as we encourage institutions to take this obligation seriously, the rest of us can also conduct our own studies.

After all, they write, most products in our lives are subject to rigorous safety testing. In the 1980s, NASA and the FAA famously crashed a Boeing 720 in the desert to better understand accidents and improve safety. In Victorian London, doctors joined together to test food for toxic contamination, eventually leading to the first food safety commission. Now, the trio argues, it’s time for a similar revolution in how we test our technology products for safety. And thanks to software like CivilServant, it’s never been easier to do so independently, even when the tech companies that design those products refuse.

Over the next few months, Matias is working to expand CivilServant to a broad range of social media platforms where moderators and users could test their own AI nudges independently. “Hopefully in the next few years [we can] generate hundreds if not thousands of new studies on questions like the one that we just tested,” he says. “This is the web. It’s really easy to run experiments!”

About the author

Kelsey Campbell-Dollaghan is Co.Design's deputy editor.
