
The Algorithmic Democracy

AI is changing how we think, debate, and choose.

The day before the election, as millions of Americans were feeling confident that the vast majority of the country shared their opinions, a pair of researchers at the University of Southern California's Information Sciences Institute published a paper that looked closely at something many of us ignored: the provenance of political tweets. Where do they come from? How many are really made by humans? And if they're not, who is designing these crude straw-bots?

Analyzing Twitter during three televised debates, they discovered that 20% of all political tweets were made by bots. The patterns and provenance of those bots, both pro-Clinton and pro-Trump, were mysterious; Georgia dramatically outpaced any other state in producing them. ("We have no evidence of who's behind the [Georgian] bots," says author Emilio Ferrara over email.) Ferrara and his coauthor Alessandro Bessi concluded with a warning. These bots, they wrote, can make online conversations more polarized. They make it easier to spread factually incorrect news stories. And they are easy to make: Nearly anyone "could obtain the operational capabilities and technical tools to deploy armies of social bots and affect the directions of online political conversation."

Twitter bots are not artificial intelligence; they're little more than spreadsheets, as Fast Company's Ainsley Harris recently explained. Yet, as Ferrara and Bessi warn, they coexist easily alongside human Twitter users, creating noise and amplifying misinformation, and they exemplify the subtle way that intelligent systems now shape our conversations, thoughts, and choices. "I think this is a serious social issue with strong implications, and therefore both academics and industry experts should try [to] address it," Ferrara told Co.Design. "I don't know who has the responsibility to do it. I feel we all, as researchers, have some responsibility."
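
To make concrete just how simple such a bot can be, here is a minimal sketch, not drawn from the researchers' paper: a script that reads canned talking points from a CSV file and posts them on a randomized timer. The post_to_twitter function and the talking_points.csv file are hypothetical stand-ins for whatever posting API and message list an operator would actually use.

```python
import csv
import random
import time

def post_to_twitter(text: str) -> None:
    """Hypothetical stand-in for a real posting call made through a Twitter client library."""
    print(f"POSTED: {text}")

def run_spreadsheet_bot(csv_path: str, min_delay: int = 300, max_delay: int = 1800) -> None:
    # Load canned messages from a spreadsheet-like CSV with a single "text" column.
    with open(csv_path, newline="", encoding="utf-8") as f:
        messages = [row["text"] for row in csv.DictReader(f)]

    # Post them in a shuffled loop, with randomized delays to mimic human pacing.
    random.shuffle(messages)
    for text in messages:
        post_to_twitter(text)
        time.sleep(random.randint(min_delay, max_delay))

if __name__ == "__main__":
    run_spreadsheet_bot("talking_points.csv")
```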

The steady march toward a world where machines taught by humans make our lives easier, smoother, and more delightful has gone largely unquestioned. Now, as algorithms make their way into systems that deeply affect our democracy—not only the way we access journalism and discuss politics, but also how criminals are convicted and other fundamental mechanisms of our government—it's time to begin reckoning with AI.

All The News That's Fit To Your Feed

During and after the election, Facebook was thrust into the center of this debate. Machine learning is a major element of Facebook's News Feed design; AI is built into its user experience, a "feature" that lets us nest in isolated ideological suburbs alongside users who agree with us. The News Feed algorithm is a super-optimized gratification machine: it observes what types of content you enjoy, then curates more of it for maximum engagement, regardless of whether that content is true or false. The algorithm distorts the conversation in an invisible, insidious way. News stories are shown to those who are likely to like or reshare them, while users are shielded from stories the algorithm determines they won't enjoy.
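
To see why accuracy never enters the picture, consider a deliberately oversimplified sketch of an engagement-optimized ranker. This is an illustration of the general technique, not Facebook's actual code; the Story fields and the toy engagement model are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Story:
    headline: str
    is_accurate: bool  # known to a fact-checker, but invisible to the ranker below

def rank_feed(stories: List[Story],
              predict_engagement: Callable[[Story], float]) -> List[Story]:
    # Sort purely by predicted engagement; truth plays no role in the objective.
    return sorted(stories, key=predict_engagement, reverse=True)

def toy_engagement_model(story: Story) -> float:
    # Invented stand-in for a learned model: outrage-bait scores higher for this user.
    score = 1.0
    if "SHOCKING" in story.headline.upper():
        score += 2.0
    return score

feed = rank_feed(
    [Story("Budget committee releases report", True),
     Story("SHOCKING claim about candidate goes viral", False)],
    toy_engagement_model,
)
for story in feed:
    print(story.headline)  # the false but "engaging" story rises to the top
```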

"If Facebook took its self-described role as a technology company seriously, it might recognize its role in the gargantuan distribution of falsehoods sufficient to influence an entire election, and leverage technology to correct that," Sam Biddle argued on The Intercept in a post titled "Facebook, I'm begging you, please make yourself better." Within the company, some employees are now suggesting changes to the Newsfeed algorithm in response, reports the New York Times. Yet Facebook’s ability to judge the massive amount of content shared on its platform is sticky. "Frankly, too, I’m not sure I feel comfortable allowing Facebook’s heavy hand . . . determining what is legitimate and what is illegitimate news," wrote New York magazine’s Max Read wrote on November 9. "I feel even less comfortable ceding that determination to an algorithmic sorting mechanism as opaque as Facebook’s."

In a statement posted on Facebook this weekend, Mark Zuckerberg acknowledged that "there is more we can do" to prevent the spread of hoax news stories, but concluded that "I believe we must be extremely cautious about becoming arbiters of truth ourselves." If Facebook itself refuses to take a greater hand in sorting the hoaxes from the facts, perhaps it should at least give users greater agency to alter—or even simply understand—why they're being served a particular story. Could users opt out of the algorithm entirely? Or is Facebook's AI the toll users must pay for the free service?

It isn't easy to assign blame for the flawed way that nearly half of Americans now get their news, but it's clear that this effect won't be going away anytime soon. And it's vital that we keep talking about it.

Optimizing Guilt

While the election put Facebook in the spotlight, more serious questions could easily be asked of other forms of AI already at work within the mechanisms of government—whether in the campaigns of elected officials, in police departments, or in the courts themselves. Much of this software has been shown to help prevent crime in some cases, but it has also been shown to be critically flawed: when humans "train" intelligent systems, they pass along their biases.

This summer, a ProPublica investigation revealed how the criminal justice system has already come to depend on algorithms to predict the likelihood of a suspect committing another crime. ProPublica's study of one such algorithm, built by a private company, revealed that it incorrectly flagged black defendants as "future criminals" nearly twice as often as white defendants. "The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret," wrote Microsoft Research principal researcher Kate Crawford in a subsequent essay about bias in AI, "Artificial Intelligence's White Guy Problem." "There is little [judges] can do to understand the logic behind them."
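
ProPublica's core finding is easiest to grasp as a comparison of false positive rates: how often defendants who did not go on to reoffend were nonetheless flagged as high risk, broken out by group. The sketch below uses entirely made-up numbers to show the calculation; it is not ProPublica's real dataset or the vendor's actual scoring formula.

```python
from typing import List, Tuple

def false_positive_rate(records: List[Tuple[bool, bool]]) -> float:
    """records: (flagged_high_risk, reoffended) pairs for one group of defendants.
    FPR = flagged-but-did-not-reoffend / all who did not reoffend."""
    non_reoffenders = [flagged for flagged, reoffended in records if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders)

# Fabricated example data, shaped only to illustrate the kind of gap ProPublica reported.
group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 30 + [(False, True)] * 20
group_b = [(True, False)] * 20 + [(False, False)] * 80 + [(True, True)] * 30 + [(False, True)] * 20

print(f"Group A false positive rate: {false_positive_rate(group_a):.0%}")  # 45%
print(f"Group B false positive rate: {false_positive_rate(group_b):.0%}")  # 20%
```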

The same goes for a wide range of algorithms already in use elsewhere in the criminal justice system, including a proposed program that would bring such systems into prisons themselves. Part of a bill introduced in Congress last year, the program would use predictive tools to determine things like what type of prison a convicted criminal serves his or her time in, or even what visitation rules he or she is allowed, as the Atlantic reported this summer.

"Inmates would have no avenue to challenge or appeal their score, through a court or otherwise," wrote Christopher I. Haugh. "The very concept of predicting crime challenges the presumption of innocence, a central tenet of the American criminal-justice system."

Reckoning With The Impact Of AI

In this overly determined version of our world, what room is there for humans—whether users or citizens—to contradict the suggestions of a powerful, data-driven algorithm?

Should we expect companies to regulate themselves? Some major tech companies are moving to establish "best practices" for AI. Facebook, Amazon, and Google recently announced a partnership to promote "ethics, fairness, and inclusivity" in AI. Last year, Elon Musk announced a nonprofit called OpenAI that would seek to build "safe AI" that is "most likely to benefit humanity as a whole." (A researcher from the nonprofit recently presented an algorithm that learns to speak and interact by reading Reddit.) Backed by brilliant thinkers and companies with near-infinite resources, these programs are an acknowledgement of AI's challenges. But they are still, directly or indirectly, tied to companies—companies influenced by shareholders, boards, and profits. Meanwhile, the people funding them may have their own political biases and motives; Peter Thiel, a member of President-elect Trump's transition team, is listed as a sponsor of OpenAI.

Academic efforts, by contrast, are broader, and they offer some insight. A Stanford-led project called the One Hundred Year Study on Artificial Intelligence, which will publish a report on AI every five years until 2116, released its first report this fall. It assuaged the fears of anyone worried about a Terminator-style Singularity event (phew!), but offered sobering warnings about AI and the public good. First, we need more AI experts in government; politicians and policy makers don't understand AI well enough to make decisions about it. Second, we must make it easier (and legal) for researchers to study proprietary AI, which may be protected by copyright or other laws. Third, we need to better fund research on how AI affects society. Should the government take a greater role in regulating AI? Facebook, Amazon, Google, and others are already lobbying to avoid being regulated under the federal government's mandate over "critical infrastructure," the panel noted. It doesn't seem right, or even feasible, to regulate every piece of software; at the same time, some of it affects the public in critical ways. In short, the panel doesn't know.

So what can we do, as citizens and users? First, we need to be more cognizant of the ways this software influences our behavior. We must demand transparency about AI from the companies that build it, whether that means Facebook or the private firms whose software is used in government. (It's worth noting that after ProPublica revealed Facebook let advertisers filter their audience by race, the company changed its policies.) And we have to keep asking questions, long after the shock of the election has passed: Who benefits from a system that learns from users? What are its goals? Do users have a right to know how their behavior is being used as a training tool? Should AI have standard "guardrails" that prevent it from learning destructive behavior—and who defines what destructive is? Do users have a right to opt out?

Pop culture has spent a century gleefully predicting a world where machines are more intelligent than humans—oscillating between "our lives will be so easy!" and "we will be subjugated by a robot overlord!" Now we must talk about the less dramatic but far more ominous way that future has actually arrived: machines are already changing our lives, and our very democratic processes, in banal and nearly imperceptible ways.
