What Facebook’s Fight Against Fake News Got Wrong (And Right)

Facebook’s design team breaks down the good, the bad, and the ugly of fighting fake news.

After the 2016 election, we had a collective reckoning about the viral nature and impact of fake news. Empowered by the rise of social networks (namely Facebook, but others, too), propagandists leveraged the public’s newfound ability to easily share sensational, fictitious stories, taking over news cycles with partisan falsehoods.

By December of last year, Facebook offered hints of a mea culpa, promising to let users flag stories as false and to partner with independent fact-checkers who would label stories as “disputed.” The company implemented a series of tests and updates to the platform while polling users around the world. Now, with the close of 2017, the Facebook design team has taken to Medium to share what it got right, and what it got wrong, over the last 12 months.

Most of all, Facebook has done a psychological about-face in its approach. Rather than flagging some stories as potentially false, it now surfaces related, clarifying stories that have been fact-checked as true. Why? As Facebook explains, the psychology research supports the shift, and its own data backs it up.

[Screenshots: Facebook Design]
But first, what did Facebook get wrong? It started by requiring two independent fact-checks per article in order to label it “disputed.” That created such a bottleneck that fact-checkers couldn’t possibly keep up with the deluge of fake news. Beyond that, perhaps the biggest miss was in how Facebook communicated that a story was “disputed.” It deployed a red alert, something like a traffic sign, stuck right under a link to warn users of fake news. “Just because something is marked as ‘false’ or ‘disputed’ doesn’t necessarily mean we will be able to change someone’s opinion about its accuracy,” the team writes. “In fact, some research suggests that strong language or visualizations (like a bright red flag) can backfire and further entrench someone’s beliefs.”

Facebook found something humbling: Click-through rates on articles flagged with its red “disputed” warning didn’t differ much from those on the same hoax stories when they weren’t flagged at all. Essentially, its own data seemed to support the psychology of backfire and entrenchment.

But another intervention did seem to work. When Related Stories were loaded with fact-checked articles offering a counterpoint to the hoax link, click-throughs fell, and, unlike “disputed” flags, these Related Stories required zero taps to get better facts to the reader. The correct information was just there, staring them right in the face.

This approach also allowed Facebook to soften its policies on fact-checks themselves: an article fact-checked by a single source could now be included in these Related Stories, and that story might address what was both true and false in a fake news link (yes, Hillary Clinton served as secretary of state, but no, she didn’t operate an underground pedophile pizza ring). Put more simply: A disputed flag alone is limited because its imagery boils a story down to true or false; clarifying links can offer more meaningful context.

One intervention Facebook began in 2016 will continue into the future: a feature that automatically sends a notification to anyone who shared a fake news link as soon as that link has been fact-checked. “When someone shares this content going forward, a short message pops up explaining there’s additional reporting around that story,” explains Facebook. “Using language that is unbiased and nonjudgmental helps us to build products that speak to people with diverse perspectives.”

Now, a skeptic might still point out that the best way for Facebook to prevent fake news is simply to train its algorithms to bury it (much as it can prioritize baby photos, the company can train its machines to cut out clear, zany falsehoods, and to be fair, Facebook has adopted at least some algorithmic policing). But it has become clear over the last year that Facebook is wary of taking any stance that might be labeled partisan in a political climate in which truth and freedom of the press have been skewed into liberal ideals by the administration’s fear-mongering rhetoric.

About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.
