Today, Facebook is starting to roll out an update to its platform, the first of many promised updates meant to stop the spread of fake news. It allows users to flag news for fact-checking, and it marks questionable content shared on the service. But it stops short of explicitly labeling stories fake.
CNN: “Drunk Hillary” Beat Sh*t Out Of Bill Clinton On Election Night
Taylor Swift SHOCKS Music Industry: “I Voted For Trump”
Hillary Clinton In 2013: ‘I Would Like To See People Like Donald Trump Run For Office; They’re Honest And Can’t Be Bought’
These are real headlines that have all trended on Facebook, but none of the stories are true. They’re part of the epidemic of fake news. Fake news, meaning unbelievable headlines backed by entirely fabricated stories, is a phenomenon that was by no means invented during the 2016 election cycle, but it was this political season that fake-news publishers perfected the system, banking on the sensational sharing of misinformation while drowning out real information. Many even believe fake news may have swayed a close election in Trump’s favor.
Whether or not that happened, fake news is a real problem. One study found that 75% of people who see fake news believe it, while another, using publicly available Facebook APIs, discovered that shares of the Denver Guardian’s top story outpaced those of the Washington Post’s top story by more than 10 times. The Denver Guardian isn’t a real paper. That’s why we’re not even italicizing it.
Facebook’s new updates, the company tells us, are only the earliest moves in a series of design decisions that will play out over the coming years to right this wrong.
The main feature is called Disputed Posts. By clicking a new button in the upper-right-hand corner of any post on Facebook, you can flag a story as fictitious. If enough people flag the post, it will be sent to a member of Poynter’s International Fact Checking Network, a reputable third party that will assess the story’s veracity. If it’s found to be fake, the story will still be shareable, but it will be marked as “disputed” whenever it’s displayed on the platform, and that flag will be surfaced to users in the moment just before they share, in hopes that they’ll think twice. Notably, Facebook is not labeling these stories “fake” or “false.” It’s using the euphemistic term “disputed,” which literally means “debated.”
Disputed Posts is launching only in the United States for now. And it’s being rolled out conservatively: Facebook tells us the company will flag only the most obvious posts for Poynter to fact-check. When asked what would happen to politically sensitive stories that are less outrageous but perhaps just as dangerous, like hyperpartisan headlines that question the scientific consensus on climate change, a Facebook spokesperson said the company had to be careful to keep Facebook a place where all people feel they can have the conversations most important to them.
Which gets to the real dilemma facing Facebook as it considers design solutions to the problem of fake news. On one hand, Facebook argues that it is not a media company. It recently even fired its editorial staff largely to avoid claims of bias, letting algorithms manage news instead.
But Facebook is a media company. Through those algorithms, it makes personalized, editorial decisions every time you open its app or website–whether you see a friend’s baby photos or a story from Breitbart. And the company is not transparent about how the algorithms work.
In this regard, relying on a third-party fact checker could prove too little, too late. Stories can go viral in minutes, and Facebook admits it doesn’t yet know how long the fact-checking process will take. The user interface will only be as effective as the speed with which real assessments are made. Furthermore, while Facebook will flag stories as disputed to users, truth doesn’t seem to impact how often Facebook’s algorithms surface a story, nor will multiple disputes tied to a single publication weigh down that publication’s prioritization across feeds. In other words, even if Fast Company publishes five individual stories in one week that fact checkers flag as disputed, Facebook still won’t curtail the spread of Fast Company stories at large. Each story is treated as an isolated incident.
Finally, Facebook’s UI is making one critical assumption: that people will trust Facebook enough to question their own beliefs, or the depressing believability of fake news itself. Because ultimately it’s Facebook’s UI that’s flagging a story as disputed. And if you are prone to believe a Denver Guardian headline over Facebook or Poynter? That “disputed” flag might just serve as confirmation of liberal media bias. People are known to weigh “facts” that support their worldview more heavily than those that question it; right or wrong, it’s a basic psychological phenomenon that Facebook must account for in its design.
Which is why it’s so irresponsible that Facebook is hedging by labeling fake news “disputed” rather than “false.” The use of the word “disputed” treats the facts themselves as if they’re up for debate, which is precisely the mentality driving fake news and the post-truth era.
Indeed, Facebook’s more useful interventions will probably happen inside its invisible algorithms, and its most promising new initiative may be one your eyes never see: Facebook plans to experiment with ranking stories lower if people tend not to share them after reading, a pattern the company claims can correlate with misleading content.
Because for Facebook to squash fake news for good, the only real design intervention may be to stop making it so shareable. Sure, some of that can be fixed by slowing down users who are about to share fake news, but fake news is only a phenomenon because a huge audience is being presented these stories in the first place. And that culpability falls on companies like Google, Twitter, and Facebook. After all, Facebook is the world’s largest publisher, whether it claims the title or not.