A few days after President Donald Trump’s November 2016 election win, Facebook CEO Mark Zuckerberg dismissed the notion that fake news had swayed voters. But as it became clear that some fake political stories had received more traffic on Facebook than work from traditional outlets, Zuckerberg said he would “prioritize fixing it.” His main solution thus far has been a fact-checking effort.
In early 2017, Facebook signed one-year agreements with PolitiFact, Snopes, ABC News, factcheck.org, and the Associated Press to identify fake news on its social network. The biggest challenge has been that third-party fact-checkers can tackle only a small fraction of the bogus news stories that flood Facebook feeds. Some partners say the process is too cumbersome and inefficient to stop misinformation from duplicating and spreading.
“There are whole hosts of copycats that spread a story. By the time we’ve done that process it’s probably living in 20 other places in some way, shape or form.” – Aaron Sharockman, Executive Director of PolitiFact
An inside look at Facebook’s fact-checking operation suggests that the small-scale, human approach is unlikely to control a problem that’s still growing and spreading globally. At the moment, the fact-checking sites sometimes have to debunk the same story multiple times. Facebook has said it is working on adding two new partners to help with the workload.
Facebook has argued that paying outside firms helps address the problem without making the social network the arbiter of what is true or untrue. Moreover, Facebook expects this manual fact-checking work to help the company improve its algorithm over time, making it better at automatically spotting patterns and determining which stories might warrant review, even before they are flagged by users.
Looking to the future, Facebook announced its plans to provide further updates on progress before the end of 2017, and to begin communicating more frequently with fake news partners in 2018.