In 2015, finding enough material to fill a weekly newsletter about fact-checking was often a struggle.
This was one of my tasks as head of the newly launched International Fact-Checking Network. And despite all the Google alerts, Twitter lists and RSS feeds I was monitoring, it wasn’t an easy one.
Many fact checks were being published (and many falsehoods disseminated), but there was precious little analysis of the field itself by researchers and reporters.
Today, the newsletter essentially fills itself.
A combination of factors turned fact-checking into something of a growth industry: increasingly brazen falsehoods on the campaign trail, the demise of the gatekeeping role of traditional media and the explosion of viral misinformation.
Since 2015, the number of active fact-checking projects around the world has almost tripled. Facebook has partnered with fact-checkers in 17 countries to reduce the reach of viral hoaxes on its platform. Google highlights fact checks in search and its news platform. Legislators from Brasilia to Singapore have discussed (and sometimes enacted) actions against misinformation. And lots more research has been published about the impact and reach of falsehoods and their corrections.
The pervasiveness and reach of misinformation have led some to throw up their hands in despair, with dozens of headlines around the world heralding a “post-truth” era. At a conference in Brussels last week, several senior European officials declared with a straight face that “Brexit” won in the United Kingdom because of fake news.
While some view misinformation as all-powerful, others think solutions are just one algorithm away. I’ve lost count of the improbable techno-utopian solutions I’ve been pitched. One of my favorites is a product that promises to “automatically determine the trustworthy [sic] of any news webpage, in seconds.”
I’m wary of these black-and-white attitudes. If the past three years have convinced me of anything, it’s that there is value in grayscale.
The heightened attention on fact-checking has greatly expanded our understanding of the instrument — while at the same time greatly complicating it. Every nugget of good and bad news contained in new studies is tempered by a slew of caveats and limitations in study design.
This insistence on paying attention to caveats has made me unpopular with at least one commenter:
“on the one hand, here are some of my thoughts. on the other hand, here are other thoughts of mine that are in opposition to the first set of thoughts. any fair minded person will tell you that somewhere, halfway, between my first set of thoughts and my second, is a perfect and measurable truth. also, the earth! half round, half flat!
thanks alexios for literally less than nothing.”
Point taken. Yet with a field as young and (relatively) understudied as that of online misinformation, moderation isn’t a cop-out. It’s a recognition that the evidence out there isn’t incontrovertible.
For instance: While 2015 research found that Facebook users can be prone to burrowing into conspiracy-driven echo chambers, a more recent study of Americans’ internet consumption found that internet habits were less segregated. Per the study, 13.3% of users consumed only fake news and 11.3% visited only fact-checking websites; 14.0% visited both, and the vast majority, 61.4%, saw neither.
It is similarly hard to evaluate the results of Facebook’s partnership with the fact-checkers in a definitive manner. (Disclosure: I played a role in making it happen.)
In December 2017, BuzzFeed found that viral fakes were still reaching huge audiences. This finding was somewhat replicated by my colleague Daniel Funke in a July deep dive of two pages that have frequently posted hoaxes and conspiracy theories.
Yet a working paper by researchers at Stanford University, New York University and Microsoft Research suggests that at least 570 domains billed as fake news sites saw their Facebook engagements significantly decline compared to real news sites after a peak in 2017. Even that study should be taken with caution, as the domains identified don’t always publish fake news and new sites might have popped up that more than compensate for the decline in this stable list.
Much of what we are learning about misinformation and fact-checking depends on how we ask the question. A lot of how we interpret these findings depends on what we would like fact-checking to be able to do.
For instance, after years of critics arguing that fact-checking might backfire by making people even more convinced of a wrong belief, academics now say this almost never happens. In fact, in lab settings at least, corrections seem to be effective at reducing misperceptions on average (see here and here).
Because reduced misperceptions didn’t translate into different behavior in the studies above, such as voting for a different candidate, Eurocrats who’d like fact-checking to be a bulwark against populist attacks on the EU are going to be disappointed. But that’s their problem, not ours; the role of fact-checkers is to improve the overall understanding of available facts. Others are tasked with changing votes and actions.
It might seem obtusely academic, even trivial, to write an elegy for caveats. But when journalists use the term “WhatsApp killings” to define those lynchings that are fomented by rumors received on the messaging app, they flatten a complicated situation and absolve many other societal failings. They also provide ammunition to politicians who ask for greater control of social networks when they themselves are vectors of misinformation.
When politicians weaponize the term “fake news” to either bully the media or denigrate their opponents, it becomes harder for us to come to a shared understanding of reality.
When media-watchers suggest that fact-checking is entirely impotent, or when they stretch the significance of its unscientific individual report cards to discredit populists, the field gets less of the constructive criticism it could benefit from.
When technologists argue that AI will get us out of this hole, they are lulling themselves into believing that the problem isn’t an intrinsically human one.
So yes, with apologies to my angry commenter, I will continue making the case for moderation. We need more caveats and less certainty when we discuss fact-checking and the fight against misinformation.
I’d rather be unconfidently right than confidently wrong.
Alexios Mantzarlis is the Director of the International Fact-Checking Network at The Poynter Institute. He frequently writes about and advocates for fact-checking. He also trains and convenes fact-checkers around the world.