Fake news is content that attempts to look like real news but is not based on reality. It contains major falsehoods and errors of fact, and it is not merely accidentally misleading: it is crafted to intentionally deceive people. Fake news stories often manipulate human feelings, such as anger, sadness and schadenfreude. That way, they are more likely to influence people's beliefs about the world, and people are more likely to share them on social media. This brings in substantial ad revenue for the creators of fake news networks and keeps their operations going.
However, more and more people are reaching the point where they have had enough. Fake news degrades the quality of social media content and has the power to influence human decisions about health, current events and potentially even elections. Giant tech companies like Apple, Google and Facebook are increasingly starting to realize that something has to be done. Proposed measures include manually quality-filtering the news websites included in certain news apps and barring the worst fake news offenders from their advertisement systems, among others.
Just a few days ago, Facebook began to deploy its mitigation system against the influence of fake news. What does it consist of? How will it work in practice? Will it work? How could it affect the efforts of fact-checkers, science advocates and scientific skeptics on social media?
How will Facebook crack down on fake news?
Facebook has entered into a collaboration with several independent fact-checkers, such as Snopes and Politifact. All fact-checkers in the collaboration are required to abide by the fact-checkers’ code of principles from The International Fact-Checking Network (IFCN) at the Poynter Institute (a global journalism organization). If one of these fact-checking websites lists a story as false, it will be flagged on Facebook as “disputed” together with a link to the fact-checking article. Here is how it allegedly looks on live servers in the United States:
This feature is not available worldwide yet, but the Facebook help section has additional information for those who can view it. To see it, you have to be in a country where the mitigation strategy has been deployed, or use the Tor browser. Here is a screenshot:
So how can you mark a news item as fake news? Facebook’s help center informs us about this as well:
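The dispute-flagging flow described above can be sketched in a few lines of code. To be clear, this is a hypothetical illustration, not Facebook's actual implementation: the function names, the idea of matching stories by URL, and the lookup-table structure are all invented for the sketch.

```python
# Hypothetical sketch of the "disputed" flagging flow described above.
# Fact-checker verdicts are modeled as a lookup table keyed by story URL;
# none of these names come from Facebook's or the fact-checkers' real APIs.

FACT_CHECK_VERDICTS = {
    "http://example-fakenews.test/miracle-cure": {
        "checker": "Snopes",  # one of the collaborating fact-checkers
        "rating": "false",
        "article": "https://www.snopes.com/fact-check/miracle-cure",
    },
}

def annotate_story(story_url):
    """Return a display annotation for a story: 'disputed' plus a link
    to the fact-checking article if a collaborating fact-checker has
    rated the story false, otherwise 'unflagged'."""
    verdict = FACT_CHECK_VERDICTS.get(story_url)
    if verdict and verdict["rating"] == "false":
        return {
            "status": "disputed",
            "disputed_by": verdict["checker"],
            "fact_check_link": verdict["article"],
        }
    return {"status": "unflagged"}
```

The key design point the sketch captures is that the “disputed” label is not Facebook's own editorial judgment; it is triggered by, and links back to, an independent fact-checker's verdict.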
How could it impact efforts by science advocates and scientific skeptics?
It is hard to make highly accurate predictions about ongoing events in a complex and uncertain world that changes very fast. However, we might look closer at a couple of possible consequences of Facebook’s crackdown on fake news. How could it affect fake news providers? How might it impact scientific skepticism efforts on social media? Will it work, and if so, how well? The issues discussed are fake news volume, financial incentives and gains, the dynamics of the social media ecosystem, potential spillover effects, as well as the risk of false balance and reporting wars.
Reduce fake news volume and reach: although only a small percentage of the unique content on Facebook could qualify as fake news, the volume of that content is very large. It gets shared and interacted with a lot more than real news and real content generally, likely due to its inflammatory clickbait nature. Cracking down on fake news the way Facebook is doing might reduce this volume. In other words, the reach of content that is determined to be fake news by e.g. independent fact-checkers might be reduced or otherwise throttled.
Reduce financial incentive for worst offenders: since Facebook (and Google) will stop the worst fake news websites from using their advertisement systems (and other advertisement providers might follow), this will reduce the large monetary incentive that fake news websites run on. If there is less economic incentive to run fake news networks, the people behind them may be more inclined to do something else, and other individuals may be less likely to enter the fake news industry.
Reduce financial gain for fake news promoters: the combination of less reach and less financial incentive might lead to a sharp reduction in the actual financial gains for fake news providers. Since these operations appear highly profitable because they get a lot of traffic, this might make some fake news network owners shut down their business because it is no longer profitable to run.
Alter the social media ecosystem: the crackdown on fake news might alter the social media ecosystem in such a way that more credible websites get a higher reach compared with fake news websites. Due to the complex and sometimes figuratively chaotic nature of these systems, it is hard to predict precisely what will happen, but these changes might boost the signal of credible content.
Spill over to other issues: many scientific skeptics are surely hoping that the crackdown on fake news will spill over into a general crackdown on fake content. After all, is there really such a major difference between a fake news article about a politician and a fake news article about aspartame or vaccines? As many have pointed out, pseudoscience and quackery make abundant use of fake news content. Even if this does not directly lead to a crackdown on pseudoscience generally, it brings it one step closer.
Increase interest in fact-checking? Because of the way the fake news warning system is set up on Facebook, it could very well drive traffic to fact-checking websites and possibly increase knowledge of and interest in these websites.
Break down social media filter bubbles? The people who view fake news content the most will be the people most strongly exposed to the mitigation system against fake news. They will see the largest number of fact-checking links and most of the pairings between fake news sources and disputed status. Could this weaken the social media filter bubbles that many conspiracy theorists are stuck inside of? It seems plausible, since more and more contrary information would filter into their social media experience on Facebook. There is, however, an obvious risk of a backfire effect that strengthens their belief in nonsense, but that might be mitigated by the way the fact-checking content is written.
Could it give the appearance of false balance? Because the disputed content and the fact-checking content are displayed side-by-side, this might result in a kind of false balance experience where nonsense content and fact-checking are viewed as two different, equally credible, sides. This is a clear risk, but the system is so much better than letting fake news content stand unopposed that it might be worth it. For instance, some biologists have agreed to debate creationists in a false balance environment when the creationists had already taken control of the local education system, and this appears to have had a beneficial outcome.
Reporting wars? The mitigation system that Facebook uses also includes the ability for users to flag a story as fake news. This allows Facebook to use the wisdom of the crowd to determine what content might be fake news. However, this feature might be abused if pseudoscience websites launch coordinated flagging campaigns.
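One plausible safeguard against such flagging campaigns is to treat user reports only as a trigger for review, never as grounds for the “disputed” label by itself. The sketch below illustrates that idea; the threshold value, function names and decision labels are invented for illustration and do not describe Facebook's actual policy.

```python
# Hypothetical sketch of crowd-based flagging with a simple guard against
# coordinated flagging campaigns: user reports alone can only queue a story
# for fact-checker review, while the "disputed" label still requires an
# independent fact-checker verdict. The threshold is an invented example.

REVIEW_THRESHOLD = 100  # user reports needed before queuing for review

def process_reports(report_count, fact_checker_says_false):
    """Decide what to do with a story given the number of user reports
    and whether an independent fact-checker has rated it false."""
    if fact_checker_says_false:
        # Only a fact-checker verdict produces the visible label.
        return "disputed"
    if report_count >= REVIEW_THRESHOLD:
        # Many reports merely escalate the story to human review.
        return "queued_for_fact_check"
    return "no_action"
```

Under this design, a flagging campaign by a pseudoscience website could at worst waste fact-checkers' time; it could not directly slap a “disputed” label on legitimate science content.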
Facebook has recently deployed a mitigation strategy against the powerful influence of fake news. It involves refusing to let the worst offenders use its advertisement system, collaborating with independent fact-checkers, marking fake news as disputed with links to fact-checking content, and letting users flag items as fake news.
This system is not perfect and has limitations. However, at this point, a decent mitigation strategy is much better than doing nothing at all. Let us see how it works out.