Facebook Dumps Fake News Warning Tag and Does This Instead
Facebook is further increasing the pressure on fake news pushers.
After internal research, Facebook discovered that the disputed tag was not effective in reducing the spread of fake news and in some cases even made the problem worse. They therefore decided to drop this method and instead place the relevant fact-checking articles in the related content section, and their testing showed that this was more effective. What Facebook essentially did was to remove one of many countermeasures against misinformation and replace it with an alternative that empirically works better.
In an environment as complex and dynamic as social media, generating new countermeasures, testing them against reality and keeping the ones that work is a crucial method for overcoming the current pestilence of misinformation on the Internet.
What is fake news?
There has always been false and misleading information on the Internet, from the early days of Usenet discussions to the era of social media. In recent years, however, misinformation has been sharply weaponized to scam millions of people out of their money, convince them to buy into pseudoscientific beliefs and push destructive and polarizing falsehoods that affect elections. This kind of targeted misinformation, masquerading as reliable news content, has come to be known as fake news and comes in a few basic varieties:
Deceptive imitation: fraudulent websites that use web design elements and logos to impersonate real news websites in order to borrow the credibility of real news agencies to prop up their nonsense.
Complete fabrication: websites that push material that is 100% made up in order to make people sad, angry or gloating. Triggering these feelings in viewers increases the likelihood that the content will be shared, which generates revenue for the fakers.
Extreme manipulation: websites or articles that are grossly distorted and manipulative, take material out of context or use misleading titles or introductions when the content is really about something else. This can also include real photographs or videos placed in a misleading context.
Disguised satire: some satire can be genuinely funny and thought-provoking, but some satire websites actively try to fool readers into thinking their content is real, while insisting that it is merely satire without intent to harm or deceive. Other forms of fake news also wrongly claim to be satire in order to deflect criticism.
How did Facebook decide to handle it?
At first, Facebook refused to take any responsibility for the impact of fake news on the 2016 U.S. general election. Their argument was that only a small fraction of the unique content shared on Facebook was fake, but they only later realized that the fake and misleading material made up a considerable proportion of the total content shared on the platform. Imagine going to the food store and buying one each of 11 kinds of food items, but then also buying 100 packets of strawberry ice cream. In terms of unique items, the ice cream is just 1 out of 12, but you are still taking a ton of ice cream home with you. In other words, while only a small fraction of everything unique that got shared on Facebook was fake news, the fake news got shared at such an astonishingly high rate that it had a discernible impact.
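To make the arithmetic of the analogy concrete, here is a small illustrative calculation. The numbers are the hypothetical grocery figures from the analogy above, not real Facebook data:

```python
# Illustrative only: the hypothetical grocery numbers from the analogy above.
items = {"strawberry ice cream": 100}
for i in range(1, 12):              # eleven other kinds of food, one packet each
    items[f"other food #{i}"] = 1

unique_fraction = 1 / len(items)                                       # share of unique items
volume_fraction = items["strawberry ice cream"] / sum(items.values())  # share of total packets

print(f"Ice cream as a fraction of unique items:  {unique_fraction:.1%}")   # ~8.3%
print(f"Ice cream as a fraction of total packets: {volume_fraction:.1%}")   # ~90.1%
```

The same item can thus be a tiny sliver of the unique content while dominating the total volume, which is the distinction Facebook initially glossed over.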
As more and more information came to light, including that hundreds of Facebook pages linked to a well-known Russian troll farm displayed over 3000 political advertisements to over 120 million Americans, Facebook knew it had to take action. They stepped up their game and banned fraudulent websites that impersonated real news websites, banned fake news websites from their advertisement program, partnered with fact-checking websites and put up a red warning icon next to content that had been determined to be fake news by their vetted collaborators.
However, there was a slight problem. The red warning triangle (“disputed flag”) did not work. In fact, it made the problem worse.
Why did one of their countermeasures fail?
The Facebook article Designing Against Misinformation and the associated Facebook blog post News Feed FYI: Replacing Disputed Flags with Related Articles list four reasons why the red disputed tag did not work.
1. It took too long (in terms of number of clicks) to find out how and why the shared content was fake.
2. It caused a backfire effect, whereby beliefs become even more entrenched when they are corrected.
3. It required false ratings from two fact-checkers, which made the anti-misinformation system very slow.
4. People wanted more context regardless of rating, as some fact-checkers used a range of ratings and not just false.
Essentially, the system was slow, ineffective and backfired to some extent.
So Facebook did some more research and came up with a system that their own tests showed worked better. But what is it?
What is Facebook going to do instead?
The alternative system they have designed and tested involves showing a list of related articles that includes content from the fact-checkers debunking the story. Although this did not affect the number of clicks on the false article, it decreased the number of shares that the fake content got, which limits its impact. Facebook also received user feedback indicating that it countered some of the problems with the disputed tag: it did not require multiple fact-checkers, it comfortably allows a range of ratings and, most importantly, it did not seem to produce the kind of backfire effect that the disputed tag did.
Facebook will also continue using methods that seem to have worked previously, such as sending a notification to people who have shared fake news.
How successful will this new approach be?
Not all countermeasures to the problem of fake and misleading content will work. That is just the way reality works. Facebook, social media and Internet usage trends are complex and dynamic, with billions of users. This means that some proposed solutions will not work or will even be directly counterproductive. There are two crucial points, however, that must be kept in mind.
First, Facebook has a range of different tools that target the entire chain of misinformation, from removing the advertisement revenue of those who produce fake news and cutting down the ability to boost posts from those who share fake news, to adding fact-checking information to fake content, notifying people who shared it and so on. If one of these happens to fail, there are many other methods that have been shown to work.
Second, Facebook is actively testing their methods against the empirical data. Does this or that anti-misinformation method really lead to fewer shares for fake content? By continuously coming up with new and inventive methods to target fake news, testing them, keeping those that work and modifying or discarding those that do not work, Facebook is using science and technology to adapt and improve the fight against fake news.
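To illustrate the kind of empirical comparison involved, here is a hedged sketch with made-up numbers (not Facebook's actual experiment, metrics or data) of how one could compare the share rate of disputed stories under the old and new treatments:

```python
# Illustrative A/B-style comparison with invented numbers; not real Facebook data.
def share_rate(shares: int, impressions: int) -> float:
    """Fraction of impressions of a disputed story that led to a share."""
    return shares / impressions

# Hypothetical counts for the same set of disputed stories under each treatment.
disputed_flag = share_rate(shares=4_200, impressions=100_000)     # red warning icon
related_articles = share_rate(shares=2_900, impressions=100_000)  # fact-checks as related reading

relative_change = (related_articles - disputed_flag) / disputed_flag
print(f"Disputed flag:    {disputed_flag:.2%} of impressions shared")
print(f"Related articles: {related_articles:.2%} of impressions shared")
print(f"Relative change:  {relative_change:+.1%}")  # negative means fewer shares
```

A real test would of course also check that any difference is statistically meaningful, but the basic logic is the same: measure whether a countermeasure actually reduces sharing, then keep it only if it does.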