Facebook has gone from saying that fake news was not a significant problem, to admitting that it might be a problem, to being fully on board with the realization that fake news has a harmful impact on the world.
Previously, Facebook cracked down on fake news by enlisting the help of third-party fact-checkers: stories that have been flagged as fake news now include a tag calling them disputed, with a link to a fact-checking piece. Users can also report content as fake news as a separate category. Finally, Facebook refused to allow fake news websites to advertise with them. These efforts are intended to reduce the volume and reach of fake news and to cut into the financial gains of fake news pushers.
Perhaps it will also help to break down the ideological isolation of social media filter bubbles, where people mostly see material that supports their own stance and are rarely exposed to opposing arguments. Perhaps it might even boost interest in fact-checking. On the other hand, it could also risk giving the appearance of false balance.
Overall, Facebook has targeted fake news in three separate ways. First, reducing the financial incentives to rely on and spread fake news. This is perhaps the largest incentive that fake news pushers have: if they can create content that makes people angry, upset, or frustrated, they can get shares and clicks, which translate into advertisement impressions on their websites and more followers. If Facebook cracks down on this, the strongest reason for pushing fake news might be severely compromised.
Second, they are continually building new products to combat fake news, developing innovative technological solutions to this persistent problem. Third and finally, they want to make it easier for people to make informed decisions when they encounter fake news on the platform.
Now, they are stepping up and intensifying their crackdown by taking the next step in combating fake news. On the Facebook Newsroom blog, product managers Satwik Shukla and Tessa Lyons unveiled the new approach:
Over the past year we have taken several steps to reduce false news and hoaxes on Facebook. Currently, we do not allow advertisers to run ads that link to stories that have been marked false by third-party fact-checking organizations. Now we are taking an additional step. If Pages repeatedly share stories marked as false, these repeat offenders will no longer be allowed to advertise on Facebook.
In other words, Facebook has already made it impossible for page owners to advertise with stories that have been marked as false by the fact-checking organizations that Facebook collaborates with. The improvement now being deployed is that pages that share a lot of fake news stories on their timelines will not be allowed to advertise on Facebook at all. This will severely reduce the financial incentives for pages to share fake news: if they are blacklisted from the Facebook advertisement program, their reach will be strongly impacted.
Why has Facebook taken this action? It turns out that some Facebook pages are even more calculating than many people would think:
This update will help to reduce the distribution of false news which will keep Pages that spread false news from making money. We’ve found instances of Pages using Facebook ads to build their audiences in order to distribute false news more broadly. Now, if a Page repeatedly shares stories that have been marked as false by third-party fact-checkers, they will no longer be able to buy ads on Facebook. If Pages stop sharing false news, they may be eligible to start running ads again.
Some Facebook pages not only post fake content to drive traffic, but also play a much longer game by posting inflammatory clickbait to grow their audience and further boost the spread of fake content. The practical consequence of this change is that Facebook pages that keep sharing stories that collaborating fact-checkers have marked as false will be blocked from buying ads on Facebook. This means that they will not be able to show sponsored posts to target people they want to persuade to like their pages or click their links to generate ad revenue.
This could have sustained impact not only on pages such as InfoWars and Natural News, but also on radical political pages that target people using highly sophisticated data-gathering and statistical methods to influence voting behavior. If they keep sharing fake content, they will no longer be allowed to use the advertisement system on Facebook.
One could always argue that it is not that difficult for them to start a new page and regain their followers, but this is an inconvenience in terms of both time and money. The larger the page, the greater the inconvenience of starting and building a new page in an effort to reset the damaging record of pushing fake news.
Imagine if pages like David Avocado Wolfe (11 million likes), Food Babe (2 million likes), Natural News (2 million likes) or InfoWars (800 thousand likes) had to start over or suffer a substantial reduction in reach. Perhaps some of them could become irritated enough to quit the platform altogether. That might be wishful thinking, since even a page with reduced reach and no ability to advertise will still give them a net benefit, but it might reduce the ability of newer players to game the system and profit from obscuring reality.
At the very least, it could make them moderate their behavior and curb some of the worst forms of social media abuse. At least, that is the hope.