The Internet has brought many benefits. It allows us to communicate with loved ones in other parts of the world and puts just about any information imaginable a few keystrokes away. There is, however, a darker side to the Internet. It has never been easier to push misinformation that deceives people into believing false things, spending their money on quackery or fearing harmless products and people.
What began as chain mail and urban legends has in many ways become weaponized into websites that build entire worldviews on anti-scientific beliefs about medical treatments, historical events or scientific findings. One such variant, called fake news, became especially troublesome during the 2016 general election in the United States, where hundreds of fake news websites cropped up and started pushing different kinds of targeted misinformation that ended up having a discernible impact on the public. This ranged from the Pizzagate conspiracy theory to thousands of false news stories about Donald Trump and Hillary Clinton.
Fake news as a business model relies on writing inflammatory stories about some person or event in the current news cycle, stories that provoke fear or anger and carry clickbait titles designed to draw in many different kinds of readers. The content is then shared on social media, where it often goes viral precisely because of its inflammatory nature. The more people view the content (and thus end up misinformed), the more ad impressions the website collects and the more money the people behind it make.
In reality, many fake news websites are part of larger networks that push out misleading and often outright false content around the clock, every day of the week. Fake news is also just one of the many battlefields in the ongoing misinformation wars.
What have social media companies done so far?
Their approach to fake news started with downplaying it. At first, both Facebook and Google pointed out that only a small fraction of the unique material being shared and displayed qualifies as fake news. However, this does not address the issue of volume. After analyzing the situation further, both companies agreed that fake news was a problem and have been designing and implementing various technical solutions. Apple CEO Tim Cook has also stated that fake news is “killing people’s minds”.
For instance, many social media companies now bar people who run fake news websites from using their advertisement systems to make money. The idea is that fake news creators are in it for the money, or are at the very least heavily dependent on ad revenue to keep their operations running, so restricting the incoming cash flow reduces the incentive to keep publishing. Other solutions involve letting people report content as fake news and reducing the impact of clickbait by measuring how often people instantly go back to the previous page after clicking a result.
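The companies have not published the exact signals they use, but the clickbait measure mentioned above can be illustrated with a simple sketch. The idea, sometimes called "pogo-sticking", is that a click followed by a near-instant return to the results page suggests the title overpromised. The data structure, function name and five-second threshold below are all hypothetical choices for illustration:

```python
from dataclasses import dataclass

@dataclass
class Click:
    url: str
    dwell_seconds: float  # time on the page before returning to the results


def quick_bounce_rate(clicks, threshold=5.0):
    """Fraction of clicks where the reader bounced back within `threshold` seconds.

    A high rate suggests the page did not deliver what its title promised,
    which is a common signature of clickbait.
    """
    if not clicks:
        return 0.0
    bounces = sum(1 for c in clicks if c.dwell_seconds < threshold)
    return bounces / len(clicks)


clicks = [
    Click("example.com/shocking-story", 2.1),
    Click("example.com/shocking-story", 45.0),
    Click("example.com/shocking-story", 1.3),
    Click("example.com/shocking-story", 120.0),
]
print(quick_bounce_rate(clicks))  # → 0.5
```

A real system would aggregate this per page or per site over huge volumes of traffic and combine it with many other signals, but even this toy version shows why the metric is hard for clickbait publishers to fake: it is driven by reader behavior, not page content.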
How is Google intensifying the fight against fake news?
On April 25, 2017, Google VP of Search Engineering Ben Gomes published a post titled “Our latest quality improvements for Search” on the Google Blog, discussing concrete steps Google will take to reduce the volume of fake news and prevent it from gaming the search algorithms. Gomes views fake news as the next step in the evolution of webspam, low-quality content farms and “other deceptive practices”. At the same time, he understands that it is a somewhat different problem because it primarily aims at deceiving people with information, rather than just making money from e.g. scams. Because of these new challenges, Google has realized that it needs to make “structural changes” to Google Search. So what are these structural changes that Gomes is referring to? They fall into three areas: evaluation and algorithm changes, feedback from users and increased transparency.
Google has deployed two primary improvements to its search ranking. First, it has tweaked its algorithms to better identify low quality content, demoting it in the search results while surfacing more credible content. Second, it has set up a system in which human evaluators, following the detailed Search Quality Rater Guidelines, flag content that is misleading or malicious and feed that judgment back to Google.
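Google's actual ranking system is proprietary and vastly more complex, but the interplay between an algorithmic score and human rater feedback can be sketched in a few lines. Everything here, including the function name, the penalty weight and the linear demotion, is a hypothetical illustration of the general idea, not Google's method:

```python
def adjusted_rank_score(base_score, rater_flags, total_ratings, penalty=0.5):
    """Demote a page's algorithmic score in proportion to how often
    human evaluators flagged it as misleading or malicious.

    base_score    -- score from the ranking algorithm, in [0, 1]
    rater_flags   -- number of evaluators who flagged the page
    total_ratings -- number of evaluators who rated the page
    penalty       -- maximum fraction of the score that can be removed
    """
    if total_ratings == 0:
        return base_score  # no human signal; leave the score untouched
    flag_rate = rater_flags / total_ratings
    return base_score * (1.0 - penalty * flag_rate)


# A credible page keeps its score; a widely flagged page is demoted.
credible = adjusted_rank_score(0.9, rater_flags=0, total_ratings=10)   # 0.9
flagged = adjusted_rank_score(0.9, rater_flags=8, total_ratings=10)    # 0.54
```

The design point this sketch captures is that rater feedback is used as a training and correction signal layered on top of the algorithm, rather than raters manually reordering individual search results.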
Users now have more ways to provide feedback, both on the autocomplete suggestions that appear as they type a search query and on Featured Snippets, the answer box that appears just below the search bar for some queries.
Finally, Google has published more information about how Google Search works, including details about crawling, indexing and search algorithms. For instance, it now explains how its algorithms analyze the words in a query, match them against pages, rank websites and personalize results based on past searches, all in an effort to return the best results it can.
What will this mean for the larger struggle against misinformation?
It is hard to know for sure before we have seen how these changes play out in the real world. It is encouraging that Google has come to see fake news and other forms of low quality content as a problem serious enough to act on. Improving search algorithms to counter blackhat SEO techniques and other deceptive manipulations, so that low quality, misleading or malicious content does not rise above credible and authoritative content, is valuable work that Google should keep doing continuously.
Google also seems to have realized that algorithms alone are not sufficient, so it has not only expanded feedback options but also hired human evaluators who can help distinguish legitimate pages from misleading ones. While humans are imperfect and their guidelines can never be perfect either, the problem of fake news and other misinformation is so large that this is a welcome development, even though some people try to spin it as censorship by Google.
Another benefit of Google taking measured action is that it might encourage and inspire others to take the problem seriously and develop their own technical solutions. Over a longer period of time, this might leave pseudoscience websites and those who push quack “treatments” for profit with less and less impact on the world and less and less revenue. That could, in turn, drive these low quality websites to try increasingly shady techniques to regain high search rankings, which might get them blocked from Google entirely for such violations. One related incident involved the quackery website Natural News being removed from Google search results for Webmaster Guidelines violations involving sneaky mobile redirects.
This might also be too much to hope for, as history has made it clear that misinformation evolves to get around most methods that try to combat it. Science advocates and scientific skeptics likely have an uphill battle in the future, but some technological solutions to this problem might make it a bit easier, at least in terms of reducing the volume and impact of fake news.