
How to Spot a Pseudoscientific Paper


With the rise of low-impact journals and predatory open-access journals, the journal jungle has become considerably more difficult to navigate for the informed reader. There are even journals started by groups promoting pseudoscience: young-earth creationists have Answers Research Journal, intelligent design creationists have the BIO-Complexity journal, homeopaths have the Homeopathy journal, proponents of acupuncture have the Journal of Chinese Medicine & Treatment and so on. Even more alarmingly, high-quality journals (such as JAMA) have on rare occasions published what appear to be promotional pieces for quack treatments (Gorski, 2013). Thus, it is more important than ever to be able to sift the gems from the trash and approach published research papers with a skeptical eye.

This post exposes many of the common tricks used by proponents of pseudoscience to make their research papers appear more credible than they actually are: unjustified claims in the abstract, misrepresentations of previous research, overt methodological flaws, misleading graphs and overinterpretation of results.

Abstract:

In a real scientific research paper, the abstract contains a summary of each major section of the article. This allows researchers to quickly get a grasp of the main methods and conclusions without reading the full text. In the ideal case, the abstract accurately reflects the content of the paper.

Watch out for:

  • Claims not found in the paper
  • Claims not justified by the results
  • Cherry-picked and/or spun results

However, proponents of pseudoscience can distort the abstract in a number of ways. They can report claims in the abstract that are not found in the paper or not justified by the data, or they can highlight the most impressive finding and ignore or otherwise downplay the rest in a deceptive manner. Because abstracts are read more often than the full text, this creates a misleading depiction of the paper. This is especially troublesome considering that most non-scientists do not have access to the entire paper. Since readers' ability to critically examine the content of the paper is limited, it becomes all the more enticing for pseudoscientific cranks to create and post long lists of abstracts and links all over the Internet. They know that few people will be able to find and access the papers, let alone spend hours refuting them.


Introduction:

A good introduction surveys the background literature in a broad, consistent and balanced way. It attempts to provide a high-quality summary of research that has already been carried out. Typically, it narrows from a sweeping perspective to a focused treatment of the specific topic the authors are interested in. It highlights areas where there are gaps in our knowledge and explains how the present study contributes to closing that gap (essentially the aim of the study).

Watch out for:

  • Selective reporting of background knowledge.
  • Misrepresentations of previous studies.
  • Unclear or diffuse aim.

A paper written by proponents of pseudoscience, on the other hand, contains an introduction that tends to cherry-pick background information. This is done to make their area seem more respectable, to inflate the relevance of their own research and to make it harder for their readers to get hold of critical information. Look no further than any arbitrarily selected paper on homeopathy or faith healing. Even more deceptive papers misrepresent previous research that runs counter to their claims. Pseudoscientific papers also typically have a very messy and unclear aim. What exactly is the goal of their research? If the rationale is to promote a certain quack treatment and masquerade it as science, the aim is typically convoluted and hard to grasp (because the authors are essentially making it up as they go along).

Method:

The method section describes the entities under study (cells, model organisms, human populations etc.), the tests, assays or treatments performed, the nature of the controls used and the statistical analysis carried out on the results.

Watch out for:

  • Overt methodological flaws.
  • Inappropriate study population.
  • Lack of proper controls.
  • Misleading statistical treatments.

In terms of the method section, pseudoscientific papers are characterized by many methodological flaws. They often have non-representative samples, insufficient randomization or inappropriate model organism strains. A classic example is the recently retracted Séralini anti-GMO paper, where the researchers used a rat strain that spontaneously develops tumors in order to test whether GM foods make rats develop tumors. Another major issue is the lack of proper controls; the standard textbook example is acupuncture studies without sham acupuncture (which uses telescopic needles). Finally, there is the issue of misleading or inappropriate statistical treatments of data, such as using the wrong statistical test, or falsely arguing that statistical significance directly implies that the differences are large enough to be practically meaningful or clinically useful.
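
To make that last point concrete, here is a minimal Python sketch (not from the original post; the numbers are invented for illustration). It shows how a trivially small difference between two groups can reach statistical significance simply because the sample size is enormous, even though the standardized effect size is far too small to matter in practice:

# Minimal sketch: with a huge sample, a negligible difference still yields
# a tiny p-value, so statistical significance alone says nothing about
# practical or clinical relevance. Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000  # participants per group (deliberately enormous)

# Two groups whose true means differ by only 0.02 standard deviations
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d: standardized mean difference, a rough gauge of practical relevance
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value   = {p_value:.2g}")    # typically far below 0.05
print(f"Cohen's d = {cohens_d:.3f}")   # around 0.02, negligible in practice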

Results:

The results section contains the tables, charts and graphs presenting the major outcomes that the study investigated, including effect sizes, error bars and the results of statistical tests. The results section also describes and summarizes the conclusions supported by the tables and graphs, so that you can get the gist of the results either by checking the graphs and reading the figure legends or by reading the main body text.

Watch out for:

  • Misleading graph axes.
  • Error bars missing or left undefined.
  • Sample size not specified or distorted.
  • Inconsistencies between aim and results.

It is very easy to create a misleading impression of large differences between groups by artificially modifying the y-axis, so it is vital to check where the y-axis starts and how large each step on the axis is. Another common issue is that error bars are either missing or not defined (standard deviation, standard error or confidence intervals?). This makes the differences uninterpretable, because you do not know the within-group variation. Sample size is often left unspecified or distorted. A given difference means almost nothing if the sample size is small (i.e. the results can easily be attributed to chance), but could be substantial if the sample size is larger. Another key issue revolves around representative experiments. In some cases, experiments cannot be pooled, so researchers have to pick one experiment to show as “representative” of their entire set of experiments. This experiment can be cherry-picked or mischaracterized, and it is effectively N = 1, which means it should be interpreted carefully. After reading the results section, it is worth going back and checking the aim again to see if the researchers did what they said they were going to do. If discrepancies exist between the aim and the results, ask yourself why that is.
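
The y-axis trick in particular is easy to demonstrate. The following Python/matplotlib sketch (not from the original post; the values are invented for illustration) plots the same two numbers twice, once with the axis starting at zero and once truncated just below the data, which makes a 2% difference look dramatic:

# Minimal sketch: identical data, two very different visual impressions.
# Requires matplotlib.
import matplotlib.pyplot as plt

groups = ["Control", "Treatment"]
values = [100.0, 102.0]  # a 2% difference

fig, (ax_full, ax_truncated) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.bar(groups, values)
ax_full.set_ylim(0, 110)
ax_full.set_title("y-axis from zero: difference looks small")

ax_truncated.bar(groups, values)
ax_truncated.set_ylim(99, 103)  # truncated axis exaggerates the gap
ax_truncated.set_title("Truncated y-axis: same data looks dramatic")

fig.tight_layout()
plt.show()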

Discussion:

The discussion section, as the name suggests, discusses the results of the study and puts them into the broader context of existing research. High-quality papers take great care to ensure that the conclusions are proportional to the results, avoid spin and provide an honest discussion of study limitations.

Watch out for:

  • Overinterpreting research results.
  • Downplaying negative findings or spinning the result.
  • Insufficient treatment of study limitations.

Proponents of pseudoscience often inflate the practical relevance of their results, while downplaying negative findings and study limitations. There are at least three common techniques for overinterpreting research results: confusing statistical and practical significance, inferring causation from correlation and ignoring study limitations. Statistical significance merely means that the probability of obtaining results at least as extreme as those observed, given the truth of the null hypothesis, falls below some threshold. Practical significance, on the other hand, means that the results are large enough to be relevant in the scientific (biological, psychological, physical, clinical etc.) context. Clearly, results may achieve statistical significance yet be too small to be of clinical relevance. Another classic error often found in pseudoscientific papers is the confusion between correlation and causation. Just because two factors co-vary does not mean that one of them caused the other. It could be the other way around (reverse causation), both could cause each other (bi-directional causation) or something else entirely may cause both (the third variable problem). Finally, proponents of pseudoscience rarely write papers that include an appropriate and sufficient treatment of the limitations of their research. This may be done to mislead readers, but it can also stem from ignorance of the basic science and of suitable research methodology. For instance, the failure to include an appropriate placebo group may be a symptom of not being aware that the perceived benefit of alternative medicine is largely the result of placebo effects.
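
The third variable problem is easy to simulate. In the Python sketch below (not from the original post; the variable names and numbers are hypothetical), two outcomes never influence each other, yet they end up strongly correlated because a shared confounder drives both:

# Minimal sketch of the third variable problem: a confounder drives both
# outcomes, producing a strong correlation with no causal link between them.
# Requires numpy.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

confounder = rng.normal(size=n)                            # e.g. outdoor temperature
ice_cream_sales = 2.0 * confounder + rng.normal(size=n)    # driven by the confounder
drownings = 1.5 * confounder + rng.normal(size=n)          # also driven by the confounder

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"Correlation between the two outcomes: r = {r:.2f}")  # strong, yet neither causes the other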

Further reading:

Gorski, David. (2013). The director of NCCAM wants a “nuanced conversation” about “complementary and alternative medicine”. Accessed: 2013-12-21.

