A recently published study (Riekki et al. 2012) on the difference in susceptibility to pareidolia between paranormal believers and skeptics has been making its way through various skeptical circles online. The essential message of the study is that people who believe in the supernatural are more likely to see specific patterns (in this study, faces) where none exist.
I think the general conclusion is true; believers in the paranormal are more susceptible to pareidolia than skeptics. However, as good skeptics, we should keep two things in mind: (1) the study seems to support a position we already hold, so we should be especially careful not to fall prey to selective skepticism and confirmation bias, and (2) we should ask what the results of the paper really mean.
So what are the results of the study? I have extracted the effect sizes, sample sizes and standard deviations from the paper, plotted the effect sizes and calculated parametric 95% confidence intervals. I did all of this in Microsoft Excel, and it took me around 5 minutes. Here is the graph:
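The same normal-approximation calculation is easy to reproduce outside Excel. Here is a minimal sketch of the formula; the mean difference, standard deviation and sample size below are purely illustrative placeholders, not numbers taken from the paper.

```python
import math

def parametric_ci_95(mean_diff, sd, n, z=1.96):
    """Normal-approximation 95% CI for a mean difference.

    Assumes sd is the standard deviation of the difference and n the
    sample size; the example values below are illustrative only.
    """
    se = sd / math.sqrt(n)  # standard error of the mean difference
    return (mean_diff - z * se, mean_diff + z * se)

# Illustrative example: a 4-point difference, SD of 10, n = 20
lo, hi = parametric_ci_95(4.0, 10.0, 20)
print(f"95% CI: ({lo:.1f}, {hi:.1f})")  # prints "95% CI: (-0.4, 8.4)"
```

Note how, with n around 20, even a moderate standard deviation makes the interval wide enough to include zero, which is exactly the point about broad confidence intervals above.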
As we can see, most of the differences are in the range of 2–6 percentage points, and the confidence intervals are relatively broad. While statistically significant, these differences are quite small and probably not practically significant. In other words, there is most likely little or no practical difference in susceptibility to pareidolia between the paranormal believers and skeptics (or religious believers and non-religious believers) investigated in any of the experiments I looked at.
The study also suffered from a low sample size (the groups being compared had around 20 participants each). With so few observations, the assumption of approximate normality underlying the parametric 95% CIs I calculated in Excel is harder to justify, so they might deviate somewhat from the corresponding non-parametric CIs. However, since I do not have access to the original data, I cannot calculate approximate non-parametric 95% CIs for the medians, but I suspect that such a modification would not change my overarching conclusion in any major way.
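For readers who do have raw data in a situation like this, a percentile bootstrap is one common way to get a non-parametric CI for the median. The sketch below uses simulated scores (since the original data are unavailable); the sample parameters are made up for illustration.

```python
import random
import statistics

def bootstrap_median_ci(data, n_boot=10_000, seed=0):
    """Percentile-bootstrap 95% CI for the median.

    A non-parametric alternative to the normal-approximation CI:
    resample the data with replacement, take the median of each
    resample, and read off the 2.5th and 97.5th percentiles.
    """
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    return (medians[int(0.025 * n_boot)], medians[int(0.975 * n_boot)])

# Simulated scores for a group of 20 participants (illustrative only)
rng = random.Random(42)
sample = [rng.gauss(50, 10) for _ in range(20)]
lo, hi = bootstrap_median_ci(sample)
```

With only 20 observations per group, such bootstrap intervals tend to be wide, which would most likely reinforce rather than overturn the conclusion about practical significance.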
Two additional effect sizes were reported in the rating task, called “face-likeness” and “emotionality.” Religious believers and paranormal believers had higher effect sizes in both of these compared with non-religious people and skeptics. However, I do not have the ability to interpret the differences between the various groups in their psychological context, so I leave them out of this analysis. It is possible that they may salvage the paper, but the confusion between statistical and practical significance is a pretty big mistake regardless.
It could also be useful to mention that the journal in which this article was published, Applied Cognitive Psychology, had an impact factor below 1.7, which is very low. This should not take the front seat when evaluating research, but it does suggest that the work probably could not get published in a more prominent, high-impact journal.
So what are the lessons of this exercise?
1. Don’t believe everything you read in the popular media, especially about scientific research. Always go back to the original paper and look at the effect sizes and error bars. This is especially relevant if the research supports a conclusion you already hold.
2. Too many studies confuse statistical significance with practical significance. This is a major problem that biases the research literature, and it is partly caused by pressure to publish and a lack of detailed understanding of statistics on the part of researchers.
3. If you run a study, take steps to make sure that it cannot be refuted by a random person on the Internet who spends 5 minutes in Excel. Otherwise it is a waste of research money and effort.
4. Sometimes, reviewers do not have more statistical knowledge than the authors of the papers being submitted.
5. Evaluate effect sizes and confidence intervals in their psychological (biological, medical, sociological, etc.) context to draw conclusions about the practical significance of the results. Put much less emphasis on statistical significance and p values.
The biologist Thomas H. Huxley is credited with the saying “The great tragedy of science, the slaying of a beautiful hypothesis by an ugly fact.” But this is not the greatest tragedy in science. The greatest tragedy in science is when even a very persuasive ugly fact is not enough to make researchers change their interpretation of the data.
Riekki, T., Lindeman, M., Aleneff, M., Halme, A., & Nuortimo, A. (2012). Paranormal and Religious Believers Are More Prone to Illusory Face Perception than Skeptics and Non-believers. Applied Cognitive Psychology. DOI: 10.1002/acp.2874
Categories: Misuse of Statistics