Debunking "Alternative" MedicineMiscellaneousSkepticism

In Defense of Paranormal Debunking – Part I: Bayesian Self-Defense

Note: This is the first installment of an article series refuting claims made by the online book “Debunking PseudoSkeptical Arguments of Paranormal Debunkers” written by Winston Wu. For all posts in this series, see the index post here.

Proponents of paranormal claims often feel threatened by scientific skepticism. This is because core skeptical principles erode their scientific pretensions. Instead of trying to back up their original paranormal claims with real scientific evidence, they attempt to deflect by attacking these skeptical principles. Most of the time, the result is a hatchet job against principles they misunderstood to begin with. It is also a futile effort, because skeptical principles such as “extraordinary claims require extraordinary evidence”, Occam’s razor and the burden of evidence can be formally stated and defended using basic Bayesian probability theory.

One such individual is Winston Wu, who has compiled a list of thirty sections attempting to defend paranormal claims and attack scientific skepticism. Wu attempts to offer a series of refutations to what he sees as thirty core scientific skeptical positions. Half of them deal with overarching objections to paranormal assertions and discuss topics such as burden of evidence, extraordinary claims, Occam’s Razor and anecdotal evidence. The other half concern specific paranormal beliefs such as psychics, miracles, alternative medicine, answered prayer, precognitive dreams, consciousness, UFOs and creationism.

In this first installment, we take a closer look at the principle that confidence should be proportional to the strength of evidence, the maxim that extraordinary claims require extraordinary evidence, Occam’s razor, the burden of evidence and anecdotal evidence.

Misunderstood principle #1: Confidence should be proportional to evidence

The first argument that Wu objects to is the notion that “it is irrational to believe anything that hasn’t been proven”. This, however, is a straw man. The correct version promoted by serious scientific skeptics is that the confidence in a proposition about the world around us should be proportional to the evidence for that proposition. In other words, our confidence in the atomic theory of matter or the existence of the sun should be high because the evidence is so overwhelming. In contrast, we should have very low confidence in propositions for which the evidence is scarce, non-existent or directly contradictory.

This principle can be formulated using Bayesian statistics. The posterior probability of a hypothesis given the evidence, P(H|E), is proportional to the probability of the evidence given the hypothesis, P(E|H):

P(H|E) = \frac{P(H)P(E|H)}{P(E)}

The higher P(E|H), the higher P(H|E) becomes (assuming that P(E) is held constant). Although this is the formal description, the principle itself is straightforward: the more evidence for a claim, the stronger the confidence that is justified in that claim. The less evidence, the less confidence is justified.
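To make this concrete, here is a minimal numerical sketch in Python; the probabilities are purely illustrative assumptions, not measurements of any real claim.

```python
# Minimal sketch of Bayes' theorem; all probabilities are illustrative.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(H)P(E|H)/P(E), with P(E) expanded via total probability."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Strong evidence (E is far more likely if H is true) justifies high confidence...
print(posterior(0.5, 0.90, 0.10))  # ~0.90
# ...while weak evidence (E barely discriminates) justifies much less.
print(posterior(0.5, 0.55, 0.45))  # ~0.55
```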

Wu goes to great lengths to misunderstand this simple principle.

First, he tries to claim that it is an argument from ignorance: just because something has not been “proven” does not mean that it is false. As noted above, this stems from his misrepresentation of the principle. The argument is not that we should dismiss everything that has not been “proven”. Rather, the argument is that our confidence in a proposition should be in proportion to the evidence for that proposition. In defense of his interpretation, Wu cites the alleged fact that the AMA supports acupuncture. In reality, the AMA has no official position on acupuncture, but had this to say (webcite) about quack treatments: “There is little evidence to confirm the safety or efficacy of most alternative therapies. Much of the information currently known about these therapies makes it clear that many have not been shown to be efficacious. Well-designed, stringently controlled research should be done to evaluate the efficacy of alternative therapies.”

In a recent review of the alleged evidence for acupuncture, Colquhoun and Novella (2013) concluded that “[l]arge multicenter clinical trials conducted in Germany and the United States consistently revealed that verum (or true) acupuncture and sham acupuncture treatments are no different in decreasing pain levels across multiple chronic pain disorders: migraine, tension headache, low back pain, and osteoarthritis of the knee” and that “[…] the benefits of acupuncture are likely nonexistent, or at best are too small and too transient to be of any clinical significance. It seems that acupuncture is little or no more than a theatrical placebo”.

Second, he tries to appeal to anecdotal evidence and personal experience. Just because something lacks scientific evidence, he argues, does not mean that it has not been “proven” to people via first-hand experience. However, if anecdotal evidence and personal experience were sufficient to settle issues regarding the paranormal (or indeed any larger question about the nature of reality), then there would be no need for science. In reality, anecdotes and personal experience do not qualify as scientific evidence, because they are subject to distortions, biases and other processes that make them unreliable. These problems can be overcome by using controlled scientific studies that are critically evaluated and discussed in the larger scientific community. His analogy that “sitting in a car is evidence for the existence of cars” does not work, because the car can be independently examined and the experiment can be repeated. This is typically not the case for anecdotes or personal experiences.

Third, Wu appeals to experiments on psi carried out at the Princeton Engineering Anomalies Research lab (PEAR), the Ganzfeld experiments on telepathy and research on alleged psychics by Gary Schwartz at the Human Energy Lab of the University of Arizona. However, the PEAR results showed only a tiny deviation from chance and have failed to be replicated, both by external researchers and by the PEAR group themselves. There was also substantial evidence that the device used to generate random pulses was, in fact, non-random (thereby compromising comparisons with baseline). This puts the alleged evidence for psi produced by PEAR into serious question. The Ganzfeld experiments suffered from a number of flaws, such as incomplete randomization, the fact that the receiver was not fully isolated and insufficient correction for multiple testing. Attempts to remedy the failings of these experiments showed negligible effect sizes, and any positive results have not been independently replicated. The research on alleged psychics by Schwartz has numerous problems, from lack of proper blinding and independent verification of “hits” to not controlling for multiple testing and the retrofitting of failures.

Fourth and finally, he asserts that non-skeptics need not agree with skeptics that a specific proposition is irrational, and that many people who subscribe to allegedly irrational beliefs are otherwise rational. These two objections are obvious non sequiturs. The issue is not that scientific skeptics believe that a certain proposition is irrational and that, therefore, non-skeptics have to agree. Rather, the issue is that, objectively speaking, the evidence is not strong enough to justify the level of confidence that believers in the paranormal display for their irrational beliefs. The second objection is irrelevant. Just because person X holds rational beliefs A and B does not mean that belief C, which person X also holds, is necessarily rational. If skeptical discussions online have taught us anything, it is that people can be selectively rational. In fact, Wu acknowledges this in the introduction when he claims that scientific skeptics only apply critical thinking and skepticism “to that which opposes orthodoxy or materialism, but never to the status quo itself.”

In summary, Wu incorrectly characterizes the principle of “confidence proportional to evidence” as an argument from ignorance, and his “objections” either appeal to anecdotes and personal experience or reference flawed research that does not provide solid evidence for paranormal phenomena.

Misunderstood principle #2: Extraordinary claims require extraordinary evidence

Wu moves on to misunderstand the skeptical maxim that extraordinary claims require extraordinary evidence. He thinks that it is a subjective standard used to move the goalposts. Quite the contrary: this principle can be formulated precisely and objectively in a Bayesian statistical framework, just like the previous one.

An extraordinary claim is a claim for which the prior probability, P(H), is extremely low. The prior probability may have some elements of subjectivity, but it can be formulated with respect to prior evidence to mitigate this problem. That is, the prior probability can be seen as the probability of H given prior scientific knowledge. In order for it to be rational to accept a hypothesis H with extremely low prior probability, the evidence needs to be extremely strong for that hypothesis and weak for alternative hypotheses.

P(H|E) = P(H) \frac{P(E|H)}{P(E)}

In other words, the P(E|H)/P(E) ratio has to be extremely large if P(H) is extremely small: a tiny prior can only be offset by an enormous likelihood ratio. On the other hand, if the evidence is not that impressive for H, then the probability of H on the evidence is low and H should be rejected:

P(H|E) = P(H) \frac{P(E|H)}{P(E)}

In this equation, the prior probability P(H) is low (the claim is extraordinary) and the P(E|H)/P(E) ratio is also low (the evidence is not extraordinary), so the posterior probability of H on E (i.e. the probability of the hypothesis given the evidence) is also low. Thus H should not be considered credible, and our confidence in H should be very low.
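As a rough numerical sketch of the same point, the following uses the odds form of Bayes’ theorem (posterior odds = prior odds × likelihood ratio), which is equivalent to the equation above; the prior and likelihood ratios are invented for illustration.

```python
# Sketch: with a tiny prior, only an enormous likelihood ratio yields a
# credible posterior. All numbers are illustrative assumptions.
def posterior(prior, likelihood_ratio):
    """Posterior via the odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(posterior(1e-6, 10))    # ~1e-5: ordinary evidence barely moves a tiny prior
print(posterior(1e-6, 1e8))   # ~0.99: only extraordinary evidence makes H credible
```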

Wu makes a number of objections to the principle that extraordinary claims require extraordinary evidence. First, he thinks it is a way for scientific skeptics to move the goalposts because they leave “extraordinary” undefined. However, as we saw above, “extraordinary” can be objectively defined with great precision.

Second, he confuses subjective and objective priors. He argues that not everyone may agree that a given claim is extraordinary, but this is mistaken, since priors can be constructed from empirical evidence. The classic textbook example is population base rates for diagnostic tests. Wu thinks that “having Astral Projections and Out of Body Experiences are extraordinary to those who have never experienced them, but for those who have them regularly, they are an ordinary part of life”, yet this is obviously a circular argument, since the existence of astral projections or OBEs is precisely the kind of alleged paranormal ability that is under debate between scientific skeptics and proponents of paranormal claims.

Third, he confuses ontology with epistemology when he claims that “extraordinary phenomena can exist without leaving behind extraordinary evidence”. The discussion is about whether we have sufficient evidence to establish the existence of paranormal abilities (epistemology), not about whether such phenomena could exist somewhere in the universe beyond the realm of possible experience (ontology).

Fourth, Wu thinks that judging something to be improbable requires omniscience. However, when scientific skeptics say that a paranormal claim is improbable, we mean that it is improbable with respect to the currently available evidence (both the background information and the alleged evidence presented by proponents of paranormal beliefs).

Fifth, he tries to compare “having OBEs” to “being in Spain”, but that is precisely the comparison that is invalid. Yes, just because person X has not been in Spain does not mean that we can say that people who claim to have been to Spain are wrong and delusional. This is because we have good external evidence that Spain does exist, and the people who have been to Spain can bring home evidence from their trip. This is not the case for OBEs or miracles: there is no credible evidence of the existence of miracles or OBEs.

Sixth, Wu thinks the principle that extraordinary claims require extraordinary evidence favors conservatism, close-mindedness and the refusal to give up dominant models. He commits a slippery slope fallacy when he implies that if science always favored the status quo, no scientific advances would occur. In reality, scientific advances do occur because the evidence for the new model accumulates, which is fully consistent with this principle.

Seventh, he appeals to tradition by stating that science has allegedly not applied this principle historically. However, this confuses descriptive with prescriptive philosophy of science. The two examples provided by Wu (quoting Michael Goodspeed) are the Big Bang theory and the special and general theories of relativity. Both of these models (which Goodspeed describes incorrectly) were accepted because of the evidence. Contra Wu, these two historical examples are consistent with the principle that extraordinary claims require extraordinary evidence.

Finally, Wu attempts to provide a number of examples of paranormal claims that he thinks already have extraordinary evidence: UFOs, ghosts and spirits, ESP and telepathy, and mystical experiences. However, this alleged “extraordinary evidence” reduces to nothing but anecdotes and appeals to popularity, and repeats the already debunked arguments about PEAR and the Ganzfeld experiments.

Misunderstood principle #3: Occam’s Razor

The next principle to be misunderstood by Wu is Occam’s razor. Wu thinks it reduces to “the simplest explanation is the best” and that “simple” is subjective. On the contrary, Occam’s razor is supported by a substantial amount of empirical evidence, rests on a solid mathematical foundation and is reasonable given falsifiability considerations.

The empirical support comes from the fact that overly complex models tend to overfit the data. There is always noise in experimental observations, so introducing too many explanatory factors makes the model describe reality + noise instead of reality, and thus gives it worse predictive ability. Less complex models do not fit as much of the noise and therefore suffer less from this problem.
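A small simulation sketch of this overfitting point, using NumPy; the underlying relationship, the noise level and the polynomial degrees are all invented for illustration.

```python
# Overfitting sketch: noisy samples from a simple underlying relationship,
# fit with a modest and an overly flexible polynomial. Data are invented.
import numpy as np

rng = np.random.default_rng(0)
true = lambda x: 2 * x + 1                                   # the underlying "reality"
x_train = np.linspace(0, 1, 10)
y_train = true(x_train) + rng.normal(0, 0.2, x_train.size)   # reality + noise
x_test = np.linspace(0, 1, 100)

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse = np.mean((np.polyval(coeffs, x_test) - true(x_test)) ** 2)
    # The flexible (degree 7) fit chases the noise and typically predicts worse.
    print(degree, test_mse)
```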

There are several mathematical arguments for Occam’s razor, but this section will only cover two of them: the conjunction argument and the Bayesian argument. The conjunction argument rests on the basic statistical fact that the more factors you include in a model, the less probable the entire construct becomes (as long as at least one of the extra factors has a probability of less than 1). More formally,

P(A) > P(A) \cdot P(B) \cdot P(C) \cdot ... \cdot P(n)

given that at least one of the factors P(B), P(C), …, P(n) is < 1.

P(A) is always going to be larger than the product of P(A) and the additional factors, as long as you admit that those additional factors are not all absolutely certain (i.e. have a probability of less than 1). In other words, a model that just includes A is going to be more probable than a model that includes A, B, C, …, n. Therefore, we should provisionally accept the first model as the most probable one.

Here is how this works in practice: most proponents of the paranormal accept that scientific explanations do play some kind of role, but propose additional explanations because they do not accept that scientific explanations are enough on their own. Because these alleged “explanations” (like extra-sensory perception or demons) have a probability of << 1, their entire model is going to be considerably less probable than the scientific explanation.
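A minimal sketch of this conjunction effect, with probabilities that are purely illustrative:

```python
# Conjunction sketch: multiplying in an uncertain extra factor can only
# lower the probability of the whole model. Numbers are illustrative.
p_scientific_explanation = 0.9    # model with the mundane explanation alone
p_extra_paranormal_factor = 0.01  # an added factor such as ESP or demons (<< 1)

p_simple_model = p_scientific_explanation
p_extended_model = p_scientific_explanation * p_extra_paranormal_factor

print(p_simple_model)    # 0.9
print(p_extended_model)  # 0.009, strictly lower once the extra factor is uncertain
```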

Occam’s razor can also be given a powerful Bayesian justification, which rests on the fact that models with few factors tend to make very precise predictions, whereas complex models can accommodate a larger range of predictions. Thus, the simpler model has all of its eggs in one basket, whereas the predictive probability of the complex model is smeared out. In areas where both models apply, the less complex model (the one with the fewest additional factors) is going to have a higher P(E|H) and thus a higher posterior probability. This argument is described in additional detail in the book Information Theory, Inference, and Learning Algorithms (webcite) by MacKay (2003). Here is a screenshot of the relevant section:

Occam's razor
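A rough numerical illustration of this Bayesian argument follows; the outcome space and the predictive probabilities of the two models are invented for illustration.

```python
# Sketch: a simple model concentrates its predictive probability on few
# outcomes, while a flexible model spreads it thin. Numbers are invented.
outcomes = ["A", "B", "C", "D"]

p_e_simple = {"A": 0.97, "B": 0.01, "C": 0.01, "D": 0.01}  # eggs in one basket
p_e_complex = {o: 0.25 for o in outcomes}                  # can accommodate anything

observed = "A"  # the data land where the simple model bet its probability mass
print(p_e_simple[observed], p_e_complex[observed])  # 0.97 vs 0.25
# With equal priors, the posterior ratio equals this likelihood ratio,
# so the simpler model wins whenever its sharp prediction comes true.
```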

The falsifiability argument stems from the fact that simple models are more easily falsifiable than complex models, because a simple model applies to more cases, whereas a complex model has more adjustable factors and enables ad hoc escapes to a larger degree.

How does Wu misunderstand this simple principle? His first argument is that Occam himself and other people like Newton never intended the principle to be used in this way. This is, of course, irrelevant, as the validity of the principle rests on the fact that it is supported by mathematical and empirical evidence. Second, he claims that “simpler” is a subjective criterion, but as we have seen, it can be given a precise mathematical formulation (fewer factors or assumptions). Third, he claims that some unlikely models are going to be true, but this confuses epistemology with ontology yet again. The debate between scientific skeptics and proponents of the paranormal is about whether we have sufficient evidence to establish the existence of paranormal abilities.

Misunderstood principle #4: Burden of evidence

A common tactic used by proponents of paranormal claims is to try to shove the burden of evidence onto the skeptic and request that the skeptic disproves their claims. Placing the burden of evidence on the wrong side in this fashion is known as the “burden of proof fallacy”. The burden of evidence rests squarely on the paranormal believers because, as we have seen, their position is less likely with respect to the background evidence and is more complex.

An easy way for skeptics to illustrate this principle is to retort: “well you cannot disprove the existence of pink invisible unicorns or the flying spaghetti monster, but that does not mean that you are justified in believing them”. Most denialists do not understand that this is a simplified way to illustrate the burden of evidence, not an attempt at making a highly accurate analogy. Thus, denialists typically attack the analogy rather than addressing the underlying argument with respect to the burden of evidence.

Wu makes a number of such claims: (1) people do not actually experience pink, invisible unicorns, (2) people are, allegedly, not knowingly making up their paranormal claims, (3) belief in gods and ESP is more popular than belief in pink, invisible unicorns, (4) people who are otherwise intelligent believe in gods and ESP, and (5) not all unprovable things are in the same category.

These “arguments” attack the simplified analogy, not the underlying argument, and can therefore safely be dismissed as irrelevant.

Misunderstood principle #5: Anecdotal evidence

Anecdotes do not qualify as scientific evidence for paranormal claims for several reasons: they often lack independent corroboration, they may be subject to biases such as memory failures, embellishments and error, and they may be non-representative or distorted by cherry-picking. Furthermore, according to the principle that extraordinary claims require extraordinary evidence, anecdotes are not sufficiently strong evidence to offset the low prior probability. Historian Richard Carrier explains:

If I say I own a car, I don’t have to present very much evidence to prove it, because you have already observed mountains of evidence that people like me own cars. All of that evidence, for the general proposition “people like him own cars,” provides so much support for the particular proposition, “he owns a car,” that only minimal evidence is needed to confirm the particular proposition.

But if I say I own a nuclear missile, we are in different territory. You have just as large a mountain of evidence, from your own study as well as direct observation, that “people like him own nuclear missiles” is not true. Therefore, I need much more evidence to prove that particular claim–in fact, I need about as much evidence (in quantity and quality) as would be required to prove the general proposition “people like him own nuclear missiles.”

[…]

In contrast, “I own a nuclear missile” would be an extraordinary claim. Yet, even then, you still have a large amount of evidence that nuclear missiles exist, and that at least some people do have access to them. Yet the Department of Homeland Security would still need a lot of evidence before it stormed my house looking for one. Now suppose I told you “I own an interstellar spacecraft.” That would be an even more extraordinary claim–because there is no general proposition supporting it that is even remotely confirmed. Not only do you have very good evidence that “people like him own interstellar spacecraft” is not true, you also have no evidence that this has ever been true for anyone–unlike the nuclear missile. You don’t even have reliable evidence that interstellar spacecraft exist, much less reside on earth. Therefore, the burden of evidence I would have to bear here is enormous. Just think of what it would take for you to believe me, and you will see what I mean.

Clearly, the evidence would have to be considerably stronger to establish the ownership of an interstellar spacecraft than the ownership of a car. Although anecdotal evidence may be sufficient to justify ordinary beliefs such as “there are fish in this lake”, “someone with a higher eBay feedback rating is trustworthy” or similar, it is wholly inappropriate in a discussion about paranormal claims, which, by virtue of their content, are extraordinary.
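In Bayesian terms, a sketch of Carrier’s point might look like this: the same modest piece of testimony moves an ordinary claim to near-certainty but leaves the extraordinary ones incredible. The priors and the assumed strength of testimony are illustrative assumptions only.

```python
# Same testimony, very different priors: only the mundane claim ends up credible.
# All numbers are illustrative assumptions.
def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

testimony = 20  # assumed likelihood ratio of a plain, uncorroborated assertion

for claim, prior in [("owns a car", 0.5),
                     ("owns a nuclear missile", 1e-9),
                     ("owns an interstellar spacecraft", 1e-15)]:
    print(claim, posterior(prior, testimony))
```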

How does Wu attempt to salvage anecdotal evidence? He claims that since skeptics do not reject anecdotal evidence or personal experience for ordinary claims (such as whether there is a person in a Santa Claus costume at the supermarket, or whether France exists as a country), it makes no sense to reject it for paranormal claims. However, as we saw, paranormal claims are much less likely than these mundane claims. That means that they require stronger evidence to establish, so anecdotes by themselves are not powerful enough to justify them. Furthermore, there is often high-quality scientific evidence against paranormal claims, so even stronger evidence would be required. Wu thinks he stumps skeptics all the time with the existence of France, but there is independent evidence that France exists even if the skeptic has not been there, and it is a mundane claim that requires less evidence than a paranormal one. Clearly, anecdotes are not sufficient to establish the truth of a paranormal claim.
