Does cryonics rely on unfalsifiable claims and an excessive number of ad hoc maneuvers? Why are proponents of existential risk research relatively uninterested in submitting their work to high-quality, peer-reviewed scientific journals? Why does the Doomsday argument seem immune to self-correction? Why is there so little connection between ideas about surviving destructive teleportation or mind uploading and the mainstream scientific literature on neuroscience? To what extent does the existential risk crowd overuse hypertechnical language?
Pseudoscience is an imposter of science: an area that superficially appears scientific, but is intellectually vacuous on the inside. Now that we are approaching the end of this article series, in which we have critically reviewed Olle Häggström’s book Here Be Dragons, it is time to sift through the issues and see if we can reach some kind of conclusion about what is scientific and what is obviously not.
This will keep us occupied in the final two parts of this series. Previous installments have tackled bioweapons, destructive teleportation, self-replicating nanobots, philosophy of science, doomsday scenarios, Dyson spheres and Pascal’s wager. This penultimate installment investigates to what degree these and many other issues discussed and defended by Häggström qualify as pseudoscience; the final part will be an addendum and conclusion.
What do we mean by “pseudoscience”?
One of the best definitions and treatments of what it means for something to be pseudoscience that I have come across is that of Lilienfeld and Landfield (2008). They define pseudosciences as areas that “possess the superficial appearance of science but lack its substance” and as the “imposters of science: They do not play by the rules of science even though they mimic some of its outward features.”
Although Lilienfeld and Landfield do acknowledge that pseudoscience is somewhat of a fuzzy concept, they outline a number of warning signs of pseudoscience, including: unfalsifiability and overuse of ad hoc defenses, lack of interest in the peer-review process, absence of self-correction, reliance on anecdotal evidence, use of extraordinary claims, appeals to tradition, reversing the burden of proof, lack of connectivity to the broader scientific literature, and use of hypertechnical language (e.g. psychobabble, technobabble). They certainly accept that some pseudosciences lack some of these signs, whereas some sciences exhibit some of them as well, but note that the more warning signs that show up, the more likely it is that you are dealing with pseudoscience. This list is by no means perfect or exhaustive, but it gives us a decent framework for exploring the contents of Here Be Dragons. Before we do that, however, we must acknowledge some of the difficulties in this approach.
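The “more signs, more suspicion” heuristic can be illustrated as a simple tally. This is a toy sketch of my own, not something from Lilienfeld and Landfield’s paper; the sign names are shorthand for the list above:

```python
# Toy illustration of the warning-sign tally described above.
# The sign names are shorthand for the list in this article,
# not verbatim from Lilienfeld and Landfield (2008).
WARNING_SIGNS = [
    "unfalsifiability / ad hoc defenses",
    "lack of interest in peer review",
    "absence of self-correction",
    "reliance on anecdotal evidence",
    "extraordinary claims",
    "appeals to tradition",
    "reversed burden of proof",
    "disconnect from scientific literature",
    "hypertechnical language",
]

def tally(flags):
    """Count how many recognized warning signs a topic exhibits.

    `flags` is a set of sign names. A higher count suggests (but does
    not prove) that the topic is pseudoscience -- it is a heuristic,
    not a demarcation criterion.
    """
    return len(flags & set(WARNING_SIGNS))

# Hypothetical example: a topic flagged for two of the signs.
score = tally({"unfalsifiability / ad hoc defenses", "reversed burden of proof"})
print(f"{score}/{len(WARNING_SIGNS)} warning signs present")
```

The point of modeling it as a count rather than a yes/no test is exactly the one the authors make: no single sign is decisive, and some sciences trip a sign or two as well.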
The borderlands of science
Some forms of pseudoscience are easy to identify. Ideologies and beliefs such as homeopathy and astrology fulfill all or almost all of the pseudoscience criteria listed in the previous section. They are completely at odds with basic physics and biology, are highly resistant to self-correction, frequently make appeals to tradition, often rely on anecdotes over hard science, push the burden of proof onto the skeptic, and many of the supposed studies that they rely on are poorly designed and lack proper controls.
In other cases, it is much more difficult, because there is some degree of overlap and there are fuzzy boundaries between several different categories. Some things are scientific, some are pseudoscience, some are non-science, whereas others can be found in the borderlands of science and pseudoscience. This is perhaps best illustrated by the intro sequences to the television series “Fringe”, in which the names of different scientific areas or phenomena (such as quantum entanglement or neural networks) are written across the screen, mixed in with pseudoscience (such as astral projection or pyrokinesis) and borderline topics (such as cryonics). Particularly illuminating is the retro version (in which events play out in 1985), which lists scientific ideas such as personal computing and DNA profiling that were not fully developed at the time, but also things that never amounted to anything, such as cold fusion.
Many of the issues discussed by Häggström in Here Be Dragons most likely fall in the borderlands between science, prescience and pseudoscience, but there are clear examples of both science and pseudoscience in there as well. We are going to look at each of the warning signs discussed in the above paper and see if, and to what extent, it fits some of the things discussed in the book. We will leave the discussion of the great things in the book to the next and final installment of this series, which will focus on conclusions.
Unfalsifiability and overuse of ad hoc measures
Many of the futurist issues discussed by Häggström are unfalsifiable. Take cryonics as an example. How could you falsify the belief that, at some point, a technique can be developed that allows us to freeze people near death and revive them in the distant future? We might say that the current ideas (such as chemically fixating the brain) are laughably mistaken, but this would at most disprove one particular idea, not the core beliefs of cryonics. Similar issues arise with destructive teleportation, mind uploading and so on.
Cryonics is also a great example of the use of ad hoc hypotheses. First, the issue of lethal ice crystal formation is raised. This is explained away by rapidly replacing the water in the body with a cryoprotectant. This introduces the problem of chemical fixation, which is explained away by citing examples of freezing bacteria or kidneys, despite the fact that most bacteria die and that a kidney consists of a basic unit duplicated many times over and so can sustain damage without breaking down (as opposed to the brain, which is very sensitive to damage).
Another good example is atomically precise manufacturing, whereby problems with scale are dismissed with the ad hoc hypothesis of self-replicating nanobots. Criticisms of this are deflected by claiming that it will be done with enzymes, but that is not possible due to the fact that enzymes require water and are limited in their capabilities. At every turn, there appears to be an ad hoc maneuver to escape refutation.
Lack of interest in the peer-review process
The vast majority of the content produced by futurists and existential risk thinkers who are held in high regard is not published in credible, peer-reviewed scientific journals. Instead, it is published on blogs (such as Less Wrong), in books (such as Superintelligence or Here Be Dragons) or in PDF documents from various think tanks (such as the Machine Intelligence Research Institute or the Future of Humanity Institute). When proponents of the ideas discussed by Häggström manage to get published in the scientific literature, it is typically in virtually unknown journals or journals of low quality. This does not mean that the arguments discussed in those papers are terribly bad, but it shows that there is a lack of interest in, or an unwillingness to subject their ideas to, proper peer review.
Häggström and others might retort that many books published by university presses are peer-reviewed, but this is not the same kind of peer review that is used for scientific journals. For instance, many books written by intelligent design creationists have been published by academic publishers and thus “peer-reviewed”, but had any of those claims been submitted to a high-quality scientific journal, they would have been rejected because of their unscientific and flawed content (Isaak, 2006).
Max Tegmark and Nick Bostrom did manage to get one paper published in Nature (Tegmark and Bostrom, 2005), but that was over 11 years ago and it was only a brief communication of about half a page. Moreover, it is not a research paper: it presents no new empirical data, and almost no information about how the authors reached their conclusion is available in the paper or in the supplementary information online.
Lack of self-correction
There are also examples of a lack of self-correction. Perhaps the best example is the so-called Doomsday Argument, which purports to mathematically derive the total number of humans that will ever exist, and thus an estimate of when humans will go extinct, solely from the number of humans that have lived so far and certain other assumptions. Although both Häggström and I agree that this argument is flawed in quite serious ways, it was first proposed in the early 1980s and was still being defended in the 2010s. This seems to be an idea that is extremely stagnant and lacking in self-correction.
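For readers unfamiliar with it, the core calculation of the argument can be sketched in a few lines. This illustrates what the argument claims, not an endorsement of it; the figure of roughly 100 billion humans born so far is a commonly cited demographic estimate:

```python
def doomsday_upper_bound(birth_rank, confidence=0.95):
    """Carter-Leslie style Doomsday Argument bound.

    The argument assumes your birth rank r is uniformly distributed
    over the total number N of humans who will ever live. Then
    P(r <= q * N) = q, so with probability `confidence` we have
    r >= (1 - confidence) * N, i.e. N <= r / (1 - confidence).
    """
    return birth_rank / (1.0 - confidence)

# Commonly cited estimate: ~100 billion humans born so far.
rank = 100e9
bound = doomsday_upper_bound(rank)  # roughly 2 trillion at 95% "confidence"
print(f"Claimed bound: N <= {bound:.2e}")
```

The brittleness of the argument is visible in the code itself: the entire conclusion hangs on the uniform-rank assumption and the arbitrary choice of confidence level, with no empirical input beyond a head count.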
Reliance on anecdotal evidence or testimonials
Most of the issues discussed in the book do not rely on anecdotal evidence. This is probably because many of them concern the future and are therefore not amenable to anecdotes, and perhaps because the original list of warning signs is intended for things like alternative medicine or investigation methods that have been in use for a very long time.
The only thing that might qualify as an appeal to anecdotal evidence or testimonials is the way Häggström and other existential risk thinkers appeal to specific famous individuals (such as Stephen Hawking) who appear to share some of their beliefs with regard to, e.g., artificial general intelligence (p. 101). On the whole, however, there is very little reliance on anecdotes.
Extraordinary claims (without extraordinary evidence)
The existential risk areas surveyed in the book definitely include extraordinary claims: surviving the physical destruction of your brain, having your consciousness uploaded to computer servers, teleportation, cryonics, superintelligent robots, atomically precise manufacturing with self-replicating nanobots and many others. These have been discussed in great detail in previous installments of this article series. Many issues discussed in the book fall, without a doubt, under the umbrella of extraordinary claims without extraordinary evidence.
Appeals to tradition
One might expect there to be very little appeal to tradition in the existential risk topics surveyed by Häggström. Being a futurist or a transhumanist would seem to be the very opposite of making appeals to tradition. This is to a large degree true, which means that this warning sign is not really relevant here. However, Häggström does make some appeals to tradition. In particular, he uncritically cites (p. 227) Hume, in a text written hundreds of years ago, on the alleged impossibility of deriving an ought from an is.
Reversing the burden of evidence
Reversing the burden of evidence is extremely common in the issues that Häggström writes about, and Häggström himself does this repeatedly. This is discussed throughout this article series, and examples include cryonics, artificial general intelligence, intelligence explosion and atomically precise manufacturing. In reality, of course, the burden of evidence rests not on the skeptic but on the proponent, and no amount of appealing to “well, it has not been completely disproved in all conceivable ways” will change that.
Lack of connectivity to the broader scientific literature
There is an extreme lack of connectivity to the scientific literature within the existential risk issues that Häggström covers. Here we will review two of the examples that have been discussed at length in earlier installments of this series: uploading / teleportation and cryonics.
From modern neuroscience, we know that the mind is a function of the brain and that if you destroy the brain, you destroy the mind. This means that uploading and teleportation are not possible: whatever you get after the procedure is done, it is not the original person. Scanning techniques are also far too slow to capture anything other than dying cells. There is an almost complete disconnect between the beliefs and arguments about surviving death and modern neuroscience.
For cryonics, proponents believe that you can freeze the entire body, or just the brain, close to or just after death and then be revived hundreds or thousands of years in the future. The problem is that freezing typically produces ice crystals that kill cells, which proponents attempt to explain away by postulating that all blood be replaced by a “cryoprotectant”. But this leads to even more problems, because the cryoprotectant will chemically fixate the brain, washing out its chemistry and physiology. That brain will no longer be able to work. Even if this were somehow not a problem, the person would still be dead or almost dead.
Use of hypertechnical language
There is an extreme amount of hypertechnical language used within the existential risk movement, and there are entire glossaries available that contain hundreds of words and concepts that its members have coined. These include ugh field, Beisutsukai, claytronics, extropianism, noosphere, technogaianism, coherent extrapolated volition and many others. Even when you read the definitions, they are exceedingly bizarre. Here is the definition of the last term by Yudkowsky (2004), as cited by Häggström (p. 120):
Coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere, extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.
…“where the extrapolation converges rather than diverges”? This sounds like something straight out of postmodernist nonsense. Some technical terminology is to be expected, and new scientific areas are often forced to define new terms in order to make progress. But when it goes to this kind of extreme, it opens the door to plenty of doubt about the legitimacy of the area.
So where does all of this leave us?
The main take-home message of this installment is that there are considerable red flags for many of the issues being discussed in the contemporary existential risk movement. Some of them can probably be ironed out with improved arguments, but several of them seem almost insurmountable. In particular, things like the Doomsday Argument, surviving destructive teleportation, uploading your mind to a computer, atomically precise manufacturing with self-replicating nanobots and cryonics are almost certainly pseudoscience. It is difficult to come to any firm conclusion about the extent to which artificial general intelligence is prescience or pseudoscience, but it is likely going to remain a borderlands topic for quite some time.
References and further reading:
Isaak, M. (2006). Intelligent Design and Peer-review. Accessed: 2016-10-27.
Lilienfeld, S. O., & Landfield, K. (2008). Science and Pseudoscience in Law Enforcement: A User-Friendly Primer. Criminal Justice and Behavior, 35(10), 1215-1230.
Tegmark, M., & Bostrom, N. (2005). Astrophysics: Is a doomsday catastrophe likely? Nature, 438(7069), 754.
…and references in previous installments of this series.