# Harbingers of Doom – Part VI: Doomsday Predictions

Can you prove that we are in the last few millennia of human existence based on a statistical argument alone, in the total absence of scientific evidence? What if we use even more sophisticated statistical paradigms? Is scientific evidence from billions of acres of GM crops over at least two decades not enough to show that GM crops are safe? What is the Ord-Hillerbrand-Sandberg methodology, and can it help us evaluate the claims of experts in their proper context? How big a threat to humanity are asteroids? Can a single rotten apple in a cake mix production plant cause an epidemic infecting millions? Do governments really need to prepare for an astronomically large number of potential pathogens, or can they successfully use more general approaches? Is it possible to be an expert in something that has never ever happened? What are the most prominent risks to the future of humanity?

Through this article series, we have dived into an enormously broad range of topics and issues, such as medieval maps, bioweapons, anti-psychiatry, heritability, embryo selection and IQ, neuroscience, cryogenics, destructive teleportation, uploading your consciousness to a computer, superintelligent machines, atomically precise manufacturing, 3D printing, science in antiquity, philosophy of science, solipsism, and statistical significance. In this sixth part, we take a closer look at two chapters of Here Be Dragons, namely The fallacious Doomsday Argument (chapter 7) and Doomsday nevertheless? (chapter 8). We briefly return to the two-chapters-per-post approach because the seventh chapter is almost completely without problems, in stark contrast to previous (and later) chapters.

**Section LI: The Doomsday Argument (basic version)**

Throughout history, there have been many predictions of imminent doom made by various religious or political groups. Needless to say, all of them have been mistaken, because we are still around. This long history of utter failure by doomsayers should make us skeptical of any future doomsday prediction. In statistical terms, the prior probability of any doomsday prediction being correct is very, very low. Technically, it would be zero, since 0% of doomsday predictions that have been tested have occurred, but if you use a 0% doomsday probability the conversation is over: no amount of evidence, however strong, can overturn a prior probability of 0, which makes the skeptical stance unfalsifiable in a Bayesian framework. So we can grant a very, very small non-zero prior probability for any given doomsday prediction.
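To make the Bayesian point concrete, here is a minimal sketch showing why a prior of exactly zero can never be overturned, while even a tiny non-zero prior can still move. The likelihood figures are made up for illustration:

```python
# A minimal sketch of Bayesian updating for a binary hypothesis,
# illustrating why a prior of exactly 0 can never be overturned.
# The likelihood figures below are made up for illustration.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) from Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator if denominator > 0 else 0.0

# Evidence with a likelihood ratio of a million to one in favor:
strong, weak = 0.999999, 0.000001

print(posterior(0.0, strong, weak))   # a prior of 0 stays 0 forever
print(posterior(1e-9, strong, weak))  # a tiny non-zero prior can still move
```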

So what kind of evidence would convince us? Well, the answer to that question is independently converging scientific evidence. There are massive problems facing the future of humanity on earth, such as our sun turning into a red giant and vaporizing the planet or our galaxy colliding with Andromeda. So despite a low empirical prior probability, the evidence is enough to make us accept these scientific conclusions. However, these events will not occur until several billion years into the future.

But what about more imminent doomsday predictions? One such popular prediction is simply known as the Doomsday argument (p. 171). Originally developed in the 1980s, it attempts to calculate the likely total population of humans that will ever exist (N) from the number of people who have lived so far (n) and, making certain distribution assumptions, get an idea of how long it will take for humans to go extinct. Häggström is extremely skeptical and critical of this argument and for good reason, since it is little more than a pseudomathematical sleight of hand.

The basic version of the doomsday argument starts by assigning every human that has ever lived a birth order 1, 2, 3, …, N, where 1 is the first human and N is the last human ever born. Now, suppose that we select an arbitrary human with birth order n. Since this person can have any birth order between 1 and N with equal probability, it roughly means that there is a 5% chance of being in the first 5% of humans ever born, a 10% chance of being in the first 10%, a 50% chance of being in the first 50% of humans born, and so on.

Let us, for the sake of argument, suppose that a given person alive today (such as the reader) is this person with birth order n (p. 172). After all, do we really have any reason to suppose that the person with birth order n is particularly early or particularly late in humanity? We have a rough idea of how many humans have ever lived up to this point, and it does not matter if we are wrong by several orders of magnitude, since the doomsday calculation uses very large numbers, so the error in our estimate of the total human population so far is going to be negligible in comparison. When you plug in the numbers, the probability that humans will die out before reaching an N of ~10^{12} is about 95%, which, given reasonable estimates of equilibrium population size and mortality rate per year, turns out to be a little more than 10 000 years into the future.
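The calculation sketched above can be reproduced in a few lines. The figures below (humans born so far, equilibrium population, lifespan) are rough illustrative assumptions, not the book's exact numbers:

```python
# A rough reconstruction of the basic Doomsday calculation described
# above. The population and lifespan figures are illustrative
# assumptions, not the book's exact numbers.

n = 6e10          # humans born so far (rough common estimate)
confidence = 0.95

# If our birth order is uniform on 1..N, then with 95% probability
# n/N >= 0.05, giving the bound N <= n / (1 - confidence) = 20 * n.
N_upper = n / (1 - confidence)        # ~1.2e12

# Convert the remaining births into years, assuming an equilibrium
# population and an average lifespan (both assumed figures).
population = 1e10                     # assumed equilibrium population
lifespan = 80                         # assumed average lifespan (years)
births_per_year = population / lifespan
years_left = (N_upper - n) / births_per_year

print(f"95% upper bound on N: {N_upper:.2e}")
print(f"Years left (rough):   {years_left:.0f}")  # on the order of 10 000
```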

Did we just prove that humanity only has approximately 10 000 years left before we go extinct? Of course not. Why? There are a number of minor empirical problems. We do not specify a mechanism, so we cannot really evaluate the argument using science. We can of course make philosophical speculations, but if there is very little scientific input in an argument, there can be very little scientific output. The argument does not properly define what a human is, so there is uncertainty about the past as well as the future. Does it really matter if our species transforms into other species in the future? Is that really what we should mean when we say the extinction of humanity? If so, we can do much better by using scientific evidence on how long mammalian species typically last before going extinct or evolving enough to count as another species. Finally, it is not at all clear that we can go from treating n as a randomly selected human to treating it as a particular, fixed human (typically taken to be the reader).

Häggström briefly touches on some of these problems, but the major flaw he identifies in the basic version of the doomsday argument is that a pseudomathematical sleight of hand has occurred (p. 174). The best analogy here is that in a valid equation, both sides need to have the same properties. If the left-hand side (LHS) of an equation is divisible by 3, then the right-hand side (RHS) must also be divisible by 3; otherwise, the equation is invalid (for instance, 4 is not equal to 3). Similarly, if the RHS of the doomsday argument is a probability, the LHS must contain a random variable. But when we fixed n to a particular number, we removed this random variable. Since the only other unknown quantity in the LHS is N (which we know is fixed), we have set up an equation where one side has a random variable but the other does not. More technically, we confuse the probability of the data given a hypothesis with the probability of the hypothesis given the data, which is known as the fallacy of transposed conditionals, a well-known statistical fallacy. This, according to Häggström, means that the Doomsday argument implodes, and he is completely correct in that assessment.

From the nightmare that was Häggström’s flawed treatment of cryonics, surviving destructive teleportation, atomically precise manufacturing and reliance on statistical testing, this is a welcome return to critical thinking and scientific skepticism at its best.

**Section LII: The Doomsday Argument (frequentist version)**

Although some might have thought that demolishing the basic version of the Doomsday argument would be enough to let us move on to more important and realistic issues facing humanity, proponents of the Doomsday argument have something else in mind. Like many forms of pseudoscience, it is rarely, if ever, abandoned, merely modified to avoid objections. Since the basic version, two new versions have been developed, using either of the two main statistical orientations (frequentism and Bayesianism). To simplify the distinction, frequentists consider probabilities to be long-run averages, whereas proponents of Bayesian statistics either consider probability to correspond to subjective degrees of belief or treat it as an objective probability with rationally or empirically derived prior probabilities.

The core of the frequentist version of the Doomsday argument (p. 174) involves setting up a hypothesis test with the null being, for instance, N = 1.2 * 10^{12} and then calculating a p value, or more precisely, calculating the value of n that is required for the p value to cross the arbitrary boundary of p < 0.05. However, this cannot be used to make reliable claims about N, because of the flaws of null hypothesis significance testing (NHST). The p value is only the probability of at least as extreme data given the null, not the probability of the null. The principal objection delivered by Häggström against the Doomsday argument is that the type-I error rate (the probability of obtaining statistical significance given the truth of the null hypothesis) might be constrained at a maximum of 0.05 if n is truly randomly selected out of all humans, but this is probably not the case if n is fixed to a specific individual living today. Without a reliable way to control the type-I error rate, we cannot be confident that the frequentist version does not lead us completely astray. If N is very, very large, then the type-I error rate is likely to be close to 1. Again, Häggström is correct.
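As a sketch, the frequentist machinery here is trivial: under the null and the uniform-sampling assumption, the one-sided p value is just n/N. The figures below follow the discussion above:

```python
# Sketch of the frequentist version: under the null N = 1.2e12 and a
# birth order n treated as uniform on 1..N, the one-sided p value is
# simply n/N. Figures follow the discussion in the text.

N_null = 1.2e12   # null hypothesis for the total number of humans
n = 6e10          # rough count of humans born so far

p_value = n / N_null
print(p_value)    # 0.05, right at the conventional boundary

# The largest birth order that would still reject the null at p < 0.05:
n_reject = 0.05 * N_null
print(f"{n_reject:.2e}")
```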

It should be noted that there is an important distinction to be made between the type-I error rate and the empirical false positive rate. It is possible to have a situation where the type-I error rate is restricted to below 5%, but where the empirical false positive rate is very high, such as 60%. This is because the type-I error rate assumes the truth of the null hypothesis, whereas the empirical false positive rate depends on the true distribution of false and true null hypotheses. This difference is especially important when testing several hypotheses and when the hypotheses being tested have a low prior probability of being true. For instance, if you do a statistical test on the effects of copper bracelets on diabetes, you might have a type-I error rate of 5%, but the empirical false positive rate is surely 100%, since copper bracelets are not effective against diabetes. The type-I error rate is a feature of the test (and thus set by the researcher), whereas the empirical false positive rate is set by reality.
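This distinction is easy to make concrete. Assuming a simple screening model (the alpha, power, and share of true hypotheses below are illustrative), the share of significant results that are false positives can be computed directly:

```python
# The type-I error rate is set by the researcher, but the empirical
# false positive rate among significant results also depends on how
# many of the tested hypotheses are actually true. Figures illustrative.

def false_positive_rate(alpha, power, share_true):
    """Share of significant results that are false positives."""
    false_hits = alpha * (1 - share_true)  # true nulls wrongly rejected
    true_hits = power * share_true         # real effects detected
    return false_hits / (false_hits + true_hits)

# alpha = 5% and decent power, but only 3% of hypotheses are true:
# roughly two thirds of "significant" findings are false positives.
print(round(false_positive_rate(0.05, 0.8, 0.03), 2))

# If no tested hypothesis is true (copper bracelets for diabetes),
# every significant result is a false positive:
print(false_positive_rate(0.05, 0.8, 0.0))  # 1.0
```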

**Section LIII: The Doomsday Argument (Bayesian version)**

Others have tried to reformulate the Doomsday argument in Bayesian terms (p. 176). The basic idea is that we take the total number of humans that will ever live (N) as a random variable, introduce a prior distribution, and update this prior with the available evidence (n) using Bayes' theorem to arrive at a posterior probability. However, Häggström raises a number of problems. He is skeptical that we have any good idea about what the prior probability should be, he points to two different ways that the prior could be updated where it is not at all obvious which is correct, and he thinks the approach is naive in that it only takes into account n and not any other available evidence.
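As an illustration (a toy version, not Häggström's exact treatment), here is a Bayesian Doomsday update over a coarse, assumed grid of candidate values of N with a flat prior:

```python
# A toy Bayesian Doomsday update (illustrative only, not Häggström's
# exact treatment): flat prior over a coarse grid of candidate N,
# likelihood P(n | N) = 1/N for n <= N, then normalize.

n = 6e10                               # humans born so far (rough)
grid = [10**k for k in range(11, 21)]  # candidate total populations N

prior = {N: 1.0 / len(grid) for N in grid}
likelihood = {N: (1.0 / N if N >= n else 0.0) for N in grid}

unnormalized = {N: prior[N] * likelihood[N] for N in grid}
total = sum(unnormalized.values())
posterior = {N: w / total for N, w in unnormalized.items()}

# The 1/N likelihood shifts nearly all mass to the smallest admissible
# N, which is the heart of the argument (and of the objections to it).
for k in range(11, 21):
    print(f"N = 1e{k}: {posterior[10**k]:.3f}")
```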

Häggström also brings up two more general issues: the problem of reference class selection (p. 181) and the pragmatic argument (pp. 181-182). The reference class problem involves the question of which population the random sample n should be drawn from. If results differ markedly depending on the reference class, we should be skeptical of the resulting calculations. The pragmatic argument notes that people have failed to produce a convincing version of the argument for a long time and that there are more concrete risks to worry about. On all of these objections, Häggström is yet again correct.

However, Häggström does make one objection that does not make much sense. He notes that the Doomsday argument rules out N being infinite as long as n is not infinite, and that it seems weird to rule out an infinite N without reference to empirical evidence. He considers this “a pretty good shot at a *reductio ad absurdum*” (p. 181, italics in original). But all this really says is that the Doomsday model is not coherent for infinitely large n or N. Clearly, the same is true for various singularity calculations that posit computational power becoming infinite at some finite time in the future, but this does not necessarily refute an intelligence explosion. More concretely, refuting an application at the far extreme does not necessarily rule it out in non-extreme territory. For instance, drinking some water benefits survival, but it does not make sense to extrapolate and say that since drinking an infinite amount of water obviously does not promote survival, water must be completely irrelevant for survival.

Secondly, we can refer to empirical evidence to address this point. There do not seem to be any infinite quantities in reality, and those that appear in models are either contradicted by other evidence-based models or not observed. For instance, quantum mechanics disproves black hole singularities, since the matter would have zero uncertainty in position, thereby contradicting Heisenberg’s uncertainty principle.

Thirdly, we also do not have any evidence that any biological organism has ‘reached’ a ‘state’ where there have been an ‘infinite number’ of individuals. In fact, it is not at all obvious that we can talk about “reaching” infinity (it is not as if we find ourselves counting 1, 2, 3, …, infinity − 2, infinity − 1, infinity, since finite mathematics does not consider infinity to be a number like 5 or 679), or about a biological “state” being infinite, or about “having an infinite number” when it comes to decidedly fixed and numerical quantities such as Häggström’s birth order and N. Häggström makes an impressive case against all of the formulations of the Doomsday argument that he discusses, but this latest objection does not have the same merit as the others.

**Section LIV: Genetically modified crops**

The chapter after the one critically investigating the Doomsday argument looks at more plausible sources of human extermination. However, Häggström stumbles on a couple of issues. The first one we will examine is genetically modified foods.

Häggström starts off his discussion by thinking about low-probability events that have enormously large consequences. While we might ignore a low probability of being hit by a falling piano, we might not want to ignore a low probability of global nuclear war. Thus, Häggström suggests, some additional evidence is needed to favor the low-probability part over the large-consequences part. He illustrates this with the example of genetically modified foods, but botches more or less every aspect of the situation (pp. 186-187):

> And what goes for global nuclear war goes for the event of genetically modified crops (GMCs) technology leading to a new plant spreading uncontrollably and disrupting ecosystems on a continental or even global scale. In order to claim that GMC technology is safe, an argument like the following, taken from Fagerström (2014), simply will not do:
>
> What is needed is arguments that convincingly demonstrate on more concrete mechanistic grounds that such catastrophes are impossible (or at least very unlikely). Taleb et al. (2014) complain about the lack of such arguments in the literature.

Häggström goes on to cite evidence (p. 187) that GM crops involve less modification than other breeding methods that we accept without a problem, so GM crop technology is at least as safe as these other breeding methods, and that this is a consensus position among GM scientists.

The Taleb paper being cited is his paper on the precautionary principle, which has been taken apart in *Choking the Black Swan: GM Crops and Flawed Safety Concerns*. Generally, Taleb’s argument suffers from the garbage-in, garbage-out principle. Since he bases his argument on false empirical premises, the argument must be unsound: GM crops involve smaller, more precise and better-known modifications than conventionally bred plants and are much more heavily regulated than their conventional counterparts (Committee on Genetically Engineered Crops, 2016; Brookes and Barfoot, 2015; Conko et al., 2016; Klümper and Qaim, 2014; Lemaux, 2008; Tagliabue, 2015). Since Taleb’s argument is based on denying this, his argument collapses.

It should also be noted that the GM modifications currently being made, such as herbicide tolerance and increased content of vitamin A precursors, have no selective advantage in the wild. So the idea that these crops can, at any moment, transform into something that will be “spreading uncontrollably and disrupting ecosystems on a continental or even global scale” does not make sense. Hardly any plant can even grow in both tundra and desert (or even in all biomes on a given continent). Thus, the anti-GM argument is not just based on wrong assumptions about GM technology as such, but also on ignorance of plant biology.

Perhaps worse is that the evidence cited by Häggström does nothing to disprove the flawed risk analysis. It makes no difference whether the GM risk has been present for just 20 years (since commercial growing began) or since the origin of green land plants (~500 million years ago, if we accept that GM crops are at least as safe as conventional breeding), because this is a rounding error compared with the loss of human life in the Pascal's-Wager-like approach taken by Häggström and Bostrom. They envision a future with up to 10^{54} human lives. So even if the risk of a GM catastrophe is 8 orders of magnitude lower than Taleb supposes, this pales in comparison with the at least 45 orders of magnitude larger consequences, on the assumption that GM crops are an existential risk to humans. We will settle the score with this flawed risk analysis in a later part of this article series.

**Section LV: The Ord-Hillerbrand-Sandberg methodology**

How much should we trust experts when it comes to risk assessment? Häggström generally believes in the merits of experts, but brings up a particular approach to risk assessment that involves the Ord-Hillerbrand-Sandberg methodology (pp. 187-188). This is just a fancy way of saying that we must take into account the probability that experts are mistaken. He acknowledges that this is difficult to do rigorously (p. 188), but wants to highlight that experts can be wrong. This is true, but experts are typically disproved by smarter experts with better evidence and arguments, not by bloggers or researchers outside the relevant area who at best have a high-school-level understanding of the science involved. We can also apply the Ord-Hillerbrand-Sandberg methodology recursively and ask ourselves “how likely is it that the critics are wrong in their estimation of the probability that the experts are wrong?” and so on. How much should we trust non-experts’ judgments of experts? This might lead us to conclude that we can emphasize the expert term over the Ord-Hillerbrand-Sandberg term, while of course remaining open to the possibility that the science is wrong.

The remainder of the chapter discusses the risk from particular sources, such as asteroids, nuclear weapons, pandemics, supervolcanoes and so on.

**Section LVI: Asteroids**

Häggström discusses some historical asteroids that have hit earth and their consequences (p. 188). He considers the risk to be low (0.0001-0.0002 per 100 years) in the short term but something we should be wary of in the long term. However, what is missing from the discussion is that asteroid trajectories can be understood with Newtonian mechanics and that we can see far into space. This means that we will have plenty of advance notice of a large asteroid heading towards earth. It will probably not be like in the space movies, where the asteroid comes out of nowhere; instead, we would have plenty of early warning, and the trajectory would be easy to calculate.

**Section LVII: The ‘one rotten apple’ syndrome**

Later, Häggström discusses the impact of natural pandemics (p. 192) and cites an ignorant argument by Kilbourne about food contamination, which Häggström labels the ‘one rotten apple’ syndrome:

This might seem superficially convincing, but the argument fails in so many ways that it is hard to know where to begin. First, not all bacteria are harmful. In fact, most bacteria are incapable of establishing a productive infection in humans, since pathogens are often host-specific or can infect only a very narrow range of hosts. When a pathogen does infect an atypical host (zoonosis), it is typically a dead end, and a lot of evolution is required for it to become as productive in the new host as in the old (Bean et al., 2013). When you buy an apple at the store, it is not sterile. Billions of bacteria live on it and in it when you cut it at home, and that is completely harmless for the consumer. So for this scenario to work, the contaminating bacteria have to be human pathogens; any old bacteria will not do.

Second, the bacteria that are native to apples are not going to harm humans, so you actually need a third host, such as a domesticated animal, to spread the bacteria to the fruit or vegetable first. In other words, you need three systems in close proximity: animal farming, fruit and vegetable production and cake mix production. This probably does not occur that often in the world.

Third, the quality systems of the production plant have to collapse and countermeasures will have to fail. One simple countermeasure is to irradiate the cake mix with high-energy radiation that kills all biological material.

Fourth, the resulting concentration of bacteria is negligible. The example involved a billion bacteria, which we can take to be 10^{9} (it will not matter if we use the U.S. definition, as this is just a rounding error here). Let us take the number of customers to be at least 2 million, and one package of cake mix to have a mass of, let’s say, 500 grams. This comes to 1000 million grams, or 1 billion grams, making the concentration about 1 bacterium per gram of cake mix at most. If any bacteria happen to die (see the next point) or there are more than two million customers, it is even lower. This is unlikely to cause a productive infection. This common ignorance of scale has led to some humorous results. In 2014, a young man peed in a water reservoir that contained 38 million gallons (~144M liters) of water (Blackman and Thompson, 2014). A sane person would conclude that this had a completely irrelevant impact on water quality. If a human bladder contains, say, half a liter, the resulting concentration is about 3.5 parts per billion, which borders on homeopathic concentrations. Yet the authorities dumped it all, despite not doing so when birds fall in, which apparently happens with regularity.
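The dilution arithmetic in both examples is a one-liner each, using the figures given above:

```python
# Back-of-the-envelope dilution arithmetic for the two examples
# above, using the figures given in the text.

# Cake mix: a billion bacteria spread over the whole production run.
bacteria = 1e9
customers = 2e6
package_grams = 500
per_gram = bacteria / (customers * package_grams)
print(per_gram)        # 1.0 bacterium per gram at most

# Reservoir: half a liter of urine in ~144 million liters of water.
urine_liters = 0.5
reservoir_liters = 144e6
ppb = urine_liters / reservoir_liters * 1e9
print(round(ppb, 1))   # ~3.5 parts per billion
```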

Fifth, the bacteria are likely to be annihilated by the dryness of the cake mix, the time it takes to transport the product until use, and the heat from the oven when baking a cake. Bacteria need water to live, so in a very dry environment, they either die or form spores. This means that the bacteria in question not only have to be human pathogens, but must also be efficient at forming spores that can survive for a long time and resist heat of several hundred degrees Centigrade.

Sixth, since the scenario painted by Kilbourne requires an epidemic, the pathogen not only has to infect the end user, but must also cause a communicable disease that can spread and infect others. Not only that, it must remain so efficient after the dryness and massive heat treatment as to have an R_{0} of over 1. Anthrax, for instance, cannot spread between humans.
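Why the R_{0} threshold matters can be illustrated with a simple branching-process sketch (an idealized textbook model, not a claim about any particular pathogen):

```python
# In an idealized branching-process sketch, each case infects R_0
# others on average, so the expected total outbreak size from one
# index case is the geometric series 1 + R_0 + R_0^2 + ...

def expected_outbreak_size(r0):
    """Expected total cases from one index case."""
    if r0 >= 1:
        return float("inf")  # supercritical: a large epidemic is possible
    return 1 / (1 - r0)      # subcritical: the outbreak fizzles out

print(expected_outbreak_size(0.5))   # 2.0 cases in total
print(expected_outbreak_size(0.75))  # 4.0 cases in total
print(expected_outbreak_size(1.1))   # inf: epidemic threshold crossed
```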

Even assuming that the bacteria can spread from farm animals to produce to cake mix, that there is no quality control, that the bacteria are pathogenic to humans, that they can survive extended dryness during production and transport as well as several minutes in an oven, that they can even grow substantially under these conditions (to attain a concentration that is likely to infect humans productively), and that they can spread between humans efficiently enough to cause an outbreak (and not just a small proportion of the bacteria, but most of them), this is still unlikely to be a major problem. Most industrial countries have an efficient epidemiological surveillance system for food-borne illnesses. One of the first questions patients would be asked is “what did you recently eat?”, and if everyone says that they have used the same brand of cake mix, then the source can be found easily on the basis of classical field epidemiology alone (WHO, 2016), without massively parallel sequencing methods. More technically, the attack rate from consuming the product would be very, very high.

**Section LVIII: Bioweapons (again)**

Häggström goes on to repeat many of his arguments about nuclear weapons and biological weapons, but we have dealt with those before, so we will not repeat ourselves. However, Häggström makes one false assertion (pp. 196-198): he thinks that bioterrorists can pick any one of an astronomical number of potential biothreats, whereas governments and medical organizations must be able to handle each of them individually within a small time frame. This just is not true, because different pathogens are not equally easy to get hold of, grow, engineer and spread. Just as there is lower-hanging fruit on the destructive technology tree than grey goo, the same is true within the category of biological weapons. Also, defenders do not have to limit themselves to specific vaccines; they can also boost innate immunity, which has lower specificity and can thus handle a broader range of threats, or focus on social and medical methods of isolation and reduction of spreading potential. The biggest decline in the spread of infectious diseases generally occurred before antibiotics and modern vaccines.

**Section LIX: “Experts” surveys in considerable absence of evidence**

Häggström cites expert surveys on the future again (p. 201), but we have already critically discussed the problem with expert surveys when there is hardly any evidence available in Section XXVI: The failure of human-level AI predictions in Harbingers of Doom – Part III: Luddism and Computational Eschatology. It is very difficult to consider experts on “global catastrophic risk” genuine experts because they claim to be experts on something that *has never ever happened in the history of humanity*.

**Section LX: What are the most prominent risks to the future of humanity?**

So what do I think are the major threats to the future of humanity? The boring answer is infectious diseases, degenerative diseases and diseases such as cancer and cardiovascular disease. Are these likely to be global existential risks? Of course not. So what risks do I find most convincing? The top three on my list would be, in no particular order: climate change, nuclear war, and pandemics (regardless of source). I base this selection primarily on scientific evidence and very little on the hand-waving and sophistry of extremely-low-probability-large-impact events so common in the existential risk literature.

**References and further reading:**

Bean, A. G. D., Baker, M. L., Stewart, C. R., Cowled, C., Deffrasnes, C., Wang, L.-F., & Lowenthal, J. W. (2013). Studying immunity to zoonotic diseases in the natural host – keeping it real. Nat Rev Immunol, 13(12), 851-861.

Blackman, T. & Thompson, J. (2014). Man urinates in reservoir, ruins 38M gallons of water (cache). USA Today. Accessed: 2016-08-15.

Brookes, G., & Barfoot, P. (2015). Environmental impacts of genetically modified (GM) crop use 1996–2013: Impacts on pesticide use and carbon emissions. GM Crops & Food, 6(2), 103-133.

Committee on Genetically Engineered Crops. (2016). Genetically Engineered Crops: Experiences and Prospects. Washington, DC: National Academies Press.

Conko, G., Kershen, D. L., Miller, H., & Parrott, W. A. (2016). A risk-based approach to the regulation of genetically engineered organisms. Nat Biotech, 34(5), 493-503.

Klümper, W., & Qaim, M. (2014). A Meta-Analysis of the Impacts of Genetically Modified Crops. PLoS ONE, 9(11), e111629.

Lemaux, P. G. (2008). Genetically Engineered Plants and Foods: A Scientist’s Analysis of the Issues (Part I). Annual Review of Plant Biology, 59, 771-812.

Tagliabue, G. (2015). The nonsensical GMO pseudo-category and a precautionary rabbit hole. Nat Biotech, 33(9), 907-908.

WHO. (2016). Field epidemiology. Accessed: 2016-08-15.
