Are we rapidly approaching a technological singularity where intelligent computers and robots recursively self-improve into a superintelligent paperclip maker that annihilates the planet and all life on it in order to fill the universe with more paperclips? Is the apparent cosmic silence strong evidence that the origin of life was nearly impossible? Can the human mind survive destructive teleportation or uploading to computer servers, and will self-replicating nanobots consume all life on earth? Or is this just the latest in a long list of flawed doomsday prophecies based on false empirical premises, faulty logic, technobabble and pseudoscience? Or is the truth perhaps somewhere in between?
A recently published book by Olle Häggström, Professor of mathematical statistics at Chalmers University of Technology, called Here Be Dragons, attempts to address some of these issues. Häggström’s writings have been critically examined on this website before, particularly his uncompromising defense of statistical significance, p values and the NHST procedure. To his credit, Häggström has written decisive refutations of creationist abuses of mathematics, climate change denialism and anti-science postmodernism.
In this first installment, we take a closer critical look at whether ancient maps really marked dangerous places with dragons, the threat of biological weapons of mass destruction, the case of Stanislav Petrov and faulty warning systems for nuclear attacks, dual-use research of concern and the Soviet offensive bio-weapons program, and his objections to the way science funding is handled by the Swedish Research Council. Although credit is given where credit is due for his defense of mainstream climate science and his criticisms of geoengineering projects, his uncritical discussion of induced meat intolerance is taken to task.
Section I: The historical myth of the “Here Be Dragons” narrative
During the Middle Ages, or so the argument goes, ignorant humans with a partial and patchy knowledge of global geography frequently labeled unexplored places on maps with the phrase “Here Be Dragons”, together with simplistic drawings of fearsome dragons. Perhaps this was a way to indicate the existence of hidden dangers, to caution people against traveling there, a way of covering up the lack of geographical knowledge, or simply a genuine belief in the existence of these kinds of mythological monsters. Häggström uses this as an analogy for the potential dangers that lie ahead of us in the near and distant future: if we are not very, very careful, we risk running blindly into existential ruin for all of humanity.
There is just a tiny problem with this narrative: it is almost entirely false from a historical perspective. There are only two maps that bear dragons in this way, the Hunt-Lenox Globe and the Ostrich Egg Globe, and both are from circa 1500 (Kim, 2013). Evidence presented by Missinne (2013) suggests that these are not independent, but that the former is a cast made from the latter. However, some details are not clear, such as whether Missinne owns the globe (in which case there is an undeclared conflict of interest), and there is some weak speculation that the Ostrich Egg Globe has connections to Leonardo da Vinci because one of the ships looks like those drawn by someone who knew Leonardo (Kim, 2013).
Whatever the outcome of those two issues, it is more or less clear that cartographers in the Middle Ages had little to no tendency to put dragons on their maps to mark unknown or dangerous regions. Some researchers suggest that the dragon on the Hunt-Lenox Globe does not represent mythological dragons or even unknown dangers, but rather a depiction of the Komodo dragon from southeast Asia, whose bite can cause nasty infections and possibly death for people without access to medical care. For instance, here is a quote from McCarthy (2009, p. 181), another book from Oxford University Press also called Here Be Dragons:
“Here Be Dragons” is a phrase that most people associate with the outskirts of ancient, sepia maps — the ominous warning scrawled over those places where the lands leave the familiar. At least, that is what many people think we would find the phrase. As noted in the Preface, “Here Be Dragons” is unknown from historical maps, appearing only once [Now twice with the Ostrich Egg Globe – Emil’s note], in Latin, on the Hunt-Lenox Globe, constructed in the years after Columbus’s trip to the New World. The phrase was not, as so many believe, a general warning to sailors about alien realms. It was, instead, one of the first recorded post-Columbian biogeographical remarks and has now become, perhaps, the most famous distributional comment ever, likely marking the general region where tales of the Komodo Dragon originated.
In other words, those “dragons” might have referred to actual, known dangers and not unknown mythological dangers. This resonates well with the observation that the real phrase used by cartographers at the time was more akin to “Here Be Lions” (Jacoby, 2011, p. 285), again warning of known dangers rather than mythological beings with apparent supernatural powers.
So why is this at all relevant? Surely, the fact that the overarching narrative is historically inaccurate says nothing about the validity of the arguments presented in the book, and certainly not about the issues it discusses. Well, there are a couple of reasons why it is relevant. First, it is an erroneous factual claim, and those should be countered whenever possible. Second, if Häggström did not bother to fact-check the overarching narrative thoroughly enough, what else did he not bother to fact-check? As will be shown later in this article, there are a couple of additional situations where this was the case. Finally, the mind-gripping power of narratives is often so substantial that it merits a critical discussion on its own.
So in the end, examining grand narratives matters not just as a theoretical exercise, but may have deep consequences for the larger case being made. For instance, distracting ourselves with futuristic threats like mythological dragons might make us blind to more realistic threats like lions or Komodo dragons (so to speak).
Section II: The future that Häggström fears is already here
In the opening pages (pp. 1-2) of Here Be Dragons, Häggström mentions part of the plot of a book by Yudkowsky (2009). Basically, humans encounter aliens and compare their scientific results. Everything is in agreement, except for a certain physical constant. It turns out that humans got it wrong, but this was the result of a conspiracy of physicists in the past, who discovered that the true value would mean that any delusional person could make the sun go supernova with tools available to almost anyone. Häggström compares this with potential “Pandora’s Box” discoveries that, if the knowledge becomes widely available, will cause grave existential risk without the possibility of putting it back into the box.
There are a few problems with this. First, our sun cannot become a supernova because its mass is too low (NASA, 2004). This is perhaps not crucial to the argument, because one can easily construct other forms of existential risk, and Häggström discusses many of these in his book. Second, most conspiracies do not last very long and cannot really be sustained if a lot of people know about them. For instance, Bill Clinton could not keep his affair with Monica Lewinsky a secret, and the NSA could not keep a global surveillance program spying on innocent people secret from a lower-level contractor. Secrecy is going to be especially difficult for something like a physical constant that can be independently tested, scrutinized and corroborated. This issue yet again illustrates the tendency to focus on obscure and unrealistic dangers when there are perhaps more imminent issues.
However, the possibility that almost anyone could produce large-scale dangers is already here. The remainder of this section will discuss two such possibilities, bio-weapons and nuclear reactors, and will conclude by considering why large-scale existential consequences have nevertheless not occurred.
Today, any biology or biotech undergraduate understands the basic experimental methods used for producing biological weapons of mass destruction, because they rest on the same general principles used in any microbiology research: isolate and grow cells, transform genes, and select beneficial traits. In 1999, there were about 350 000 life scientists in the U. S. alone (Wilkinson, 2002). The global figure for 2016 is likely very much larger, so let us make a conservative estimate that there are, right now at this very moment, over 1 million researchers who have the knowledge required to make and release dangerous bio-weapons. This is not an unreasonable estimate: most who work in the life sciences have likely taken a course in basic microbiology or molecular biology, and those who have not are probably capable of learning the material by simply reading about it. Should life scientists have at least a similar prevalence of e. g. psychopathy as the population at large (1%), that would mean that right now at least 10 000 psychopaths know how to make bio-weapons (and this has likely been the case for many decades). A common argument is that it is hard to make biological weapons of mass destruction, because you would require industrial fermenters and so on, which are hard to get and could easily be controlled by the authorities. However, nothing says that you need an industrial amount of product. For very infectious agents, it might be enough to produce them on a smaller scale with tools available to anyone. Suitable agents to start with can be dug up from the ground, obtained from sick people or ordered off the Internet.
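The back-of-envelope estimate above is simple enough to write out explicitly. A minimal sketch, where every number is an assumption stated in the text rather than measured data:

```python
# Illustrative back-of-envelope calculation using the assumptions stated above.
researchers_with_knowledge = 1_000_000  # conservative global estimate (assumption)
psychopathy_prevalence = 0.01           # ~1%, assumed equal to the general population

capable_psychopaths = int(researchers_with_knowledge * psychopathy_prevalence)
print(capable_psychopaths)  # 10000
```

The point of writing it down is that the conclusion scales linearly with both assumptions: halve either input and the estimate halves, so even generous error bars leave a figure in the thousands.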
The second example is from Sweden, where a man named Richard Handl attempted to build a nuclear reactor in his kitchen by extracting nuclear material from smoke detectors (Peralta, 2011) and buying other material off the Internet. He bought all the materials he needed for an estimated minimum of 1000 USD and blogged about his experiments. Even if it turns out that he specifically did not get it to work, it is a clear proof of principle that a single person without any particularly advanced science education can get his hands on radioactive material and experiment with it. To be sure, this would not have caused a nuclear annihilation, but perhaps a scaled-up version would create a disaster large enough to risk being mistaken for a small nuclear attack.
The main reason why we should not be afraid is that most people, even psychopaths, understand that these large-scale risks cannot be controlled. Even if you release a deadly pandemic, nothing suggests that the people you care about or the causes you support will survive. Or you yourself, for that matter. The alternative “deranged lunatic” narrative (i.e. that such a person would not care at all) is not very persuasive, because engaging in advanced scientific research and having severe psychosis are almost mutually exclusive. One can imagine that someone in this situation might be successful in areas that are not highly time-sensitive (i.e. where you can take irregular breaks without setbacks), like mathematics or maybe computer science. This is not likely the case for e. g. growing, handling, maintaining and developing biological material, which requires regular, sustained attention.
Perhaps the best narrative is not so much that we are heading into dangerous regions where dragons live. Instead, apparent dragons live all around us, slithering and multiplying. However, people do not seem to care. Even less exotic dangers that are easier to make, such as bombs made from widely available diesel and fertilizer, are not terribly common. This is not an attempt to show that existential risk scenarios of the sort described by Yudkowsky (2009) are impossible, merely that the fear is unreasonably high. Even people who are angry or deluded seem more interested in playing smartphone games, going to the gym or finding dates, or simply feel too exhausted from work.
Section III: The case of Stanislav Petrov might not have been a very close call in comparison
One example that Häggström often brings up is that of Soviet lieutenant colonel Stanislav Petrov and his decision not to escalate a warning of a nuclear strike from the U. S. in 1983 during a system malfunction. Basically, the early-warning system wrongly showed nuclear missiles being launched from the United States, but Petrov concluded that it was a false alarm and did not alert his superiors, thus sparing the world from a potentially very deadly nuclear war.
Häggström considers this to be a “close call” and writes that “it seems clear that the risk was non-negligible” (p. 3). However, Petrov had access to substantial evidence that it was a false alarm (Hoffman, 2009). First, the early-warning system showed only five launches, whereas both the U. S. and the Soviet Union had tens of thousands of nuclear missiles each. If you, as the U. S., are going to launch a nuclear strike against the Soviet Union, it is completely unreasonable to launch a mere five missiles, because the Soviet retaliatory strike would be massive and the U. S. would be turned into radioactive dust. Second, Petrov was not confident in the reliability of the new early-detection system, and third, there were no missiles visible on the ground radar system. Häggström is surely aware of this evidence, since he cites Hoffman (2009) himself.
So was the risk non-negligible? Well, perhaps this is not the best measure of the risk. Rather, we should compare it to some sort of baseline risk, since nothing is risk-free. What constitutes this background risk? Things like miscommunication, systematic error or the “deranged lunatic” narrative. The U. S. and the USSR have had nuclear weapons since the 1950s, so it is not clear that the Petrov case was a close call, or that its risk was non-negligibly larger than this cumulative background risk. There was certainly a psychologically uncomfortable risk, but what should we compare it with?
Section IV: Scientific research is the best protection against biological WMDs
Häggström raises concerns (p. 4) about a couple of recent publications dealing with dangerous human pathogens, such as the sequencing of the strain behind the 1918 pandemic influenza and research detailing the changes needed to make influenza transmissible between mammals (and thus potentially humans as well). He fears that this research can be abused by e. g. terrorists.
Earlier in the book (pp. 1-3), and indeed later as well (p. 246), Häggström suggests that it would be better to suppress dangerous research. However, this has at least five substantial negative consequences.
First, it means severely restricting the scientific community’s access to this knowledge, and thus fails to reap the benefit of international collaboration on defensive bio-weapon research and vaccine development. If disaster strikes, this is the best defense we have, and arguably the “only game in town”. To put it simply, the scientific and medical communities, backed by reasonable governments, have more money, personnel and resources than terrorists and so can work harder and faster. Suppressing information means we might not be able to make use of this resource before it is too late.
Second, real conspiracies like this are very difficult to keep secret. The more people who know about something, the more opportunities for accidental leaks, or for intentional leaks due to moral qualms or financial gain. Perhaps the best example is the mass surveillance carried out by the NSA. Despite being one of the world’s most advanced intelligence services, with substantial incentives to maintain secrecy, it was not able to keep its top-secret surveillance program from an external contractor (Edward Snowden). If the suppressed research finally leaks, we will be in an even worse position than if it had been published, due to the problem outlined in the first point.
Third, once a leak occurs, other countries are going to become extremely suspicious, because it will very much look as if the U. S. has been engaged in offensive bio-weapon research. This is banned by the Biological Weapons Convention from 1972, and perceived transgressions can have far-reaching consequences (potentially including nuclear conflict).
Fourth, there is likely going to be substantial outrage that the U. S. did not share beneficial developments with other countries, probably straining relationships with many other powerful nations even further, which would be very negative in a situation where terrorists or malicious governments have used deadly bio-weapons against the world.
Fifth and finally, there will be a need to cover up not just a single research result, but a large collection of them. This collection, if stolen by terrorists or governments, could easily be turned into something in the same general danger category as nuclear weapons. We know that information can be stolen from the U. S. government (e. g. the case of Edward Snowden), and even if the probability were low, the consequences would be very large, so the scenario is vulnerable to the standard low-probability-massive-consequences argument that Häggström implicitly makes throughout his book (but see pp. 240-244 for an attempt by Häggström to distance himself from fallacious Pascal’s-wager-like arguments).
Reminiscent of section II, there already exist papers, written by Soviet researchers many decades ago, that provide information about their offensive bio-weapon research (Alibek, 1999; Garrett, 2000; Leitenberg and Zilinskas, 2012). Their work also included the creation and testing of so-called biological bomblets, which, when detonated, disperse aerosols of dangerous human pathogens over large geographical regions. Although those papers only discuss the biology (and you need to read between the lines, just as with the recent U. S. papers, to glean any relevant knowledge), there are many declassified American designs available to read about on the Internet, such as the E120, M143, E61, and so on. Since these papers and bomblets have been known since at least the 1960s, we must ask ourselves why they have never been used by terrorists. It cannot be due to lack of knowledge, since they are discussed openly on the Internet and in popular press books. Someone like Häggström, who fear-mongers about biological WMDs, has to explain this. If access to this kind of basic and applied research is such a terrible threat, why have we seen so very few cases of biological WMDs being deployed despite easy access to the knowledge? Häggström might respond that the danger is still substantial from a longer time perspective, but then surely the best protection against such future threats is collaborative scientific research. Does Häggström have any alternative protection that he can show is better?
Häggström is fond of pointing out that “not making a decision is also a decision” (p. 249), but it should also be pointed out that there are false decisions: decisions that appear to be risk-conscious, sensitive to unlikely risks and mindful of the dangers of future technologies, but that, with more knowledge, may actually end up making us less safe from e. g. bioterrorism.
Section V: Science funding by Swedish Research Council
Häggström appears critical of the way the Swedish Research Council (VR) funds research. Their basic idea, according to Häggström, is that they have an explicit goal of looking at quality rather than practical application. Although he is willing to extend some merit to what he calls a “romantic view of knowledge” (p. 8), he ultimately considers it “mad” (p. 7).
Why does he think that? What is so wrong with funding basic research even if there is no immediate beneficial application? He produces two arguments, which are, roughly, (1) that quality is poorly defined and (2) the utilitarian argument that it is surely better to fund research that is more likely to lead to beneficial applications than any other research. Unfortunately, his arguments are flawed and based on profound misunderstandings of the VR funding criteria, the relationship between basic and applied research, and what it means to be objective.
What funding criteria does VR use? Häggström claims it is only quality, but it is more complicated than that. It turns out that “scientific quality” is just one of four criteria; the others are “novelty and originality”, “merits of applicant(s)” and “feasibility” (Swedish Research Council, 2015). So while Häggström repeatedly claims (pp. 7-8) that VR uses quality as the only measure, VR actually uses four criteria. With this in mind, the charge leveled by Häggström that the approach is “mad” and just a “matter of taste” (p. 7) seems premature. A panel of experts is eminently qualified to judge what research is novel, original and feasible; if they did not know, why call them experts? The merits of the applicant(s) can be assessed by looking at what the researcher has accomplished before: are they able to complete projects of this size and difficulty? Even though there is some degree of subjectivity in these judgements (as in all human judgements), they are not arbitrary and certainly cannot be dismissed out of hand as a “matter of taste”, as Häggström does. In fact, this is probably the most objective we can get. If Häggström disagrees, he is welcome to provide his own criteria on which funding decisions should be made. I suspect they would be even more subjective, especially since he wants to focus more on potential future risks for which knowledge is very limited. Of course, he does not produce a full-fledged alternative to the VR criteria, and certainly no evidence that his imagined criteria would be superior.
Häggström has a perfectly valid point that an excessive focus on bibliometrics is an extraordinarily bad method for the reasons he outlines (p. 7), but bibliometrics might be useful in a minor role in the evaluation of the merits of the applicants (together with many other metrics).
What about his utilitarian argument? Häggström thinks that “a bright future for a humanity” is a very important goal to move towards, so “completely ignoring this aspect of science seems like negligence bordering on insanity” (p. 8). Even if we accept utilitarianism without objection (and utilitarianism is by no means uncontroversial), there is a clear problem: a lot of applied research does not work or is not reproducible, while a lot of basic research done without any practical goals in mind has produced highly useful technical applications. For instance, over 85% of potential treatments fail in the early stages of clinical trials (Ledford, 2011), and most published research findings are likely false (Ioannidis, 2005). Conversely, space research has produced useful applications for cancer screening and MRI scans (NASA, 2011). Thus, the distinction Häggström makes between basic and applied research is, in reality, very murky.
A second problem with this argument is that VR surely does not ignore practical consequences. The point is that immediate and tangible benefits just around the corner are not required, not that they would fund any old evil research. If anyone doubts this, they should submit an application to VR asking them to fund a research project designed to find out how long people survive after decapitation, or how to best exterminate a group of people with bio-weapons. Don’t hold your breath; those projects will not be funded regardless of their perceived “quality”.
Section VI: Predictions and prior probability of near-term existential ruin
So how do we approach the risk of near-term existential ruin? The movement to which Häggström, Bostrom and Yudkowsky belong has a very favorable view of Bayesian statistics, so one idea would be to use Bayes’ theorem. However, this looks very difficult. For instance, what is the prior probability of near-term existential ruin? Since we have not experienced any existential ruin (we are still here), all predictions about the end of the world that we have actually been able to test have turned out to be completely wrong. Based on this background knowledge, the prior probability could be set to 0. However, this would be quite uncharitable and would end the conversation instantly, since no matter how good the evidence is (assuming it is not infinitely strong), a prior probability of 0 makes the posterior probability 0 as well. So if we want to move forward, the prior has to be non-zero. But should we set it to e. g. 10^-200, 10^-20 or 10%? Here, Häggström shoulders a substantial burden of evidence. First, he must come up with a way to justify his prior probability (or it will reek of subjectivity), or accept that it is extremely low (since all past predictions of near-term extinction have failed). Second, and perhaps more difficult, he must also argue that the available evidence is strong enough to overturn this low prior probability: the lower the prior, the better the evidence has to be. Häggström faces a substantial challenge.
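The arithmetic behind this point can be made concrete with the odds form of Bayes’ theorem. Here is a minimal sketch, where the priors and likelihood ratios are purely illustrative and none of the numbers are taken from the book:

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability from a prior probability and a Bayes factor
    (how many times more likely the evidence is if the hypothesis is true)."""
    if prior == 0:
        return 0.0  # a zero prior cannot be overturned by any finite evidence
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(posterior(0.0, 1e12))    # 0.0: the conversation instantly ends
print(posterior(1e-20, 1e6))   # ~1e-14: strong evidence barely moves a tiny prior
print(posterior(0.10, 1e6))    # ~1.0: the same evidence overwhelms a 10% prior
```

The middle line is the crux of the argument: with a prior as low as the track record of doomsday predictions suggests, even evidence a million times more likely under the doomsday hypothesis leaves the posterior vanishingly small.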
A common method of distracting from having to provide evidence for a position is to complain about lack of funding, or even active suppression by the scientific community. A classic example of this is psi research (Carroll, 2011). But that is not how it works: you typically start by providing some evidence, and then you use that evidence to get funding. We will examine the evidence provided by Häggström throughout his book in later parts of this article series. So far, this first part has been very critical of the claims made in the book, but the second chapter, on climate change and the myriad problems with radical geoengineering, is substantially better (but see Section IX on the flaws with meat intolerance).
Section VII: The science behind global warming and climate change
The first part of the second chapter gives an overview of the mainstream scientific consensus position on global warming and climate change, placing it in the historical context of atmospheric research while carefully refuting the most common falsehoods spread by climate denialists. A lot of space is generously devoted to key concepts, such as feedback and climate sensitivity, that are essential for understanding current concerns, without getting bogged down in details. In sum, a very well-argued section of the book.
If there is any constructive criticism to be made of this chapter, it is that it would have been useful to see (1) a short discussion of the negative consequences of a 1K direct warming from CO2 [to make people understand that even a 1K warming can have substantial negative impacts – Emil’s note 20160312], since many climate denialists like to falsely reject feedback effects as speculation, (2) a short discussion of the negative consequences of global warming that we are currently observing, to make it more tangible, and (3) a reference to the climate change and anti-denialist resource Skeptical Science, which features 170+ high-quality scientific refutations of climate denialist assertions, complete with pedagogical graphs and references to the primary scientific literature.
Section VIII: Skeptical approach to geoengineering
Häggström swiftly moves on to a skeptical look at geoengineering proposals to combat global warming. The primary proposal examined is that of cheaply spraying large quantities of sulfur dioxide into the atmosphere to cause global cooling. Häggström rightly thinks this has many severe problems. For instance, it would have to be done continuously, or we risk a massive temperature increase when it stops, and this continuity cannot be guaranteed over decades or centuries. Other problems include that it might still lead to regional climate change, ocean acidification and potentially exacerbated ozone depletion. This is Häggström at his best: skepticism, evidence and clarity. He also discusses other potential geoengineering methods, and then what he labels “solutions further outside the box”, such as making humans shorter and improving our moral character through cognitive enhancement.
Section IX: Meat intolerance
One such “solution further outside the box” that Häggström mentions is “meat intolerance”. He writes (p. 35) about a paper he read:
As a means to cutting down on meat consumption, they suggest pharmacologically induced meat intolerance, for instance by stimulating our immune system against some characteristic bovine proteins.
This is a horrible idea. Humans and cows are both mammals and thus closely related in evolutionary terms, which means there will be substantial sequence similarity between human and bovine proteins. Stimulating the human immune system against a bovine protein therefore carries a severe risk of causing an autoimmune disorder against the human variant, and potentially death in the long run. Since Häggström uses “proteins” in the plural, this could cause multiple, distinct autoimmune disorders. How far would Häggström be willing to go in order to cut meat consumption?
Perhaps Häggström would suggest that we only target proteins in cattle that do not exist in humans. However, this betrays a fundamental ignorance of evolution. Even if the exact variant existed only in cattle, it is likely that other members of the same gene family or gene superfamily exist in humans, which opens up the risk of sequence similarity large enough to trigger autoimmunity. This is because evolution typically builds the new onto the old.
What about so-called “meat allergy”? Is this not exactly what Häggström is talking about? Not quite. Meat allergy is a reaction against the carbohydrate galactose-alpha-1,3-galactose (alpha-gal), which is common to all mammals except the parvorder Catarrhini (which includes humans). Furthermore, people with this allergy can still eat other kinds of meat, e. g. from birds and fish (since they are not mammals). Symptoms of the allergy can include respiratory distress and, in some cases, anaphylactic shock and potentially death. Finally, there is some evidence that alpha-gal allergy can resolve on its own (Cuda Kroen, 2012). So again, how far would Häggström be willing to go in order to cut meat consumption?
But let us, for the sake of argument, ignore all of these problems. How does Häggström propose we test this method for inducing meat intolerance? It is not enough to test it in an animal model system, because the absence of an autoimmune reaction in the model does not mean humans would lack one. Or, for that matter, how will he convince people to undergo this “procedure”? The medical benefits are going to be close to zero and the risks substantial, so such experimentation is likely to conflict with international ethical agreements, such as the Declaration of Helsinki, which states that “The health of my patient will be my first consideration” and that “Medical research involving human subjects may only be conducted if the importance of the objective outweighs the risks and burdens to the research subjects” (World Medical Association, 2013). Thus, Häggström would face substantial ethical and regulatory problems, in addition to the medical risks and the biological problems.
Section X: Making smaller humans?
Another alleged “solution” that Häggström writes about is making humans smaller, on the grounds that shorter people supposedly have less of an appetite and would not mind being crammed into tiny cars:
If we were smaller, we would eat less, require smaller cars, etc., and would in general be less of a burden on the environment and on climate. Techniques for making us smaller need not be particularly difficult, and may, e. g., involve pre-implantation genetic profiling, or of the growth-hormone-inhibiting hormone somatostatin.
There are so many problems with this proposal that it is difficult to know where to start. Humans do not primarily eat food to match the energy requirements of the body, both because of the many social aspects of eating and because obesity has become an international health crisis. Even if shorter people ate less food, it would not automatically mean that their diet had a smaller climate impact, because they might replace some plant-based food with meat (which has a higher climate impact), and meat consumption is projected to increase as more people gain access to it (WHO, 2007; FAO, 2012).
There is also no real evidence that humans select car size based primarily on their personal height. People who drive SUVs do not do so because they are extremely tall, and people who buy a car from Smart Automobile do not do so because they are too short for a Volvo. Rather, people buy cars based on e.g. family needs and status-seeking behavior.
Häggström claims that the methods for making humans smaller “need not be particularly difficult” and then cites two very difficult methods. Pre-implantation genetic profiling is easy when it targets a single-gene disorder with high penetrance, but very difficult for a phenotype that is influenced by many, many genes and many environmental factors. This will be discussed in more detail in the section about embryo selection, but one of the problems is called “missing heritability”, whereby the sum of the contributions from each of the genes cannot account for the heritability as measured by e.g. twin studies. This suggests that the genetic influence is much, much more complex than just additive genetic variation. In addition, most hormones, like somatostatin, have more than one effect and work in concert with many other substances in the human body; it is not just a matter of injecting kids with it. Similar ethical problems as with induced meat intolerance apply in this case as well.
Häggström leaves himself one last escape hatch: do not for a moment think, he says, that he advocates these solutions; he is merely raising questions to be debated. This is a convenient way to avoid having to defend his claims (or at least his decision to give these claims a platform in his book), and it is reminiscent of the pseudoscientific debating strategy called just asking questions.
References and further reading
Alibek, K. (1999). Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World–Told from Inside by the Man Who Ran It. New York: Random House.
Carroll, T. (2011). A Short History of Psi Research. The Skeptic’s Dictionary. Accessed: 2016-02-19.
Cuda Kroen, G. (2012). Ticked Off About a Growing Allergy to Meat. Accessed: 2016-02-20.
Food and Agriculture Organization of the United Nations. (2012). World Agriculture Towards 2030/2050. Accessed: 2016-03-09.
Garrett, L. (2000). Betrayal of Trust: The Collapse of Global Public Health. Hyperion.
Hoffman, D. (2009). I Had A Funny Feeling in My Gut. Washington Post. Accessed: 2016-02-13.
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Med. 2(8). e124.
Jacoby, S. (2011). Never Say Die: The Myth and Marketing of the New Old Age. New York: Pantheon.
Kim, M. (2013). Oldest globe to depict the New World may have been discovered. Washington Post. Accessed: 2016-02-11.
Ledford, H. (2011). Translational research: 4 ways to fix the clinical trial. Nature 477, 526-528.
Leitenberg, M., Zilinskas, R. A. (2012). The Soviet Biological Weapons Program: A History. Cambridge, MA: Harvard University Press.
McCarthy, D. (2009). Here Be Dragons: How the study of animal and plant distributions revolutionized our views of life and Earth. New York: Oxford University Press.
Missinne, S. (2013). A Newly Discovered Early Sixteenth-Century Globe Engraved on an Ostrich Egg: The Earliest Surviving Globe Showing the New World. The Portolan. 87. pp. 8-24.
NASA. (2004). A Supernova Can Really Blow. Accessed: 2016-02-12.
NASA. (2011). NASA Contributes Research and Technology To the War Against Cancer. Accessed: 2016-02-19.
Peralta, E. (2011). Swedish Man Arrested For Trying To Build Nuclear Reactor In His Kitchen. National Public Radio. Accessed: 2016-02-13.
Swedish Research Council. (2016). Bedömningskriterier och betygsskala. Accessed: 2016-02-19.
Wilkinson, R. K. (2002). How Large is the U.S. S&E Workforce?. National Science Foundation. Accessed: 2016-02-12.
World Health Organization. (2007). Availability and changes in consumption of animal products. Accessed: 2016-03-09.
World Medical Association (2013). Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. Journal of the American Medical Association. 310(20). 2191–2194.
Yudkowsky, E. (2009). Three Worlds Collide. Accessed: 2016-02-12.