
Harbingers of Doom – Part VIII: Existential Risk and Pascal’s Wager

Here Be Dragons

Can we neglect issues such as global warming because most of the negative consequences occur in the future? Are abortion and masturbation worse than genocide because they prevent the future existence of billions of people? Can we combine exceedingly low or unknown probabilities with extremely negative outcomes to argue that just about anything should be made into a global research priority? Are values something immaterial or supernatural, or merely facts about the human brain and the human condition? Is it possible to make moral arguments that are based on false empirical premises or contain logical fallacies? Should we ban certain forms of space research? What about artificial intelligence? Is existential risk as a global priority a form of Pascal’s Wager, and if so, how?

Previously, we have explored and exposed bad arguments about bioweapons, destructive teleportation, psychiatry, statistical significance, atomically precise manufacturing, nanobots, cryonics, philosophy of science, uploading, migrating into black holes, doomsday scenarios, large energy-absorbing spheres around stars that kill off almost all primary producers, and many more.

Although Part VIII treats the last chapter of the book, it will not be the last installment of the series. The two remaining installments will investigate to what extent the futurist view expressed by Häggström is a form of pseudoscience (Part IX) and sum up and conclude the series (Part X).

Section LXXI: A minimalist approach to moral reasoning

For many people, morality (or reasonable human behavior) is a sticky issue. This is likely because the area has been corrupted by religion, politics and the idle speculations of academic philosophy to such a degree that it is almost impossible to wade through all the bullshit people have claimed about morality over the past several thousand years. In order to combat these distractions, let us make a very minimalist case for why it is possible to discuss reasonable human behavior and why some arguments about reasonable human behavior are better than others.

Are discussions about reasonable human behavior completely arbitrary and meaningless, in much the same way as new age woo or postmodernist nonsense where words are strung after each other but have no intellectual content whatsoever? This is clearly false, since there exists a large amount of research in this area and because it is possible to make cogent arguments for and against a lot of the metaethical positions. This defeats the position which holds that discussions about reasonable human behavior are vacuous gibberish.

Are some moral arguments worse than others? If so, it is possible to evaluate moral arguments with respect to reason and evidence. More generally, it is possible to make errors in moral arguments by relying on a false empirical premise or committing a logical fallacy, and these same errors can be made in the construction of metaethical positions (Carrier, 2005). This defeats the moral relativist position, because some moral arguments are simply erroneous, since scientific facts and logical fallacies do not depend on culture. After all, it cannot be the case that the world is flat in Europe, but not flat in North America.

Giving a negative answer to the first question and an affirmative answer to the second is all that is needed to establish that discussions about reasonable human behavior have a minimal degree of realism. Questions like what metaethical paradigm is most reasonable, what to do in specific situations, whether morality is more like logic or biology, what the correct answers to traditional moral conundrums are, and so on, can be interesting to ponder, but they are irrelevant for establishing this minimalist position.

Section LXXII: Hume probably never believed in the is/ought dichotomy

It is commonly believed that Hume proved that you can never derive an ought from an is, which is typically taken to mean either that facts are irrelevant for reasonable human behavior or that discussions about reasonable human behavior (in contrast to other discussions about the world) must be a form of logic instead of, say, biology or psychology. This is often accompanied by the following quote from Hume (1739):

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

However, Hume did not prove any such thing. He merely notes that a lot of his contemporaries do not sufficiently justify their ought claims and that he himself cannot conceive how it can be done in those situations. He suggests that authors should justify their ought statements properly and that this will destroy all the vulgar moral systems (not all systems, just the vulgar ones). But merely asserting that he does not understand how to go from facts to reasonable human behavior does not mean that this is impossible, just as our inability to imagine how this or that biological structure evolved does not mean it was specially created in its present form. This argument does not prove that you can derive an ought from an is (and this entire business is fundamentally misguided, as we will see below), but it disproves the claim that Hume proved the is/ought dichotomy. As we saw above, it is not necessary to derive oughts from is-statements to defend the minimalist position.

In fact, he probably did not even believe in the is/ought dichotomy as he frequently made inferences to ought claims from facts about the world (Cohon, 2010).

Arguments about the is/ought dichotomy are also misguided in a larger perspective. This is because they assume that reasonable human behavior is a form of logic where conclusions need to follow logically from premises. However, this is unlikely to be the case, because the scientific fields most related to human behavior, such as biology, psychology and sociology, do not work that way. Instead, they argue for certain conclusions based on reason and evidence, and do not depend on whether those conclusions follow deductively from true premises or not.

Indeed, demanding that science (or discussions about reasonable human behavior for that matter) must be purely deductive means that it cannot reach any new facts about the world, since all the truth of the conclusion can be found in the premises. If you know that “Socrates is a man” and that “all men are mortal”, adding the statement that “Socrates is mortal” adds no additional true facts, since the two former imply the latter.

Section LXXIII: Values reduce to facts about the human brain and its context

Häggström interprets the supposed is/ought dichotomy as the need to introduce values into the argument in order to logically derive an ought from an is. Combining an is with a value will, according to Häggström, ensure that we can logically arrive at an ought statement (p. 227). However, he confuses values with opinions, which are two different things. Clearly, it is possible to have opinions about things that are not values at all, such as the mass of some predicted but not yet discovered elementary particle. When reasoning about values, it is possible to commit logical fallacies or appeal to false empirical premises, so some values are not mere opinions without truth value, but positions based on flawed arguments.

But the larger problem with drawing such a stark dichotomy between facts and values is that values must reduce to facts about the human brain, the way the human body works and the surrounding context. This is because there is no reason to suppose that there is some mysterious or magical world divorced from the material world that harbors these values. If values are some feature of our human minds, they must reduce to facts about the human brain, because we know that the human mind is just a function of the way the brain looks and works (e. g. Bear, Connors, and Paradiso, 2007, among many others). Those who believe that values are some irreducible or immaterial entity with supernatural powers must reject the scientific consensus position and a massive amount of scientific evidence showing that the mind is what the brain does.

It is not just that values reduce to facts; endeavors that we typically consider to be completely about (non-value) facts are impossible without values.

Section LXXIV: Science presupposes certain values, and that is entirely unproblematic

Science is one of the best methods we have for reaching true conclusions about nature and understanding the world around us. However, this activity presupposes a number of values. It is impossible to do science (or mathematics, for that matter) without valuing truth over falsehood, accuracy over arbitrariness, clarity over obfuscation, consistency over contradiction, universalism over solipsism, evidence over speculation and so on. However, these values are entirely unproblematic and there are many good reasons for having them. In many cases, rejecting these values is either a non-starter or leads to direct contradiction or pure meaninglessness.

Embracing these values in order to do science is yet another powerful blow to the supposed radical distinction between facts and values. An enterprise that is based on some values is not automatically less factual and certainly does not lead to relativism with respect to scientific facts or turn evidence into opinion.

Section LXXV: Metaethical moral relativists cannot reject normative moral relativism

Häggström identifies himself as a tentative metaethical moral relativist, but rejects moral relativism as a normative model (pp. 227-228, footnote 495). He thinks he can embrace the idea that morality is just opinion (p. 227), while opposing behavior (such as female genital mutilation or METI) that goes against his own moral standards (p. 228). This makes as much sense as saying that there is no truth about which color is prettiest, but that opposing people who like the color blue is perfectly legitimate. This is a perfect example of how primitive religious beliefs, political ideologies and the rejection of rationality by postmodernist academia have poisoned the discourse on what is and what is not reasonable human behavior. Häggström wants to eat his cake and keep it too: he cannot quite bring himself to accept the notion that all opinions about human behavior are equally valid.

In reality, it is the proponents of things like female genital mutilation who have the burden of proof to demonstrate that such behavior is reasonable, and all arguments that have been put forward have either been based on false empirical premises or contained logical fallacies. Virtually all cases of racism are based on cognitive biases or abuse of scientific research. These issues are not merely a matter of different opinions, but a matter of facts. This does not mean that reasonable human behavior is dictated by some supernatural moral truth, but merely that we can investigate claims about human behavior with science and reason, just like we can evaluate claims about other things.

It is commonly believed that moral relativism can help combat bigotry against others. However, it keeps us from opposing truly devastating practices in other cultures, keeps us from praising valuable cultural elements outside our own culture and also prevents us from harshly opposing dangerous cultural elements within our own culture. According to the moral reformer’s dilemma, any beneficial moral reform is mistaken, since the moral reformer goes against the prevailing morality held by the surrounding culture and has nothing to offer but opinions. However, from the above discussion of female genital mutilation and racism, we know this is erroneous and that moral reformers are not automatically mistaken.

How should we value the well-being of current generations of humans compared with future generations? Häggström outlines two radical approaches to this question: discounting and existential risk as a global priority.

Section LXXVI: Discounting

Discounting is basically the idea that something is worth more today than in the future, and this can be mathematically modeled in some detail. A crucial parameter is r, defined as how much something loses in value over one year. If r = 5%, it means that something is only worth 95% a year from now compared with what it is worth today. The crux of these mathematical models is that they radically undervalue the well-being of future generations. This can be artificially made even worse by choosing specific mathematical models that amplify the value loss over time (pp. 231-237).
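To make the arithmetic concrete, here is a minimal sketch of the exponential discounting described above. The function name and the chosen time horizons are my own illustrative choices, not taken from the book; the point to notice is how quickly the value assigned to future well-being approaches zero.

```python
# A minimal sketch of exponential discounting. The phrasing above ("only worth 95%
# a year from now") corresponds to (1 - r)^t; standard treatments often use
# 1/(1 + r)^t instead, which is nearly identical for small r.

def discounted_value(value_today: float, r: float, years: int) -> float:
    """Value assigned today to something worth `value_today` occurring `years` from now."""
    return value_today * (1 - r) ** years

for years in (1, 10, 100, 1000):
    print(years, discounted_value(1.0, 0.05, years))
# 1 -> 0.95, 10 -> ~0.60, 100 -> ~0.006, 1000 -> ~5e-23
# With r = 5%, the well-being of people a millennium from now counts for essentially
# nothing, which is why these models radically undervalue future generations.
```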

A great many assumptions go into using discounting to divine how much we should be concerned about climate change or artificial general intelligence. It assumes utilitarianism, that GDP is supremely important and that it accurately measures well-being, certain beliefs about the diminishing returns of money, what kinds of growth are desirable and so on, which makes it a suspicious way to handle things outside simplistic economic considerations. Häggström takes a largely skeptical approach to discounting (and becomes increasingly skeptical the more radical the discounting becomes), but ultimately thinks it is too hard to get rid of in practice (pp. 236-237).

It should also be pointed out that a lot of arguments about discounting fail to consider the fact that e. g. climate change has a discernible negative effect right now, which is a good reason to act on it.

Section LXXVII: Existential risk and the abuse of rational choice theory

In contrast to discounting, which radically undervalues future humans, Häggström also describes the opposite scenario, namely a system that radically overvalues future humans in the context of existential risk (pp. 237-240). These kinds of models are based on taking rational choice theory to the extreme by combining extremely low probabilities with extremely high outcomes. In other words, as long as you postulate a bad enough outcome, you can overturn any argument based on that outcome being extremely unlikely, unsupported by evidence or directly contradicted by evidence. You think it is batshit to blow up a statue because you think it might become a demon that destroys all of humanity? Well, if the consequence is sufficiently bad, it does not matter that this is a batshit idea, because the expected number of human lives saved (the probability multiplied by the outcome) would favor the action.

Bostrom, according to the description made by Häggström, thinks that existential risk from superintelligent robots should be a global priority. This is because such risks, unlike many others, could exterminate all of humanity, which means that future humans would never exist. If we imagine that humans will live on this planet for billions of years more and possibly expand into space, not making existential risk a global priority means sacrificing at least 10^16 people. Bostrom further argues that reducing the risk of an existential catastrophe where all humans die by just a millionth of one percentage point is at least 100 times more important than the lives of one million people currently alive today (p. 238). This has led some internet writers to state that “cancer is a rounding error”.
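The arithmetic behind the “100 times more important” claim is easy to reconstruct. This is a back-of-the-envelope sketch; the variable names are mine, and the 10^16 figure is the one cited above.

```python
# Rough reconstruction of the expected-value arithmetic summarized above.
future_lives = 1e16               # assumed minimum number of future humans
risk_reduction = 1e-6 * 1e-2      # "a millionth of one percentage point" = 1e-8

expected_lives_saved = future_lives * risk_reduction
print(expected_lives_saved)       # 1e8, i.e. 100 million expected (statistical) lives

# Compared with saving one million actually existing people:
print(expected_lives_saved / 1e6) # 100.0 -> the "at least 100 times more important" claim
```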

If this idea sounds batshit, that is because it is, in fact, batshit. This kind of reasoning would lead us to view abortion, masturbation or sex with a condom as worse than genocide, because they would eliminate billions of future humans. Ironically, prioritizing global existential risk over cancer would mean that some people with cancer would die before reproducing again or at all, which would also eliminate billions of future humans. In reality, this is an obvious abuse of rational choice theory. It takes a framework (where you multiply probabilities with outcomes to decide between two actions) that might work well in everyday scenarios to an extreme where it has never been validated or tested, and simply expects it to work perfectly. Häggström makes no objections to this framework in this section (but does so in later sections), apart from noting that animal suffering might be a lot worse in such an intergalactic future of humans (pp. 239-240).

Section LXXVIII: Häggström does advocate a form of Pascal’s Wager

Häggström tries to translate the Bostrom scenario into a somewhat realistic scenario where the U. S. president has to decide whether to nuke Germany, killing everyone in that country, because there is a 1 in 1 million risk that a lunatic there can successfully trigger a doomsday weapon (pp. 240-241). For Bostrom, the decision to bomb Germany is trivial, as the expected utility calculation favors it. Häggström disagrees strongly, choosing not to bomb Germany. However, he does not reject expected utility (or, more precisely, recognize that there are limits to where it can be applied), but merely states that there are some things you cannot do regardless of the outcome of an expected utility calculation. He still holds firm to using it as a pedagogical device and goes over some other arguments that reach a similar conclusion (p. 242).
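To see why the framework delivers Bostrom’s verdict, consider a minimal sketch of the naive expected utility comparison. The population figures are my own illustrative round numbers, and I assume, as in the previous section, that the 10^16 future lives are counted, since that is what makes the calculation come out in favor of the strike.

```python
p_doomsday = 1e-6        # stipulated risk that the lunatic triggers the weapon
germany = 8e7            # ~80 million people killed with certainty by the strike
all_future_lives = 1e16  # lives lost (current and future) if the weapon goes off

expected_deaths_if_strike = germany                           # 8e7
expected_deaths_if_no_strike = p_doomsday * all_future_lives  # 1e10

print(expected_deaths_if_strike < expected_deaths_if_no_strike)  # True -> "nuke Germany"
# Within this framework the strike "wins" by more than two orders of magnitude,
# which is exactly the kind of conclusion Häggström (rightly) refuses to accept.
```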

Seasoned veterans of scientific skepticism will recognize the low probability / severe outcome argument as a version of Pascal’s Wager, whereby belief in the existence of Yahweh is said to be reasonable no matter how unlikely his existence, because the consequence of belief is infinite reward (in heaven) and the consequence of disbelief is infinite punishment (in hell). Pascal’s Wager is a profoundly irrational argument, because it assumes that the probability of the existence of Yahweh is greater than 0, assumes that Yahweh rewards belief held out of selfishness over honest doubt, ignores the risk of upsetting other potential deities, and so on. Häggström also realizes this (p. 242), but flatly denies that he is advocating Pascal’s Wager. In fact, this denial is so important for Häggström that he has made it the title of the entire section 10.4.
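The formal structure of the wager is worth spelling out, because it shows why haggling over the exact probability misses the point. This is an illustrative sketch; the numbers are mine, not Pascal’s or Häggström’s.

```python
inf = float("inf")

def expected_value(p_god: float, payoff_if_right: float, cost_if_wrong: float) -> float:
    """Expected value of believing, given a probability p_god that the deity exists."""
    return p_god * payoff_if_right - (1 - p_god) * cost_if_wrong

# An infinite payoff swamps any nonzero probability, however tiny or ill-defined:
for p in (0.5, 1e-6, 1e-100):
    print(p, expected_value(p, inf, 1.0))  # always inf
# The existential risk argument mirrors this move with "merely" astronomical payoffs.
```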

So despite the fact that it is obvious that Häggström advocates an argument that is analogous to Pascal’s Wager (because it pairs extremely small probabilities with enormously large outcomes, thereby abusing expected utility), he denies it. Why? After a convoluted discussion spanning two pages (pp. 243-244), he arrives at the conclusion that he is not advocating Pascal’s Wager because one objection (the argument from other deities) that works against Pascal’s Wager superficially does not seem to work against the existential risk as global priority argument. This is first of all false, because we can invent an arbitrarily large number of such scenarios (such as the German lunatic), and an analogy does not have to be the same in precisely all aspects. Thus, just because one objection works against Pascal’s Wager and not against the existential risk as a global priority argument does not mean that the latter is not an instance of the general type of argument embodied by the former.

His last defense is to argue that existential risk does not need to have a low probability just because the probability is unknown. But Häggström betrays his own ignorance, because Pascal specifically stated that it did not matter whether the probability was low or unknown. In fact, he mostly argued that it was unknown (Hájek, 2012):

“God is, or He is not.” But to which side shall we incline? Reason can decide nothing here. There is an infinite chaos which separated us. A game is being played at the extremity of this infinite distance where heads or tails will turn up… Which will you choose then? Let us see. Since you must choose, let us see which interests you least. You have two things to lose, the true and the good; and two things to stake, your reason and your will, your knowledge and your happiness; and your nature has two things to shun, error and misery. Your reason is no more shocked in choosing one rather than the other, since you must of necessity choose… But your happiness? Let us weigh the gain and the loss in wagering that God is… If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation that He is.

Especially note the sentences “Reason can decide nothing here” and “there is an infinite chaos which separated us”. Thus, the analogy holds and the existential risk argument is analogous to Pascal’s Wager, and since we reject Pascal’s Wager (and Häggström does too), he must also reject the existential risk as a global priority argument.

Section LXXIX: How should we rationally handle risk?

So how can we rationally handle risk? Expected utility is attractive since it attempts to eliminate cognitive biases, but as we have seen, an overemphasis on expected utility and rational choice theory can lead to them being applied in areas where they are not valid or where they neglect important values. Where does the burden of evidence lie? On the person claiming risk, or on the person claiming that there is no severe risk or that no severe risk has been demonstrated? Overemphasizing precaution is dangerous since it can have unintended consequences, such as increased reliance on carbon from applying precaution to nuclear power, or people dying from vitamin A deficiency because people are afraid of GMOs (Ropeik, 2014). Overemphasizing technological advances can create risks, and it makes sense to regulate e. g. medication to ensure that people are not severely harmed by it. Thus, there has to be some reasonable balance between precaution and development, and this has to be based on reason and evidence, not wild speculation about existential risks, pseudoscience and fearmongering.

Section LXXX: Where do we go from here?

Häggström finishes the book (pp. 245-249) with a couple of recommendations.

The first (p. 245) is to devote substantially more science funding to future studies. Yet he presents very little in the way of actual arguments for why this is necessary. The various flawed and unscientific arguments presented in the book so far will not do, and the existential risk as a global priority argument will not do either. Science funding should not be automatic, and a research issue needs some credibility to get funding.

The second recommendation (p. 246) is a total ban on METI research. However, he has not taken any of the problems I discussed in Part VII into account in his reasoning, so this recommendation is also highly problematic.

In his third recommendation (p. 247), Häggström acknowledges that it is unreasonable and probably not even desirable to put a blanket ban on AI research. Instead, he wants to reorient AI research toward making AI safe rather than making it more capable. However, it is unclear how that could work given the grand capabilities of AIs promoted by Häggström and Bostrom, such as escaping any box, brainwashing any human and so on.

The fourth recommendation (p. 248) is to invest in clean energy like solar panels and nuclear fusion, because it has no great downside. But using the same existential risk argument, we could postulate scenarios involving clean energy with extremely low or unknown probabilities but extremely large and negative outcomes. For instance, clean energy might be used by evil AIs to annihilate humans faster, since it would give them an energy source independent of humans. I do not think this is a relevant concern, so I agree that it makes sense to invest in clean energy.

Häggström finishes the book by pointing out that “not making a decision is also a decision” (p. 249), to which I retort that making the wrong decision (especially a decision based on pseudoscientific nonsense and trivially flawed claims) could be worse.

References and further reading

Bear, M. F., Connors, B. W., Paradiso, M. A. (2007). Neuroscience: Exploring the Brain. Baltimore: Lippincott, Williams & Wilkins.

Carrier, R. (2005). Sense and Goodness Without God: A Defense of Metaphysical Naturalism. Bloomington: AuthorHouse.

Cohon, R. (2010). Hume’s Moral Philosophy. Stanford Encyclopedia of Philosophy. Accessed: 2016-10-02.

Hájek, A. (2012). Pascal’s Wager. Stanford Encyclopedia of Philosophy. Accessed: 2016-10-02.

Hume, D. (1739). A Treatise of Human Nature. London: John Noon.

Ropeik, D. (2014). Golden Rice Opponents Should Be Held Accountable for Health Problems Linked to Vitamin A Deficiency. Scientific American. Accessed: 2016-10-02.
