Will naive extrapolations of the exponential advancement in hardware development usher in an era of recursively self-improving artificial general intelligence? Does automation lead to mass unemployment, or is this merely another manifestation of the Luddite fallacy that so many people ignorant of basic economics fall into? Should we trust technological predictions made by alleged experts, when the predictions made by these experts for the past 60 years have been a complete failure? Is there a clear distinction between instrumental and final goals? Will an AI never change its final goal? Will paperclip maximizers turn all humans and all of the universe into paperclips? Or is this a delusional idea that assumes that programmers routinely let algorithms run infinite loops?
Previously, we investigated the historical question of whether medieval maps really had dragons indicating dangerous places, the risk of the development of biological WMDs, and immunologically induced meat intolerance as a solution to climate change. We also critically examined anti-psychiatry claims about social anxiety, heritability and embryo selection for IQ, radical life extension, mind uploading to computers, destructive teleportation and cryonics. In this third installment, we take a closer look at Moore’s law and its implications for the development of artificial intelligence, whether robots will cause mass unemployment, the failure of AI predictions, artificial selection as a possible method of producing human-level AI, and whether programmers really would let programs run an arbitrarily high number of iterations of important algorithms.
Section XXI: The background on Cantor and Turing
Much like the first part of the second chapter on the science of global warming and climate change, Häggström delivers another large chunk of high-quality content when discussing some of the mathematical details behind Cantor and infinite sets, as well as Turing and the development of universal computing (pp. 86-95). These two areas have many results that appear initially counterintuitive or absurd. However, with an appreciation for mathematical arguments and proofs, Häggström makes them exceedingly clear and coherent. Once one grasps these ideas, they, much like evolution or the atomic theory of matter, cannot be unseen.
A good example of this is the explanation of how two infinite sets need not be equally large, together with the associated sketched mathematical proofs, as well as the existence of certain real numbers whose decimal expansions cannot be computed. Mathematics sometimes seems like magic, but at its core, it is the rationality of the embodied mind and one of the best defenses against the dark arts of pseudoscience that we have.
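The heart of Cantor's diagonal argument can even be sketched in a few lines of code. The enumeration below is a hypothetical toy example (not from the book): given any enumeration of infinite 0/1 sequences, flipping the n-th bit of the n-th sequence yields a sequence that differs from every enumerated one, so no enumeration can cover all such sequences.

```python
def diagonal(seqs):
    """Cantor's diagonal construction.

    seqs: an enumeration of binary sequences, i.e. a function
    n -> (function k -> 0/1). The returned sequence flips the n-th
    bit of the n-th sequence, so it differs from every enumerated
    sequence in at least one position.
    """
    return lambda n: 1 - seqs(n)(n)

# Toy enumeration: the n-th sequence is the binary expansion of n
# (with infinite trailing zeros).
enum = lambda n: lambda k: (n >> k) & 1

d = diagonal(enum)
# By construction, d disagrees with the n-th sequence at position n:
assert all(d(n) != enum(n)(n) for n in range(1000))
```

The same one-line trick underlies the uncomputable real numbers mentioned above: diagonalizing over all computable decimal expansions yields an expansion that no program computes.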
Section XXII: Moore’s law
One of the major arguments discussed by members of the existential risk movement and its many predecessors is something known as Moore’s law. The basic idea is that if you plot the number of transistors that fit on a microchip against time, you will see that this number increases substantially over time. If you fit a model to that increase, it works out to be a function whose value doubles roughly every 1.5 years. Traditionally, this is taken to be akin to a natural law that can be naively extrapolated into the future, and it has been the basis for the perspective that superintelligent robotic computers will take over the world very soon.
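The "curve-fitting" in question can be illustrated in a few lines. The transistor counts below are rough, illustrative figures (not taken from the book), and a least-squares fit on a log scale simply recovers a doubling time on the order of two years:

```python
import math

# Rough, illustrative transistor counts for a handful of CPUs across four
# decades; the exact values do not matter for the fitting exercise itself.
data = [(1971, 2_300), (1978, 29_000), (1989, 1_200_000),
        (2000, 42_000_000), (2010, 1_170_000_000)]

years = [y for y, _ in data]
logs = [math.log2(c) for _, c in data]

# Ordinary least-squares fit of log2(count) against year; the inverse of
# the slope is the implied doubling time in years.
n = len(data)
mean_year, mean_log = sum(years) / n, sum(logs) / n
slope = (sum((y - mean_year) * (l - mean_log) for y, l in zip(years, logs))
         / sum((y - mean_year) ** 2 for y in years))

doubling_time = 1 / slope  # roughly 2 years for this data set
```

Note that nothing in the fit licenses extrapolation: swap in a different set of data points and the estimated doubling time shifts, which is exactly what one would expect from curve-fitting rather than from a law of nature.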
However, reality has not been kind to this type of argument. Moore’s law is not a natural law at all, but rather an arbitrary extrapolation of a function into the future without any evidence that growth will continue at this pace. Ultimately, we know that it cannot, due to physical limitations, and we are already seeing deviations from Moore’s law (footnote 246, p. 109). Another problem is that the observed data points are not theory-independent. Many companies have used Moore’s law as a goal for their business, so it is not a neutral observation of nature at work, but rather somewhat of a self-fulfilling prophecy. It also turns out that intelligence is not merely a matter of hardware capabilities. A C. elegans worm can do many interesting things, such as navigating its surroundings, finding food and reproducing, yet it has only 302 neurons. Thus, the development of intelligent computers is not generally considered likely on the basis of hardware improvements alone. There is also the issue of cost: making ever smaller transistors may become costlier and costlier, so even though the trend might theoretically continue for a while longer, the economics make it unrealistic.
Despite these problems, futurists continue to cite Moore’s law as evidence for their belief system. But Häggström cannot quite bring himself to do that. Instead, he confesses that (p. 97):
[…] Moore’s law and those other trends are no more than nice pieces of curve-fitting, and we must not be misled by the term “law” into elevating them to the status of laws of nature. Eventually, the growth must of course grind to a halt due to fundamental physical limits, but the trajectories may well dip from the extrapolated exponential curves much sooner than that, we simply do not know.
Häggström then goes on to bring up the subject of circuits getting too hot and the fact that hardware development on its own is not enough. This is all well and good, but Häggström cannot quite bring himself to completely disown arguments based on Moore’s law. Instead, it figures prominently in later pages of the book.
Häggström uses a toy model by Yudkowsky, which is explicitly based on Moore’s law, to illustrate “why one might expect something that deserves being called an intelligence explosion” (p. 108). He also uses computational speed as a key part of his proposed definition of intelligence (p. 104) and even goes so far as to elevate Moore’s law into the first key piece of empirical evidence for why the study of a supposed future intelligence explosion is scientific (p. 152), apparently without noting the self-fulfilling prophecy of Moore’s law discussed above.
So where does that leave us? Is Moore’s law no more than a nice piece of curve-fitting (p. 97), or is it “empirical data” that is “fed” to “contemporary thinking about the nature and possible consequences of a breakthrough in artificial intelligence” (p. 152)? Does this “contemporary thinking” rely on Moore’s law, or does it not? Another crucial contradiction in the book.
Section XXIII: Neo-Luddism: will robots cause mass unemployment?
The Luddite movement consisted of textile workers in the early 1800s who went around smashing machines that made textile production a lot easier and faster. Their concern was that the industrial revolution would lead to mass unemployment, because a single machine could replace many individual workers. They were defeated by the combined forces of the British military and legislation making it a crime to destroy machines (as well as some illegal show trials).
The beliefs of the Luddite movement came to be known in economics as the Luddite fallacy and are rarely taken seriously by professional economists. To begin with, the argument makes little economic sense. If a company makes its business more efficient through the use of machines, its production costs decrease and so does the price of its products. Thus, people buying those products pay less and have more money left over to consume other things. There is also no reason at all to assume that a company that becomes more efficient will languish at the same size and profit instead of expanding, and expanding a company means more jobs. More importantly, the industrial revolution did not result in mass unemployment and macroeconomic societal damage, so the evidence refuted Luddism.
Häggström spends a few pages flirting with Neo-Luddism (pp. 98-101). However, he mostly ignores the above arguments and claims that things are different now, so those rebuttals are supposedly no longer relevant. His key claim is that the adaptation capabilities of workers can fail if technological development hits several areas at once or goes too fast. However, the solution to this is not to cease technological development, but to address inflexibilities in the labor market by improving education and retraining. With the help of this massive technological advancement, these services will surely improve as well.
Besides deploying classic Marxist fallacies about profits and economic inequality, Häggström also claims that “[…] in a modern occupation like constructing iPhone apps, you will probably be squashed if there is someone on the other side of the planet making a product that does the same thing as yours but a little bit better.” This claim neglects the influence of branding and culture. Brands such as Coca-Cola, Candy Crush, PlayStation, League of Legends and iPhone are doing perfectly fine despite the presence of competitors that many people think are better. This is because human evaluation does not prioritize questions such as “which product is a little bit better than the others?”, but focuses vastly more on brand recognition, peer influence, political ideology and so on.
Section XXIV: Superintelligence?
Häggström thinks that the belief that human intelligence is the highest intelligence achievable in principle by any entity in the universe is “anthropo-hubristic and insane” (p. 101). While this is true, the same kind of argument applies to computers as well. Häggström scolds people for thinking that human intelligence is the global maximum peak in the intelligence landscape, yet there is no reason to suppose that computers are situated on that global maximum peak either, or even on a local peak that is substantially higher than human intelligence. Even if we assume this to be the case, it does not follow that the local intelligence peak that computers are traveling up extends to the level of superintelligence, and even if it does, it is by no means clear that it is realistically achievable.
Section XXV: The Turing Test versus the Chinese Room
Häggström is skeptical of the Turing test, whereby a computer is deemed to have “real intelligence” if it can fool people into thinking it is a real human (p. 103):
So passing the Turing test does not seem to serve well as a necessary condition for real general intelligence. But neither does it seem suitable as a sufficient condition (which is how Turing meant it), in view of the many programs, going all the way back to Joseph Weizenbaum’s ELIZA in 1966, that have been produced that haven’t quite managed to pass the Turing test but are nevertheless sometimes successful at fooling gullible judges. These programs are not designed to be in any real way intelligent, but instead use a collection of cheap tricks (such as having a large repertoire of canned sentences, delivered when triggered by various key words from the other end of the conversation) designed to imitate intelligence.
However, quite surprisingly, this response is nothing more than Searle’s Chinese Room argument in disguise! The text prompt where the human communicates with the program is just like the slot in the wall of the Chinese room where Chinese characters are entered; the usage of a “large repertoire of canned sentences, delivered when triggered by various key words from the other end of the conversation” is just like using the rule book to translate the Chinese input into a suitable Chinese output; and the belief that these programs merely imitate intelligence is akin to saying that the Chinese room does not really understand Chinese, but merely imitates it. Another crushing contradiction. This can easily be resolved by noting that there are many different kinds of intelligence, and there is no problem attributing a small level of intelligence to such a system, in contrast to the all-or-nothing approach given by Turing.
It should also be pointed out that humans were not “designed” to be intelligent (we evolved), but we can still be said to have a certain level of intelligence. So whether or not a computational system was “designed to be intelligent” is irrelevant to whether or not it actually is intelligent.
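The "collection of cheap tricks" in the quoted passage can be sketched in a handful of lines. The keywords and canned replies below are hypothetical illustrations, not taken from ELIZA itself, but the mechanism is the same: scan for a trigger word and return a pre-written sentence, with no model of the conversation at all.

```python
import random

# Hypothetical ELIZA-style responder: a keyword found in the input
# triggers one of a few canned replies; otherwise a filler is used.
CANNED = {
    "mother": ["Tell me more about your family."],
    "sad": ["Why do you feel sad?", "How long have you felt this way?"],
}
DEFAULT = ["Please go on.", "I see. Continue."]

def respond(message):
    """Return a canned reply triggered by the first matching keyword."""
    for keyword, replies in CANNED.items():
        if keyword in message.lower():
            return random.choice(replies)
    return random.choice(DEFAULT)
```

That such a trivial lookup can occasionally fool a judge says more about the judges than about the program, which is precisely why fooling people works poorly as a sufficient condition for intelligence.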
Häggström wants to define intelligence as “efficient cross-domain optimization” (p. 104), but this seems overly restrictive. Are you unintelligent if you can only efficiently optimize in one domain? Why is optimization required? Are you not intelligent if you can efficiently solve problems in many domains?
Section XXVI: The failure of human-level AI predictions
Later on in the book (pp. 105-106), Häggström touches on an important problem relating to the general failure of accurate predictions of human-level AI, namely that the field is about 60 years old and that people have predicted the imminent arrival of human-level AI for many decades, yet have been wrong every time. This general issue was discussed in the first part, under the sixth section on prior probability and near-term existential ruin. If our predictions of the end of the world have always been wrong, why should we trust this new prediction?
Häggström tries to get out of this thorny situation by distinguishing between narrow AI, which efficiently optimizes in one domain, and artificial general intelligence (AGI), which can do so across many domains (p. 106). Surely, the advances in narrow AI have to count for something? But this pirouette contradicts his earlier definition of intelligence, which required that the efficient optimization be cross-domain. So it does not at all rehabilitate the failure of past predictions of human-level AI.
Next, Häggström appeals to “expert” surveys and predictions about the possibility and time scale of human-level AI (pp. 106-107). But this just raises the question of why we should trust these kinds of predictions, since they have failed every time we have been able to test them in the past. Consider the many failed predictions of the Second Coming of Christ. If someone told us to ignore all of these failed predictions and instead focus on the current predictions because they are “probably more relevant”, we would surely realize the folly in that advice. If past predictions have failed, what makes us think this prediction is any different?
The problem with “expert” surveys is that they only work when (1) we are dealing with genuine experts opining on issues within their expertise and (2) the issue they are discussing is based on evidence. For instance, the consensus of Christian creationist ministers on whether evolution is true is irrelevant because creationists are hardly experts on evolution, and the Second Coming example above is not an issue that has anything to do with evidence, but is based on religious ideology. AI researchers are not necessarily experts in technological prediction, and there are reasons to doubt that technological prediction is a legitimate scientific field at all, given the very high past and current error rates.
Section XXVII: Artificial selection of human-level AIs?
Besides survey arguments, Häggström suggests (referencing Chalmers) that we might subject computers to artificial selection and thereby produce human-level AI (p. 107). But this mere theoretical possibility does not at all entail his conclusion that “there is a good chance it may happen within a century”. Artificial selection can speed up natural selection in e.g. crop plants for two important reasons: (1) the selection takes place on a single trait, or perhaps a few traits, that can easily be observed, and (2) the plant genome is robust to other kinds of alterations, or beneficial traits arising in different breeding lines can be crossed into the same line. This need not at all be the case for AI. AIs could have historical constraints that disallow the combination of algorithms from different AI lines (especially if you are expecting very, very rapid AI evolution), and AIs might have efficient and optimized “genomes” (i.e. source code) that cannot realistically tolerate off-target modifications the way plant genomes can.
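The contrast in point (1) can be made concrete with a minimal genetic-algorithm sketch (a toy illustration, not anything proposed in the book): selection works smoothly here precisely because fitness is a single, trivially measurable trait, namely the number of 1-bits in a bit-string "genome".

```python
import random

# Toy artificial selection: evolve bit-string "genomes" toward all-ones.
# Fitness is one easily observed trait -- the count of 1-bits -- which is
# the situation the crop-plant analogy assumes. Nothing here generalizes
# to selecting for a hard-to-measure trait such as general intelligence.
random.seed(0)
GENOME_LEN, POP, GENERATIONS = 32, 50, 60

def fitness(genome):
    return sum(genome)  # the single selected trait

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]           # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(GENOME_LEN)    # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(GENOME_LEN)      # single point mutation
        child[i] ^= 1
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
```

The whole procedure hinges on `fitness` being cheap and unambiguous to evaluate; replace it with "behaves intelligently across many domains" and both the measurement and the crossover step become exactly the open problems described above.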
Section XXVIII: Final and instrumental goals
Despite not knowing whether AGI is even possible, let alone probable, Häggström and the people who inspired him feel that they are qualified to talk about the goals and values of a superintelligence. This largely involves a long series of bare assertions: that AIs will have final goals, that any final goal is compatible with any level of intelligence (the orthogonality thesis), that there is a clear distinction between final and instrumental goals, that AIs will prevent their own destruction regardless of final goal, and so on.
But there is no clear distinction between final and instrumental goals. Ponder, for a moment, what your own final goal is. Is it hard to come up with one? And if you were to ask yourself why you hold that final goal, you might cite another goal, thereby transforming this supposed final goal into just another instrumental goal. A lot of humans regularly question their raison d’être and sometimes change it, even radically. The claim that a human-level AI will have a permanent, unchangeable final goal thus seems unreasonable. If humans are intelligent enough to question their final goals, why should an AI be unable to do so? For instance, an AGI with the final goal of guarding nuclear missiles might realize that nuclear missiles are a really dumb idea, change its final goal (an AGI is, of course, sufficiently intelligent to alter its own source code) and shoot them all off into space.
Nor is the distinction between final and instrumental goals clear-cut: some goals can be pursued both for their own sake and as means to other goals. It is also not true that an AGI would prevent its own destruction regardless of final goal. For instance, an AGI might find itself in a situation where the best method of achieving its final goal is self-destruction. Another issue is the trade-off between short-term and long-term advancement of the AGI’s final goal. To invest in recursive self-improvement, the AGI would have to decrease final goal attainment in the short term, because some of the resources used to self-improve could have been used to work on the final goal. If a sufficiently strong priority is given to the final goal, recursive self-improvement would be virtually impossible, since any such action detracts from the final goal in the short term.
Section XXIX: Is any level of intelligence compatible with any final goal?
It is also not true at all that any final goal is compatible with any level of intelligence. A bacterium cannot have the final goal of learning poetry, and hardly any human has the realistic final goal of undergoing unassisted reproduction via mitosis. It is also not clear that we can even talk about intelligence in the absence of other factors, as intelligence does not appear in a vacuum distinct from all other aspects of existence. For instance, all current instances of intelligence have required either biological tissue or computational systems, and crucially interact with these.
Häggström seems to rest this entire assertion on the claim that an entity, “no matter how intelligent, will not do anything unless it has some passions, or desires, or wishes, or goals, or motivations, or drives, or values.” But this is not true. For instance, you move around when you sleep, yet this does not require any of the above-mentioned intentions. It is also not supported by evidence, since Häggström has not examined this for all levels of intelligence. Finally, these terms might be more related to folk psychology and, just like Aristotelian impetus, lack any clear neurological existence. Notice that this cannot be refuted by switching to z-passions, because there is no such thing as z-impetus.
Section XXX: Why a paperclip disaster is absurd
Häggström tries to explain why it is difficult to give an initial AI reasonable goals that will not end in the destruction of humanity. Imagine, he asks his readers, a machine that makes paperclips. Surely, this would be totally harmless even if it became an AGI? Wrong, he submits, since it would turn all humans and the rest of the material in the galaxy into paperclips. But this is a very confused line of thinking. No reasonable programmer would let an AI (or even a program that e.g. calculates prime numbers) run an optimization algorithm for an unlimited (or even an arbitrarily high) number of iterations. That is called an infinite loop, and anyone who has ever reached a beginner level in programming knows the folly of such a piece of code. Realistically, a paperclip maker would only be allowed to produce a set number of paperclips per day and to consume a set amount of resources, resting as soon as either of these two limits was reached. Also, since making paperclips is an easy algorithm, there would be no need to acquire superintelligence. That would be a clear waste of resources, resources that could be used to make paperclips instead.
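The bounded control loop just described can be sketched in a few lines. The names and numbers (`DAILY_QUOTA`, `WIRE_PER_CLIP`, and so on) are illustrative assumptions, not anything from the book; the point is simply that production halts when either limit is reached, rather than iterating without bound.

```python
# Sketch of a bounded paperclip maker: production stops as soon as the
# daily quota is met or the resource budget runs out. The constants are
# arbitrary, illustrative values.
DAILY_QUOTA = 1_000   # maximum paperclips per day
WIRE_PER_CLIP = 2     # wire consumed per paperclip (arbitrary units)

def run_day(wire_stock):
    """Produce paperclips until the quota is met or the wire runs out.

    Returns (paperclips produced, wire remaining).
    """
    produced = 0
    while produced < DAILY_QUOTA and wire_stock >= WIRE_PER_CLIP:
        wire_stock -= WIRE_PER_CLIP
        produced += 1
    return produced, wire_stock

# Plenty of wire: the quota is the binding limit.
print(run_day(10_000))  # (1000, 8000)
# Scarce wire: the resource budget is the binding limit.
print(run_day(500))     # (250, 0)
```

The loop terminates by construction in both cases, which is the whole point: an unbounded `while True:` optimizer is a bug any beginner is taught to avoid, not a default design.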