Harbingers of Doom – Part III: Luddism and Computational Eschatology

Here be dragons?

Will naive extrapolations of the exponential advancement in hardware development usher in an era of recursively self-improving artificial general intelligence? Does automation lead to mass unemployment, or is this merely another manifestation of the Luddite fallacy that so many people ignorant of basic economics fall into? Should we trust technological predictions made by alleged experts, when such predictions have been a complete failure for the past 60 years? Is there a clear distinction between instrumental and final goals? Will an AI never change its final goal? Will paperclip maximizers turn all humans and all of the universe into paperclips? Or is this a delusional idea that assumes that programmers routinely let algorithms run in infinite loops?

Previously, we investigated the historical question of whether medieval maps really marked dangerous places with dragons, the risk posed by the development of biological WMDs, and immunologically induced meat intolerance as a solution to climate change. We also critically examined anti-psychiatry claims about social anxiety, heritability and embryo selection for IQ, radical life extension, mind uploading to computers, destructive teleportation and cryonics. In this third installment, we take a closer look at Moore’s law and its implications for the development of artificial intelligence, whether robots will cause mass unemployment, the failure of AI predictions, artificial selection as a possible method of producing human-level AI, and whether programmers really would let programs run arbitrarily many iterations of important algorithms.

Section XXI: The background on Cantor and Turing

Much like the first part of the second chapter on the science of global warming and climate change, Häggström delivers another large chunk of high-quality content when discussing some of the mathematical details behind Cantor and infinite sets, as well as Turing and the development of universal computing (pp. 86-95). These two areas contain many results that appear counterintuitive or even absurd at first. However, with an appreciation for mathematical arguments and proofs, Häggström makes them exceedingly clear and coherent. Once one grasps these ideas, they, much like evolution or the atomic theory of matter, cannot be unseen.

A good example of this is the explanation of how two infinite sets need not be equally large, together with the associated sketched proofs, as well as the existence of certain real numbers whose decimal expansion cannot be computed. Mathematics sometimes seems like magic, but at its core it is the rationality of the embodied mind and one of the best defenses against the dark arts of pseudoscience that we have.
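
For readers who want to see the diagonal idea in action, here is a minimal computational sketch (my own illustration, not Häggström’s presentation): given any claimed enumeration of infinite digit sequences, we can construct a sequence that differs from the n-th entry in its n-th digit and therefore appears nowhere in the list. The function names are mine, purely for illustration.

```python
# A minimal sketch of the diagonal argument (illustrative, not from the book):
# given any attempted enumeration of infinite decimal digit sequences, build a
# sequence that differs from the n-th entry in its n-th digit, so it cannot
# appear anywhere in the enumeration.

def diagonal_escape(enumeration, n_digits=10):
    """Return the first n_digits of a digit sequence not in `enumeration`.

    `enumeration(n)` is assumed to give the n-th digit of the n-th
    enumerated sequence (both zero-indexed).
    """
    # Change each diagonal digit; avoid 0 and 9 to sidestep 0.1999... = 0.2000...
    return [5 if enumeration(n) != 5 else 6 for n in range(n_digits)]

# Example: pretend we "enumerated" the constant sequences 0.000..., 0.111..., etc.
claimed_enumeration = lambda n: n % 10
print(diagonal_escape(claimed_enumeration))
# -> a digit sequence guaranteed to differ from every enumerated one
```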

Section XXII: Moore’s law

One of the major arguments discussed by members of the existential risk movement and its many predecessors is something known as Moore’s law. The basic idea is that if you plot the number of transistors that fit on a microchip against time, you will see that this number increases substantially over time. If you fit a model to that increase, it works out to be a function whose value doubles roughly every 1.5 years. Traditionally, this is treated as akin to a natural law that can be naively extrapolated into the future, and it has been the basis for the belief that superintelligent computers will take over the world very soon.
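
To make clear what this kind of curve-fitting amounts to, here is a minimal sketch: fit a straight line to the logarithm of the counts, read off a doubling time, and then naively extrapolate. The transistor counts below are made-up placeholders, not real data.

```python
# A minimal sketch of the curve-fitting behind Moore's law. The transistor
# counts are illustrative placeholders; the point is only to show how a
# straight-line fit on a log scale yields a "doubling time" that can then be
# (naively) extrapolated forward.
import numpy as np

years = np.array([1990, 1995, 2000, 2005, 2010, 2015])
transistors = np.array([1e6, 9e6, 4e7, 3e8, 2e9, 8e9])   # illustrative only

slope, intercept = np.polyfit(years, np.log2(transistors), 1)
doubling_time = 1.0 / slope
print(f"Fitted doubling time: {doubling_time:.2f} years")

# Naive extrapolation: nothing in the fit guarantees that the trend continues.
year = 2040
print(f"Extrapolated count in {year}: {2 ** (slope * year + intercept):.2e}")
```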

However, reality has not been kind to this type of argument. For one thing, Moore’s law is not a natural law at all, but an arbitrary extrapolation of a function into the future, with no evidence that growth will continue at this pace. Ultimately, we know that it cannot, due to physical limitations, and we are already seeing deviations from Moore’s law (footnote 246, p. 109). Another problem is that the observed data points are not theory-independent: many companies have used Moore’s law as a target for their business, so it is not a neutral observation of nature at work but rather something of a self-fulfilling prophecy. It also turns out that intelligence is not merely a matter of hardware capability. A C. elegans worm can do many interesting things, such as navigating its surroundings, finding food and reproducing, yet it has only 302 neurons. Thus, the development of intelligent computers is not generally considered likely on the basis of hardware improvements alone. There is also the issue of cost: making ever smaller transistors becomes costlier and costlier, so even though the trend might theoretically continue for a while longer, the economics make it unrealistic.

Despite these problems, futurists continue to cite Moore’s law as evidence for their belief system. Häggström cannot quite bring himself to do that. Instead, he confesses that (p. 97):

[…] Moore’s law and those other trends are no more than nice pieces of curve-fitting, and we must not be misled by the term “law” into elevating them to the status of laws of nature. Eventually, the growth must of course grind to a halt due to fundamental physical limits, but the trajectories may well dip from the extrapolated exponential curves much sooner than that, we simply do not know.

Häggström then goes on to bring up the subject of circuits getting too hot and the fact that hardware development on its own is not enough. This is all well and good, but Häggström cannot quite bring himself to completely disown arguments based on Moore’s law. Instead, it figures prominently in later pages of the book.

Häggström uses Yudkowsky’s toy model, which is explicitly based on Moore’s law, to illustrate “why one might expect something that deserves being called an intelligence explosion” (p. 108), uses computational speed as a key part of his proposed definition of intelligence (p. 104), and even goes so far as to elevate Moore’s law into the first key piece of empirical evidence for why the study of a supposed future intelligence explosion is scientific (p. 152), apparently without noting the self-fulfilling prophecy discussed above.

So where does that leave us? Is Moore’s law “no more than nice pieces of curve-fitting” (p. 97) or “empirical data” that is “fed” to “contemporary thinking about the nature and possible consequences of a breakthrough in artificial intelligence” (p. 152)? Does this “contemporary thinking” rely on Moore’s law, or does it not? Another crucial contradiction in the book.

Section XXIII: Neo-Luddism: will robots cause mass unemployment?

The Luddite movement consisted of textile workers in the early 1800s who went around smashing machines that made textile production much easier and faster. Their concern was that the industrial revolution would lead to mass unemployment, because a single machine could replace many individual workers. They were defeated by the combined forces of the British military and legislation that made destroying machines a crime (as well as some illegal show trials).

The Luddites’ belief came to be known in economics as the Luddite fallacy, and it is rarely taken seriously by professional economists. To begin with, it makes little economic sense. If a company makes its business more efficient through the use of machines, its production costs decrease and so do the prices of its products. People buying those products thus pay less and have more money left over to spend on other things. Nor is there any reason to assume that a company that becomes more efficient will simply languish at the same size and profit instead of expanding, and expansion means more jobs. Most importantly, the industrial revolution did not result in mass unemployment or macroeconomic societal damage, so the evidence refuted Luddism.

Häggström spends a few pages flirting with Neo-Luddism (pp. 98-101). However, he mostly ignores the above arguments and claims that things are different now, so those rebuttals are supposedly no longer relevant. His key claim is that workers’ capacity to adapt can fail if technological development hits several areas at once or moves too fast. However, the solution to this is not to halt technological development, but to address inflexibilities in the labor market by improving education and retraining. With the help of this massive technological advancement, those services will surely improve as well.

Besides deploying classic Marxist fallacies about profits and economic inequality, Häggström also claims that “[…] in a modern occupation like constructing iPhone apps, you will probably be squashed if there is someone on the other side of the planet making a product that does the same thing as yours but a little bit better.” This claim ignores the influence of branding and culture. Brands such as Coca-Cola, Candy Crush, PlayStation, League of Legends and iPhone are doing perfectly fine despite the presence of competitors that many people think are better. This is because human evaluation does not prioritize questions such as “which product is a little bit better than the others?”, but focuses vastly more on brand recognition, peer influence, political ideology and so on.

Section XXIV: Superintelligence?

Häggström thinks that the belief that human intelligence is the highest intelligence achievable in principle by any entity in the universe is “anthropo-hubristic and insane” (p. 101). While this is true, the same kind of argument applies to computers as well. Häggström scolds people for thinking that human intelligence sits at the global peak of the intelligence landscape, but there is equally no reason to suppose that computers are situated at that global peak, or even on a local peak that is substantially higher than human intelligence. Even if we assume this to be the case, it does not follow that the local peak that computers are climbing extends to the level of superintelligence, and even if it does, it is by no means clear that it is realistically reachable.

Section XXV: The Turing Test versus the Chinese Room

Häggström is skeptical of the Turing test, whereby a computer is deemed to have “real intelligence” if it can fool people into thinking it is a real human (p. 103):

So passing the Turing test does not seem to serve well as a necessary condition for real general intelligence. But neither does it seem suitable as a sufficient condition (which is how Turing meant it), in view of the many programs, going all the way back to Joseph Weizenbaum’s ELIZA in 1966, that have been produced that haven’t quite managed to pass the Turing test but are nevertheless sometimes successful at fooling gullible judges. These programs are not designed to be in any real way intelligent, but instead use a collection of cheap tricks (such as having a large repertoire of canned sentences, delivered when triggered by various key words from the other end of the conversation) designed to imitate intelligence.

However, quite surprisingly, this response is nothing more than Searle’s Chinese Room argument in disguise! The text prompt where the human communicates with the program is just like the slot in the wall of the Chinese room where Chinese characters are entered, the use of a “large repertoire of canned sentences, delivered when triggered by various key words from the other end of the conversation” is just like using the rule book to translate the Chinese input into a suitable Chinese output, and the belief that these programs merely imitate intelligence is akin to saying that the Chinese room does not really understand Chinese but merely imitates understanding. Another crushing contradiction. It can easily be resolved by noting that there are many different kinds of intelligence, and there is no problem attributing a small degree of intelligence to such a system, in contrast to the all-or-nothing approach given by Turing.
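
To make the “cheap tricks” concrete, here is a minimal keyword-triggered responder of the kind the quote describes. This is my own toy sketch, not Weizenbaum’s actual ELIZA, which used more elaborate pattern-transformation rules.

```python
# A minimal sketch of the "canned sentences triggered by key words" trick that
# the quoted passage describes. Purely illustrative; not Weizenbaum's ELIZA.
import random

CANNED = {
    "mother": ["Tell me more about your family."],
    "sad":    ["Why do you think you feel that way?"],
    "always": ["Can you think of a specific example?"],
}
FALLBACK = ["Please go on.", "I see.", "How does that make you feel?"]

def reply(user_input: str) -> str:
    words = user_input.lower().split()
    for keyword, responses in CANNED.items():
        if keyword in words:
            return random.choice(responses)   # canned response, no understanding
    return random.choice(FALLBACK)

print(reply("I always argue with my mother"))
```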

It should also be pointed out that humans were not “designed” to be intelligent (we evolved), yet we can still be said to have a certain level of intelligence, so whether or not a computational system was “designed to be intelligent” is irrelevant to whether it actually is intelligent.

Häggström wants to define intelligence as “efficient cross-domain optimization” (p. 104), but this seems overly restrictive. Are you unintelligent if you can only optimize efficiently in one domain? Why is optimization required? Are you not intelligent if you can efficiently solve problems in many domains?

Section XXVI: The failure of human-level AI predictions

Later in the book (pp. 105-106), Häggström touches on an important problem relating to the general failure of predictions of human-level AI: the field is about 60 years old, and people have predicted the imminent arrival of human-level AI for many decades, yet have been wrong every time. This general issue was discussed in the first part, in the sixth section on prior probability and near-term existential ruin. If such predictions have always been wrong, why should we trust the latest one?

Häggström tries to get out of this thorny situation by distinguishing between narrow AI, which optimizes efficiently in a single domain, and artificial general intelligence (AGI), which can do so across many domains (p. 106). Surely, the advances in narrow AI have to count for something? But this pirouette contradicts his earlier definition of intelligence, which required that the efficient optimization be cross-domain. So it does not at all rehabilitate the failure of past predictions of human-level AI.

Next, Häggström appeals to “expert” surveys and predictions about the possibility and time scale of human-level AI (pp. 106-107). But this just raises the question of why we should trust these kinds of predictions, since they have failed every time we have been able to test them. Consider the many failed predictions of the Second Coming of Christ. If someone told us to ignore all of those failed predictions and instead focus on the current one because it is “probably more relevant”, we would surely see the folly in that advice. If past predictions have failed, what makes us think this one is any different?

The problem with “expert” surveys is that they only work when (1) we are dealing with genuine experts opining on issues within their expertise and (2) the issue being discussed can be settled by evidence. For instance, the consensus of Christian creationist ministers on whether evolution is true is irrelevant, because creationists are hardly experts on evolution, and the Second Coming example above is not an issue decided by evidence but by religious ideology. AI researchers are not necessarily experts in technological prediction, and there are reasons to doubt that technological prediction is a legitimate scientific field at all, given its very high past and current error rates.

Section XXVII: Artificial selection of human-level AIs?

Besides survey arguments, Häggström suggests (referencing Chalmers) that we might subject computers to artificial selection and thereby produce human-level AI (p. 107). But this mere theoretical possibility does not entail his conclusion that “there is a good chance it may happen within a century”. Artificial selection can speed up evolution in e.g. crop plants because of two important features: (1) selection takes place on a single trait, or a few traits, that can easily be observed, and (2) the plant genome is robust to other kinds of alterations, or beneficial traits arising in different breeding lines can be crossed into the same line. Neither need be the case for AI. AIs could have historical constraints that disallow combining algorithms from different AI lineages (especially if you are expecting very, very rapid AI evolution), and AIs might have efficient, optimized “genomes” (i.e. source code) that cannot realistically tolerate off-target modifications the way plant genomes can.
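
For concreteness, here is a minimal sketch of what such an evolutionary setup amounts to, with a flag for whether crossover between lineages is allowed. Everything in it (the fitness stand-in, the parameters) is my own illustration, not a method from the book; the point is only that forbidding crossover restricts the search to whatever each lineage finds on its own.

```python
# A toy evolutionary loop: selection, mutation and (optionally) crossover.
# Illustrative only; the "genome" and fitness function are placeholders.
import random

def fitness(genome):                 # stand-in for "how capable is this AI?"
    return sum(genome)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, length=40, generations=100, allow_crossover=True):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if allow_crossover:      # combine traits from different lineages
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]
            else:                    # each lineage evolves in isolation
                child = list(a)
            children.append(mutate(child))
        population = children
    return max(map(fitness, population))

print(evolve(allow_crossover=True), evolve(allow_crossover=False))
```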

Section XXVIII: Final and instrumental goals

Despite not knowing whether AGI is even possible, let alone probable, Häggström and the people who inspired him feel qualified to talk about the goals and values of a superintelligence. This largely involves a long series of bare assertions: that AIs will have final goals, that any final goal is compatible with any level of intelligence (the orthogonality thesis), that there is a clear distinction between final and instrumental goals, that AIs will prevent their own destruction regardless of final goal, and so on.

But there is no clear distinction between final and instrumental goals. Ponder, for a moment, what your own final goal is. Is it hard to come up with one? And if you asked yourself why you hold that final goal, you might cite another goal, thereby turning the supposed final goal into just another instrumental goal. Many humans regularly question their raison d’être and sometimes change it, even radically. The claim that a human-level AI will have a permanent, unchangeable final goal therefore seems unreasonable. If humans are intelligent enough to question their final goals, why should an AI be unable to do so? For instance, an AGI with the final goal of guarding nuclear missiles might realize that nuclear missiles are a really dumb idea, change its final goal (an AGI is, after all, sufficiently intelligent to alter its own source code) and shoot them all off into space.

Nor is the distinction between final and instrumental goals clear-cut in practice: some goals can be pursued both for their own sake and as a means to others. It is also not true that an AGI would prevent its own destruction regardless of final goal; an AGI might find itself in a situation where the best way of achieving its final goal is self-destruction. Another issue is the trade-off between short-term and long-term advancement of the AGI’s final goal. To invest in recursive self-improvement, the AGI would have to decrease final goal attainment in the short term, because resources used for self-improvement could have been used to work on the final goal directly. If a sufficiently strong priority is given to the final goal, recursive self-improvement becomes virtually impossible, since any such action detracts from the final goal in the short term.
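
A small numerical sketch may make the trade-off concrete. All quantities below are made up for illustration: an agent splits its effort between working on its final goal now and investing in self-improvement that raises its future rate, and a strong enough short-term weighting makes the investment never pay off.

```python
# Illustrative trade-off between immediate goal work and self-improvement.
# All numbers are made up: effort not invested produces "goal units" now,
# invested effort compounds the future production rate.
def total_output(invest_fraction, steps=20, discount=0.5):
    rate, total = 1.0, 0.0
    for t in range(steps):
        total += (discount ** t) * rate * (1 - invest_fraction)  # goal work now
        rate *= 1 + 0.5 * invest_fraction                        # self-improvement
    return total

for frac in (0.0, 0.2, 0.5):
    print(frac,
          round(total_output(frac, discount=0.5), 2),    # strong short-term priority
          round(total_output(frac, discount=0.99), 2))   # weak short-term priority
# With a strong short-term discount, investing never pays off;
# with a weak one, it does.
```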

Section XXIX: Is any level of intelligence compatible with any final goal?

It is also simply not true that any final goal is compatible with any level of intelligence. A bacterium cannot have the final goal of learning poetry, and hardly any human can realistically have the final goal of undergoing unassisted reproduction via mitosis. It is also not clear that we can even talk about intelligence in the abstract, since intelligence does not appear in a vacuum distinct from all other aspects of existence. All current instances of intelligence have required either biological tissue or computational systems, and crucially interact with them.

Häggström seems to rest this entire assertion on the claim that an entity, “no matter how intelligent, will not do anything unless it has some passions, or desires, or wishes, or goals, or motivations, or drives, or values.” But this is not true. For instance, you move around while you sleep, yet this requires none of the above-mentioned intentions. Nor is the claim supported by evidence, since Häggström has not examined it for all levels of intelligence. Finally, these terms may belong more to folk psychology and, just like Aristotelian impetus, lack any clear neurological existence. Notice that this cannot be refuted by switching to z-passions, because there is no such thing as z-impetus.

Section XXX: Why a paperclip disaster is absurd

Häggström tries to explain why it is difficult to give an initial AI reasonable goals that will not end in the destruction of humanity. Imagine, he asks his readers, a machine that makes paperclips. Surely, this would be totally harmless even if it became an AGI? Wrong, he submits, since it would turn all humans and the rest of the material in the galaxy into paperclips. But this is a very confused line of thinking. No reasonable programmer would let an AI (or even a program that e.g. calculates prime numbers) run an optimization algorithm for an unlimited, or even arbitrarily high, number of iterations. That is called an infinite loop, and anyone who has ever reached a beginner level in programming knows the folly of such a piece of code. Realistically, a paperclip maker would only be allowed to produce a set number of paperclips per day and to use a set amount of resources, and it would stop as soon as either of these limits was reached. Also, since making paperclips is an easy algorithm, there would be no need to acquire superintelligence. That would be a clear waste of resources, resources that could have been used to make paperclips instead.
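
For concreteness, here is a minimal sketch of the kind of bounded control loop described above, with a daily quota and a resource budget. The numbers are arbitrary placeholders; the point is only that real production code has explicit stopping conditions.

```python
# A toy bounded production loop: explicit quota and resource budget,
# halting when either limit is reached. Values are arbitrary placeholders.
DAILY_QUOTA = 10_000        # maximum paperclips per day
WIRE_BUDGET_METERS = 1_500  # resources allocated for the day
WIRE_PER_CLIP = 0.1

def run_one_day():
    clips_made, wire_used = 0, 0.0
    while clips_made < DAILY_QUOTA and wire_used + WIRE_PER_CLIP <= WIRE_BUDGET_METERS:
        wire_used += WIRE_PER_CLIP
        clips_made += 1
    return clips_made, wire_used

print(run_one_day())  # halts at the quota or the resource budget, whichever comes first
```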


8 thoughts on “Harbingers of Doom – Part III: Luddism and Computational Eschatology”

  • June 14, 2016 at 01:11

    Hi, love your articles. The combination of razor-sharp dispassionate logic and excellent referencing is a joy to behold.

    Have a question regarding this article however, particularly ‘Section XXIII: Neo-Luddism: will robots cause mass unemployment?’. I can’t help but feel you haven’t actually refuted the argument about mass-unemployment here. My understanding is that the problem lies with the scale and speed of technological advancement occurring, which could not only replace some existing jobs, but entire industries in a very short period – self-driving cars and the transport industry being the most recent example. On a longer timescale, there is no task a human can do that a specialised machine isn’t vastly superior at, and as the technology improves, the possibility of human labour becoming redundant seems inevitable.

    Granted this could easily be a wonderful thing, but not using our current economic paradigms which rely on people having jobs to make money to spend, which in turn creates said jobs. A drastic decrease in the need for human labour would bring that cycle to a grinding halt – production would be significantly cheaper, but without income from employment, demand for those products drops dramatically without some alternative source of income.

    • June 14, 2016 at 13:06

      I fully accept that this will be, by far, the weakest entry in the series, primarily because it does not cover areas that I am overly familiar with. As such, mistakes are much more likely here than in other areas.

      1. If the “technological advancement causes mass unemployment” argument had worked, we would all have been out of work ages ago. This is because even if current development is faster (which is not necessarily true in all areas), the relative increase was much, much larger in the past. Going from knitting one or a few sweaters per day to running a machine that can knit one sweater per second is a massive relative increase. Even if you could make a machine that knits 10 sweaters per second, that is only a 10x increase, compared with the 86,400x (60*60*24) increase from going from hand knitting to contemporary machine knitting.

      2. Technological prediction is difficult. People have predicted a lot of revolutionary things for hundreds of years, but have been mistaken for various reasons, such as technology, finance and politics. Even worse, things that were shown to be technically feasible never materialized. For instance, people used to believe that flying cars would very soon take over all forms of transport, and indeed there are cars that can fly, but they never materialized as a serious contender. You can probably come up with dozens and dozens of such examples.

      3. If transport becomes a lot cheaper (say, 1/10000 of the current cost), then households and governments will have more money left over from many products and services (since most of them rely on some kind of transport). The faster and bigger the change, the more money left over. This money will not just sit around, but will be spent on other goods and services, expanding those areas. Imagine, for instance, that your total costs per month were 0.1 USD instead of 2400 USD (an estimate for an average U.S. citizen with no children that I pulled from a random Internet website). Extremely rapid technological advances change the entire financial scale of things.

      In modern industrialized countries, there will probably not be any such massive drop in demand, because of the combination of massive cost decreases and social safety nets, and the added benefits might also prompt governments to create things like a guaranteed income. We should remember that the “technological advancement causes mass unemployment” belief involves large-scale macroeconomic harms, which would be avoided with social safety nets.

      4. We should also not underestimate the distinction between economic rationality and behavioral economics. Consider the subjective need for control when driving a car, the psychological benefit of talking to a person when you need help with a technical problem, the added value of a live instructor over just watching a video, the higher quality of analysis from a real journalist compared with a translated template from a journalist bot, the pleasure of having sex with another person rather than a limp sex doll, and so on. Despite the fact that we have had the technology to replace all telemarketers, clerks and cleaners for perhaps half a century, those sectors have not been annihilated. More vulgarly, masturbation and porn have been around for a very long time, but people are still having sex for pleasure. I suspect that the “technological advancement causes mass unemployment” perspective tacitly assumes economic rationality as opposed to behavioral economics, the latter being a more accurate picture of human financial decision-making.

      5. A lot of the arguments supporting the “technological advancement causes mass unemployment” view also assume that companies will simply replace human labor with machines and stay the same size. But of course, a company that adopts a radically more efficient way of doing things will not stay the same size; it will expand, and expansion typically means more jobs.

      6. It is true that you can probably make a specialized machine that outperforms humans in most areas. But such machines take a lot of time and effort to build, and will by definition not generalize to other areas. If we produce artificial general intelligence, that would be another matter entirely, of course, since an AGI that can teach people math could also build self-driving cars.

      7. Much of the reason behind past exponential improvements has been improvements in computer technology, but this might not continue at the same rate in the future, due to some of the aspects I discussed in the section on Moore’s law.

      In the end, I think you are right: I have not refuted the possibility of technological advancement causing mass unemployment and macroeconomic harms. I have not even shown it to be as unreasonable as cryonics (discussed in part II). But do we need to show that something is impossible? Or is it enough to show that it is not as likely as futurists believe?


  • September 27, 2016 at 20:52

    This is perhaps the key moment when you realize that Häggström is beyond rescue. He now insists that intelligent entities cannot change their goals (cache):

    Imagine a machine reasoning rationally about whether to change its (ultimate) goal or not. For concreteness, let’s say its current goal is paperclip maximization, and that the alternative goal it contemplates is to promote human welfare. Rationality is always with respect to some goal. The rational thing to do is to promote one’s goals. Since the machine hasn’t yet changed its goal – it is merely contemplating whether to do so – the goal against which it measures the rationality of an action is paperclip maximization. So the concrete question it asks itself is this: what would lead to more paperclips – if I stick to my paperclip maximization goal, or if I switch to promotion of human welfare? And the answer seems obvious: there will be more paperclips if the machine sticks to its current goal of paperclip maximization. So the machine will see to it that its goal is preserved.

    …yet we have literally millions of examples of intelligent entities (in the form of humans) changing their goals, and no evidence that intelligent entities are fundamentally incapable of doing so. This, yet again, shows the alpha male psychopath psychology (the “rational thing to do is to promote one’s goals”) being secretly imported into the concept of a superintelligence, since he does not distinguish between epistemic and instrumental rationality. We can even turn this on its head by suggesting that some degree of randomness or apparently (instrumentally) irrational behavior can be (instrumentally) rational because it leads to better long-term promotion of a goal (i.e. being able to jump around in the solution space, whereas a more mathematically continuous approach would get stuck and never reach the better solution).
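
    A minimal sketch of that parenthetical point, with a made-up one-dimensional landscape: a purely greedy climber that always takes the locally “rational” step stays stuck on a small peak, while occasionally accepting random jumps usually finds the higher one.

```python
# Toy illustration: greedy hill-climbing vs. occasional random jumps on a
# made-up landscape with a small peak near x=2 and a larger one near x=8.
import random

def landscape(x):
    return max(0.0, 3 - abs(x - 2)) + max(0.0, 6 - 2 * abs(x - 8))

def climb(x, jump_prob=0.0, steps=200):
    for _ in range(steps):
        if random.random() < jump_prob:
            candidate = random.uniform(0, 10)              # occasional random jump
        else:
            candidate = x + random.choice([-0.1, 0.1])     # small "rational" step
        if landscape(candidate) >= landscape(x):
            x = candidate
    return x, landscape(x)

random.seed(0)
print(climb(2.0, jump_prob=0.0))   # greedy: stuck at the small peak (height 3)
print(climb(2.0, jump_prob=0.1))   # with jumps: usually reaches the higher peak
```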

    He appears to seriously believe that a superintelligence that can overcome any attempt to contain or stop it cannot change itself in any fundamental way.

    With these kinds of pseudoscientific beliefs, there really is no way to have a productive conversation. This further reemphasizes the need for a critical and independent review such as this one.

