Harbingers of Doom – Part IV: Nanobots and Atomic-Scale Manufacturing

Here be dragons?

Will 3D printing make gun regulation impossible because people can print their own metal guns? Will you never shop for food again, but merely download a 3D printing plan for sandwiches and cake? Will you be able to put together any arbitrary substance using atomically precise manufacturing? Is it feasible to use mechanical tools to place atoms one by one onto a growing substance? Or does this ignore the massive number of atoms required just to make a few grams, and the fact that the nanoscale is strongly impacted by thermal noise and intermolecular forces? Are chemical reactions as easy as putting two atoms together, or does the system require more? Is the ribosome a case of atomically precise manufacturing? Or is it a messy biological enzyme system that does not involve atom-by-atom assembly, contributes to a stunning error rate of perhaps 30% for protein synthesis and folding, and is nothing like a machine? Does being a cellular organelle limit the capacity and range of the products that ribosomes and ribosome-like structures can produce? Perhaps more importantly, will self-replicating nanobots consume all life on earth?

Previously, we have debunked fanciful stories about dragons on medieval maps, fearmongering about molecular biology, anti-psychiatry attacks on social anxiety and medications, heritability and embryo selection of IQ, radical life extension, the denial of mind-brain physicalism, destructive teleportation, mind uploading, cryonics and wild speculations about technology-induced mass unemployment and superintelligent artificial general intelligence.

In this fourth installment, we take a closer look at the promises and perils of 3D printing, the alleged feasibility of atomically precise manufacturing, the biological details of the ribosome and protein synthesis, as well as the supposed future existence of self-replicating nanobots and whether or not they are likely to kill all life on earth.

Section XXXI: Why bother 3D printing stuff that can be gotten more easily in other ways?

Häggström conjures up a wide range of wonders from the emerging technology of 3D printers (p. 128), such as a “sandwich, a pair of sneakers or a kitchen table” or even cars. But Häggström ignores issues such as shoe fitting and the social aspects of preparing and consuming food. It is also unclear how e.g. a submarine sandwich would be made in a 3D printer, since it contains a wide range of materials that are not easily constructed in the 3D printing paradigm. For instance, how do you 3D print slices of onion or the appropriate texture of chicken? These technical difficulties might very well be solved in the future. However, there has to be an argument for it, not merely a naive appeal to future technology. This way of thinking was criticized by Häggström himself in the section on geoengineering discussed in the first part of this article series.

To drive this point home, consider the journalist Helen Ubiñas, who managed to buy an AR-15 semiautomatic rifle in Philadelphia (a weapon similar to the one used in the Orlando mass shooting) in just 7 minutes (Ubiñas, 2016). If you can legally buy a semiautomatic rifle in 7 minutes at the store, why bother spending a ton of money on a 3D printer and materials and printing one at home? Even if we assume a considerable drop in the cost of 3D printers, the ease with which one can obtain a weapon is startling. This is not the case in other countries, of course, but if guns can be successfully regulated, then so can 3D printers.

Häggström also seems concerned about intellectual property rights (p. 128), but despite the advances in file sharing and free streaming services, movies and television series are still being produced at a large scale. People used to predict that the VHS player would be the doom of the movie industry, since people could just record movies from the television. Similar sentiments were expressed about the CD, portable media players, illegal file-sharing, online streaming and so on. None of these fears came true. So why should we be concerned now?

Section XXXII: 3D printed guns do not have to be made entirely out of metal

Häggström makes a big deal out of the fact that some people have managed to produce guns or weapon parts out of metal with 3D printers, such as the M1911 and the lower receivers of certain rifles (p. 128). But 3D printed weapons do not have to be made entirely from metal. They can also be made from plastics. For instance, the gun known as The Liberator is made almost entirely from ABS plastic, with the only metal part being the firing pin, and it can fire at least once without breaking (Morelle, 2013). Newer generations of 3D printed plastic weapons can fire almost 20 bullets without breaking, although even these weapons contain some metal parts (Greenberg, 2014).

Section XXXIII: 3D printed guns still require metal bullets

But even if you have made your stealthy plastic weapon with a 3D printer to avoid detection by magnetometer-type scanners, you are still going to need bullets. However, these bullets probably cannot be made from plastic, as the force and heat from firing the weapon will destroy, or at the very least compromise, a plastic bullet. Needless to say, a bullet made out of metal will also have a lot more stopping power. Metal bullets could then be detected, confiscated and regulated, regardless of how many lower receivers for the AR-15 semi-automatic rifle you make with your 3D printer (p. 128).

Section XXXIV: 3D printed guns out of plastic can still be caught by scanners

Let us, for the sake of argument, assume that you now have a high-performance plastic gun with non-metal bullets of similar power and capacity as metal bullets. Can you now sneak past scanners at government buildings, airports and so on? Not so fast. Although you might be able to get past magnetometer-type scanners, the security industry has not been a passive onlooker as weapon technology has developed. Quite the opposite: it has countered by developing more sophisticated scanners that detect not only metals but also non-metal objects. These include millimeter wave scanners and backscatter X-ray scanners (Hasler, 2010; Accardo and Chaudhry, 2014). There are also various technologies under development, such as terahertz scanners (Ma et al., 2013; Palmer, 2013). These are by no means perfect, but neither are magnetometers. They do, however, provide various methods to counter the alleged looming threat of 3D printed weapons. It is more reasonable to think of this as a technological arms race between criminals and law enforcement.

Section XXXV: Atomically precise manufacturing is probably impossible

Atomically precise manufacturing is the idea that you can use machines to put together substances in an atom-by-atom fashion (p. 129). However, atoms are very small. For instance, a carbon atom is about 70 picometers across. A piece of carbon with a mass of 12 grams contains 6.022*10^23 atoms. If you add a billion atoms per second, it would take over (6.022*10^23)/10^9/3600/24/365 ≈ 19 million years to produce these 12 grams. This calculation seems to rule out effective atomically precise manufacturing, and this is openly conceded by Häggström (p. 134), but he has a few ideas on how to get around it (but see Section XL).
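The arithmetic can be checked in a few lines of Python (the one-billion-atoms-per-second assembly rate is, of course, the hypothetical figure from the text, not a real measurement):

```python
# Back-of-the-envelope check of the 12-gram carbon example.
AVOGADRO = 6.022e23   # number of atoms in 12 grams of carbon-12
RATE = 1e9            # hypothetical assembly rate: one billion atoms per second

seconds = AVOGADRO / RATE
years = seconds / (3600 * 24 * 365)

print(round(years / 1e6))  # -> 19, i.e. roughly 19 million years
```

Even granting a generous placement rate, the sheer number of atoms in a macroscopic object dominates the calculation by many orders of magnitude.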

Another problem is that proponents of atomically precise manufacturing seem to extrapolate from the macroscopic world to the nanoscale without careful consideration of problems such as the size difference between fingers and cargo, Brownian motion and intermolecular forces (Moscatelli, 2013). If you want to use a mechanical procedure to place single atoms, there will be interactions between the mechanical fingers and the cargo (“sticky fingers”). The fingers will also have to be considerably larger than an individual atom to be stable and durable against thermal noise and the environment (~10-100 nanometers versus ~10-100 picometers), which means that the fingers (and you need several of them) will be several orders of magnitude larger than the cargo (“fat fingers”). These are very difficult problems to get around (Moscatelli, 2013). Furthermore, chemical reactions are more complicated than just putting two atoms close to each other. A lot of reactions require activation energy to even get started, and only a small proportion of all imagined chemical combinations is thermodynamically possible and kinetically realistic.

Against these lethal objections, Häggström declares that they are straw man arguments (p. 131): proponents of atomically precise manufacturing never meant that you could use mechanical systems to place atoms one by one onto a growing substance, only that you could use enzymatic reactions (pp. 131-132). This is not only blatantly false, but it is also of no help to invoke enzymes. Here is Drexler (Baum, 2003), the proponent hailed by Häggström:

These nanofactories contain no enzymes, no living cells, no swarms of roaming, replicating nanobots. Instead, they use computers for digitally precise control, conveyors for parts transport, and positioning devices of assorted sizes to assemble small parts into larger parts, building macroscopic products. The smallest devices position molecular parts to assemble structures through mechanosynthesis–‘machine-phase’ chemistry.

It is clear that Drexler does have in mind the mechanical placing of atoms (or groups of atoms) by mechanical tools, not enzymes.

Proponents of atomically precise manufacturing dismiss a lot of these concerns by falsely pointing to cellular structures called ribosomes that synthesize proteins one amino acid at a time. They claim that since ribosomes are obvious cases of atomically precise manufacturing, humans can imitate ribosomes and solve the major problems with the idea of this manufacturing technique (pp. 131-132). However, ribosomes are definitely not atomically precise manufacturers, and ribosome-like systems radically decrease the range of possible products. The next four sections go into additional detail on why referencing ribosomes is of no help to the proponents of atomically precise manufacturing.

Section XXXVI: Ribosomes do not assemble atoms

Compared with the size of a carbon atom (~70 picometers), a ribosome is a relatively large structure (about 30 nanometers, or over 400 times larger) in the cell that makes proteins from amino acids. However, this is nothing at all like the general idea behind atomically precise manufacturing. The ribosome consists of two subunits that are both associated with the mRNA. The amino acid that is added to the growing polypeptide chain is not an atom but a molecule, and it does not arrive alone but attached to a tRNA (~80 ribonucleotides in size). Thus, a ribosome bears no resemblance to the atom-placing “fingers” envisioned by the proponents of atomically precise manufacturing.
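The size comparison works out as follows (both figures are the approximate sizes cited above, so the ratio is only a rough one):

```python
carbon_atom = 70e-12   # ~70 picometers, approximate size of a carbon atom
ribosome = 30e-9       # ~30 nanometers, approximate size of a ribosome

print(round(ribosome / carbon_atom))  # -> 429, i.e. "over 400 times larger"
```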

Section XXXVII: Ribosomes are not precise

A machine that performs atomically precise manufacturing has to be, by definition, precise. However, ribosomes are not terribly precise, as about 30% of all ribosomal products are non-functional (Yewdell et al., 1996; Schubert et al., 2000; Bourdetsky et al., 2014) and are called defective ribosomal products, or DRiPs. Not all of these are due to translation errors, since they can also result from incorrect protein folding, but the two contributing factors are hard to disentangle, as translation errors also independently cause incorrect protein folding.

How can this be? Wouldn’t evolution have optimized ribosomes to be much more effective and a lot less wasteful? That is a common belief, but evolution can only optimize efficiently when optimization is realistically possible. In this case, optimization is prevented by three major factors.

First, thermodynamics prevents a highly efficient process: at the level of single cellular structures and proteins, there is a lot of thermal jiggling going on, which makes the process error-prone.

Second, an extremely high level of correctness in protein synthesis is not a top priority, because the cell can just break down the defective proteins and start again, especially since there already is such a system for breaking down old or damaged proteins.

Third, vertebrates have co-opted these DRiPs in the adaptive immune response to train the immune system to distinguish self from non-self. The body presents chopped-up self-peptides from DRiPs to this arm of the immune system and kills or inactivates those immune cells that are self-reactive. This is not a perfect system, since autoimmune diseases involving the adaptive immune system do exist, but it works reasonably well in most vertebrates. A substantially higher protein synthesis accuracy could therefore compromise this system. Thus, it works as a historical constraint limiting the potential for evolution towards a greatly increased accuracy rate.

Section XXXVIII: Ribosomes are not machine manufacturers

It is tempting to compare molecular structures in the cell with machines. Ribosomes are compared with “factories”, motor proteins with “cars”, importins with “trains”, the cytoskeleton with “a road system”, mitochondria with “power plants”, “batteries” or “furnaces” and so on. This is natural, because humans tend to see design and intention in a lot of things around us, whether or not they are the result of intentional design. This is one of the factors that make the pseudoscience of creationism appear intuitive, but scientific research has shown that the diversity of life is the result of evolution. So we must resist the machine analogies, because they often hinder understanding more than they help. In particular, the machine analogy is inappropriate because of several of the issues discussed in neighboring sections, such as the rate of defective products, production limitations and so on.

A great example of where the machine analogy is unhelpful is motor proteins. Creationists and less knowledgeable science enthusiasts share simplistic visualizations in which a single motor protein carries a cargo alone, walking in a perfectly coordinated step-by-step fashion along a piece of cytoskeleton in an otherwise uncrowded cellular neighborhood. In reality, the cell is crowded, thermal movements have a large influence and single motor proteins move much more erratically than those primitive visualizations imply (Zimmer, 2014; Erkell, 2009). The cargo movement is the result of many motor proteins working together: a single motor protein can wobble, take a step back or even fall off. The illusion of design is created by the average effect across all engaged motor proteins.

Here are two more accurate visualizations of motor proteins in the cell, and here is the simplistic one that is often abused by creationists. Even the more accurate ones do not show a motor protein taking a step back or falling off, so they do not quite fully convey that the cargo movement is an average effect and that individual motor proteins can mess up.

Section XXXIX: Ribosome-like systems severely limit production capabilities

At the core of a ribosome is a ribozyme, an RNA molecule that is also an enzyme. So one can think of a ribosome as a very large enzyme that catalyzes a condensation reaction between the most recently added amino acid of the growing polypeptide chain and the amino acid that is currently being added. These kinds of enzymes only work in a specific range of environments in terms of temperature, pH, salinity and so on. Another factor is that the reaction requires an aqueous solution, and the condensation reaction between two amino acids that forms a peptide bond itself produces a molecule of water every time it happens. So synthesizing a protein made up of 300 amino acids involves making 299 peptide bonds and thus produces 299 molecules of water. Thus, any product that one would like to make using ribosomes has to be a protein, and ribosome-like systems that might be able to use enzymatic reactions to create things other than proteins have to be able to handle an aqueous environment and its restrictions. This severely limits the production capacity and the range of possible products.
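The count follows from the fact that a linear chain of n amino acids is held together by n − 1 peptide bonds, each formed in a condensation reaction that releases one water molecule. A minimal sketch:

```python
def peptide_bonds(n_amino_acids):
    """A linear chain of n amino acids contains n - 1 peptide bonds;
    each bond-forming condensation reaction releases one water molecule."""
    return n_amino_acids - 1

print(peptide_bonds(300))  # -> 299 peptide bonds, and thus 299 molecules of water
```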

Section XL: Generalized self-replicating nanobots are unrealistic

Taken together, these arguments make the idea of atomically precise manufacturing extremely unlikely. Typically, most proponents consider the scale issue (Section XXXV) the most severe problem and largely ignore the issues with ribosomes discussed above. Instead of rejecting their unreasonable belief and changing their minds to conform to the evidence, they invent an even more unlikely proposal to solve it: self-replicating nanobots.

Before we get into the details of this proposal, we should note that it does not substantially help proponents of atomically precise manufacturing, because they commit a statistical error known as the conjunction fallacy. Let P(A) be the probability of atomically precise manufacturing and P(B) be the probability of self-replicating nanobots. Then P(A and B) = P(A)P(B|A) ≤ P(A), with strict inequality whenever P(B|A) < 1. In other words, the combined belief in atomically precise manufacturing and self-replicating nanobots is less likely than the sole belief in atomically precise manufacturing. Since self-replicating nanobots are themselves unlikely, P(A and B) is likely to be much smaller than P(A).
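The conjunction rule can be illustrated with made-up numbers (the probabilities below are purely illustrative placeholders, not estimates of anything):

```python
p_apm = 0.5                   # illustrative probability of atomically precise manufacturing
p_nanobots_given_apm = 0.25   # illustrative probability of self-replicating nanobots, given APM

# The conjunction P(A and B) = P(A) * P(B|A) can never exceed P(A) itself.
p_both = p_apm * p_nanobots_given_apm
assert p_both <= p_apm

print(p_both)  # -> 0.125, smaller than either input probability
```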

So why are self-replicating nanobots unlikely? Because they would have to carry out self-replication and a variety of mechanical or enzymatic manipulations, while also being resistant to environmental challenges. There are RNA molecules that can self-replicate and catalyze specific enzymatic reactions at the same time, but their catalytic ability is extremely specific (Lincoln and Joyce, 2009; Robertson and Joyce, 2012; Robertson and Joyce, 2014), essentially ruling out broad-purpose enzymatic function.

Häggström mindlessly repeats the classic Feynman phrase that “there’s plenty of room at the bottom” (pp. 129, 131), but it is not merely a matter of there being room at the nanoscale; the room must also be usable for something productive. It is probably possible to construct a very wide range of fine, nanoscale non-protein structures, but those structures are unlikely to be robust enough, strong enough or enzymatically varied enough to fulfill the fantastical beliefs of proponents of self-replicating nanobot-mediated atomically precise manufacturing.

As if all of this was not enough, Häggström puts forward the idea of “grey goo” (pp. 134-139), which essentially involves self-replicating nanobots that mutate and destroy all life on the planet. This is clearly a ludicrous idea because of all the above issues, but there are even more problems. Pathogens are highly host-specific, and because of the great diversity of life on earth, it is unlikely that even nanobots could manage to kill it all. What works against humans need not work against lizards or bacteria. Outside the host, pathogens are very sensitive to environmental factors. While it is true that some pathogens can form spores or otherwise go into a durable state, this adds yet another function besides self-replication and host lethality that needs to be accomplished at the nanoscale, making the combination even less likely. Out of all the alleged threats to humanity, grey goo is far, far down the list.


Accardo, J., & Chaudhry, M. A. (2014). Radiation exposure and privacy concerns surrounding full-body scanners in airports. Journal of Radiation Research and Applied Sciences, 7(2), 198-200.

Baum, R. (2003). Nanotechnology: Drexler and Smalley make the case for and against “molecular assemblers”. Chemical and Engineering News, 81(4), 37-42.

Bourdetsky, D., Schmelzer, C. E. H., & Admon, A. (2014). The nature and extent of contributions by defective ribosome products to the HLA peptidome. Proceedings of the National Academy of Sciences, 111(16), E1591-E1599.

Erkell, L. J. (2009). Djuren och människan: om den moderna biologin och dess världsbild. Lund: Studentlitteratur.

Greenberg, A. (2014). The Bullet That Could Make 3-D Printed Guns Practical Deadly Weapons. Wired. Accessed: 2016-06-20.

Hasler, J. P. (2010). The Truth About TSA Airport Scanning. Popular Mechanics. Accessed: 2016-06-20.

Lincoln, T. A., & Joyce, G. F. (2009). Self-Sustained Replication of an RNA Enzyme. Science, 323(5918), 1229-1232.

Ma, Y., Huang, M., Ryu, S., Bark, C. W., Eom, C.-B., Irvin, P., & Levy, J. (2013). Broadband Terahertz Generation and Detection at 10 nm Scale. Nano Letters, 13(6), 2884-2888.

Morelle, R. (2013). Working gun made with 3D printer. BBC News. Accessed: 2016-06-20.

Moscatelli, A. (2013). The struggle for control. Nat Nano, 8(12), 888-890.

Palmer, J. (2013). Terahertz scanner reveals hidden fresco at Louvre. BBC News. Accessed: 2016-06-20.

Robertson, M. P., & Joyce, G. F. (2012). The Origins of the RNA World. Cold Spring Harbor Perspectives in Biology, 4(5).

Robertson, Michael P., & Joyce, Gerald F. (2014). Highly Efficient Self-Replicating RNA Enzymes. Chemistry & Biology, 21(2), 238-245.

Schubert, U., Antón, L. C., Gibbs, J., Norbury, C. C., Yewdell, J. W., & Bennink, J. R. (2000). Rapid degradation of a large fraction of newly synthesized proteins by proteasomes. Nature, 404(6779), 770-774.

Ubiñas, H. (2016). I bought an AR-15 semi-automatic rifle in Philly in 7 minutes. Accessed: 2016-06-19 (cached).

Yewdell, J. W., Antón, L. C., & Bennink, J. R. (1996). Defective ribosomal products (DRiPs): a major source of antigenic peptides for MHC class I molecules? The Journal of Immunology, 157(5), 1823-1826.

Zimmer, C. (2014). Watch Proteins Do the Jitterbug. New York Times. Accessed: 2016-06-19 (cached).

Emil Karlsson

Debunker of pseudoscience.

14 thoughts on “Harbingers of Doom – Part IV: Nanobots and Atomic-Scale Manufacturing”

  • June 24, 2016 at 02:22

    * Grey Goo:

    Here I very much agree:
    As this scenario is often presented in a lot of places, it really is completely over-the-top and unrealistic.

    * Self replicating nanobots:

    Eric Drexler himself abandoned the concept of self-replicating nanobots (molecular assemblers) before he wrote his technical book Nanosystems (no nanobots in there). This was quite a while ago. Their continuing association with him bugged him so much that he wrote a blog post regarding the matter (can’t find it right now) and his newest book “Radical Abundance”. He currently favors the idea of incremental technology improvement towards nanofactories. There are several alternative strategies to reach super-massive parallelism without resorting to ultra-compact full self-replication capability.

    * Atomically precise manufacturing

    I too dislike that biological nanosystems are often cited as a proof-of-principle example for advanced atomically precise manufacturing. Biological nanosystems create atomically precise bonding topologies, but they work
    A) with high error rates that can only be handled with a lot of active repair and retry in the system
    B) with big fat enzymes

    The proposed advanced diamondoid nanosystems, in contrast, work very differently:
    A) with extremely low error rates
    B) with sharp tips, where multiple tips can reach into the same (partially passivated) atom / surface spot
    C) with reactions driven by mechanical forces applied to the tips from the nano-machinery behind them, instead of a catalyzing local (electro)chemical environment (as in an enzyme)

    Because of these huge differences, one cannot conclude the possibility of advanced diamondoid nanosystems from the possibility of biological nanosystems. But just as much, one also cannot conclude their impossibility.

    The present example of biological nanosystems is not what led to the newer detailed ideas of advanced atomically precise manufacturing. It is the question of what one could do if one just uses the periodic table of elements as a construction set, specifically ignoring the place where evolution got terminally stuck.
    Obviously it is necessary to carefully abide by the more fundamental principles of quantum and classical physics. In Nanosystems, E. Drexler has done this to the best of his ability; that is, he made consistent use of pessimistic worst-case estimations, thereby deliberately underestimating capabilities.

    The steps Eric Drexler currently (in Radical Abundance) proposes to get to advanced atomically precise nanofactories are:
    (1) structural DNA nanotechnology and usage of artificially designed predictably folding proteins
    (2) in-solution mechanosynthesis, remotely similar to biomineralization
    (3) mechanosynthesis in vacuum

    Regarding (1), there has been really amazing experimental progress over the past two years. (It’s super-massively parallel by nature.)
    Regarding (2), AFAIK there still is tremendous opportunity for research here.
    Regarding (3), this has been demonstrated experimentally, but very slowly (Si, 300 K), and by now also theoretically.

    Here’s an important theoretical paper about a full-cycle reusable tool-tip set that circumvents both
    * the fat fingers problem and
    * the sticky fingers problem

    Although this video depicts the ideas of advanced atomically precise manufacturing relatively well, it’s unsurprising that many people (especially scientists) find it not to be credible. Building such a system directly is correctly judged ludicrous. It is supposed to be an endpoint after a long incremental chain of small technological improvement steps. It doesn’t help credibility that it takes quite a bit of explaining to show that there actually is a good reason why one can quite reliably make such a wide stride in predicting certain parts of future technology.

    It seems you’ve read quite a bit about the topic.
    I personally haven’t read your main reference “Here Be Dragons”, but I feel the true nature of APM (as portrayed in Nanosystems) might be a bit misrepresented in there. You might want to extend your reading list with “Radical Abundance”
    and snoop around in the easier-to-read sections of this here:
    (this is the preliminary version of Nanosystems) to gain even more viewpoints.

    • June 24, 2016 at 12:26

      Thank you for your detailed explanations. I would not be surprised if the book I am critically evaluating does not accurately represent Drexler, partly because it is only ~250 pages (~4 pages on APM) and partly because I have found other problems with the book.

      But I am curious: if Drexler and other proponents of atomically precise manufacturing do not rely on self-replicating nanobots, how can the problem of scale be handled?

      For instance, a carbon atom is about 70 picometers. A piece of carbon with a mass of 12 grams contains 6.022*10^23 atoms. If you add a billion atoms per second, it would take over (6.022*10^23)/10^9/3600/24/365 ≈ 19 million years to produce these 12 grams.

      You suggest that there “are several alternative strategies to reach super-massive parallelism without resorting to ultra-compact full self replication capability”, but this is a bit unclear.

    • June 24, 2016 at 21:04

      Methods for reaching super massive parallelism include:

      1) bottom up self assembly
      2) top down lithography
      3) a form of partial self replication called “exponential assembly”
      4) maybe there are even more – I don’t know

      Regarding (1), some great work has been done “mis”using DNA as a structural building material. Just by choosing and mixing the right combination of short DNA snippets (oligomers) and applying a controlled thermal profile over time, macroscopic amounts of relatively stiff and complex atomically precise parts have been formed. Massively parallel.

      Among the parts that have been made were hinge mechanisms (with somewhat controllable motion). Hinge mechanisms are still far away from complex interlinked nanosystems, but they are nonetheless an impressive first step. Also experimentally demonstrated was the possibility of further self-assembling already self-assembled structural DNA parts (which have complementary stiff shapes) in a secondary step. This allows somewhat bigger and more complex assemblies.
      DNA structures have also been self-assembled reaching well into the micrometer scale. The resulting structures may be usable as an atomically precise plug-board. Self-assembled structures are atomically precise but typically quite error-rich, which will make further progress quite challenging.

      Work like this has been done at the Wyss Institute, at Ohio State University and at TU München.
      (The short DNA snippets interlink in a 3D chain-link-fence fashion, either in a cartesian or in a hexagonal form. It started with 2D DNA origami.)

      Regarding (2), microchip production is another self-replication-free example of massively parallel manufacturing. By now the available methods reach deep into the nanoscale; AFAIK the cutting edge is around ~10 nm for electronic chips. For mechanical MEMS chips, the minimal feature size is quite a bit bigger, though. Microchip production is not atomically precise. But since the minimal feature size is beginning to become smaller than the maximal size of the products of bottom-up self-assembly methodologies (1), chip production methodologies may provide groove patterns for rough self-alignment of smaller-scale self-assembled structures.

      Regarding (3), this is basically a strongly weakened version of full ultra-compact self-replication.
      It may be applied at an early or a later stage, meaning softer or stiffer materials.
      The main characteristics are:
      * all replication units are immobile and move in conjunction on a common chip surface
      * all replication units are minimal (basically just linkage mechanisms); as much as possible is supplied from the chip’s surface (global movement, energy, …)
      * all replication units use rather few but complex prefabricated parts, e.g. made with method (1)
      There is a Zyvex page explaining the idea in more detail. LINK
      It is quite an interesting idea, but its practicability remains to be seen.
      (The naming “exponential assembly” is quite unfortunate, since there is also “convergent assembly”, another, unrelated concept in atomically precise manufacturing.)

      About the atom placement frequency in far advanced nanofactories:
      Closer analysis suggests that a good operation frequency is in the MHz range. Mechanically enforced chemistry allows cranking up the reaction frequency enough (several orders of magnitude) to compensate for the much lower spatial density of reaction sites (again, “fat fingers” are unproblematic in this context too). Even with worst-case assumptions all the way through, it seems one ends up with quite reasonable production times and waste heat levels, on the order of kg/hour and kW.

    • June 24, 2016 at 21:28

      1) bottom up self assembly
      2) top down lithography
      3) a form of partial self replication called “exponential assembly”

      These do not seem to be completely fleshed out. Do you think they will be able to work at a rate higher than 1 billion atoms per second? A rate that is many orders of magnitude larger? This seems to be a physical requirement for APM to work at all in any practically relevant way.

      (1) This sounds suspiciously close to things we might call “chemistry”, (some form of) “DNA replication” or (a less controlled version of) “polymerase chain reaction”. In what ways does this suggestion differ?

      (2) You seem to acknowledge that this would not be APM. How would the problem of heat generation be tackled? If you pack a lot of small microchips together…

      (3) It is not completely clear to me how this suggestion solves the scale problem.

    • June 25, 2016 at 10:20

      A note on nomenclature – so that we talk about the same thing:
      Normal chemistry is a form of atomically precise manufacturing (APM) since the products (molecules) are atomically precise. That is why I use the term “advanced atomically precise manufacturing” for things that go beyond that. If I want to specifically point to things like depicted in the productive nanosystems video I usually use terms like “diamondoid metamaterial APM”.

      >> These do not seem to be completely fleshed out.

      Well, if they were, we wouldn’t need to do any more research on ways to get to the sensibly determined target point but could go ahead and start development right away :). But with these “non-replicative approaches for reaching massive parallelism”, things are slowly beginning to be more fleshed out, allowing a more engineering-like approach.

      I think I wasn’t clear about the point in time at which to switch to tooltip-based, force-applying chemistry (mechanosynthesis).
      * The obsolete ultra-compact replicative nanobot concept assumes switching first to mechanosynthesis and then to massive replication.
      * The incremental improvement pathway idea (with the three steps of materials I mentioned in my first post), in contrast, favors massively parallel production (without replication) of sufficiently stiff nano-mechanisms (all made in one swoop at the same time). With advancing (conventional) capabilities one gets to the point where all of the mechanisms become capable of performing simple forms of (unconventional) mechanosynthesis (at the same time). (The capability of picking and placing pre-produced parts is likely to emerge earlier.)

      (1) >> This sounds suspiciously close to things we might call “chemistry”

      The first step of (1), the production of the short DNA snippets (oligos) from base reagents, is conventional chemistry
      (formation of covalent bonds without force, by chemical affinity and possibly catalysis).
      The companies producing DNA oligos (this is not research anymore) use “oligonucleotide synthesis”.
      “Polymerase chain reaction” is, I think, used for copying long strands of information-carrying DNA and is not used for the production of DNA oligos. There is no replication or copying of DNA involved. Not that it matters: after bootstrapping has progressed far enough, all DNA structures can be removed from the further advanced APM systems.

      The second step of (1), massively parallel self-assembly of DNA oligos into somewhat stiff nano-structures, also does not use any force to form bonds yet. One may subsume DNA folding (shape complementarity + weak hydrogen bonds) under chemistry – it is definitely not mechanosynthesis – but the structures built can be used for the very earliest forms of massively parallel mechanosynthesis (not “normal” chemistry anymore). Those will have very low throughput in the beginning, not due to lack of spatial density but due to lack of placement frequency. But this can be continuously cranked up by slowly switching to better building materials, building environments and system designs.
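The “shape complementarity” that drives this kind of self-assembly is programmed entirely through base pairing. As a toy illustration (not any real DNA-nanostructure design tool), a few lines of Python checking whether two strands can hybridize as exact antiparallel complements:

```python
# Toy model of Watson-Crick base pairing, the addressing mechanism behind
# DNA self-assembly (e.g. the "staple" strands in DNA origami).

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (read 5'->3')."""
    return "".join(PAIR[base] for base in reversed(seq))

def can_hybridize(a: str, b: str) -> bool:
    """True if strand b is the exact antiparallel complement of strand a."""
    return reverse_complement(a) == b

print(can_hybridize("ATGC", "GCAT"))  # -> True
print(can_hybridize("ATGC", "ATGC"))  # -> False
```

Real designs must additionally worry about partial matches, binding energies and folding kinetics, but the point stands: the “glue” here is information in the sequence, not applied force.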

      (2) Most certainly, chip production is not APM, and like (1) it is only useful for bootstrapping towards advanced APM at an early stage where throughput is still rather low – there is no incentive to stack silicon chips. To bootstrap to more massive parallelism up into the third dimension, the more replication-like concept (3) might come in handy. Pick-and-place of relatively large, complex, conventionally prefabricated parts up a convergent assembly stack, starting from an already massively parallel 2D system, should be relatively quick and not prohibitively complex – which is not to say easy.

      >> Do you think they will be able to work at a rate higher than 1 billion atoms / second?

      I assume you refer to my last paragraph, where I mention 1 MHz and advanced APM that uses force-applying chemistry.
      I made a bit of a jump to that last paragraph, since I focused only on methods for getting to the necessary massive parallelism in space and not on the three steps of change in the character of the technology (mentioned earlier in my first post) that allow a speedup of the reaction frequency (besides other things).

      The fine-grained details about the order of events are still unclear.
      Larger-scale orders of events are more predictable, e.g. that rudimentary mechanical nano-robotics is necessary before going to vacuum systems, since self-assembly and vacuum systems don’t mix well.

      About the atom placement frequency in advanced diamondoid APM systems:
      Assuming f_0 = 1 MHz per mechanosynthesis core, how many cores (N_core) does one need to reach the desired throughput of Q_0 = 1 kg/h?
      N_core = Q_0 / (m_C * f_0) = ~1.4*10^16 cores. (m_C … mass of a carbon atom, ~2*10^-26 kg)
      A core size of ~(32nm)^3 = ~32000 nm^3 seems to be a sensible guess for advanced APM systems.
      All the cores together then take up a volume of ~460 mm^3 = ~0.46 milliliters. This can be spread out plenty to remove high levels of waste heat. The effective atom placement frequency of this system is f_0*N_core = ~1.4*10^22 atoms per second (~14 ZHz – quite mind-boggling) (>> 10^9 atoms/second). Early mechanosynthetic systems (low temporal frequency – likely 2D but already massively parallel) will be several orders of magnitude lower in throughput.
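The arithmetic of this estimate can be rerun in a few lines of Python (taking m_C ≈ 1.99×10^-26 kg for carbon, and the 1 MHz, 1 kg/h and (32 nm)^3 figures from the comment):

```python
# Rerunning the core-count estimate for a 1 kg/h diamondoid APM system.
M_C = 1.994e-26          # kg, mass of one carbon atom
F0 = 1e6                 # Hz, placements per mechanosynthesis core
Q0 = 1.0 / 3600.0        # kg/s, i.e. 1 kg/h

n_cores = Q0 / (M_C * F0)                    # cores needed for the throughput
core_vol_m3 = (32e-9) ** 3                   # (32 nm)^3 per core
total_vol_ml = n_cores * core_vol_m3 * 1e6   # total core volume, m^3 -> mL
eff_rate = n_cores * F0                      # effective atoms placed per second

print(f"{n_cores:.1e} cores, {total_vol_ml:.2f} mL, {eff_rate:.1e} atoms/s")
```

The result is around 1.4×10^16 cores occupying roughly half a milliliter, for an effective rate of about 1.4×10^22 placements per second.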

  • June 25, 2016 at 10:56

    Correction: Conventional chemistry can be seen as a form of APM only in the right context,
    e.g. looking at small-scale single molecules as the products, or looking at larger-scale self-assembly with unambiguous outcomes (using base parts that are already atomically precise). Otherwise, disordered plastics with somewhat random polymerization points would fall into the class of APM, which they clearly do not.

  • June 26, 2016 at 08:37

    Sorry, one more correction: the SI prefix should be Z, not Y – I mixed up the prefixes.

  • Pingback: Harbingers of Doom – Part V: Botching Philosophy of Science | Debunking Denialism

  • Pingback: Harbingers of Doom – Part VI: Doomsday Predictions | Debunking Denialism

  • Pingback: Harbingers of Doom – Part VII: Aliens and Space | Debunking Denialism

  • Pingback: Harbingers of Doom – Part VIII: Existential Risk and Pascal’s Wager | Debunking Denialism

  • Pingback: Harbingers of Doom – Part IX: The Pseudoscience Question | Debunking Denialism

  • Pingback: Harbingers of Doom – Part X: Summary and Addendum | Debunking Denialism

Comments are closed.
