Eleventh in an ongoing series about the places where science and religion meet. The previous episode is here.
For many of the people who find Bostrom’s logic persuasive, the underlying reason is a concept called the “technological singularity.” The term is generally credited to mathematician John von Neumann, and borrows the physicists’ word “singularity”: a dimensionless point, like the one thought to lie at the heart of a black hole, where matter collapses to infinite density and the ordinary rules of time and space break down. The technological singularity refers to a hypothetical time when the power of technology will become virtually infinite.
Why should we think anything like this is coming our way? In 1965, Gordon Moore, who would go on to co-found Intel, noticed an interesting phenomenon in his industry: the number of transistors in an integrated circuit was doubling at a regular interval, roughly every two years. This observation was soon dubbed “Moore’s Law,” and was rapidly generalized to the idea that “computing power” doubles every two years, and from there to the even more loosely defined idea that “technological progress” doubles every two years.
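To see what relentless doubling amounts to in concrete numbers, here is a back-of-the-envelope sketch in Python (a minimal illustration only: the 1971 starting count is roughly that of an early microprocessor, and the perfectly clean biennial doubling is an idealization):

```python
# Moore's Law as arithmetic: count(t) = starting_count * 2 ** (years / 2)
start_year, transistors = 1971, 2_300   # assumed starting point

for year in range(start_year, start_year + 61, 10):
    projected = transistors * 2 ** ((year - start_year) / 2)
    print(f"{year}: ~{projected:,.0f} transistors")

# Doubling every two years multiplies the count by 2**5 = 32 per decade;
# over six decades that compounds to a factor of roughly a billion.
```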
The original observation has held (more or less) reliably true in the subsequent decades. In its more general form, it seems to roughly match what we know of technological progress, which was quite slow for the first hundreds of thousands of years of human existence, with advances such as the discovery of fire, or of stone tools, or of steel, coming at very widely spaced intervals. Later, the pace picked up quite a bit, with things such as aqueducts, gunpowder, and the printing press arriving within the span of just a few thousand years. Then, in the twentieth century alone, we got the airplane, television, the electric refrigerator, computers, rockets, plastics, nuclear power, and the compact disc. Today, things that were firmly in the realm of science fiction a single generation ago—e.g., flat video screens (televisions), hand-held computing and communication devices (smartphones)—are already ubiquitous, while the truly space-age technology of self-driving cars is poised to take over the roadways.
If you plot technological progress on a graph, the curve looks exponential: long and nearly horizontal extending into the past, curving rapidly upward in the present, and, many people expect, nearly vertical at some point in the near future. The question is, what happens then? The technological singularity is the idea that at some point, perhaps even in the next few decades, computing power will essentially become infinite. If that does happen, then assuming we can simulate one person, we can presumably simulate as many as we want. This is why Bostrom believes there will eventually be many more simulated people than non-simulated people. (It is important to note, however, that Bostrom’s hypothesis does not require belief in an incipient technological singularity; the singularity just makes his scenario dramatically more likely, and on a much shorter time frame.) That, however, is far from the only dramatic consequence of the technological singularity, which posits technology so advanced it will essentially be “supernaturally” powerful.
It is difficult to say which science-fiction author first transformed the concept of the technological singularity into a portrait of a “silicon god,” a computer so advanced that it gains godlike powers. One of the best-known early stories was produced by humanist and science-fiction grand master Isaac Asimov in 1956. Called “The Last Question,” it depicts a series of increasingly vast, complex, and advanced computers that labor for eons to produce an answer to the question “How can entropy be reversed?” After trillions of years, as the universe is dying, the ultimate computer finally responds with God’s opening words from the Bible’s Book of Genesis, “Let there be light!” (The entire scenario was memorably parodied in author Douglas Adams’s classic absurdist novel The Hitchhiker’s Guide to the Galaxy, in which the mighty computer Deep Thought labors for millions of years, only to reveal the answer to the ultimate question of life, the universe, and everything to be the number 42.)
In terms of performing science fiction’s core magic of making something on the far borders of imaginability seem real, vivid, and believable, one would be hard-pressed to outdo author David Zindell’s 1988 book Neverness, and its sequel trilogy, A Requiem for Homo Sapiens. The term “silicon god,” in fact, was coined by Zindell, as the name of a character in his books, one out of an entire pantheon of contentious cybernetic deities.
Neverness follows the adventures and misadventures of mathematician and deep-space pilot Mallory Ringess, as he searches for the source of a mysterious blight wiping out entire stars in the far depths of space. On one of his trips he encounters a mysterious woman, who appears to live all alone in an artificially constructed solar system. He eventually comes to learn that she was once a famous human being, the only female member of a secretive clan called the “warrior-poets,” and the most deadly among them. Having fled her life with her comrades, she retired to a distant solar system, and began technologically augmenting herself.
Over the years, the computer that had become an integral part of her grew larger and more powerful, until the entire solar system she lived in had been refashioned into her storage space and processors. Her augmented self is so powerful, in fact, that her capacities appear, from Mallory’s point of view, to be supernatural in scope. As is finally revealed, the computer has attained such an advanced level of consciousness that it is not even aware of Mallory in any significant sense; the woman he has apparently been interacting with is just an unconscious projection of the computer’s former self, triggered merely as an automatic reaction to his presence.
Neverness is science fiction, but it no longer seems as fanciful as when it was first written. Its godlike technology presupposes advances that are unimaginable now, but those advances might come, not in the far distant future, but within our own lifetimes. More cautious prognosticators place the singularity as much as a hundred or a hundred and fifty years away; the most aggressive predictions put it as soon as the year 2020, which (as of this writing) is only months away.
There are two main reasons those who believe in the singularity cite for their faith in it. The first, discussed earlier, is Moore’s Law: the exponential, runaway growth of technology. An exponential curve looks nearly straight at both extremes of a graph: at one end it hugs a horizontal asymptote, while at the other it climbs so steeply that it appears almost vertical. As depicted by Ray Kurzweil in a series of charts representing all human technological progress throughout time, the trend seems clear. Going toward the past, the line of technological progress is almost horizontal, representing very slow progress. Going forward, the line is hypothesized to become nearly vertical, representing unimaginably fast progress. We are somewhere in the “elbow” of the diagram, where the curve switches over and slings us around the bend. The vertical portion is the singularity.
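To make the shape of the argument concrete, here is a minimal plotting sketch (synthetic data, not Kurzweil’s actual charts; numpy and matplotlib are assumed to be available). The same doubling curve shows the flat-then-vertical “elbow” on a linear axis, while on a logarithmic axis, the scale on which such charts are usually drawn, it straightens into a line:

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(-100, 40, 400)   # years relative to the present
progress = 2.0 ** (years / 2.0)      # doubling every two years

fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(9, 3.5))
linear_ax.plot(years, progress)
linear_ax.set_title("Linear axis: the 'elbow'")
log_ax.semilogy(years, progress)
log_ax.set_title("Log axis: a straight line")
plt.tight_layout()
plt.show()
```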
The other reason for belief in the singularity is something called technological bootstrapping. “Bootstrapping” is an idiom so far removed, at this point, from its origins that it may be worth recalling what it actually means. A bootstrap is a small strap or loop at the top of a boot, used to help pull it on. The idiom “pull yourself up by your bootstraps” pictures lifting your own self into the air, and over an obstacle, just by tugging on those straps. Interpreted literally, it is nonsense, but it gradually transformed, in the jovial American imagination, from an expression of derision into an exhortation (often aimed at someone whose lack of progress could most reasonably be blamed on a lack of opportunities, but who was instead being chastised for a lack of grit, determination, and drive). In America, even the magic trick of pulling oneself up by one’s own bootstraps is held to be within one’s grasp.
The technological appropriation of the term describes a similar magic trick, but a very real one. A bootstrapped system is one that first builds (or loads) a limited, basic set of tools, and then immediately uses those tools (the “bootstraps,” if you will) to build (or load) the next, more advanced, more complete set of tools. This technique is commonly used, for example, to create or distribute programming languages: an initial program implements the basic structures of the language; the more advanced structures are then written in the language itself; and eventually the language comes to produce its own compiler and interpreter.
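By way of illustration, here is a toy sketch of the pattern in Python (a deliberately tiny caricature, not how real compilers are bootstrapped). A “stage 0” evaluator, written in the host language, understands only numbers, variables, addition, and user-defined functions; “stage 1” then defines new operations inside the toy language itself, using only the tools the previous stage provided:

```python
def evaluate(expr, env):
    """Stage 0: a minimal evaluator written in the host language."""
    if isinstance(expr, int):
        return expr                         # a literal number
    if isinstance(expr, str):
        return env[expr]                    # a variable lookup
    op, *args = expr
    if op == "add":                         # the sole built-in primitive
        return evaluate(args[0], env) + evaluate(args[1], env)
    params, body = env[op]                  # a function defined in-language
    local = dict(zip(params, (evaluate(a, env) for a in args)))
    return evaluate(body, {**env, **local})

# Stage 1: each new tool is built out of the smaller tools before it.
env = {}
env["double"] = (["x"], ["add", "x", "x"])
env["quadruple"] = (["x"], ["double", ["double", "x"]])

print(evaluate(["quadruple", 5], env))      # -> 20
```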
The idea of bootstrapped artificial intelligence (AI) is this: A robot is built (or a computer program is written) of modest intelligence. Rather than being turned solely to a mundane task, such as building a car, or doing accounting, it is programmed toward the goal of replicating and improving on itself, to build a better, smarter robot (or program). This successor is better at its job (which is the same job), and builds its own successor faster and at a greater level of advancement. Soon, the ninth- or tenth-generation robot is of superhuman intelligence, and its successor a few more iterations down the line possesses levels of genius that might as well be infinite. Or, to express it another way, at some point people will build a computer that is smarter than we are, a computer, moreover, capable of designing and building an even better computer all by itself. Once that happens, so the theory goes, control of technological progress passes out of our hands, and a new superhuman intelligence essentially wills itself into being.
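The flavor of the argument can be captured in a few lines (a purely illustrative sketch: the numeric “intelligence” score and the constant improvement factor are invented assumptions, and whether real improvement would compound this way is precisely what is in question):

```python
def build_successor(intelligence):
    """Each generation designs a successor smarter than itself."""
    return intelligence * 1.5   # assumed constant improvement factor

iq = 1.0                        # generation zero: modest intelligence
for generation in range(1, 11):
    iq = build_successor(iq)
    print(f"generation {generation:2d}: intelligence {iq:6.1f}")

# After ten generations the toy score has grown roughly 57-fold; the
# singularity argument assumes this loop never flattens out.
```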
For some, this prospect is amazing. These optimistic futurists, led by their apostle, Ray Kurzweil, envision a future where every human ill is solved by advanced technology. Disease will be a thing of the past, as will competition over scarce resources. Fine meals of every description will be instantly replicated, without need for farms or slaughterhouses, at the touch of a button. Missing limbs and failing organs will be regrown in a lab, or replaced with superior cybernetic replacements. Everything from cars to household objects will have a brain, and it will all be powered by a new, non-polluting, cheap, and powerful energy source.
So far, this all seems quite close to what we already have in reality, or at least to what is closely foreseeable; much of it already exists, at least in the form of early prototypes. This technologically augmented good life is just a prelude, however, and likely to be a brief one, at least according to Kurzweil. Fast on its heels will come the true promise of the singularity, when human beings and computers merge into one, as all the people then alive (potentially just a few years from now) voluntarily scan their brains and transfer their personalities into digital files, just as an mp3 file encodes a song. Then, with computers of infinite power to serve as both our hosts and our playthings, we will all live forever as godlike, autonomous programs, in computer-generated wonderlands of our own choosing, digital paradises limited only by our own infinitely augmented imaginations.
The fact that this sounds more like a religion than a science has not been lost on many people. Even Kurzweil, a lifelong atheist raised in a nontheistic Unitarian church, does not shy away from comparisons between his faith and traditional religions, proudly claiming the religion-echoing term “Singularitarian” as a self-description, and describing his belief system as a new perspective on “the issues that traditional religions have attempted to address: the nature of mortality and immortality, the purpose of our lives, and intelligence in the universe.”
Lest it be lost, it bears noting that Kurzweil is no random crank toiling away in the obscure depths of the internet. A computer scientist, and one of America’s most noted inventors, he created both the optical character-recognition system used for scanning books into computers, and (in close collaboration with musical genius Stevie Wonder) the Kurzweil K250, an advanced synthesizer much beloved by professional musicians. A winner of the National Medal of Technology and a bestselling author, he is a figure of influence and respect. Yet a growing number of people consider him dangerously deluded, and his vision of the “singularitarian” future wildly over-optimistic.
References
Kurzweil, Ray, The Singularity Is Near: When Humans Transcend Biology, Penguin, New York, 2005.
Quinion, Michael, “Boot,” World Wide Words, February 2, 2002.
Bostrom, Nick, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, New York, 2014.
Grossman, Lev, “2045: The Year Man Becomes Immortal,” Time, February 10, 2011.
Chris Sunami writes the blog The Pop Culture Philosopher, and is the author of several books, including the social justice–oriented Christian devotional Hero For Christ. He is married to artist April Sunami, and lives in Columbus, Ohio.
Comments

Wow. It is like nobody has ever heard of the S-curve – the mathematical function describing the impact of EVERY technology that has ever existed. Every exponential eventually flattens out as it saturates its problem space (see the sketch below).
There’s a weird sort of Reverse Catch-22 in this thinking: the singularity is inevitable because technology will go vertical, and tech will go vertical because of the singularity.
This really is a religious argument with the false appearance of an empirical basis.
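Here is a quick numerical sketch of the S-curve point above (Python, with arbitrary parameters): an exponential and a logistic S-curve are nearly indistinguishable early on, and only the logistic flattens as it approaches its ceiling.

```python
import math

CEILING = 1000.0   # the saturation level of the logistic curve

def exponential(t):
    return math.exp(t)

def logistic(t):
    return CEILING / (1.0 + (CEILING - 1.0) * math.exp(-t))

for t in range(0, 13, 2):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  "
          f"logistic={logistic(t):7.1f}")

# Early on the two curves track each other closely; past the inflection
# point the logistic levels off while the exponential keeps climbing.
```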
Kurzweil’s singularity can never exist because it’s a technological impossibility, but this impossibility has nothing to do with technology. Let me explain.
David Chalmers, the Australian philosopher who specializes in theories of mind, famously described the “hard” and “easy” problems of explaining consciousness. The easy problem, he says, is to build a mechanical model of the way the brain learns, speaks, responds to stimuli, and so on. This is “easy” because it extends the familiar methods of materialist approaches to natural phenomena. Contemporary psychology is making great strides toward solving this problem.
The hard problem, in contrast, is explaining subjective states of consciousness, or “qualia.” All of us live in a private world of subjective experience, a snowglobe of emotion and states of will, aversion, and desire that so far has not reduced to any identifiable mechanism. Chalmers’ view is that internal states of consciousness may never reduce to mechanical brain states because the attempt fundamentally mistakes the nature of consciousness.
A mechanistic explanation for consciousness can never exist, in other words, because it is a mechanical impossibility, but this impossibility has nothing to do with mechanism.
Now let’s turn back to Kurzweil.
The technological singularity imagined by Kurzweil is not a technological event. In one sense that’s no great revelation: as you point out, Kurzweil appears to borrow heavily, if perhaps unconsciously, from religious tropes. Here, a kind of millennialism seems to emerge; the silicon rapture, perhaps. But that’s not what I mean. Kurzweil’s singularity is not technological because he frames it as an anthropological event that exists on the same asymptotic graph of technological progress that contains the wheel, the atom bomb, and the vuvuzela. That fundamentally misunderstands what I will call “the ‘hard’ problem of technology.”
Martin Heidegger’s 1954 essay “The Question Concerning Technology” distinguishes between technology and what Heidegger calls “the essence of technology.” “The essence of technology,” he writes, “is by no means anything technological”: “we shall never experience our relationship to the essence of technology so long as we merely conceive and push forward the technological, put up with it, or evade it.” When we talk about Kurzweil’s singularity in terms of Moore’s Law, etc., we are doing just that.
But the essence of technology, says Heidegger, is a mode of understanding, a way of viewing the world that features instrumentality but doesn’t reduce to it. Technology is not a toolkit but a mindset that continuously exceeds human control; it is a continuous bringing to light of natural possibility. In Heidegger’s phrasing, “Technology is a mode of revealing. Technology comes to presence in the realm where revealing and unconcealment take place, where aletheia, truth, happens.”
Heidegger is a tough nut to crack, and it’s all too easy to lapse into obscurantist hand-waving. But the plain sense of the thing is this: the essence of technology is the calling forth of natural potential via the technological imagination, which enframes nature and precedes any particular act of mechanical invention. My claim, in brief, is that Kurzweil’s singularity, both because it eclipses nature and because it posits a technological imagination perpetually exceeding itself in infinite recursion, is a technological impossibility. Building a super-charged AI is the easy problem; getting around creative intentionality framed by and coevolved with nature is the hard problem. Or, as Heidegger writes, “The merely instrumental, merely anthropological definition of technology is therefore in principle untenable. And it may not be rounded out by being referred to some metaphysical or religious explanation that undergirds it.”
A necessary footnote, I suppose, is the enormous danger of making the attempt. As Heidegger mentions repeatedly in his essay, the real danger of technology isn’t the bad stuff we make, which we can (mostly) control; it’s the propensity to imagine it, which apparently we cannot. If some rough beast of a sentient super-AI starts slouching toward Bethlehem with murder on its mind, it will be our own fault:
“What is dangerous is not technology. Technology is not demonic; but its essence is mysterious. The essence of technology, as a destining of revealing, is the danger. […] The threat to man does not come in the first instance from the potentially lethal machines and apparatus of technology. The actual threat has already afflicted man in his essence.”
“A robot is built (or a computer program is written) of modest intelligence. Rather than being turned solely to a mundane task, such as building a car, or doing accounting, it is programmed toward the goal of replicating and improving on itself, to build a better, smarter robot (or program).”
The software would be doing the improving, since hardware alone can’t improve itself. So why not just let the software build its own hardware? Calling it ‘a robot’ sounds clickbaity, and that kind of hype is the main problem we have with this topic.