The simulation theory, however, does not have to be turtles all the way down. For example, imagine that somewhere along the chain of simulators, perhaps directly above us (what Bostrom calls “below”), or perhaps much further up toward the top, we reach an entity we might call the “maximally simple simulator”: an entity of pure and limitless intellect, unbounded in time (and therefore eternal), with no body at all, in a universe containing nothing but itself, the simplest possible universe.
The setup of Pascal’s Wager, as this argument is generally known, is quite similar in form to Newcomb’s paradox. The glass box with the visible $1000 bill is your ordinary life on earth: you know it exists, and it is yours to spend. The opaque box is your eternal reward. It might be empty, or it might be filled with a vast reward far beyond the one in the glass box. You will discover which is the case only when you die and the box is opened. Do you take the glass box with the known but finite reward, or the opaque box that could hold nothing or everything?
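A minimal sketch of the expected-value reasoning behind that choice (the payoff figures and the probability $p$ are illustrative, not part of the original wager):

$$
E[\text{glass box}] = \$1000, \qquad
E[\text{opaque box}] = p \cdot \infty + (1 - p) \cdot 0 = \infty \quad \text{for any } p > 0,
$$

so as long as you assign any nonzero probability to the infinite reward, the opaque box wins, and that is exactly the leverage Pascal’s argument exploits.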
In 1989, Star Trek: The Next Generation, the second major iteration of the durable televised Star Trek science fiction franchise, introduced a terrifying new villain called “the Borg.” An unhallowed melding of a humanlike life form with cybernetic technology, the individual members of the Borg were born, raised, lived, and presumably died entirely surrounded by technological innovations. There was no such thing as “natural childbirth” for them: they were cloned mechanically, nurtured in artificial wombs, and raised to maturity in pods. An implacable collective intelligence, they mercilessly converted any creatures they encountered into extensions of themselves, cannibalizing their planets for raw materials and sucking other intelligent life forms into the inescapable machine.
At root, Bostrom’s argument hinges on a single controversial question. Is it possible to truly create or simulate a person? Is there any point, at any level of technology, no matter how advanced, at which this becomes possible?
We left off last week with the question of how much weight we should give to Nick Bostrom’s argument that we are not only possibly simulated, but likely to be so. This argument, or at least our representation of it, rests on two key claims: first, that our descendants will be able to create people just like ourselves; and second, that they will create a lot of them. The argument is compelling only if both claims are true.
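One way to see why both claims are needed is the rough bookkeeping behind the argument (the symbols here loosely follow Bostrom’s 2003 paper and are a sketch, not a quotation): let $f_p$ be the fraction of civilizations like ours that reach a simulation-capable stage, and $\bar{N}$ the average number of ancestor-simulations such a civilization runs. The fraction of all human-like experiences that are simulated is then approximately

$$
f_{\text{sim}} \approx \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1},
$$

which approaches 1 only when simulators both can exist ($f_p$ is not negligible) and actually run many simulations ($\bar{N}$ is large), that is, exactly when both claims above hold.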
In the year 1999 CE, just on the cusp of a new millennium, the Wachowskis, then known as the Wachowski Brothers, released “The Matrix,” one of the most influential, imitated, and widely discussed movies of its time. Only four years later, in 2003 CE, philosopher Nick Bostrom of Oxford University introduced an argument that it is not only possible that we are living inside a computer simulation but actually significantly likely. Although it may have sounded like a high-concept science-fiction thriller, the argument drew upon well-established lines of logic and a series of widely held assumptions.
Did Nick Bostrom, professor of philosophy at Oxford University, provide the first convincing modern proof of the probable existence of God? At first glance it seems more than unlikely. Bostrom, best known for his notorious theory that the world exists only on a giant computer, isn’t a notably religious man. What’s more, philosophers and theologians have argued for thousands of years about whether God exists, whether God’s existence can be proven, and whether proving it is something we should even try to pursue. Despite all this, in the year 2003, when Bostrom published a new theory detailing the strong probability that God does in fact exist, nobody noticed (except David Pearce).
Is transhumanism just dangerous overconfidence in technology?
To construct a superintelligence, we would have to understand human intelligence at a deep level. It’s doubtful we’ll ever be able to do this.
On Superintelligence: Paths, Dangers, Strategies (2014) with the author. What can we predict about, and how can we control in advance, the motivations of the entity likely to result from eventual advances in machine learning? Also with guest Luke Muehlhauser.
End song: “Volcano,” by Mark Linsenmayer, recorded in 1992 and released on the album Spanish Armada: Songs of Love and Related Neuroses.
Continuing discussion of David Brin’s novel Existence (without him) and adding Nick Bostrom’s essay “Why I Want to Be a Posthuman When I Grow Up” (2006). Are our present human capabilities sufficient for meeting the challenges our civilization will face? Should we devote our technology to artificially enhancing our abilities, or would that be a crime against nature, an act of playing God that would probably lead to disaster? With guest Brian Casey.
End song: “Waygo” from The MayTricks (1992).