At root, Bostrom’s argument hinges on a single controversial question: is it possible to truly create or simulate a person? Is there any point, at any level of technology, however advanced, at which this becomes possible?
We left off last week with the question of how much weight we should give to Nick Bostrom’s argument that we are not only possibly simulated, but likely to be so. This argument, or at least our representation of it, rests on two key claims: first, that our descendants will be able to create people just like ourselves; and second, that they will create a lot of them. The argument is compelling only if both claims are true.
In the year 1999 CE, just on the cusp of a new millennium, the then Wachowski Brothers released “The Matrix,” one of the most influential, imitated, and widely discussed movies of its time. It was only four years later, in 2003 CE, that philosopher Nick Bostrom of Oxford University introduced an argument that it is not merely possible we are living inside a computer simulation, but significantly likely. Although it may have sounded like a high-concept science-fiction thriller, the argument drew upon well-established lines of logic and a widely held set of assumptions.
Did Nick Bostrom, professor of philosophy at Oxford University, provide the first convincing modern proof of the probable existence of God? At first glance, it seems highly unlikely. Bostrom—best known for his notorious theory that the world exists only on a giant computer—isn’t a notably religious man. What’s more, philosophers and theologians have argued for thousands of years over whether God exists; whether the existence of God can be proven; and whether demonstrating proof of God’s existence is something we should even try to pursue. Despite all this, in the year 2003, when Bostrom published a new theory detailing the strong probability that God does in fact exist, nobody noticed (except David Pearce).
Is transhumanism just dangerous overconfidence in technology?
To construct a superintelligence, we would have to understand human intelligence at a deep level. It’s doubtful we’ll ever be able to do this.
On Superintelligence: Paths, Dangers, and Strategies (2014) with the author. What can we predict about, and how can we control in advance, the motivations of the entity likely to result from eventual advances in machine learning? Also with guest Luke Muehlhauser.
End song: “Volcano,” by Mark Linsenmayer, recorded in 1992 and released on the album Spanish Armada: Songs of Love and Related Neuroses.
Continuing discussion of David Brin’s novel Existence (without him) and adding Nick Bostrom’s essay “Why I Want to Be a Posthuman When I Grow Up” (2006). Are our present human capabilities sufficient for meeting the challenges our civilization will face? Should we devote our technology to artificially enhancing our abilities, or would that be a crime against nature, a God-play that would probably lead to disaster? With guest Brian Casey.
End song: “Waygo” from The MayTricks (1992).