Seventh in an ongoing series about the places where science and religion meet. The previous episode is here.
We left off last week with the question of how much weight we should give to Nick Bostrom’s argument that we are not only possibly simulated, but likely to be so. This argument, or at least our representation of it, rests on two key claims: first, that our descendants will be able to create people just like ourselves; and second, that they will create a lot of them. The argument is compelling only in the case that both are true.
The reasoning goes as follows. Suppose you could line up all the people who have ever existed, and all the people who will ever exist. Setting aside the obvious impossibilities of this scenario, what could we say about a random person X chosen from this group? Probabilistically speaking, a randomly selected person is likely to be characteristic of the group as a whole. Thus, if 90 percent of the people in the line are right-handed, then the random person is 90 percent likely to be right-handed. If 90 percent of the people identify as heterosexual, then the random person is 90 percent likely to identify as heterosexual. However, the probability of the random person being both right-handed and heterosexual-identified is only 81 percent (the product of the two probabilities, assuming they are entirely independent).
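The multiplication at the end of that paragraph is easy to check numerically. A quick sketch (the variable names are my own, and independence of the two traits is assumed, as in the text):

```python
# Two traits, each held by 90 percent of the lineup.
p_right_handed = 0.9
p_heterosexual = 0.9

# Assuming the traits are entirely independent, the chance a random
# person has BOTH is the product of the two probabilities.
p_both = p_right_handed * p_heterosexual
print(round(p_both, 2))  # 0.81
```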
According to Bostrom’s logic, you should view yourself as a random person selected from the line. In general, all other things being equal, any given trait you have is likely to be a trait shared with the majority of other people. (This does not imply, however, that every trait you have is shared with the majority of other people.) Given that, Bostrom claims, you are entitled to infer things about the majority of people from your own personal traits.
The tricky thing about this, however, is whether you are actually legitimately allowed to consider yourself as chosen at random. What is your reference group? If the imaginary lineup included not just people, but also ducks, would you be justified in saying that, based on the fact that you are human, that most creatures in the line are likely to be human? Or does the fact that you are able to ponder the question at all predetermine that your proper reference group is humans only, not humans and ducks?
This may seem like an absurd detour, but it is founded in a well-established school of thought called Bayesian logic, which uses the laws of probability to yield unexpected and sometimes counterintuitive conclusions about the world. For example, if I have a box with three red balls and one green ball in it, traditional probability tells us that my chances of randomly drawing a red ball out of the box are three out of four, or .75 (75 percent). On the other hand, Bayesian logic tells me that if I reach into a box, knowing nothing about it other than that it is filled with different colored balls, and draw out a red ball, then I am justified in expecting that the box originally had more red balls than any other color (or at least as many). After all, if the box had only one red ball and a hundred white balls, it would be very odd that I had somehow managed to draw the one red one.

Mathematically speaking, we would say that the probability of my hypothesis (there were at least as many red balls as balls of any other color in the box) being true, given the evidence (I drew out a red ball after a random choice), is equal to the probability of the evidence, given the hypothesis (in this case, if there were at least as many red balls as any other color, then it would be likely I would draw out a red one), multiplied by the prior probability of the hypothesis itself, divided by the overall probability of the evidence considered by itself. It is a little tricky to put exact numbers on this when I do not know the total number of balls. With a little tweaking, however, we can come up with a more exact scenario.
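Stated symbolically, this is Bayes' rule, where H stands for the hypothesis and E for the evidence:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```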
As in our original example, let us have a box with four balls in it. This time, each ball might be either white or red, with the two colors equally likely. I reach in and randomly draw out a red ball. Now let us consider the hypothesis that three of the four balls originally in the box were red. According to Bayes, the probability of my hypothesis being true, given the evidence, is ( ¾ x ¼ ) / ½ = ⅜. We already knew the probability of my evidence, given the hypothesis, was ¾. The hypothesis itself was true in ¼ of the possible scenarios: with each ball independently white or red, there are sixteen equally likely compositions, from all white to all red, and exactly four of them contain three red balls. And the overall probability of drawing a red ball, averaged over all those scenarios, was one half. So the overall probability of the hypothesis being true, given the evidence, is ⅜, or a bit more than a third.
If my hypothesis were all red balls, then the probability of the evidence, given the hypothesis, would go up to 1, but the probability of the hypothesis itself would go down to 1/16 (one composition out of sixteen), so the final probability of the hypothesis, given the evidence, would be one eighth, or one third as probable as the first hypothesis. On the other hand, if my hypothesis were all white balls, then the probability of the evidence, given the hypothesis, would be zero, so there would be no chance of the hypothesis being true, given the evidence. That makes sense.
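The worked example above can be verified by brute force: list every equally likely composition of the box, then apply Bayes' rule by counting. A sketch, using exact fractions to avoid rounding:

```python
from itertools import product
from fractions import Fraction

# All 2^4 = 16 equally likely boxes of four balls, each ball
# independently red ('R') or white ('W').
boxes = list(product('RW', repeat=4))

def p_draw_red(box):
    """Probability of drawing a red ball from this box at random."""
    return Fraction(box.count('R'), len(box))

# P(evidence): the chance of drawing red, averaged over all boxes.
p_red = sum(p_draw_red(b) for b in boxes) / len(boxes)

def posterior(num_red):
    """P(box held exactly num_red red balls | a red ball was drawn)."""
    matching = [b for b in boxes if b.count('R') == num_red]
    p_hyp = Fraction(len(matching), len(boxes))  # prior on the hypothesis
    p_ev_given_hyp = Fraction(num_red, 4)        # likelihood of drawing red
    return p_ev_given_hyp * p_hyp / p_red

print(p_red)         # 1/2
print(posterior(3))  # 3/8
print(posterior(4))  # 1/8
print(posterior(0))  # 0
```

The three printed posteriors match the essay's figures: three-of-four red is three times as likely as all red, and all white is ruled out entirely.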
Bayesian logic is pretty airtight mathematically, and aligns well with intuition for simple, well-understood systems like the one in the example. But it takes us to some odder places when we start thinking “outside the box.” For example, consider the “anthropic principle,” an attempt to explain why the planet we live on is so exactly perfect for life. If the Earth had been a little hotter, or a little colder, life on Earth could never have survived. Isn’t it extraordinarily unlikely that the conditions here would be so perfect?
The anthropic principle says “no.” Given that we are around to ponder the question, we know human beings exist; given that human beings exist, we know life exists; and given that life exists, we know it must exist in a place that is suitable for life. So, given that life exists, it is necessary that the Earth (or a place like it) exists. In Bayesian terms, the probability of the evidence, were the hypothesis false, is nil. So the hypothesis, given the evidence, is not just overwhelmingly likely; it is necessarily true.
One might note that this gets us no closer to understanding why life exists, or how probable it is that life exists at all. It does, however, demonstrate that the finely tuned appropriateness of the Earth for life and the presence of life on Earth cannot properly be considered as separate miracles; the fact of the second entails the first, and to consider the miracle of the second is therefore already a contemplation of the first.
The anthropic principle is widely accepted, if somewhat unsatisfying. A more controversial application of Bayesian logic is the one we have already seen, the one that applies to personal traits. It starts with the idea that the event of your birth is like a ball being drawn out of a box. All other things being equal, we expect the ball’s color to match that of the majority of balls. Accordingly, all other things being equal, you should expect your personal traits to match those of the majority of people. But what if you lack one or more of those majority traits? Are you just a statistical anomaly, an outlier?
This is where the idea of a reference group comes in. We assume the balls in the box are chosen randomly. But if you look in the box and seek out the green ones, then you are guaranteed a green one (if there is at least one in the box), no matter how few of them there are. Similarly, if you are pre-selected in some way as a person, your statistical normalcy is ruined. To read this post, for example, you almost certainly have to be a literate reader of English. So even though the Chinese are the largest ethnic group in the world, they are a minority among English speakers. They are not the majority in the reference group.
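The difference between random drawing and pre-selection is easy to demonstrate. A sketch, with illustrative numbers of my own choosing (one green ball among a hundred):

```python
import random

# A box with 1 green ball and 99 red ones (illustrative numbers).
box = ['green'] + ['red'] * 99

random.seed(0)  # fixed seed so the experiment is repeatable

# Drawing at random: green shows up only about 1 percent of the time.
draws = [random.choice(box) for _ in range(10_000)]
print(draws.count('green') / len(draws))  # roughly 0.01

# "Looking in the box and seeking out the green ones": the draw is
# pre-selected, so green is guaranteed no matter how rare it is.
sought = next(b for b in box if b == 'green')
print(sought)  # green
```

The second draw says nothing about the composition of the box, which is exactly why a pre-selected observer cannot treat themselves as a random sample.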
So, in order for Bostrom to conclude that it is likely that we are living in a simulation, two things must be true. First, humans and simulated humans must form one, united reference group: they must be enough of the same thing that we cannot say, a priori, whether a person chosen at random is a real human or a simulated one. In other words, there cannot be two reference groups of humans, one inside the computer and one outside. There must be one reference group, made up of two sets of people, where which set a person belongs to is not something we can tell by examining the person. The second thing that must be true is that there must eventually be more simulated humans, in total, than non-simulated ones, counted across all of history and the future.
Let us assume, for the sake of argument, that if we could simulate people we would, and if we can someday, we will, and furthermore, that we will simulate as many of them as we can. If we accept those assumptions, Bostrom’s argument then rests entirely on the questions of whether it ever will become possible for us to fully simulate humans, and if so, will it ever be possible for us to simulate a large number of them.
It might seem like the answer to the first entails the answer to the second, but it is not quite so. True, if the answer to the first is “no,” the second is obviously “no” as well. It is possible, however, that we could fully simulate one human being and still not be able to simulate very many of them, because of limited resources. If it turns out to be highly resource-intensive to simulate human beings, perhaps we will only ever be able to simulate one at a time. Then, by Bostrom’s own logic, it would be very unlikely that any randomly selected person would be simulated. So unless we can someday simulate large numbers of people, it is unlikely, by Bostrom’s own argument, that you or I might be simulated.
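The arithmetic behind that last point is just a ratio: the chance that a randomly selected person is simulated is the simulated share of the combined reference group. A sketch, where all the specific numbers are illustrative assumptions (the 100 billion figure is a commonly cited rough estimate of all humans who have ever lived):

```python
def p_simulated(n_simulated, n_real):
    """Chance a random member of the combined group is simulated."""
    return n_simulated / (n_simulated + n_real)

n_real = 100e9  # rough estimate of all humans ever born (assumption)

# If resource limits mean only a handful of simulated people ever
# exist, a randomly selected person is almost certainly real:
print(p_simulated(1_000, n_real))  # roughly 1e-8

# If simulations vastly outnumber real people, the reverse holds:
print(p_simulated(1e15, n_real))   # roughly 0.9999
```

Bostrom's conclusion only follows in the second regime, which is why the argument needs large-scale simulation, not merely the possibility of simulation.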