Third in a series about the intersection between religion and technology. The previous essay is here.
The word “simulation” means an imitation, something that duplicates aspects of something else; it comes from the Latin similis, meaning “like” or “similar.” In computer science, it means the re-creation of a physical object or system in the form of computer-generated data. One of the first things people discovered when modern computing became a reality is that the laws of physics are relatively easy to simulate, allowing computers to represent aspects of the real world.
In a sense, this began long before computers. One of the reasons the mathematical innovations of Galileo and Newton were such significant events in world history was that they made it possible to accurately predict the trajectory of cannonballs for warfare—a pen-and-paper simulation. Rather than having to waste a number of expensive cannonballs in trial-and-error targeting, you could simulate the outcomes with mathematics. Then, when you actually fired the cannonball, you would have a fairly accurate sense of where it would land.
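A pen-and-paper version of that calculation fits in a few lines of modern code. Here is a minimal sketch in Python; the muzzle speed and launch angle are made-up numbers, and air resistance is ignored:

```python
import math

def cannonball_range(speed_mps, angle_deg, g=9.81):
    """Ideal projectile range, ignoring air resistance:
    range = v^2 * sin(2 * theta) / g."""
    theta = math.radians(angle_deg)
    return speed_mps ** 2 * math.sin(2 * theta) / g

# Hypothetical shot: 150 m/s muzzle speed at 30 degrees elevation.
print(f"Predicted range: {cannonball_range(150, 30):.0f} meters")
```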
Simulating things mentally is arguably what we do whenever we imagine a physical interaction. If you pictured, even briefly, the cannonball being fired from the cannon and tracing out a graceful arc, first rising, then falling, on its path of destruction, then you were making a mental simulation. Our mental simulations, however, tend to have limited reach and accuracy.
The main thing the modern computer adds, other than the ability to make calculations millions of times faster than a human being, is the ability to see an accurate real-time visualization of the results. For example, program into your computer the ability to draw circles and straight lines. Draw a line for the ground, and a circle representing a ball. Add in some basic rules of physics, and some reasonable numbers for gravity, the springiness of the ball, and the damping effect of friction, and the computer will output an eerily realistic animation of the ball bouncing.
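A minimal sketch of such a program in Python might look like the following; the values for gravity, springiness (restitution), and the damping drag are plausible placeholders rather than measured constants:

```python
def simulate_bounce(y=10.0, vy=0.0, g=9.81, restitution=0.7,
                    drag=0.01, dt=0.01, steps=2000):
    """Crude bouncing-ball simulation: gravity pulls the ball down,
    a restitution factor models its springiness at each bounce, and
    a small drag term models the damping effect of friction."""
    heights = []
    for _ in range(steps):
        vy -= g * dt              # gravity accelerates the ball down
        vy -= drag * vy * dt      # friction bleeds off a little speed
        y += vy * dt
        if y <= 0:                # the ball has hit the ground
            y = 0.0
            vy = -vy * restitution  # rebound, with some energy lost
        heights.append(y)
    return heights

# Sampled heights: each bounce peaks lower than the last, like a real ball.
print([round(h, 2) for h in simulate_bounce()[::200]])
```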
Simulations can be as fine-grained or as rough-hewn as you choose. Billiard balls are simulated particularly often, because they are large, simple objects designed to behave as consistently as possible. They can be represented with a high degree of accuracy by the computer as a single, unbroken, perfect sphere, a very easy object to simulate.
More complex objects can be significantly harder. The flight of a golf ball is affected both by the pattern of dimples on its surface and by the material composing its core; an accurate simulation would need to account for both, as well as many other factors. In computer-generated imagery (CGI) for movies and video games, materials such as powders, liquids, hair, feathers, fabrics, and metallic alloys can be quite complex and resource-intensive to simulate. All of them, however, can be and are handled by today’s CGI.
In a sense, all of this, and the idea of computerized simulation itself, is a legacy of computer pioneer Alan Turing. In 1936, he conceived of a new kind of machine, the basic concept of which eventually became the modern computer. His machine was very simple: It consisted of an infinitely long strip of tape, divided into discrete squares that could each have a symbol marked on them, a device for reading the symbol on the current square, a way to change that symbol, a way to move the tape in either direction, and an ordered set of rules to follow. In other words, this conceptual computer had a place to store information (the tape), or what we would now call “memory”; a way to accept input (reading the tape); a way to display output (writing to the tape); and a fixed program to execute (the ordered rules the machine starts with).
What Turing proved was that this kind of machine, although very simple, was very powerful. It could be designed to solve any problem of arithmetic, for instance. You just needed a way to encode the input numbers as the proper sequence of symbols, a way to define the operation (addition, subtraction, multiplication, division) as another sequence, and the right set of permanent rules in the machine to respond. This was big news at the time: mathematical calculation was slow, labor-intensive work, and a machine that could perform it reliably and at high speed had innumerable applications, both military and civilian.
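Turing’s conceptual machine is simple enough to sketch in full. The Python below is a generic simulator; the rule table fed into it is a made-up example that performs a small piece of arithmetic, adding one to a number written in unary:

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """A bare-bones Turing machine: a tape of symbols, a read/write
    head, and a fixed table of rules mapping (state, symbol) to
    (symbol to write, direction to move, next state)."""
    cells = dict(enumerate(tape))  # sparse tape, effectively unbounded
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Made-up rule table: add 1 to a number written in unary (a row of
# 1s) by scanning right past the 1s and writing one more at the end.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))  # -> 1111 (three plus one)
```

Feeding the same simulator a different rule table yields a different machine; the mechanism never changes, only the ordered rules it starts with.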
There was yet another level to Turing’s discovery, however. He also proved that a variation on his simple machine, capable of performing these kinds of calculations, could also “simulate” any other machine capable of doing the same work. In essence, what he showed is that there is a certain class of things that we now call “Universal Turing Machines.” Such a machine might be made out of a strip of tape and a reader/writer, as described above. It might be a steam-powered, mechanical “Analytical Engine,” such as the one designed by Turing’s predecessors, Charles Babbage and Ada Lovelace. Or it might be the latest laptop computer. It could be constructed with gears and punch cards, or powered by water dropping into buckets. It could even be a person working out problems by hand with pen and paper, as long as that person accurately and reliably followed the rules. It could function painfully slowly, or incredibly quickly. At a certain conceptual level, however, all such machines are the same. Any one of them, given enough time and resources, could figure out the same problems as any of the others, and any one of them could simulate any of the others.
Thus, any new computer on the market today can be programmed to simulate any other computer (provided the old machine’s memory and processing requirements do not exceed the new machine’s resources). My new HP laptop can simulate an original Apple Macintosh computer from 1984, a Nintendo game console from 1995, a room-sized IBM from the 1960s, or even Turing’s original tape-and-reader combination. Once the old machine’s behavior is reproduced in the new environment, the old software will function exactly as it did on its original hardware.
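The same point can be made with an even smaller toy: a hypothetical machine with one register and three instructions, emulated in a few lines of Python. The “old software” is just data, and it behaves identically wherever the interpreter runs:

```python
def emulate(program, a=0):
    """A toy emulator for a hypothetical one-register machine with
    three instructions: ADD n, SUB n, and PRINT. Any universal
    computer can host an interpreter like this for any other machine."""
    for op, arg in program:
        if op == "ADD":
            a += arg
        elif op == "SUB":
            a -= arg
        elif op == "PRINT":
            print(a)
    return a

# A "program" for the imaginary machine, running unchanged on
# whatever hardware happens to host this Python interpreter.
emulate([("ADD", 7), ("SUB", 2), ("PRINT", 0)])  # prints 5
```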
Things that a Turing machine can handle are classified as “computable,” and given enough time and storage space, any Turing machine can handle any computable problem. So, if figuring out what trajectories elementary particles will take is a computable problem, then a computer can handle it. And interestingly enough, when you get down to that level of fine detail, the simulation task becomes easier again, at least in some ways. This is because modern science tells us that all the myriad substances in the world are composed of a limited number of basic building blocks called elements, and that all elements are made up of the same three subatomic particles: protons, neutrons, and electrons (there are exceptions to these rules, but they are not typically found on Earth outside of a scientific laboratory).
All of the exceedingly complex behaviors and characteristics of macroscopic substances are reducible to the rule-governed interactions of these subatomic particles. Simulating matter at the subatomic level is much harder in one sense, because there are billions and trillions of individual particles and interactions to simulate, but easier in another sense, because the behavior of those particles is well understood and consistent, and because all the complexities of the macroscopic world emerge naturally from their interactions. They do not have to be specifically programmed for; they come along for free.
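Conway’s Game of Life, though not a physics model, is a classic illustration of this kind of emergence: a handful of purely local rules, with larger structures arising that were never programmed in explicitly:

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life. The only rules are local:
    a dead cell with exactly 3 live neighbors is born, and a live
    cell survives with 2 or 3 live neighbors."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": the rules above make this five-cell pattern crawl
# across the grid, though nothing in them mentions movement at all.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = life_step(cells)
print(sorted(cells))  # the same shape, shifted one cell diagonally
```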
It has not proven possible to observe anything below the level of these particles directly, but scientists have identified even more basic entities called quarks, which fit together, three at a time, to form protons and neutrons (quarks are never found in isolation; their existence is inferred from experiment rather than seen directly). Interestingly enough, as far as its knowable presence in the universe is concerned, a quark is nothing more or less than a thing that obeys certain basic rules.
Another way to put this is that all we know (and, theoretically, all we can know) about quarks is that they follow certain basic, well-defined rules of behavior. There is a principle called quantum indeterminacy, which means that, as far as we can tell, there are aspects of what quarks do that are completely random. However, they are random within well-defined and statistically regular parameters. In a given situation, we might not know whether a particular quark is going to do A or B, but we know A is twice as likely as B (and action C is not possible at all).
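Lawful randomness of this kind is trivial to reproduce in code. A sketch, with A and B standing in for the two possible actions:

```python
import random

# On any single trial we cannot say whether the "particle" will do
# A or B, but the statistics are fixed: A is twice as likely as B,
# and a third action C is simply not permitted by the rules.
trials = random.choices(["A", "B"], weights=[2, 1], k=100_000)
print(trials.count("A"))  # close to 66,700
print(trials.count("B"))  # close to 33,300
```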
What this means is that we can accurately simulate quarks, and the elementary particles they compose, just by knowing the rules. Since all we know about quarks is what rules they follow, anything that follows those rules is indistinguishable from a quark, as far as we can tell. By the duck rule (“if it looks like a duck, and swims like a duck, and quacks like a duck, it is a duck”), it may actually BE a quark. Theoretically, a quark could be composed of anything at all, and the macroscopic universe would look and function just the same.
A quark could be a tiny bit of fundamental matter (whatever that might be), but it could just as easily be a rule programmed into a computer—or perhaps even a coherent thought in the mind of an all-powerful intellect. Or, as a memorable cartoon in Randall Munroe’s online comic XKCD proposed, it could be a portion of the progression of a pattern of rocks on an infinite, featureless plain. As long as it somehow encoded the same rules of interaction, it would support the same macroscopic world. This, at any rate, is the concept behind the simulated universe.
- Lake, Adam, Game Programming Gems 8, Cengage Learning PTR, Boston, 2010.
- Leavitt, David, The Man Who Knew Too Much: Alan Turing and the Invention of the Computer, W.W. Norton, New York, 2006.
- Sears, Francis W., Mark W. Zemansky and Hugh D. Young, University Physics, Seventh Edition, Addison-Wesley, Reading, 1987.
- Hawking, Stephen W. and Leonard Mlodinow, The Grand Design, Bantam Books, New York, 2010.
Chris Sunami writes the blog The Pop Culture Philosopher, and is the author of several books, including the social justice–oriented Christian devotional Hero For Christ. He is married to artist April Sunami, and lives in Columbus, Ohio.
Um, I’m confused. I used to do computational physics, and I never found the universe “easy to simulate on a computer.” Sure, we can fake the simple stuff, but even three gravitating objects rapidly lead to chaos if we don’t get all the digits of precision right.
And quantum physics is worse. Yes, there are mathematical rules governing quarks, but we can’t solve them exactly. We can’t even solve them “perturbatively,” using successive approximations. We have to cheat, and hope we didn’t miss anything.
Has anyone actually PROVED we can simulate the universe with less energy than it takes to create one? Or is that just a naive assumption?
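The commenter’s point about three gravitating objects is easy to demonstrate. The crude sketch below (a naive Euler integrator with softening and made-up initial conditions, nothing like a production N-body code) runs two copies of the same system that differ only in the ninth decimal place of one coordinate, and the gap between them typically balloons:

```python
def step(bodies, dt=0.001, G=1.0, soft=1e-6):
    """One naive Euler step of planar Newtonian gravity for unit
    masses. A tiny softening term keeps close encounters finite."""
    acc = []
    for i, (xi, yi, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (xj, yj, _, _) in enumerate(bodies):
            if i != j:
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy + soft) ** 1.5
                ax += G * dx / r3
                ay += G * dy / r3
        acc.append((ax, ay))
    return [(x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)
            for (x, y, vx, vy), (ax, ay) in zip(bodies, acc)]

# Two copies of the same three-body system; system b differs from
# system a only in the ninth decimal place of one x-coordinate.
a = [(0.0, 0.0, 0.0, -0.5), (1.0, 0.0, 0.0, 0.5), (0.5, 1.0, 0.3, 0.0)]
b = [(1e-9, 0.0, 0.0, -0.5)] + a[1:]
for _ in range(20_000):
    a, b = step(a), step(b)
print(abs(a[0][0] - b[0][0]))  # typically vastly larger than 1e-9
```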
Thanks for following the series and offering a substantive critique. You are correct that this represents a “toy box” version of simulation that absolutely wouldn’t hold up to recreating a full version of our universe, at least as it appears to us.
It’s looking ahead a little, but future essays are going to explore this problem and some of the proposed solutions. One of them is the possibility of a universe deliberately made to fool us, where only small parts of it are fully built out, thus requiring less computing power. The idea of a deliberately deceptive universe may seem like a crazy, far-fetched idea, but we already build such universes; we call them “immersive video games.”
The second solution is more controversial. It is based on an idea called the “technological singularity,” which extrapolates current trends in computing power to suggest that we’ll have access to what is effectively unlimited computing power in the near future. Again, this sounds crazy to many people, but there are many others, some with very good scientific reputations, who believe it. However, as I’ll explore further in future installments, belief in the singularity is essentially an article of faith.
Finally, we could potentially abandon Bostrom’s assumptions that our simulators are creatures like ourselves, in a universe like ours, where technology works the way ours does. Of course, doing that raises a whole new set of objections, mainly in the way it flings open the door to all kinds of far-fetched scenarios.
Thanks for reading, and I hope you stick with the series! I welcome any future critiques you have (particularly given your background in the sciences).