The paired opposite of reductionism is called emergentism, and in recent years it has begun to gain an increasing number of advocates. In summary, it means that the whole is more than the sum of its parts. Unexpected behaviors and properties can emerge, even from simple, well-understood parts, at high enough levels of organization… Some of the ways emergentists have proposed creating artificial intelligence include building or simulating artificial neural nets, or using quantum computers, which take advantage of wave-particle duality and superposition to perform fuzzy logic. Others reject the entire idea of shortcuts to emulating human intelligence, in favor of simply duplicating the entire fine structure of the human brain in virtual form—something not possible today, but perhaps in the future.
At root, Bostrom’s argument hinges on a single controversial question. Is it possible to truly create or simulate a person? Is there any point, with any level of technology, no matter how advanced, at which this becomes possible?
We left off last week with the question of how much weight we should give to Nick Bostrom’s argument that we are not only possibly simulated, but likely to be so. This argument, or at least our presentation of it, rests on two key claims: first, that our descendants will be able to create people just like ourselves; and second, that they will create a lot of them. The argument is compelling only if both are true.
Although there have been attempts at creating true simulations of intelligence, machines that can learn and respond appropriately to unbounded input, they have not, as of the time of this writing, progressed significantly far in the way of believably duplicating human interactions (although they have mastered tasks as diverse as playing chess, competing on the television game show Jeopardy, and identifying other robots as robots). Are these major steps on the pathway, or deceptive dead ends? Could technology ever improve to the point where it could convincingly simulate, not you perhaps, but other people, in all their deep, multifaceted, and endlessly surprising soulfulness? Is true artificial intelligence, to the point that computers could believably create people, actually achievable?
The technological ability to emulate a convincing world is plausible in the not-so-distant future. We additionally know that the motivation to create one already exists, given the huge popularity of video games and the amount of money and effort put into making them. A big difference, however, between a current-day video game and this potential game of tomorrow is that the player of a current game knows she is playing a game. Could we really be in a game and not know it?
We all have a solipsistic experience nightly, when we sleep and dream. Each night we inhabit a universe which seems to us, convincingly at the time, to have a wealth of external people and places in it. But all of those people and places are created inside our brains solely for the benefit of the dreamer. In the modern world, however, we can place an additional, familiar experience of a solipsistic reality next to that of the dream: the single-player video game.
One of the first things people discovered when modern computing became a reality is that it’s relatively easy to simulate laws of physics, representing aspects of the real world. This theoretically enables an approach to simulation that builds an entire universe from basic building blocks. A quark could be a tiny bit of fundamental matter (whatever that might be), but it could just as easily be a rule programmed into a computer—or perhaps even a coherent thought in the mind of an all-powerful intellect.
In the year 1999 CE, just on the cusp of a new millennium, the then Wachowski Brothers released “The Matrix,” one of the most influential, imitated, and widely discussed movies of its time. It was only four years later, in 2003 CE, that philosopher Nick Bostrom of Oxford University introduced an argument that it is not only possible we are living inside a computer simulation, but actually significantly likely. Although it may have sounded like a high-concept science-fiction thriller, the argument drew upon well-established lines of logic and a widely held series of assumptions.
Did Nick Bostrom, professor of philosophy at Oxford University, provide the first convincing modern proof of the probable existence of God? At first glance it seems more than unlikely. Bostrom—best known for his notorious theory that the world exists only on a giant computer—isn’t a notably religious man. What’s more, philosophers and theologians have argued for thousands of years whether God exists; whether the existence of God can be proven; and whether demonstrating proof of God’s existence is something we should even try to pursue. Despite all this, in the year 2003, when Bostrom published a new theory detailing the strong probability that God does in fact exist, nobody noticed (except David Pearce).
Randolph Bourne died 100 years ago this December at the age of 32. While his legacy lives on, to properly pay homage to his work we must recover the spirit of his prophetic philosophy, which has too often been overlooked or misapprehended.
Chaucer’s philosophical exploration of human nature takes a dark turn in “The Pardoner’s Tale,” a greedy man’s proud confession of his own corruption.
I wanted to remind you, if you’re a fan of the podcast, to go to the iTunes store and leave us a nice rating or a review. I noticed that we now for the first time have a 4 1/2 star overall average instead of a 5 star one. I think this is not uncommon when one’s exposure gets large …
An excerpt from the recently released book Rethinking Health Care Ethics, which explores, for an audience including health professionals, the limits of formal/philosophical ethics in helping them understand the ethical dimensions of their work.
The excerpt focuses on the distinction between formal and informal ethical discourse and the implications of that distinction for day-to-day clinical practice.
In our last two articles, we’ve explored one book in the exciting new field of cognitive science of religion. And we’ve seen how one of the findings in this area is that belief in God, or something like God, is natural to us, given the types of minds we have. Of course, this doesn’t show that one ought to believe …
An excerpt from the recently released book The Character Gap: How Good Are We?, which explores what “character” really means in today’s world and how good our character tends to be.
The excerpt focuses on the powerful impact that empathy can have on helping people in need.
In our last article, we explored some recent findings in the cognitive science of religion (CSR). We saw how current research suggests that belief in God, or something like God, comes naturally to most human beings, most of the time, in virtue of the types of brains we have. I’d like to explore Justin L. Barrett’s arguments on this front …
Can philosophy avoid theoretical speculation to focus solely on the pursuit of the good life, or is that goal inherently problematic? Confining oneself to a particular branch of philosophy is something one should outgrow.
Because the ordinary is always at hand, it is, in fact, too familiar for us to perceive it and become fully aware of it. The ordinary is what most needs to be discovered and yet is something that can never be approached, since to do so is to immediately change it. Art of the Ordinary explores how philosophical questions can be revealed in surprising places—as in a stand-up comic’s routine, for instance, or a Brillo box, or a Hollywood movie.
“Belief in God is an almost inevitable consequence of the kind of minds we have.” —Justin L. Barrett
“Faith is not to be contrasted with knowledge: faith (at least in paradigmatic instances) is knowledge, knowledge of a certain special kind.” —Alvin Plantinga