Is it possible, given that we still understand so little of the brain, that it has evolved in such a way that it does bridge the gap between the subatomic world and the macroscopic world? Perhaps the free will of the quark is transmitted up through the intermediary of the brain and into the otherwise deterministic macroscopic world. But if this is true, does it preclude the possibility of a truly living simulation? Are the human beings inside the computer doomed to be dead, deterministic automata, lacking the quantum free will of the real ones?
The polar opposite of reductionism is called emergentism, and in recent years it has begun to gain an increasing number of advocates. In summary, it holds that the whole is more than the sum of its parts: unexpected behaviors and properties can emerge, even from simple, well-understood parts, at high enough levels of organization… Some of the ways emergentists have proposed creating artificial intelligence include building or simulating artificial neural nets, or using quantum computers, which take advantage of wave-particle duality and superposition to perform fuzzy logic. Others reject the entire idea of shortcuts to emulating human intelligence in favor of simply duplicating the entire fine structure of the human brain in virtual form, something not possible today, but perhaps possible in the future.
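To make the idea of emergence concrete, here is a minimal sketch, in Python, of Conway's Game of Life (our illustration, not one drawn from the emergentist literature the article surveys): two trivial rules about cell birth and survival, from which coordinated moving patterns arise that were never written into the rules themselves.

```python
from collections import Counter

def step(live_cells):
    """Compute the next generation from a set of live (x, y) cells."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Rule 1: an empty cell with exactly 3 live neighbors is born.
    # Rule 2: a live cell with 2 or 3 live neighbors survives.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider": a 5-cell pattern that walks one square diagonally every
# 4 generations, motion that appears nowhere in the two rules above.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    world = step(world)
print(sorted(world))  # the same glider shape, shifted by (1, 1)
```

The glider's diagonal "motion" exists only at the level of the whole pattern; no individual cell moves at all.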
At root, Bostrom’s argument hinges on a single controversial question: is it possible to truly create or simulate a person? Is there any level of technology, no matter how advanced, at which this becomes possible?
We left off last week with the question of how much weight we should give to Nick Bostrom’s argument that we are not merely possibly simulated but likely to be so. This argument, or at least our reconstruction of it, rests on two key claims: first, that our descendants will be able to create people just like ourselves; and second, that they will create a lot of them. The argument is compelling only if both are true.
Although there have been attempts at creating true simulations of intelligence, machines that can learn and respond appropriately to unbounded input, they have not, as of this writing, progressed very far toward believably duplicating human interaction (although they have mastered tasks as diverse as playing chess, competing on the television game show Jeopardy!, and identifying other robots as robots). Are these major steps along the pathway, or deceptive dead ends? Could technology ever improve to the point where it could convincingly simulate, not you perhaps, but other people, in all their deep, multifaceted, and endlessly surprising soulfulness? Is true artificial intelligence, to the point that computers could believably create people, actually achievable?
The technological ability to emulate a convincing world seems plausible in the not-so-distant future. We also know that the motivation to create one already exists, given the huge popularity of video games and the amount of money and effort put into making them. One big difference between a current-day video game and this potential game of tomorrow, however, is that the player of a current game knows she is playing a game. Could we really be in a game and not know it?
We all have a solipsistic experience nightly, when we sleep and dream. Each night we inhabit a universe that seems to us, convincingly at the time, to contain a wealth of external people and places. But all of those people and places are created inside the dreamer’s own brain, solely for the dreamer’s benefit. In the modern world, however, we can place an additional, familiar experience of a solipsistic reality next to that of the dream: the single-player video game.
One of the first things people discovered when modern computing became a reality is that it is relatively easy to simulate laws of physics that represent aspects of the real world. This theoretically enables an approach to simulation that builds an entire universe from basic building blocks. A quark could be a tiny bit of fundamental matter (whatever that might be), but it could just as easily be a rule programmed into a computer, or perhaps even a coherent thought in the mind of an all-powerful intellect.
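As a concrete illustration of a physical law as “a rule programmed into a computer,” here is a minimal Python sketch (ours, offered only as an illustration): constant Newtonian gravity applied to a falling body, one tick of simulated time at a time.

```python
# The "law": constant gravitational acceleration, applied each tick.
G = 9.81    # acceleration in m/s^2
DT = 0.01   # duration of one tick of simulated time, in seconds

def step(position, velocity):
    """Advance the tiny simulated world by one tick (Euler integration)."""
    velocity -= G * DT          # the rule updates velocity...
    position += velocity * DT   # ...and velocity updates position
    return position, velocity

# Drop an object from 100 m and let the rule generate its entire history.
pos, vel = 100.0, 0.0
while pos > 0.0:
    pos, vel = step(pos, vel)
print(f"Impact velocity: {vel:.1f} m/s")  # roughly -44.3 m/s
```

At this level of description, nothing distinguishes the simulated falling body from a real one except what is doing the bookkeeping.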
In the year 1999 CE, just on the cusp of a new millennium, the then Wachowski Brothers released “The Matrix,” one of the most influential, imitated, and widely discussed movies of its time. Only four years later, in 2003 CE, philosopher Nick Bostrom of Oxford University introduced an argument that it is not only possible that we are living inside a computer simulation but actually significantly likely. Although it may have sounded like the premise of a high-concept science-fiction thriller, the argument drew upon well-established lines of logic and a widely held set of assumptions.
Did Nick Bostrom, professor of philosophy at Oxford University, provide the first convincing modern proof of the probable existence of God? At first glance it seems more than unlikely. Bostrom, best known for his notorious theory that the world exists only on a giant computer, isn’t a notably religious man. What’s more, philosophers and theologians have argued for thousands of years over whether God exists, whether the existence of God can be proven, and whether proving God’s existence is something we should even attempt. Despite all this, when Bostrom published a new theory in 2003 detailing the strong probability that God does in fact exist, nobody noticed (except David Pearce).