Eighth in an ongoing series about the places where science and religion meet. The previous episode is here.
At this point we have delayed the crux of the matter long enough. At root, Bostrom’s argument hinges on a single controversial question: Is it possible to truly create or simulate a person? Is there any level of technology, however advanced, at which this becomes possible?
The first use of the word “robot” was in _R.U.R._, a 1920 play by the Czech writer Karel Čapek, but the concept of artificially created life, or mechanical imitations of life, has ancient roots. The ancient Greeks had the myth of Pygmalion, the sculptor whose statue comes to life as an idealized woman, while the ancient Egyptians and Chinese are said to have actually built mechanical birds and animals capable of movement. The modern idea of artificial intelligence, however, traces most directly back to the brilliant philosopher and father of modern computing, Alan Turing.
Turing believed that human intelligence was nothing more than a series of advanced calculations by a complex organic computer, the human brain; different in scale from the calculations performed by his computers, but not different in kind. A sufficiently advanced computer, programmed properly, could think in a way no different from how a human being thinks. In other words, he believed the human brain to be a Turing Machine.
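To make that notion concrete: a Turing Machine is an abstract device that reads and writes symbols on a tape, one cell at a time, following a fixed table of rules, and Turing showed that such a device can in principle carry out any computation. Here is a minimal sketch in Python; the rule table, which merely flips the bits on the tape, is an invented example for illustration, not anything drawn from Turing’s own work.

```python
# Minimal Turing machine simulator (an illustrative sketch; the rule
# table below is a hypothetical example, not from Turing's writings).

def run_turing_machine(tape, rules, state="start", pos=0):
    """Run until the machine enters the 'halt' state.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay), or +1 (right).
    """
    tape = list(tape)
    while state != "halt":
        # Read the symbol under the head; off-tape cells read as blank.
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        new_symbol, move, state = rules[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = new_symbol  # write
        pos += move                 # move the head
    return "".join(tape)

# Example rule table: flip every bit, halt on reaching the blank end.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine("01101", rules))  # -> 10010
```

Trivial as the example is, the claim Turing staked out is that nothing in principle separates this kind of rule-following from what a brain does, only the scale and subtlety of the rules.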
In defense of this claim, Turing devised a famous thought experiment that took a pragmatic approach to the entire concept of human thought, under what we have previously referred to as the “duck principle,” well known to most people: “If it looks like a duck, and walks like a duck, and quacks like a duck, it’s a duck.” In other words, what defines “duckhood” is having all the characteristics of a duck. Similarly, Turing held that what defines conscious intelligence is simply the ability to appear conscious and intelligent. Hence the famous Turing Test: if a computer could carry on an extended conversation with a human being without the human being ever suspecting it was a computer, there would be no reason to deny that the computer was intelligent.
Computers compete every year in contests organized around the Turing Test, and many people still subscribe to the theory that intelligence is nothing other than the right computer with the right programming. There has also, however, been resistance to the idea. The philosopher John Searle formulated one of the most famous counterarguments, called the Chinese Room.
The idea of the Chinese Room is this: Picture a room with a person concealed inside. Writing in Chinese is passed into the room, and writing in Chinese is passed back out. Based on getting appropriate responses to the questions they pass in, Chinese speakers outside conclude that the person inside the room must understand Chinese. The analogies to the Turing Test are obvious and intentional. In the Turing Test, the judges converse with an unknown interlocutor and try to determine whether that entity is human or machine. In the Chinese Room, the judges try to determine whether there is a speaker of Chinese inside the room.
As it turns out, the person inside the room speaks no Chinese at all, and is armed only with examples of writing to copy and a huge book of rules covering a wide range of possible input-to-output mappings. When she gets an incoming question, she simply looks it up in the book, copies out the appropriate response, and sends it back. Although it may be far-fetched to expect any set of rules to cover all possible question-and-answer combinations, it is (and this is the important part) no more far-fetched than expecting any program to encode all possible interactions in the Turing Test setup.
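To see how little understanding the rule book demands of its operator, consider this toy sketch in Python. The phrases and canned responses are invented examples of mine, not Searle’s; the point is only that pure symbol lookup can return appropriate answers with no comprehension anywhere in the loop.

```python
# A toy version of Searle's rule book as a lookup table (a sketch;
# the phrases below are invented examples, not from Searle's paper).

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice."
}

def chinese_room(incoming: str) -> str:
    # Pure symbol matching: look up the input, copy out the response.
    # Nothing here "knows" Chinese, yet the answers are appropriate.
    return RULE_BOOK.get(incoming, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```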
Here, Searle argues, an understanding of Chinese is simulated (or, more properly, emulated), but the person in the room does not know Chinese. He considers it nonsensical to claim that the book of rules “knows” Chinese, or that some noncorporeal spirit in the room knows Chinese. The program is just faking impressive results. One might as well claim that an encyclopedia is intelligent, or that Wikipedia is passing the Turing Test, because it can provide appropriate answers to a wide range of questions. This is artificial intelligence of the sort displayed by Siri, the digital assistant, or, at a grander scale, by Watson, the IBM computer successfully programmed to win the game show _Jeopardy!_: impressive, but not remotely human. True intelligence, so the claim goes, is not just clever programming scaled up. It is different not just in scale but in kind. But what would constitute such a difference?
One key aspect of Turing’s concept of computing is that it is profoundly reductionist. Reductionism is a philosophical orientation claiming that all complex phenomena can be completely and adequately understood as rule-based extensions of simpler and more fundamental phenomena, entities, and behaviors. Thought can be reduced to computations, computations can be reduced to computer logic, and computer logic can be reduced to Boolean choices over whether a particular bit is on or off at any given moment in time.
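That chain of reductions can be made concrete. The sketch below rebuilds ordinary addition, step by step, out of nothing but Boolean operations on individual bits, which is essentially what the adder circuit in any computer does. The function names are mine; the logic is the standard full adder.

```python
# Addition reduced to Boolean logic on bits: the standard full adder,
# written out in Python for illustration.

def full_adder(a: int, b: int, carry: int) -> tuple:
    """Add three bits; return (sum_bit, carry_out) using only AND/OR/XOR."""
    sum_bit = a ^ b ^ carry                      # XOR
    carry_out = (a & b) | (carry & (a ^ b))      # AND, OR
    return sum_bit, carry_out

def add(x: int, y: int, width: int = 8) -> int:
    """Add two non-negative integers one bit at a time, as hardware does."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(19, 23))  # -> 42, computed purely from on/off choices
```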
Reductionism has long been the dominant outlook in science: the drive to explain ever more complex and diverse worldly phenomena by recourse to ever simpler and more unified theoretical foundations. Thus, sociology is held to reduce to psychology, psychology to biology, biology to chemistry, chemistry to physics, physics to molecular physics, molecular physics to atomic physics, atomic physics to particle physics, and particle physics to quantum mechanics. Understand the properties of enough quarks gathered together, so the thinking goes, and you understand everything of importance and significance about the world. This has been a long-held truism animating the scientific world for centuries. But when it comes to the human mind, an increasing number of people are questioning whether it is merely a truism, or whether it is actually true.
REFERENCES
Cole, David, “The Chinese Room Argument,” _The Stanford Encyclopedia of Philosophy_ (Winter 2015 Edition), Edward N. Zalta (ed.).
Van Riel, Raphael, and Van Gulick, Robert, “Scientific Reduction,” _The Stanford Encyclopedia of Philosophy_ (Winter 2016 Edition), Edward N. Zalta (ed.).
Chris Sunami writes the blog The Pop Culture Philosopher, and is the author of several books, including the social justice–oriented Christian devotional Hero For Christ. He is married to artist April Sunami, and lives in Columbus, Ohio.
It’s not just that AI falls short of the liberal-humanist or Cartesian model of the mind. The liberal-humanist model itself may fall short of reality.
The claim that human thought is irreducibly free, or that it consists of some substance separate from material reality, runs into similar trouble: it is either equally reductionist in its own way or fundamentally incompatible with physical law. Moreover, as mentioned, there is mounting evidence that human agency may lie far closer to a deterministic system than previously thought. The “hard problem” of consciousness has certainly not been solved, but it is an inescapable fact that minds require brains. When a brain dies, consciousness goes wherever light goes when the lightbulb is turned off; there is no evidence to support the view that that is anywhere.
Again, it’s not so much that AI is becoming more human as that our understanding of the brain is tending increasingly toward mechanism. From a functionalist perspective, the differences between brain and machine are not even particularly important for the question of the basis of consciousness. If it quacks like a duck, walks like a duck, behaves like a duck, and thinks like a duck, what else is there to being a duck?
To be sure, there are qualia, the irreducibly subjective qualities of mental experience. But there is no reason to infer from the existence of qualia that human consciousness has some transcendental origin. Even modern panpsychism, which in a sense updates Descartes by locating consciousness outside the body, merely relocates agency into ordinary matter.
Now, don’t mistake me: I am a staunch defender of the liberal-humanist model of agency and consciousness. Once that model is untethered from assumptions of a transcendental basis for consciousness, it is all too easy to slide from viewing minds as machines to viewing people as commodities, tools, or worse.
But there’s no going back to the garden. We will have to invent our souls for ourselves. Which, I would argue, is precisely what pop culture representations of AI have done since _Frankenstein_.