This is a 31-minute preview of a 2-hour, 20-minute episode.
Discussing articles by Alan Turing, Gilbert Ryle, Thomas Nagel, John Searle, and Dan Dennett.
What is this mind stuff, and how can it "be" the brain? Can computers think? No? What if they're really sexified? Then can they think? Can the mind be a computer? Can it be a room with a guy in it that doesn't speak Chinese? Can science completely understand it? ...The mind, that is, not the room, or Chinese. What is it like to be a bat? What about a weevil? Do you even know what a weevil is, really? Then how do you know it's not a mind? Hmmmm? Is guest podcaster Marco Wise a robot? Even his wife cannot be sure!
We introduce the mind/body problem and the wackiness that it engenders by breezing through several articles, which you may read along with us:
1. Alan Turing’s 1950 paper “Computing Machinery and Intelligence.”
2. A chapter of Gilbert Ryle's 1949 book The Concept of Mind called "Descartes' Myth."
3. Thomas Nagel's 1974 essay "What Is It Like to Be a Bat?"
4. John Searle's Chinese Room argument, discussed in a 1980 piece, "Minds, Brains, and Programs."
5. Daniel C. Dennett's "Quining Qualia."
Some additional resources that we talk about: David Chalmers's "Consciousness and its Place in Nature," Frank Jackson's "Epiphenomenal Qualia," Paul Churchland's Matter and Consciousness, Jerry Fodor's "The Mind-Body Problem," Zoltan Torey's The Crucible of Consciousness,
and the Stanford Encyclopedia of Philosophy's long entry on the Chinese Room argument.
End Song: "No Mind" from 1998’s Mark Lint and the Fake Johnson Trio; the whole album is now free online.
Wes is my hero in this episode. Way to say f-ck no! to that bullshit, man. It was refreshing to have someone express what I was thinking. (Namely that qualia, i.e. experiences, really exist, and brain states do not explain them.)
In fact I would go further: there is *nothing but* qualia (or anyway perceptions, emotions, and thoughts) arising in this moment. If you tell me about atoms, that is (a) some concepts and thoughts arising now, and it relates to (b) experiments that we perceive through some sense data.
Consciousness and the phenomena that arise in it are all we know and ever have and ever will.
How is the Chinese Room different from the semantic emptiness of the rest of existence? I receive strange inputs and use whatever rulebook I trust or am compelled to use … and I react as best I can. Am I real? The Chinese Room is less a response to Turing than a metaphor for how we live within the room of our skulls adrift in a meaningless cosmos.
In any case, computer programs and the machines they run on happen to do a great deal more than answer anticipated questions. Their joints are syntactic, but they can attach semantic meaning to sensory input and invent new signifiers that are just as apt as ours. The rest is a difference of dexterity and complexity, not of kind. Meanwhile, Searle has said the mind-body connection will one day be solved; we just don’t have the necessary biological knowledge yet. But he rejects the same “wait and see” reply when the problem is phrased computationally instead of biologically. How can he do that? Most of our biological experimentation today requires computation first.
http://www.thecritique.com/news/the-imitation-game-the-philosophical-legacy-of-alan-turing/
I was so eager to hear you guys’ take on these topics that for the first time ever, I shelled out the dough to buy the episode on iTunes (I do give you money every month, but only a buck, so I normally just listen to the free episodes).
I was a little disappointed right off the bat that you talked about how you had trouble finding anything more recent than the 1990s on the subject, which immediately let me know that you hadn’t read Douglas Hofstadter’s brilliant “I Am a Strange Loop”. I also see that no one has mentioned Hofstadter in the comments, but there is a trackback that leads me to learn that Mark subsequently wrote a blog entry on the topic. Very cool–I will have to read that forthwith.
As for the Chinese Room thought experiment, I find it maddening that so many smart people take it seriously. I agree that it essentially begs the question; but there is one fundamental reason I don’t think I quite heard any of you articulate. Namely, that (in my view at least), for a thought experiment to be useful, we have to be able to actually imagine that the scenario depicted could happen, at least in theory.
But the problem here is that even if we partially fudge this and say that the book can be infinitely long/large, and we either have it memorized or we don’t have to worry about how long it takes for us to give a response, the whole idea is still incoherent. Even an actual god could not write a book that a human could use to engage in conversation (at least, not any kind of sophisticated conversation) in a language he or she does not understand. That’s not how language works, and it’s not how the thought process to engage in conversation with someone works.
So fundamentally, Searle’s idea of “simulated intelligence” wouldn’t work, and therefore one cannot draw any conclusions from the fact that there is no conscious understanding of the interaction on the part of the person in the room exchanging symbols with the help of the book.
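To make that concrete: the rulebook Searle imagines is essentially a giant lookup table from input symbol strings to output symbol strings. Here is a toy sketch in Python of what following such a book amounts to, and of where it breaks down (the names RULEBOOK and chinese_room are mine, purely illustrative, not anything Searle describes):

# Toy "Chinese Room": the operator follows a finite rulebook mapping
# input symbol strings to output symbol strings, with no understanding.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    # A finite table cannot cover the infinitely many novel sentences
    # of a real language; anything unanticipated simply has no rule.
    return RULEBOOK.get(symbols, "？")

print(chinese_room("你好吗？"))                  # looks fluent
print(chinese_room("昨天的雨让你想起什么？"))    # novel input: the room falls silent

However long you make the table, a speaker can always produce a sentence it doesn’t contain, which is why I don’t think the scenario can actually be imagined through to completion.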
If and when there is AI on the level that would pass a rigorous Turing test with Chinese speakers (or anyone else), it will not just be providing preprogrammed responses to the input it receives. There are an infinite number of novel sentences in any language, and an AI’s ability to respond to them as nimbly as a human does will involve something like a flexible neural network that can learn as it goes and come up with novel utterances itself.