Podcast: Play in new window | Download (Duration: 31:19 — 28.7MB)
This is a 31-minute preview of a 2 hr, 20-minute episode.
Discussing articles by Alan Turing, Gilbert Ryle, Thomas Nagel, John Searle, and Dan Dennett.
What is this mind stuff, and how can it "be" the brain? Can computers think? No? What if they're really sexified? Then can they think? Can the mind be a computer? Can it be a room with a guy in it that doesn't speak Chinese? Can science completely understand it? ...The mind, that is, not the room, or Chinese. What is it like to be a bat? What about a weevil? Do you even know what a weevil is, really? Then how do you know it's not a mind? Hmmmm? Is guest podcaster Marco Wise a robot? Even his wife cannot be sure!
We introduce the mind/body problem and the wackiness that it engenders by breezing through several articles, which you may read along with us:
1. Alan Turing’s 1950 paper “Computing Machinery and Intelligence.”
2. A chapter of Gilbert Ryle's 1949 book The Concept of Mind called "Descartes' Myth."
3. Thomas Nagel's 1974 essay "What Is It Like to Be a Bat?"
4. John Searle's Chinese Room argument, discussed in a 1980 piece, "Minds, Brains and Programs."
5. Daniel C. Dennett's "Quining Qualia."
Some additional resources that we talk about: David Chalmers's "Consciousness and its Place in Nature," Frank Jackson's "Epiphenomenal Qualia," Paul Churchland's Matter and Consciousness, Jerry Fodor's "The Mind-Body Problem," Zoltan Torey's The Crucible of Consciousness, and the Stanford Encyclopedia of Philosophy's long entry on the Chinese Room argument.
End Song: "No Mind" from 1998’s Mark Lint and the Fake Johnson Trio; the whole album is now free online.
Marco Wise says
I don’t think I’m a robot, but I’ll double check with my programmers.
Another hilarious and thoughtful episode!
Anyway, you’re right that probably nobody reads the papers beforehand, so it might be best to accept this reality and speak about the issues from a more educational and accessible viewpoint: that of assuming nobody has read the papers, or is even aware of the ideas and debates involved. Lots more orientation would help those of us whose exposure to the subject matter is very limited.
Tom Corwin says
I’m surprised that you don’t refer to David Chalmers’s book The Conscious Mind. He’s got some highly entertaining ideas about the mind/body problem, and seems to be a totally erudite philosopher. I like the arguments he makes for panpsychism!
(Although I was sort of shocked to see his unconventional haircut and wacky clothes).
Mark Linsenmayer says
Yes, there was way too much to talk about, though I did read part of that and mention it somewhere near the end. We’ll definitely have more mind-related talk in the future, and Chalmers is a prime guy to cover.
Jon Nixon says
Thanks for this podcast – I thought this was the best one so far (probably because I’m an IT Engineer, not a philosopher 😉)
Someone implied that Dennett’s idea of the Cartesian theatre was a straw man and that no one believes in the homunculus… but I don’t quite follow that. Isn’t the Cartesian theatre exactly what any dualist *must* believe in because they have to draw a line somewhere and have information passing backwards and forwards between the body and the mind?
Another thing you started to touch on in the podcast was the problem of defining terms – what are “mind”, “thought”, “consciousness”, “intelligence” and “understanding” and how are they different? It sounds to me as if the Turing test is about “intelligence” and Searle’s Chinese Room argument is about “understanding”, and neither are about “consciousness” – so none of them directly relate to each other.
Mark Linsenmayer says
Re. the terminology issue, I think the intuition is that the reason that we care about all these terms is the same, i.e. what qualifies someone to stand in a moral relation with us, so presumably something passing the Turing test, if it didn’t have an inner life, is still something we could turn off or destroy without guilt. On one of the blog topics here, I’ve argued for the possibility that maybe consciousness as we conceive it isn’t necessary for having emotions and suffering which themselves might be sufficient for morality, but that’s not a view we dove into. So, I take your point, but I also think we were fine to gloss over it; it’s important not to semi-arbitrarily lay out definitions at the beginning of your account if you want to actually be getting at what’s important; coming up with the definition becomes the philosophical project itself (a la Socrates). I’ll let Wes respond re. the Cartesian theater, though he has put some comments about this on my earlier blog post here on Dennett and I think the one on Chalmers too.
Brian Loftus says
I haven’t read all the literature on the Chinese room thought experiment, but what kept coming to mind was this:
If the person in the room sat there and read through all his/her notes in English and compared them to the symbols, he/she would begin to come to an understanding of what they are doing in the symbol manipulation by dint of “doing it”, or at the very least comparison. Eventually they would derive semantic meaning from the syntax. While some of their assumptions would be wrong, for the most part they would be right in gathering understanding through context.
Whether they get an understanding at the immediate moment of pushing out the symbol or from years of studying all the symbols by comparing them to the English instructions, the result is still the same: a blossoming of understanding. Wouldn’t this break down Searle’s assumption that the person has no semantic understanding, or does Searle leave some room for this learning curve? And if there is a learning curve made from doing symbol manipulation day in and day out, doesn’t that defeat the point he is making, because this learning possibility is a criterion for consciousness?
Jon Nixon says
Hi Brian –
I think the point of the Chinese Room is that the man is like a CPU. The argument has force because the CPU has no grasp of semantics – it simply manipulates symbols.
I don’t think the man could learn Chinese because he has no idea what *any* of the characters mean, so he has no starting point.
But even if there were clues and he eventually learned Chinese, that would be because he already had a grasp of semantics in another language. A CPU couldn’t do that.
So, it doesn’t defeat the argument, it just means that the experiment doesn’t stretch that far.
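Jon’s point about pure symbol manipulation can be made concrete with a toy sketch. Everything here is invented for illustration: the “rule book” is a tiny lookup table, and nothing in the program represents what any symbol means, only which string of symbols follows which.

```python
# A toy "Chinese Room": pure symbol manipulation via a rule table.
# The rules and symbols are invented for illustration; nothing here
# encodes what any symbol *means*, only which symbol follows which.
RULES = {
    "你好吗": "我很好",  # the program never "knows" these are greetings
    "你是谁": "我是人",
}

def room(input_symbols: str) -> str:
    """Return the output symbols dictated by the rule book."""
    # The fallback is just another meaningless symbol to the room.
    return RULES.get(input_symbols, "对不起")

print(room("你好吗"))  # emits 我很好 with no grasp of semantics
```

However large the table grew, the program’s relationship to the symbols would stay the same: syntax in, syntax out, with the semantics living only in whoever wrote the rules.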
I’m pretty sure the argument fails anyway – if the room were to pass the Turing test it would have to give consistent and sensible answers to any question you could think of, including introspective ones. It would have to be clear that the machine was learning, and had personal memories, relationships, opinions and goals which evolved over time. When it got that complicated, how could you be sure the entire system (not just the man acting as a CPU) didn’t understand what it was talking about?
The same is true of zombies. The idea is that they act like us but are not conscious, and if that’s possible it means consciousness is “something extra”. Well, that would be true if you accept that it’s possible – personally I can’t imagine something being able to act in that way without being conscious, so I think the basic premise is false.
Seth Paskin says
Not that it’s directly related, but check this out:
It’s the annual competition to see if anyone can write a program to pass the Turing test. No one ever has, but they give a consolation prize to the best effort. Take a look at the rules and transcripts from previous competitions to get a sense of what AI types think the whole thing is about and, more importantly, what kinds of questions they think they can ask to determine whether the interlocutor is a machine or a person. Not surprisingly, memory and knowledge of current events/culture figure strongly.
Jon Nixon says
Hi Seth – I think that was directly related – and wow – when you see those transcripts you realize how far they still have to go with this! And it brings home John Searle’s point about syntax and semantics.
Another issue I have with this is that what the Turing Test is looking for is humanoid intelligence. I’m sure my dog is intelligent and conscious – but she couldn’t pass the Turing Test. Also, in the podcast, some suggested that a program would have to be fast to be intelligent – I’m not sure any of this is true.
If someone wrote a program which had goals and the ability to learn and plan and could introspectively examine its own experience – couldn’t that be intelligent and conscious even if it was completely alien to us?
I was really looking forward to this episode and I was by turns infuriated and illuminated. I have some comments based upon my understanding of mainly psychology and neuroscience. I was interested in Wes’s defence of the Hard Problem and how it would never be solved. I think that the more neuroscience evolves the more that the Hard Problem would appear to be a category mistake (insofar as I understand the term).
My approach to this is as follows:
1. Neuropsychology has established that reported mental states are preceded by neuron firing (this includes reports of qualia and emotional states). I am unsure of the evidence regarding abstract thought.
2. The mind is correlated with neuron firing. To argue against this is to argue against anaesthetics, head injuries, strokes etc., whereby the mind and behaviour can be seen to relate to structural changes in the brain.
3. Decisions that the mind makes are associated with deep-level processing by the brain. This processing is an example of many brain ‘modules’ comparing inputs (sensory information) with memory. This is what the brain does, and its processing involves billions of excitatory and inhibitory neurons engaged in an enormous comparative algorithm (or algorithms). I would argue that this process is always unconscious, and a conscious decision is always a confabulation that the mind constructs after the fact. For example, when I go mountain biking, if I am on an unfamiliar track which forks left or right, I will make a decision to go left or right. When asked why I made a particular decision, what can I answer? There was no deliberate decision: the process was made, motor neurons fired and my body went on the left or right path. Considering it later, I will say that I chose. What I am really saying is that billions of neurons in several connected modules in the brain used sense data to compare against memory (i.e. twice before, when choosing the uphill path, I was able to slow down before hitting a rock; once before, I took the downhill path and ran into a dropoff at too much speed, causing an embarrassing and painful fall), and therefore the calculus was made to turn uphill.
This occurs so fast that the body is moving towards one path before the awareness of the decision is constructed in the conscious mind (this has been demonstrated experimentally…just not on a bike). Therefore, any decision that is perceived consciously in this process is a confabulation.
In the case of a ‘conscious decision’, the same process occurs – for a mathematics problem MRI scans demonstrate that neuronal firing occurs in cognitive centres which correlates with the mental process.
4. Free will is part of this process: to choose something is to have the brain engage in an enormous comparative calculation with memory which occurs subconsciously. The mind can only ever perceive this as a decision that is made consciously, as it has no access to unconscious processes; to say otherwise would be to say that mind ‘substance’ interacts with neuronal tissue physically but also temporally, that is, reaches back in time to enact the neuronal correlate. Surely, using Ockham’s razor, there is no need to construct a weird duality to explain the mind: it is a construct post hoc after neuronal firing. It doesn’t negate free will either; just because the brain computes unconsciously doesn’t mean the end of free will, it means changing your understanding of what free will is. If you don’t think that the computational power of billions of neurons enacting thousands of algorithms to compare a near infinite number of possible outcomes is free will, then I’d like to see a better definition. It just moves the computation from a mysterious mind substance to an actual physical thing. And if it offends your sensibilities, then think of how beautiful this system is….
5. To explain dualism in decision making, the mind must travel back in time to effect the neuronal firing. This would mean some pretty weird quantum smoke and mirrors. Why resort to this? Just accept that the mind occurs post hoc from neuronal firing. Remember that Descartes was religious and didn’t have to refer to reality or probability in his philosophy.
6. The mystery of what the mind ‘is’… well, this is, of course, a mystery, and maybe this is what Wes refers to. Penrose postulated that consciousness is a quantum effect stemming from neuronal structures (cellular organelles called microtubules). This is clearly just a theory, but I’d bet that the mind is more likely to be the result of an effect such as this than some quasi-religious “we can never know” statement. It might just be that our brains cannot regard the problem, in the same way that we cannot access the unconscious neuronal processes that create our mind.
7. If you still insist that your mind and brain are separate, have a few glasses of wine and you’ll see how the mind is affected by physical processes. Furthermore, have a stroke and you’ll see how your mind cannot influence the brain. To see in yourself how you can be aware but not conscious, throw yourself out of a plane (with a parachute on). It is the only time I have experienced awareness without any consciousness (self-regard or reasoning power). It’s possible that some brain pathologies would also result in a loss of self-regard without a loss of awareness, and some strokes appear to do this.
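The comparison-against-memory picture in point 3 can be cartooned in a few lines of code. This is a deliberately crude sketch: the “memory” scores and the scoring rule are invented, and real neural processing is nothing this tidy.

```python
# Cartoon of the commenter's picture: stored outcomes of past rides
# are compared against the current fork, and the "decision" is just
# whichever path scores better. All data here are invented.
memory = {
    "uphill":   [+1, +1],  # twice slowed down safely before the rock
    "downhill": [-1],      # once hit the dropoff at too much speed
}

def choose(paths, memory):
    # The comparison runs "unconsciously": nothing here resembles
    # deliberation, only a score lookup and a max.
    return max(paths, key=lambda p: sum(memory.get(p, [0])))

decision = choose(["uphill", "downhill"], memory)
print(decision)  # "uphill" -- the narrative "I chose" comes afterwards
```

On this view, the felt experience of choosing would be a story told about the output of a process like this, not a step inside it.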
To sum up, I do think that mind/brain duality is a category mistake, because in the philosophical literature that I have seen the mind is viewed as a whole and is not reductive. The mind in neuroscience, psychology, medicine and neurology is reducible and can be seen to result from physical processes. In the end, I guess, whether there is a problem results from how you choose to think about it.
Wes Alwan says
Thanks for your comment. I don’t think the hard problem is a category mistake, and I tried to lay that out in the podcast. If you look to responses to Ryle in the secondary literature, you’ll find more developed critiques. There are some strong arguments that point to problems with certain kinds of dualism (there are problems with every theory in this area as far as I’m concerned); I don’t think the “category mistake” argument — which claims that dualism is not even a theory — is one of them. Even if dualism doesn’t work, it is in fact a theory. See also the debate between Nagel/Chalmers/McGinn (hard problem advocates) and eliminativists (closer to your position).
No one is arguing that brain states aren’t correlated to mental states. There is no serious philosophical position on this issue that argues against neuroscience, or against wine making one drunk by affecting the brain, or against the fact that without brains we could not have minds. (Descartes himself was interested in the brain, and in how it was that the brain caused consciousness, hence the unfortunate pineal gland; and contra the ad hominem, his being religious (if he really was) does not disqualify him from thinking about the problem.) So that there is a relationship isn’t a controversy. The question is what the nature of that relationship is. That brain and mental states are related does not imply reducibility (where all elements in one domain can be mapped to another in a way that eliminates the need to talk of the first domain). Reducibility is one of the major alternatives for this relationship; some very prominent philosophers argue for it, some against; but they all agree on the details of neuroscience. (Incidentally, I’m currently a psychology student and have a significant neuroscience library — a major interest of mine and something I have spent a great deal of time studying, so I’ve made a poor impression if I seem to be arguing against the science of the brain.) The major alternatives for that relationship are described pretty well here, despite the fact that Chalmers is arguing against some of the alternatives and for others: http://consc.net/papers/nature.html. I tend to like Chalmers’s monist position. You’ll find references there to the major figures who make arguments more in line with your view.
Dualism does not require back-in-time causal effects; many dualist positions are not interactionist at all. Dualism does not in fact imply that the mind affects the brain (although there are forms of dualism that make this argument — see the Chalmers summary).
Anyway, my apologies — I don’t have the time to detail my position in full here (that will take some time, and I’m working on it). But there’s a large body of interesting literature out there arguing for the eliminativist position and for its alternatives. I think it’s helpful to our understanding of the problem to take each of these positions seriously. As for solving the problem, there are strong arguments to the effect that the problem is not in principle solvable — they are not religious or “quasi-religious” arguments, but reasonable positions to which philosophers give serious consideration, whether or not they endorse them.
Cheers for the reply; my post was hasty (which shows), a problem that comes of waking at 2 am thinking about this. I listened to the episode again (the third time!) and I read a lot more of this in detail, and the more I read, the more I’m convinced that philosophy skirts around the science and the less the science can explain the ‘what it is like to be’ problem. I guess my irritation about this subject lies in the fact that philosophy approaches problems very differently to science, and clearly I approach this from a science-based angle. Wes, I take your point about how philosophers take their points of view, and now I am taking more of a humanities approach when reading the literature.
I do have a question after reading a few different accounts of the colour-blind physicist: is this a problem of symbology, not of knowledge? The argument goes that if the physicist knows everything about red, then becoming colour-sighted would give her more knowledge. I don’t think that the quale ‘red’ is information. The information is supplied as the wavelength; the first time the brain registers this wavelength as a neural impulse it is stored permanently as a neural connection. That is the information (the knowledge, if you will). When the mind wishes to ‘imagine’ the colour by recall, or when the shade is seen again, the neural input is compared against the stored information. The quale of red or grey (in the colour-blind) is not information; it is a symbol that stands for the information, in the same way that a geologist learns nothing more about rocks when he sees a warning sign on the road with a picture of rocks on it. If the quale of a colour is knowledge, then think of the implications: I could show a subject the complete range of wavelengths of light in the visible spectrum in, let’s say, 10 seconds by use of a dial that turns down the frequency of light from violet to red. Does this mean that I have just exposed the subject to close to infinite knowledge? It is not possible for a finite brain to store as knowledge the amount of qualia that it is possible to experience.
So my contention is that the knowledge is the stored wavelength (the ‘redness’ module in the brain); the quale is the brain comparing the input with the memory of red. The colour-blind scientist accesses a new symbol for the information already stored in her brain, and every time she sees red the new symbol stands for that stored wavelength. Is the symbol knowledge? To argue this is to say that when learning the German word for red (‘rot’, by the way) you gain more knowledge about the wavelength. I tried this out today and learnt the Japanese word for red; it hasn’t changed my qualia, and I would argue it hasn’t given me any more information about electromagnetic radiation. I haven’t seen an attack on the colour-blind physicist argument anywhere, so please let me know if I’ve presented a perfectly rotten argument that’s been discounted elsewhere. It is a question of semantics, I suppose, as to whether a quale is knowledge or not.
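The wavelength-versus-symbol distinction being argued for here can be sketched as a toy data structure (purely illustrative; the numbers and labels are invented): the stored wavelength plays the role of the information, and the labels, whether words or qualia, are just bindings to it that can multiply without the information changing.

```python
# Sketch of the commenter's distinction: the stored wavelength is the
# information; each label ("red", "rot", a new quale after gaining
# colour sight) is just a symbol bound to it. All values are invented.
stored = {"wavelength_nm": 700}        # the knowledge, on this view

labels = {"en": "red"}
labels["de"] = "rot"                   # learning a new word...
labels["new_quale"] = "RED-experience" # ...or gaining a new quale

# Relabelling leaves the underlying information untouched:
print(stored["wavelength_nm"])  # still 700
```

Whether this toy picture captures what the knowledge argument is really about is of course the point in dispute; it only illustrates the claim, not settles it.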
Anyway, thanks again for your time – I am now getting heavily immersed in reading over this topic from a philosophical bent rather than a psychological approach.
David Emerson says
Wes, Seth, and Mark:
First, thanks for all of the informative and entertaining podcasts. I am making my way through all of your episodes, and I am thoroughly enjoying the experience. I felt compelled to skip way ahead to this “mind episode” out of a longstanding interest in the topic. Over the course of the dozen or so episodes that I have listened to up to this point, I got the impression that all three of you were, at least, agnostic (possibly atheist). Perhaps I was wrong; nonetheless, I was surprised at the resistance I heard (from Wes and Seth, in particular) to a purely materialistic explanation of mind. The matter seems to be variously named depending on how mystical one wishes to appear — be it: the semantics from syntax problem, the mind-body problem, or even the need for hormonal influence or subatomic interactions to create a conscious mind (as I believe Wes suggested in desperation). I guess I was caught off-guard at the continued need for the “ghost in the machine.”
My own willingness to toss out the idea of an “immaterial mind” probably makes me a pragmatist (we shall see, as I listen to your episodes on pragmatism). But, other than making us feel special, which is probably not a philosophically defensible reason, what purpose does the “immaterial mind” serve? You cannot study it. You can say just about anything you want about it [which is good for your mission of starting a new religion 😉 ]. Personally, I have found the “immaterial mind” to be a philosophical and scientific blind alley. One’s willingness to toss out the “immaterial mind” can be established by a thought experiment that should be “right up your alley”:
Assume a teleportation machine has been developed that does nothing more than tear you apart molecule by molecule and reassemble you in the exact same configuration (down to the atom). The benefits are obvious: visit friends and relatives in far away locations whenever you have a free evening, catch a concert in a European capital and be home in time for bed, satisfy your craving for Thai food by going to Thailand for dinner, etc. Would you ever set foot in such a machine? For a more accomplished and artistic thought experiment see Polish sci-fi writer Stanislaw Lem’s tale “The Princess Ineffable.” (Lem often plays with this idea of minds/individuals instantiated in miniature or computerized form. Several of these tales are found in Hofstadter and Dennett’s “The Mind’s I.” “The Cyberiad” is another gem.)
Second, I wanted to suggest a future episode. Perhaps the least “blind alley” of the last 150 years has involved evolutionary theory. My experience with philosophy to date is that it has been inadequately influenced by evolutionary theory. I rarely hear the philosophically inclined asking evolutionary questions. The matter of “What is it like to be a bat?” for example, is ripe for such discussion. (Disclaimer: I have not actually read the article yet). Humans and bats surely share an ancestral, primitive mind. Having the addition of sonar sense data would not seem to render the entire bat “mind” a mysterious “black box.” The fairly recent development of Dual Inheritance Theory (some prefer Memetics or Evolutionary Psychology) is pure “mind candy” for folks like us. It posits that humans enjoy two inheritances: one genetic and the other cultural. These two different, but co-evolving, means of inheritance are largely what make us human. Animals, with few exceptions, do not have a means of cultural inheritance. From this divergence (starting with tool-making hominids) comes the evolution of language, social life, and ethics — in other words, MIND. One of my most enjoyable reads of the last several years was Richerson and Boyd’s “Not By Genes Alone.” If you have not yet been exposed to these ideas, I think that you guys will be intrigued by the explanatory power of this developing field. And I’m sure that you guys could find many other excellent sources for these ideas. If Darwin’s work was appropriately entitled “On the Origin of Species,” then work in this area could be entitled “On the Origin of Culture” — or more boldly, “On the Origin of Mind.” (Sorry Wes, that may be more hubristic than “Consciousness Explained”). While this episode on mind (which I enjoyed) was interdisciplinary involving AI, Cog Sci, and Philosophy; the one I am imagining would be interdisciplinary involving Evolutionary Biology, Cultural Anthropology, and Philosophy. It would be great fun.
Keep up the great work!
Mark Linsenmayer says
You know, one point we really didn’t go into was substance dualism, which is what you and wily q here are objecting to, and property dualism, which says that the mental and physical refer to the same thing, but via an irreducible difference in types of properties. To me, it’s an epistemic issue: we can’t reduce how things appear to us in 1st person point of view to how things are as described in a 3rd person scientific point of view. It’s difficult for me to get more specific than this because of my ambivalence towards ontology, i.e. the practice of coming up with a list of the kinds of stuff that there is in the universe. For ontology as a concept to make sense to me at all, it has to be a subsection of phenomenology, i.e. a description of our experience, i.e. the universe as we understand it. Accepting this, then I can’t see how the first-person point of view can be eliminated; even if my experience is entirely constituted by biological functions, there’s still an element that would have to go into any ontology I can construct that isn’t just “atom” or “cell” or whatever. That at least is the path as I see it, and to the extent that I don’t really understand ontology, then I don’t understand how best to work out the mind/body problem. But, as a pragmatist, the issue causes me little problem; when it is useful for me to think about myself psychologically in terms of “what I really want” and “what am I trying to remember right now?” or things like that, then I think of them in mental terms, and if I’m trying to get rid of a headache via drugs, then I think of my pain in physical terms. No conflict there, and arguably no real ontological commitment.
Wes Alwan says
Hi David, just a correction — I didn’t appeal to hormonal influence or subatomic interactions. I’m not sure what you mean by the former — while hormones don’t say anything about the mind-body problem, they are extremely important to the interaction of the brain and the rest of the body. And again, those who accept the hard problem and are property dualists or monists do not reject the fact that brain states cause mental states, nor do they claim that the mind is a substance that persists when the body is no more. It’s unfortunate if we conveyed this as the meat of the problem. All parties agree on neuroscientific explanations (and usually are all heavily interested and immersed in the research). This isn’t really a religion/science dispute. Saying that we cannot study the mind scientifically and therefore that we must ignore the hard problem presupposes that scientific explanation must be capable of solving every problem and be applicable to every domain containing problems that interest us — which is a blind assumption and not itself rational or scientific. Again, we all agree that scientific explanation is capable in principle of solving all scientific problems. The question is whether the hard mind-body problem is actually scientific. I don’t think it is, for reasons I tried to lay out in the podcast and that a number of prominent philosophers (including Nagel) have expressed better than I. I say all these things not as a mystic but as someone who loves science, has a degree in the history of science, went to philosophy grad school initially to study foundations of physics, interned in nuclear physics at the Naval Research Laboratory, and has studied neuroscience at the graduate level (I bring out these anti-ad-hominems now only in desperation at having my motives construed as religious and anti-scientific).
And it is precisely a love of rational inquiry, and of science, that should prevent us from appealing to the universal applicability of the empirical sciences when there are reasons to suggest otherwise. If this is our premise, then yes: there simply are no philosophical problems. They dissolve under the assumption that they are merely scientific problems, and that we have been deluded into believing otherwise by “category mistakes” or mistakes of grammar (which are, by the way, merely genetic fallacies, forms of ad hominem which presuppose the emptiness of questions and attempt to explain their genesis: the psychological confusion that would lead to their being seen as interesting at all). If I thought that were the case, I wouldn’t supplement neuroscience by wasting my time with philosophy. But if there’s an argument to be made to that effect, it ought to be made, and not by an appeal to the mere fact of the existence of science and an irrational exuberance about what it can do. Wrapping oneself in the mantle of science in order to cure oneself of finding something puzzling is doing neither science nor philosophy.
David Emerson says
Thanks for the replies and your patience with me. I’ve got some reading to do to appreciate this property dualism bit. It is counterintuitive that properties (which must inhere in some substance) could be dualistic without substance itself being dualistic. It still seems like a reach for something special — a “Ghost in the Machine.” However, when I compare Physical Science and Social Science — namely their methods, successes, and difficulties it seems plausible that different rules are at play in each. More reading and thinking on my part is clearly called for.
Wes, you are right, my characterization of your comment was somewhat out of context. You and Seth were doubting whether “strong AI” was possible outside of interactions that occur in flesh (hormonal, developmental, subatomic, etc.). Personally, I don’t see what is so special about “wetware” as compared to “software” and “hardware.” To me it is all about information processing. However, this is likely a related, but separate, matter from the property dualism issue.
Thanks again… more to come I’m sure.
David Emerson says
An expression of Property Dualism?:
“… Prehistoric civilizations explained all natural events — especially catastrophes — in terms of the purposes of supernatural agents. Today, religions continue to do so. In each of the revolutions in Western science, the greatest obstacle to scientific advance has been the conviction that only purposes or meanings that made things intelligible could really explain them. The history of natural science is one of continually increasing explanatory scope and augmenting predictive power. Science has achieved that by successively eliminating meaning, purpose, or significance from nature. … Now the only arena in which explanations appeal to purposes, goals, intentions, and meaning is their “home base,” human action.
The record of the history of science requires every social scientist to face the question, Why should human behavior be an exception to this alleged pattern? Why should meaning, purpose, goal, and intention, which have no role elsewhere in science, have the central place they occupy in social science? The obvious answer is that people, unlike (most) animals, vegetables, and minerals, have minds, beliefs, desires, intentions, goals, and purposes. These things give their lives and actions meaning, significance, make them intelligible. But what is so different about minds from everything else under heaven and earth that makes the approach to understanding people so different or so much more difficult than everything else?” (from Alexander Rosenberg’s “The Philosophy of Social Science”)
Indeed. And how best to study the Mind? Social Science, Cognitive Science, Philosophy, Religion, AI, Neuroscience, Evolutionary Biology?!
A dialectic on consciousness; or, why Dualism pisses me off:
Armchair Philosopher: The physical brain cannot be the seat of consciousness.
WQ: Why not?
AP: Well, I take the George Romero defence. Imagine a possible world where there are zombies with identical physical states of the brain but no consciousness… you with me?
WQ: You mean populated with a world of Paris Hiltons?
AP: Sort of… now the possible existence of these entities means that there is an extra piece of substance that must be added to the brain to explain consciousness.
WQ: So let’s get this straight… you can sit in an armchair and think of a possible world where there is an entity with neuropsychic processes but no consciousness, and therefore it renders the notion of physical explanation in this world inadequate?
AP: Damn straight!
WQ: OK… well how about I say to you that your argument is predicated upon the notion that the mind and brain are already separate, and that it is the best example of a self-licking ice cream that I have ever seen…
AP: Do you have a PhD in Philosophy?
WQ: No.
AP (sits back smugly): Well, this should be interesting…
WQ: Well… I’ll argue the opposite: consciousness is a result of neuropsychic processes; therefore if you postulate an alternate world with identical brains and identical neuropsychic processes, then consciousness will result… no zombies can exist. Furthermore, I can demonstrate a particular form of brain injury in this world where consciousness can disappear and the neuropsychic correlate is lost at the same time. Furthermore, I can postulate an alternate world where there is a zombie: but working backwards from this lack of consciousness, it can only be explained if several brain modules involved with abstraction and association are lost… it does not correlate with identical brain processes in this world. Therefore, consciousness is a product of material physical processes.
AP: But all you have done is create an argument to support your contention by creating a metaphysical construct with no relation to the real world… and which, furthermore, is predicated on the very premise you have postulated?
WQ: Errr yeah… just like you did. Maybe you should have said: I believe there is substance dualism and I have an argument to support it which only works if the initial premise is true, which it is because I believe it.
AP: I’m not sure if that works.
WQ: Well, I’m not a philosopher… I’m sure there is a massive formal-logic argument that supports your contention that is lost on me, but if you’re going to invent stuff, then it probably shouldn’t resort to internal premises to support it.
AP: But zombies are so cool…..
Wes Alwan says
@John Nixon: see my comment here: http://partiallyexaminedlife.com/2011/03/31/notes-on-dennetts-breaking-the-spell-part-1/
I’m planning on debating some materialists soon, and here’s what I’m probably going to say:
Let’s assume the following proposition: if consciousness exists, then it is reducible to properties of the brain. From this it follows that all aspects of consciousness are reducible to mechanisms within some “brain system.” My belief that I am conscious can be reduced to one of these mechanisms, and because of this my belief that I am conscious is only as reliable as the integrity of my brain system, which is not reliable, since there is no verification that it has mechanisms that correlate to accurate thoughts. If my belief that I am conscious is fallible, then why believe in consciousness at all? Isn’t it simplest to propose that the brain is not privileged in any way, rather than come up with some arbitrary condition that our brain meets so that we can credit it with some special property called “consciousness”? Dennett simply has personification precede consciousness with his theory that “competing processes” produce consciousness, so he fails in trying to find a non-arbitrary condition.
Consciousness is only worth believing in if you believe that having an experience produces infallible knowledge that you are having that experience. Otherwise, it is simplest to assume that consciousness is an illusion. The idea that consciousness is an illusion never made sense to me, because illusions generally involve some form of inaccurate modelling, not some form of inaccurate perception of our thoughts or experience. How can an illusion exist without some form of perception?
quick advice –
you may have intelligent things to say – I just stopped after 15 minutes… you were talking about yourselves. I hope you started discussing the material within the first hour!
A shame, since this format was interesting – it’s ok to be less formal than Philosophy Bites, but this is just self-indulgent. And it makes me angry enough (you stole 15 minutes of my life) to take a minute to scold you. And though scolding is pointless and you won’t care, you may at least improve your approach in the future.
Oh my God, I am still listening to you three talking – you’re dropping names now – without talking about any substance yet. Yuk yuk yuk! get me out of here!
Of course, do erase my comment – but for your own sake, take it into account! Unless you believe in the goodness of your format. (You’re just mistaken about what a good website on philosophy is in this case – but hey, I’m in favour of freedom of expression, so keep calling things “truly perplexing” without ever naming the issue, and keep saying how your friends and family “don’t get it” while you do.)
Seth Paskin says
We could erase it – but why would we? You gave us a listen, you didn’t like it, so be it. We can’t be all things to all people. I will, however, question your anger – I don’t think 15 minutes of our humble philosophy podcast should incite that emotion – as well as your need to scold. You could simply have remained silent and found a philosophy podcast suited to your tastes. Or you could have taken more than 15 minutes to check out a little more of the nearly 100 hours of audio we’ve posted over three years, the more than 500 posts and more than 3500 comments on our website or our 1000+ strong Facebook page. You might have found something to like, or at least mitigate the tone of the message.
Graham Warner says
I’ve just listened to this podcast, but I haven’t had time to read the comments online, so I’m sorry if I’m repeating what others have already written.
There’s a common misstatement of Searle’s view that was repeated in the podcast: that he believes consciousness can only be based on biological material. It’s important to challenge that misunderstanding, not only because it’s wrong, but also because reading him that way shows that you haven’t understood the core of his argument.
First, the “biology only” view is not Searle’s; he’s made it very clear that he rejects such a view. In Searle’s view, a machine (as commonly understood) could be conscious, but if and only if it shared the same causal powers as the brain – which, he is convinced, no digital computer can do, in principle. At most, a digital information-processing machine can only simulate consciousness, which no more creates ontologically subjective consciousness than a simulation of fire is capable of burning.
He’s clarified this point several times, for example:
“Biological naturalism” at http://socrates.berkeley.edu/~jsearle/articles.html accessed 24/03/2011
2. The neuronal basis of consciousness. All conscious states are caused by lower-level brain processes.
We do not know all the details of exactly how consciousness is caused by brain processes, but there is no doubt that it is in fact. The thesis that all of our conscious states, from feeling thirsty to experiencing mystical ecstasies are caused by brain processes is now established by an overwhelming amount of evidence. Indeed the currently most exciting research in the biological sciences is to try to figure out exactly how it works. What are the neuronal correlates of consciousness and how do they function to cause conscious states?
The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially.
Searle seems to regard Dan Dennett as a main instigator of the common misunderstanding of his view:
[In the Mind’s I, Dennett writes:
Searle is not even in the same discussion. He claims that organic brains are required to “produce” consciousness – at one point he actually said brains “secrete” consciousness, as if it were some kind of magical goo…”
One thing we’re sure about, though, is that John Searle’s idea that what you call “biological material” is a necessity for agency (or consciousness) is a nonstarter (p187)
The problem with both these attributions is that they are misrepresentations and misquotations of my views. I have never maintained that “organic brains are required” to produce consciousness. We do know that certain brain functions are sufficient for consciousness, but we have no way of knowing at present whether they are also necessary. And I have never maintained the absurd view that “brains ‘secrete’ consciousness.” It is no surprise that Dennett gives no source for these quotations, because there are none.
(Searle, John (1997) The Mystery of Consciousness. London: Granta, p. 131.
Much of this exchange, though not the above section, is also online at http://www.nybooks.com/articles/archives/1995/dec/21/the-mystery-of-consciousness-an-exchange/)
In The Mystery of Consciousness, p. 203, in a section of the conclusion addressing questions that come up again and again in debates and correspondence:
“3. But I thought your view was that brain tissue is necessary for consciousness”
No, this has never been my view. Rather, I point out that some brain processes are sufficient to cause consciousness. This is just a fact about how nature works. From this it follows trivially that any other system that did it causally would have to have at least the equivalent threshold causal powers. But this may or may not require neural tissue. We just do not know. The interest of this consequence is that it excludes, for example, formal computer programs, because they have no causal powers in addition to those of the implementing medium.”
“5. […] You keep on talking about brain processes causing consciousness. But why the obsession with just brain processes? If neuron firings can cause consciousness, then why couldn’t information cause consciousness? Indeed you haven’t shown what it is about neuron firings that is so special, and for all we know, it might be the information contained in the neuron firings.”
“Information” does not name a real physical feature of the real world in the way that neuron firings, and for that matter consciousness, are real physical features of the world. … Except for those forms of information that are part of the thoughts of a conscious person, information is observer-relative. Information is anything we can count or use as information. To take a textbook example, the tree rings contain information about the age of the tree. But what fact about the tree rings makes them informative? The only physical fact is that there is an exact covariance between the number of rings and the age of the tree in years. Someone who knows the physical facts can infer the one from the other. But notice, you could just as well say that the age of the tree in years contains information about the number of rings in the tree stump. To put this point briefly, “information” in this sense does not name a real causal feature of the world on a par with tree stumps or sunshine. Information could not be the general cause of consciousness, because information is not a separate causal feature of the world like gravity or electromagnetism.”
It seems to me that we can understand Searle’s point best through his book The Construction of Social Reality, which seems at first sight to be about a different topic altogether. Its argument is that there are some aspects of the world that are just brute material facts – material objects in themselves. Then there are those things that are socially constructed, like money, political office and power, languages, games, nations, all kinds of institutional facts. This can include tools; a piece of sharp glass can be just silica in a certain shape (its nature independent of social definition), or it can be a knife, in which case we have defined its role or nature as a tool in a social context. Now, the critical point is that, for Searle, data processing belongs in the second camp; it is socially constructed. It has to be, because all it is is the manipulation of symbols, in the form of data and program code. If so, all it can produce is a description, a model or a metaphor for the subject it describes, which in this case is consciousness. And descriptions do not have the same causal powers as the things described; again, hydrology simulations don’t wet anything, thermodynamics models don’t actually get hot, and data-processing-based models of consciousness don’t become conscious. They are not even the sort of thing that could be conscious, because they are part of socially defined reality (forms of communication or of language, perhaps), not material causal aspects of the material world. I understand the Chinese Room as just one part of this whole argument.
Now, I don’t know whether Searle is right here, but I know that we can’t even challenge him unless we understand exactly what he’s saying. When I think he’s right, my intuition is that the workings of the brain that produce consciousness are like levers and gears, while digital computing is like semaphore signals or letters – different in principle. When I think he’s wrong, I feel that it’s important that the material workings of the brain that give rise to its causal powers actually embody information in the patterns of their configurations. And yet Searle could agree with that, but still say that the informational patterns in themselves are necessary but far from sufficient for consciousness; we need there to be real material devices (biological, electronic or otherwise) whose configurations reproduce the critical parts of the brain’s causal powers. A functionalist could claim that it’s the informational patterns only that really matter, but that’s exactly what needs to be shown.
Mark Linsenmayer says
Thanks for the clarification, Graham. The source of the confusion was likely me, from reading a good chunk of Searle’s recent “mind” book as supplementary material for this ‘cast. I hope we can get back to this topic with some detail in a future episode, reading some Chalmers, Searle, and Dreyfus at least, but it’s not yet on the schedule.
After listening to John Searle’s lectures on Philosophy of Mind on YouTube, it seems to me incredible how much we are struggling with this mind/body problem while trying to defend our cultural perspective of materialism. John Searle does a tremendous job of destroying reductive materialism, but then simply asserts that it is correct! He makes no compelling or logical argument; he just shows that the other views are wrong, and then asserts that they are wrong in their arguments yet correct in their conclusions. The guys in the Dennett camp are at least consistent in their thinking. If we stand on a position of materialism (which we all do), then there cannot be free will or consciousness. If there is free will or consciousness, then the concept of materialism is wrong. This is the correct balance of the equation. The fact is that we want to say that this isn’t so, because we experience consciousness and also feel the power of reductionist materialism. What if it is our view of materialism that is wrong, not our experience of consciousness? This is where the categorical error is occurring, yet this conclusion is so far out of line with our current worldview that no one seems to be able to entertain it.
The world does not depend on human freedom; it was around for much longer than us, and it will be here after us. There is no reason to begin an examination of the ontology of the world with the human will. A view of animal intentionality is entirely reconcilable with determinism; what it cannot be reconciled with are the rampant Christian ethics that the church has spent thousands of years burrowing into every aspect of our unfortunate culture, and which dictate the proceedings of even much of secular philosophy. Look at how avowed atheist Sam Harris spends his time attempting to reconcile these same old values that the Ten Commandments have already done a much better job of.
Stephen Sage says
Good episode, guys. I majored in philosophy and studied mostly mind and language, so it was fun to hear you get into it.
I don’t feel that Wes gave a clear argument against Ryle’s dispositional behaviorism (and you are not alone in that!). Why is it problematic that the analysis of mental states involves analyses of other mental states, when it is acknowledged that mental states relate to one another? Folk psychology is such that one must learn the vocabularies of belief and desire together (like most of us) or not at all (like the autistic). Knowing that Smith believes that it is raining outside does not allow you to predict (with maximal accuracy) whether Smith will use an umbrella unless you know whether he desires to stay dry. Thus, the behavioral-dispositional analysis doesn’t allow you to define a mental state in isolation. Understanding why this is not a problem requires, I think, recalling Wittgenstein’s discussion of reference. You don’t learn the vocabulary of belief by introspecting on a belief, pointing inwardly to it, and having an ah-ha moment of reference-fixation. Why think that when you said, “Smith believes it is raining,” you were positing a state, i.e. making an ontological commitment?
Your criticism that Ryle does not want to give a causal explanation, whereas dualists do, seems backward. Since it is incoherent to propose that two substances with nothing in common can causally interact, the substance dualist’s so-called causal explanation is a non-starter. Investigating the physical causes of behavior poses no conceptual/logical problem, whereas the substance dualist’s causal explanation does. If behavioral dispositions are what ground mental vocabulary, i.e. if folk psychology is basically a tool for predicting behavior, as Dennett believes, then we will understand the mental as we come to understand the physical.
I think you’re missing what I find compelling about the Turing test. The point is not that the Turing test should be used as a definite criterion of intelligence (for that reason it should never have been called a test). Instead, ask yourself what it is about a person that makes you consider him/her intelligent (I’ll bet your answer is mostly behavioral) and apply those same criteria to machines. Brain-meat is certainly no part of the definition. This brings us to the circularity of the Chinese Room Argument. In the section of “Minds, Brains, and Programs” on the so-called “combination” response, Searle straightforwardly says that if a perfectly intelligent-seeming machine were revealed as a machine rather than a human, we would say that it was not intelligent. This is because the machine does not achieve intelligence biologically. This is obviously circular in a debate in which the opposition believes that intelligence is functional (and thus multiply realizable) or alternatively that intelligence consists in capacities for certain kinds of behavior (rather than in following some privileged set of rules).
The blind color-scientist doesn’t gain any propositional knowledge when she sees red for the first time, even though her new experience might, e.g., enable her to identify colors faster, appreciate cinematography, etc. There is no propositional knowledge to be had about what it is like to be a bat. This is just obvious since I can’t even say what it’s like to be me. Roughly, if there’s nothing to say about it, i.e. there are no intelligible propositions characterizing it, then there’s no propositional knowledge about it. We define a quale as the character of an experience, yet experience “itself” cannot be adequately characterized! Why claim that experience has a specific, known character when, at the same time, experience can’t be specifically characterized? This is the absurdity of “explaining” subjective consciousness when subjective consciousness is, in practice, defined as ineffable. It can’t be our project to explain how physical states determine the subjective character of private experience when characterizing one’s experience puts the experience into words, i.e. puts the experience into common, public terms, thus losing the claim to subjectivity (you can see the relevance of Wittgenstein’s private language argument). The acknowledgment that there is no clear and intelligible way to characterize the “content” of unique, private, subjective experience (as opposed to bare affirmations of its existence) is why there is no “hard” problem. However, it is not impossible to explain what it is about us that makes us conscious; just look at the differences between sighted and blind people, for instance. You cannot assume that experiences are individuated exclusively by the subject’s apprehension of them. Seeing red is different from seeing blue because our bodies react differently to red and blue objects, end of story. 
The fact that we can imagine red objects being green poses no philosophical problem (let alone a scientific one!), and neither does the fact that we can imagine a conscious person being unconscious.
I suspect the reason most people want to “reduce” mental states to physical states is precisely to verify them as real, not to eliminate them! It’s often ignored that reductions can be vindicative or eliminative. Biological research vindicates life even though we explain living systems in terms of non-living parts; it eliminated élan vital because élan vital was not one of those parts. Fortunately, consciousness is just the property of being conscious rather than some mysterious folk-theoretic posit like élan vital (or minds or deities), so no one seriously proposes its elimination.
Geoffrey Clements says
I know I’m jumping way back but I started at episode 0 and this is where I am currently. Someday I’ll catch up.
The Chinese room annoys me because it doesn’t really answer anything. All it is is a convoluted communications channel from the person asking questions to the native speaker who wrote the original codebook. There is no real difference between the three situations: 1) there is a native Chinese speaker in the room who gets the question, reads it, answers it and hands the answer out; 2) someone who doesn’t speak Chinese receives the question, hands it to a native Chinese speaker who answers it and hands it back to the non-native speaker, who then submits the answer; 3) a non-Chinese speaker receives the question, looks up the answer in a code book and returns the answer. All of the intelligence of the room is encoded in the native speaker answering the questions, even if the answers are coming out of a code book the native speaker created some time earlier. It’s just a communications channel.
No one thinks a TV has intelligence even though it “tells” quite compelling and complex stories all on its own. It too is just a communications channel between the actors, writers, directors, etc. who created the show. The intelligence is in the people creating the show (or the Chinese answers), not the communication mechanism.
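This “codebook” reading of the room can be put very concretely. Here is a minimal sketch of the idea, assuming an invented two-entry rulebook; the point is that the matching procedure itself understands nothing, while anything that looks like intelligence was put into the table by whoever wrote it:

```python
# A sketch of the "codebook" reading of the Chinese Room: the room is
# just a lookup table from input symbols to output symbols, authored
# in advance by a genuine speaker. (The entries here are invented.)

CODEBOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你是谁?": "我是一个房间.",    # "Who are you?" -> "I am a room."
}

def room(question: str) -> str:
    """The operator matches symbols without understanding any of them."""
    return CODEBOOK.get(question, "请再说一遍.")  # fallback: "Please say that again."

# Whatever "intelligence" the room displays lives in whoever wrote
# CODEBOOK, not in the matching procedure itself.
```

On the comment’s view, the interesting question is only pushed back a step: the table’s author, not the table, is the locus of understanding.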
The question then becomes how exactly the “intelligent” entity is distinct from the vessel of communication, the simplest thought experiment being related to your idea about television sets: what if we were able to create robots physically indistinguishable from real people that are able to tell these convincing, complex tales; what is intelligence then?
I don’t think Searle just naively wants you to believe that the Chinese room thought experiment is a real living human being. But he is skeptical about what it is that separates us from the Chinese room other than the once abstract notion of intelligence which is now quickly becoming naturalized along with the rest of the world.
Geoffrey Clements says
Actually I have a problem with the word (label) intelligence and even more so the word consciousness. Both imply a kind of extra something outside physical behavior. But what if we are all zombies? Just biological machines showing complex behaviors. Complex enough that we can reason that we have some special ability called consciousness or intelligence?
There is plenty of evidence from psychological experiments showing that we are reactive and programmable in a machine-like way. Maybe intelligence and consciousness are just labels for complex behavior that we use because we are SO far away from understanding how our brains really work. Because we don’t understand, we use the magical terms intelligence and consciousness. (I say magical because anything we don’t understand is indistinguishable from magic. 🙂 )
I’m not sure what you’re getting at. You seem to imply both that consciousness has been written out of the picture, in a naturalist explanation of the world that no longer includes any reference to the now-defunct human mind (in the same way physics did away with the luminiferous aether), and that consciousness is a kind of mystical, supernatural, inexplicable phenomenon so potent that it necessarily evades all of our attempts at pinning it down. But unfortunately for this latter view, as you yourself made sure to point out, we are making such great strides in the fields of neuroscience, biology, and psychology, and also cosmology and astrophysics, that I find it hard to believe there is anything about the human mind so distinct as to be vastly differentiated from the rest of the highly complex, if not infinite, world.
At the same time, all of this argument on both our parts goes excavating through such cavernous sections of rational thought that it would seem to carve out an endless cascading stream of symbolic relations, which cannot be swept into some simple Wittgensteinian anti-system of the being of bodies and languages. There is a third component of negation that itself constitutes thought’s functional effect on these other kinds of physical phenomena: you do not only think about these things from an uninvolved, detached perspective, but also end up doing something to them in return with every thought. For one thing, these bodies and languages will always be liable to being subsumed by another, broader system of meta-languages and meta-bodies in which all kinds of other languages and bodies can become manifest.
Daniel Gee says
I’m not a philosopher, I’m a computer scientist. I’m also an atheist. So, I listened to the episode, and I’ve heard about the Mind/Body Problem before, but I think what the episode failed to cover is why the Mind/Body Problem is a problem that needs to be talked about at all, in the sense of “how are you sure that it’s a real problem?” wily_quixote asked in this direction but didn’t seem to ask the question directly enough: what hard evidence is there to show that there’s a Mind/Body Problem at all?
The Problem Of Evil, for example, isn’t a problem to anyone who doesn’t happen to suppose an omnipotent and omnibenevolent universe creator. Any debate about it is already assuming that such a creature exists and then saying why there’s evil in the world anyway. The alternate explanation is, of course, that whatever caused the universe lacked one or both of those properties and so there happens to be some evil here and there.
I mean, at the end Seth even said, “It’s a mystery to me, and I think that it’s a fundamentally deep and challenging philosophical question”, and that’s what I think the rest of you are also mostly thinking. How do you know you’re asking the right question, though? You don’t need to think about theodicy if you don’t propose a Christian or Christian-like god.
Similarly, the Mind/Body Problem is a problem that comes up because of the assumption that there’s this “mind” entity that exists somehow separately from the physical brain that’s hosting it. But… Why? Why do we assume that? What evidence do we have for that? “Extraordinary Claims require extraordinary evidence”, and all that, and the claim that you’ve got a mind, or spirit, or soul, or something like that is pretty extraordinary as claims go. You need some pretty solid evidence, you can’t just use personal testimony with people saying “I believe in a Mind”, because people believe in ghosts and demons too.
Now, at one point you say that it’s impossible to deny a conscious experience. I don’t deny it. I’m not saying that people can’t be intelligent and rational. I’m asking: is there any evidence that consciousness isn’t simply an emergent property of the physical, biochemical and electrical interactions of the human brain? You used lightning at one point in an example, and I think it’s a good example to come back to: the Ancient Greeks thought that Zeus caused lightning, but it never was Zeus; they just didn’t understand lightning. It was then said that (paraphrased) “you can deduce lightning, you can’t deduce consciousness.” Except that the ancient Greeks couldn’t deduce lightning. They couldn’t do it not because it was impossible, but because they didn’t know enough. Why is it supposed now that we can’t deduce consciousness *ever*, simply because we don’t know enough now?
Another thing I would like to say is that it kept being brought up that you can’t look at a brain scan and know that a person is thinking “Red” until they tell you they’re thinking Red and you record that result and then they think about it later and you check the new scan against your list of old results and see it in your list. And… somehow that made a Mind into a non-physical entity that’s mysterious. That’s absurd. Every sensory device needs to be calibrated. Every single one of them. When you hook up an electronic scale to a computer, you need to put a known weight on it and calibrate it so the software knows what level of electrical signal equates to what weight. That doesn’t mean gravity doesn’t exist. Even your own eyes need to learn to identify colors so that you can use the words that others are using to talk about colors.
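The calibration point above can be made concrete. Here is a minimal sketch of a two-point linear calibration (all numbers invented for illustration): a raw sensor reading means nothing until it has been mapped against known references, just as a brain scan means nothing until matched against self-reports, and in neither case does the need for calibration make the underlying quantity non-physical:

```python
# A minimal sketch of two-point linear calibration: a raw electrical
# reading is meaningless until mapped against known reference weights.
# (The reference numbers below are invented for illustration.)

def make_calibration(raw_zero, raw_known, known_weight):
    """Return a function mapping raw signal -> weight, given two references."""
    scale = known_weight / (raw_known - raw_zero)
    return lambda raw: (raw - raw_zero) * scale

# Calibrate: empty pan reads 102, a known 1000 g weight reads 902.
to_grams = make_calibration(raw_zero=102, raw_known=902, known_weight=1000.0)

print(to_grams(502))   # halfway between the references -> 500.0
```

The scale’s need for a known weight no more disproves gravity than the scanner’s need for a “think of red” session disproves that thoughts are brain states.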
Similarly, that you can’t point to a specific cell in the brain, or a specific cell cluster even, and say for sure that, “everything he knows about horses is in those cells” doesn’t in any way mean a Mind exists. Even if the brain did use discrete clusters to store discrete memories (which it might not), that we can’t do that says more about our own ignorance than it does about anything else.
I mean, emergent systems are all over in nature. Most of them are non-biological (landforms and stuff), but there’s biological ones too. The way an entire ant colony operates cannot be deduced from an individual ant, because it’s the interplay of all the ants that makes it work. The way DNA operates can’t be known from a single gene, but when you put enough together you get a living thing out of it.
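The point about emergence can be made concrete with a toy example not mentioned in the comment above: Conway’s Game of Life, chosen here purely for illustration. Each cell follows one trivial local rule, yet the five-cell “glider” travels diagonally across the grid, a behavior you cannot read off from any single cell’s rule, much as colony behavior cannot be read off from a single ant:

```python
from collections import Counter

# Toy illustration of emergence: Conway's Game of Life. Each cell's
# rule is trivial and local; the moving "glider" is a property of the
# whole configuration, not of any one cell.

def step(live):
    """One generation: `live` is a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four generations the same glider reappears, shifted one cell
# diagonally: the "motion" belongs to the pattern, not to any cell.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in the birth/survival rule mentions motion, yet the pattern reliably travels; that is the sense of “emergent” the comment is reaching for.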
So why is consciousness any different? Why isn’t it just an emergent property of a physical system, just like everything else in the universe? Before you begin to wonder about an explanation for the mind/body problem, shouldn’t you have some very conclusive proof that it exists? It all just seems like the height of hubris to think that we’re somehow special and unexplainable.
Unrelated note to Seth about his ending summary for the show: that’s a common misconception to have regarding space flight, but they’ve even got web pages about it:
Also, on the subject near the end of things wanting to be free meaning that they should be free, Isaac Asimov actually wrote that in one of his best stories (Bicentennial Man):
Mark Linsenmayer says
Thanks for the post, Daniel,
My recent read of Chalmers’s “The Conscious Mind” made clearer what I think mostly Wes was arguing for on the show. “Emergence” seems a vague and unhelpful term (though we’ll have an episode on it at some point and get into it more; Dylan’s just done a Not School group on it). Of course what we call consciousness is a causal result of the underlying physical structures. If figuring out more clearly the correlations between certain brain states and conscious experiences is a satisfactory explanation of consciousness to you, then great. Chalmers, though, doesn’t buy it, for the traditional reasons (the possibility of an inverted spectrum, our inability to know what it’s like to be a bat, etc., some of which we got into on the episode). The way he describes this is that even if you know all of the physical facts, you can’t then predict, in advance, the phenomenal facts. He contrasts this with other alleged forms of emergence, such as the behavior of macroscopic substances: e.g., a gas acts in such and such a way, and you can actually predict that by knowing the molecular facts about the gas in question. His claim (which I don’t have time to replicate here, but I recommend the book even though I think some of it is just plain weird) is not that scientists simply haven’t yet figured out enough brain science to make such predictions, but that there’s a conceptual gap such that NO amount of brain information would ever get you the prediction that such and such a conscious event would occur. We can only establish such correlations through first-person reports, which is exactly what a reductionist is trying to define away.
So if you buy this, then there’s something weird about consciousness, epistemically at least: there’s the obvious fact that in the rest of science, first-person, unverifiable reports are supposed to be verboten, or reinterpreted as “behavior.” If naturalism and physicalism rely on all knowledge being amenable to the scientific method, i.e. third-person verifiable knowledge, then this is a giant hole, and the response has to be either to redefine the scientific method so as to allow this (this is Chalmers’s strategy), or to say that all this to-do about naturalism is just an overreaction to supernaturalism, which, as you say, is goofy. Philosophical nonnaturalists (like G.E. Moore or Alasdair MacIntyre) don’t advocate supernaturalism, but claim that a brute physicalist ontology is not going to be sufficient for our needs, which include not only predictions of behavior but other kinds of “understanding” that can’t just be defined away because we don’t know how to deal with them. (Some of these, such as ethics, have a directly practical upshot.) The issue is not so pressing if you’re just dealing with physical science, but when you extend this to the social sciences, including psychology, well, that’s scientism, and it ignores the ways in which we do have (limited) knowledge of human affairs at a level that’s actually meaningful to us as individuals.
OK, that’s the best I can do right now. I need to read up more on emergence, certainly, but the only point really insisted on (in favor of dualism) in the episode you’re responding to was this epistemic one, which for Chalmers at least entails “property dualism.” That’s a metaphysical distinction, even though it doesn’t entail any free-floating soul or anything of that highly unlikely sort.
Turn and twist as you may, but there will always be some reductionist rattler asphyxiating the atmosphere
“I’m not a philosopher, I’m a computer scientist. I’m also an atheist.” is a diagnosis these days. Spouting excessive amounts of scientism may cause harm to one’s intellectual capacity, Daniel! Though I don’t think it is a problem in your case, as you possess little to no intelligence already. Better patter your way back to the swarm of pseudo-intellectualism that is reddit.com/atheism
Daniel Gee says
But we probably can know what it’s like to be a bat, just not yet. In recent years they’ve made progress in adding new sensory devices to humans, and also in scanning the brain activity of both humans and non-humans. It seems easy to imagine a point in the future where you could either hook a motion detector into your brain and sense the location of objects around you non-visually, or, if that’s not good enough, where we could read brain data out of a bat, convert it into a format your brain can understand, and get it into you that way. Which all sounds very crazy and science-fiction, except that they’ve already hooked simple cameras up to the brains of the blind and given them minimalistic vision with it.
Also, bats probably wouldn’t even be able to agree on what it is to be a bat, not in any specific way. Humans can’t say specifically what it is to be human; I don’t know why bats would be better at this than us.
As to the inverted spectrum problem… yeah, I thought about that in second grade. And I was worried for a time that the only way you’d be able to answer the question “what if someone else doesn’t see blue like I see blue?” would be to cut out someone’s eyes, replace them, and then ask them what they saw. But as I got farther in school I learned something that I suspect John Locke couldn’t possibly have known: visible colors are the way they are because of the wavelength of the electromagnetic radiation involved. Colors aren’t some non-real thing in the mind; they’re physically detectable just like everything else. And so we can look at the cells that take in EM waves and respond by sending a neural signal to the brain, and we can examine whether they’re similar to the cells of other eyes, whether they’re sending the same sorts of signals to the other person’s brain, whether the second brain is responding in a similar way to the signal, and so on. This allows you to make a third-person examination of the process.
As to “sufficient for our needs,” there you have me. Though it’s easy for even a non-specialist such as myself to imagine the basic process you’d use to scan a brain and compare it to other brains, we can’t currently do any of that very precisely, or at all, depending on the test procedure. Black-box testing helps us here, though. We can abstract away a lot of the brain’s inner workings and get at the interplay between input and output without needing to deal with how the grey matter operates. I think the very fact that we can do this, combined with people not being used to thinking in terms of abstraction layers, is what makes people imagine this “conceptual gap” and assume a mind/body problem. Sure, once you use an abstraction layer to ignore the parts you can’t understand, you’ll end up not understanding those parts, because the point of an abstraction layer is to ignore some details so that you can get a simpler understanding of a situation. But that doesn’t mean the details go away (which can cause your abstraction to produce bad results in “edge cases”), and it doesn’t mean that working with less abstraction is impossible; it’s just more involved in terms of effort.
But the episode DID get me to think, so as a bit of philosophy it was top notch.
Tony Gilkerson says
I am still working my way up from episode 1, and I see this is over two years old, but that is where I am… I very much enjoyed Wes in episode 21. It was fun to see the normally laid-back Wes be so animated. Don’t get me wrong, I love the content, but I think it is the style of the discussion that keeps bringing me back. Oh, and a quick shout-out to Dylan. I think he has joined you guys twice… keep inviting him back; he is a good addition. The parent in me wants to give you all equal compliments, but then it would lose its meaning.
Seth Paskin says
So the parent in you thinks it’s OK to show favoritism? Thanks for listening!
Tony Gilkerson says
You misunderstood my parent comment, but that is ok. My attempt at being humorous in written correspondence usually just comes off as confusing. At any rate Seth I am glad you responded. Can I ask you a question? How is it that you can find applicable quotes from the assigned text as fast as the conversation goes from one point to the next? Mark does that too sometimes. I know you guys take notes but I still find it amazing that you can pull out quotes so fast while at the same time you are in a discussion.
The idea that Dennett “thinks consciousness doesn’t exist” is thrown around a lot, but it makes no sense whatsoever. Why would he write a book called Consciousness Explained and spend hundreds of pages wondering how consciousness is possible in a physical universe if he didn’t think there was any such thing as consciousness to ponder in the first place?
Take this bit from “Quining Qualia”:
“Which idea of qualia am I trying to extirpate? Everything real has properties, and since I don’t deny the reality of conscious experience, I grant that conscious experience has properties. I grant moreover that each person’s states of consciousness have properties in virtue of which those states have the experiential content that they do.” Clear enough, isn’t it? “…I don’t deny the reality of conscious experience…”
Another quote, from the 2nd footnote:
“The difference between ‘eliminative materialism’–of which my position on qualia is an instance–and a ‘reductive’ materialism that takes on the burden of identifying the problematic item in terms of the foundational materialistic theory is thus often best seen not so much as a doctrinal issue as a tactical issue…”
And from the paragraph that footnote is attached to:
“…even if we undertook to salvage some ‘lowest common denominator’ from the theoreticians’ proposals, any acceptable version would have to be so radically unlike the ill-formed notions that are commonly appealed to that it would be tactically obtuse–not to say Pickwickian–to cling to the term. Far better, tactically, to declare that there simply are no qualia at all.”
So Dennett is not denying the existence of feelings, tastes, smells, colours, pains, tickles, pleasures, sounds, and whatnot. He’s denying the existence of a theoretical entity, “qualia,” that some people identify with feelings, tastes, smells, colours, etc. He’s also denying folk-psychological notions that correspond to the theoretical idea of qualia: they are useful notions as far as they go (like our Newtonian folk physics), but break down under stress (just as Newtonian physics breaks down near the speed of light). Best of all, Dennett says that we can see the problems with the idea of qualia by looking at our own experience. Hope that makes things clearer.
John Tiberio says
Great episode. A little thin on the Dennett stuff, though. I disagree with him as you all do, but I was hoping to get a specific response to some of the arguments in Consciousness Explained. There’s a good review here: threemysteries.wordpress.com
Dennett’s rejection of so called zombies in philosophy may go a ways in answering your question about Heidegger, you might enjoy: http://bloggingheads.tv/videos/17812
Guys, thanks for the great podcast. Like y’all, I was in phil grad school in the early 90s and remember the seductiveness of Consciousness Explained when it first hit the shelves.
But, I’d like to comment on what I thought was a great exchange around the :45 min mark. Mark makes a comment about the starting point of studies on the mind being the examination of inner conscious life, as examined by Descartes, Locke and Hume, when in fact as youngsters our natural comportment is generally outwards and built up from there to become thinking adults.
I think one of the flaws of this period (so to be clear, not the specific thinkers of this episode, but the early moderns discussed at this point in the episode) is that you have some of the best and nimblest minds (like the father of analytic geometry) introspecting on themselves. But they were so atypical of regular humans that, to use a crude analogy, it’s like the Unmoved Mover examining the nature of existence and then telling other beings what their existence is like.
MacIntyre, whom you have featured in a couple other episodes, has a take on this (mutatis mutandis) in his book Dependent Rational Animals. He says that we shouldn’t exclude the limitations of biology when studying virtue. Instead, we should scan for the social aspects of how we treat others who are not perfectly functioning humans such as the infirm or handicapped, and also should have a look at animals other than human animals to learn how intelligence and social norms are built up (and then, he spends a few chapters on recent research related to dolphin and primate minds).
It seems to me that one fruitful way of using science to study the mind-body problem (beyond seeing what lights up for “red”) is to contrast the well-functioning with those who are not; e.g., the brain’s plasticity in still giving rise to “mind” when the subject is an infirm human, or contrasting human minds with animal minds. So this wouldn’t be “crude behavioralism,” as Mark says, but rather degrees of similarity and contrast within the range of what we call minds.
One other quick comment: I haven’t listened to all the episodes, but judging from the ones I have listened to, y’all could transition quite nicely into Intentionality. Perhaps Anscombe’s slim classic of that title, or, more recently, Searle’s “Making the Social World” would be a great way to take Mark’s riff at the :45 min mark and turn it into an entire episode.
Oh, and one more, I liked Seth’s comment “who says phenomenology has to be about introspection?” I presume this was an allusion to the existentiale in Being and Time, but didn’t hear it discussed in the Heidegger episode.
Thanks again for all your great work!
Kyle Thompson says
I would normally consider myself someone quite interested in philosophy, but this episode confirmed for me that modern philosophy of mind leaves me with a profound sense of fatigue and boredom, and maybe a little bit of disgust. That is not a criticism of your episode, but rather a simple observation about how I think many people might feel about this subject. I recognize abstractly that this is an important subject, but I can’t make myself care about it enough to even agree with Wes’ position on it. Something about it seems strangely scholastic.
Still, I want to thank you for putting the episode together and getting me to think about the subject to whatever extent I was able to.