Podcast: Play in new window | Download (Duration: 1:09:41 — 63.9MB)
Continuing on functionalism with David M. Armstrong’s "The Causal Theory of the Mind" (1981).
We reconvened a day after Part One on Putnam to come back with fresh energy, considering Armstrong, who self-consciously presents himself as a defender of science: It's the most likely scientific hypothesis that mental states are physical states of the brain, and it's the philosopher's job to get rid of conceptual tangles that make this identity seem unintelligible. Functionalism is his solution for doing this, specifically a causal analysis of mental concepts. A mental state is that which in us has a certain characteristic cause-effect structure (e.g., hunger causes food-seeking) and involves information-sensitive types of causes: the causes involve belief and perception. These are mappings of the world that are then utilized by purposes.
Armstrong connects the two senses of the word "intentionality." As you may recall, intentionality in the technical sense is the "aboutness" of a mental state: my perception of a ball is about that ball, and my belief that the ball is red is either about the ball or maybe about the fact that it is red or, depending on your account of belief, about the proposition that it is red. And then this gets tricky if I'm hallucinating and there isn't really a ball there, in which case, what is my perception or belief really about? The other sense of intentionality is the everyday sense of having a purpose, and Armstrong's use of the term generalizes it to teleology, i.e., the overall purposefulness of a system.
So it might seem like functionalism as the mark of the mental and intentionality as the mark of the mental are different ideas, but Armstrong combines them: The function of a mental state is its intention, i.e., what it's aiming at in some sense. In the case of a perception, that's the thing in the world being perceived; in the case of a belief, the state of affairs the believer is trying to believe truly about; and in the case of a desire or purpose, it's literally getting or achieving something.
Why am I emphasizing this point? Because maybe the most interesting part of this paper insofar as it's different than Putnam's view is its commitment to externalism about mental events. A causal theory means that we explain mental states largely in terms of what causes them, and it's the object of intention that's the (typical) cause of the state: The red ball causes my perception of it, my beliefs about it, and maybe its alluring nature causes me (is one of the causes anyway) to want to grab it and chuck it at your head or whatever.
This is one (probably unsuccessful) way of trying to deflate the hard problem of consciousness, through something like direct realism. Instead of talking about ineffable qualia, we should analyze perceptions as exhausted by their representational contents, by what in the world they're perceptions of. Of course, then there's the problem of illusions, and Armstrong in this paper considers "the secondary qualities." Colors (sounds, smells) are not themselves in the world; they're caused by light of a certain wavelength bouncing off the object toward my eye. So this is a persistent illusion: what seems like a simple perception of red is in fact complex.
This emphasis on causal connections is supposed to connect psychology up to physics, to make talk of the mental properly scientific and so open the way to materialism. Phenomenal properties themselves are not going to play causal roles, but if they're associated with physical things like light or the motion of air (sound) or particles in the air (smell), then those things can serve causal roles instead. So this theory does not deny the existence of qualia, but does deny that they play a role in how we explain the mental. Does the theory explain qualia, then? You decide!
We also consider whether this externalist account means that mental states are merely relational, i.e., dispositions for producing external action (this would verge on behaviorism), or whether they are intrinsic. For Armstrong's use of this to clear the way for physicalism, it has to be the latter: mental states may be analyzed in terms of their relations to other things, but what they actually are is intrinsic states of the brain.
Listen to part one first or get the full, ad-free Citizen Edition. Please support PEL!
End song: "Pain Makes You Beautiful" by Jeff Heiskell's JudyBats, as featured on Nakedly Examined Music #5.
I am enjoying the recent podcasts on the nature of the mind, except for Seth’s bellyaching about it. His objection seems to be that all of this theorizing about consciousness and qualia has no application to everyday life. I disagree. As a medical doctor my job is to relieve suffering in human individuals. That job is not as simple as it may seem. A philosophical understanding of the nature of suffering certainly helps.
Take the example of pain. Pain is certainly not identical with firing C fibers, or any other circumscribed event in the central nervous system. Nor does it correspond one-to-one with tissue damage. Pain is a complex biopsychosocial phenomenon that depends not only on tissue but on emotional memory, mood, anxiety, secondary social gain, and personality factors such as dependency or machismo. Perhaps if doctors had a more sophisticated understanding of pain, they would not have been as susceptible to over-prescribing opioids at the urging of pharmaceutical companies.
In my specialty, psychiatry, the nature of the mind is obviously relevant. Currently, there are too many identity theorists in my field who think that treating mental disorders is just a matter of changing brain chemistry or circuits.
So, far from irrelevant, understanding the nature of consciousness and qualia is one of the most important practical applications of philosophy. Please don’t be deterred from doing more mind episodes.
Well said. I’m also a medical doctor, in the field of spinal cord injury and multiple sclerosis. Many of my patients suffer from pain. I’ve more or less stopped listening to medical podcasts, and instead I listen to this one and to Panpsycast. I’m surprised how much philosophy has enriched my medical practice and my approach to the phenomena or qualia of patients who are suffering. In college I found philosophy of mind to be incredibly dull, despite pursuing a degree in biopsychology. I was much more interested in what the existentialists and post-structuralists had to say. So I’ve been pleasantly surprised that the PEL folks, including Seth, have been able to illuminate this field and make it exciting. I think the issue of whether mind is its own non-reducible “essence” or fundamental property, or just something that arises from physical neuroanatomical parts, as the identity theorists believe, is an important problem with many clinical (and general) implications. The neurobiologists seem to think they have this problem solved, but I’ve found that the philosophers can demonstrate that the hard problem(s) of experience and consciousness remain stubbornly problematic.
I couldn’t agree more that this area is relevant and topical. Science and medicine, not to mention AI/robotics, are rife with real-time issues that these discussions speak to. For example:
– are reanimated brains conscious and can they suffer? (https://www.nytimes.com/2019/07/02/magazine/dead-pig-brains-reanimation.html).
– Does the human mind arise from functional organization? (https://www.scientificamerican.com/article/how-the-mind-emerges-from-the-brains-complex-networks/).
– Will future AI/robots suffer and if so how will we know? (https://global.oup.com/academic/product/superintelligence-9780199678112?cc=us&lang=en&)
Seth’s view that philosophy of mind is irrelevant to our lives is misdirected and misinformed.
I feel obligated to respond based on your comments as well as one witheringly critical though disingenuous email. At no point did I say the activity of doing Philosophy of Mind wasn't worthwhile. What I asked was 'What is at stake?' The question doesn't imply there is no answer; it does imply that the authors we have been reading haven't provided one. Let's say we somehow get to the point where we can prove that Psychofunctionalism is "true". What then?
Block believes his job is to create a conceptual framework that makes it possible to determine whether an entity not biologically isomorphic to us has phenomenal consciousness or qualia like ours. OK, what happens if, after 40+ years of writing and thinking about this, he succeeds? Who cares? What do we do with that?
He’s saying it’s nomologically true that I have good reason and a conceptual framework to assume that other biologically isomorphic entities (human beings) have qualia like mine. This goes beyond the problem of other minds to the problem of others’ experiences. But this is only stipulated, and I can refuse to assent and then ask how I can generalize my qualia to other human beings.
I think this is a much more interesting and valuable question. Because the act of projecting my experience onto other selves is a form of conceptual hegemony, I think justifying that move is more important than determining whether an android we build has my experience. Validating your conceptual imposition on others, or perhaps how that is jointly negotiated, has real-world implications for ethics and jurisprudence.
I’m glad to see doctors and psychiatrists weighing in; at least there may be some practical upshot in your fields which I fail to see in philosophy proper. And a final note on methodology: there are few positive arguments in the papers we have read. Most are fishing for ‘intuition accord’ via stories and analogies. In Block’s “Troubles with Functionalism” paper he provides no fewer than the following:
p.265 Martians: functionally equivalent, physically different
p.277 Homunculi-Headed Robots
p.278 China as Functionalism
p.286 Epistemagine
p.289 Fleas in my head
p.290 Man doubts God; what if he is God?
p.291 Different part of universe with infinitely divisible matter…
p.293 Brain-headed systems
p.294 A machine that would pass the Turing test – one hour speakable sentence
p.296 Paralytics and disembodied brains
p.298 Paradigmatic embodiment
p.299 Brain in a vat as amputee
p.299 Entire human nervous system reduced gram by gram
p.300 Tannins in wine
p.302 Protonhood
p.304 Inverted qualia
p.304 Inverted identical twins
p.304 p & q believer
p.305 Computer with only two instructions (a) and (b)
p.309 Destroy in order to save (Air Force & Vietnam)
p.315 Economic inputs and outputs
p.317 Body burned in a fire
p.317 Cerebroscope
But perhaps I’m totally off point. In which case forgive my ‘whining’ and ‘bellyaching’.
Block’s arguments are exhausting, as are the anecdotal thought experiments you list and the project of fishing for “intuition accord.” I think that’s why I find your critique useful and don’t consider it to be “bellyaching.” I was most inspired by the discussion of Chalmers, but I might be biased because I find him so much easier and more reasonable to read than Block and some of these other folks. Personally, I find this field more interesting when read alongside the philosophers you guys discussed many episodes ago. Take Camus, for example. An interesting thought experiment (and I am admittedly annoyed by most thought experiments) is to consider a functionally isomorphic entity and ask oneself whether or not that robot or alien would commit suicide. This brings forward the question of whether existential despair and the state of lacking meaning is the key “qualia” by which we can approach the question of whether consciousness is possible in functionally isomorphic beings, be they meat-based, silicon-based, or based on something we haven’t conceived yet.
It can also be illuminating, or at least fun, to read contemporary philosophy of mind alongside pragmatism. I sense hostility toward pragmatism among some of the PEL gurus, but nonetheless it can provide a useful framework by which to measure the implications of what Block and others are claiming. Richard Rorty would say, I suspect, that whether or not artificial isomorphic entities have ontologically “true” consciousness, eventually they might act AS IF they did to a convincing enough degree that we would regard them as having it- even if the philosophers of mind came up with a model that successfully refuted the possibility of true consciousness in non-human isomorphic systems.
However, this all may support Seth’s points above. “What’s at stake” in these readings is important, but cogent answers are hard to find, which is why it’s illuminating to reflect on the implications of the various philosophy of mind positions with one eye on the Existentialists and another (perhaps) on Pragmatism. Practically speaking, this is one benefit of your podcast; one may consider Kierkegaard during the commute to work, and listen to the episodes on Block during the ride home.
The paragraphs in Chalmers starting “What happens to my experience when we flip the switch?” assume that the evolved brain and the technological backup will for some reason share a single phenomenal consciousness, which will either notice or not notice the switch. I suppose it could be argued that the two systems share a consciousness in the same way that the left and right hemispheres of a brain share one consciousness, or we generally suppose that they do, but I’d like to see some argument as to why that would be the case (the combination problem is not just for panpsychists). My intuition FWIW would be that the two systems have separate and independent phenomenal consciousnesses. And since there has to be a mechanism somewhere that was keeping the two systems in the same state ready for the switch-over, neither consciousness will notice anything unless and until that mechanism is switched off.
Greg Egan wrote a short story, “Learning to be Me”, about the subjective experience of the backup system in an arrangement similar to this.
In fact Chalmers does say a little later that “there is simply no room for such a change to take place, unless it is in an accompanying Cartesian disembodied mind.” He then says that it “seems entirely implausible to suppose that my experiences could change in such a significant way, even with me paying full attention, without my being able to notice the change,” though he seems to have defined the situation exactly so that he (either “he”) would not notice any change.