On Hilary Putnam’s “The Nature of Mental States” (1973) and David M. Armstrong’s “The Causal Theory of the Mind” (1981).
What is the mind? Mark, Wes, Dylan, and Seth consider a theory of mind that defines things not by what they’re made of, but by what they do. What does this mean? Well, what makes something a mousetrap, for instance, is that it catches mice. It could be made of wood, or metal, or something else; the point is that it has a structure that will achieve this task. A functionalist theory of mental states, then, defines mental states not as essentially brain states, but in terms of the organism’s performance of some special class of tasks: being in pain, experiencing a memory, having a sensation of color, etc.
A common analogy for this is that the brain is like the hardware that the mind runs on, and the mind is like the software code. Groups of neurons fire off in lawlike ways, transmitting information around the brain, and it’s that information pattern that is the mind. Hilary Putnam (whose article we’re concentrating on in this first half) developed this idea, as presented seminally by Alan Turing, into “machine-table functionalism,” where we get an idea of what a particular mental state is by creating a sort of flowchart: If the machine is in a particular state (defined by the current values of numerous components) and receives a particular input, then it will (probably) move to a particular other state. Check out this wiki page for a simple example of this.
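To make the flowchart idea a bit more concrete, here’s a minimal sketch in Python (this isn’t anything from Putnam or the episode; the states, stimuli, and probabilities are entirely made up for illustration). Each row of the table says that if the system is in a given state and receives a given input, it will, with some probability, produce a behavior and move to another state:

```python
import random

# Toy "machine table" (states, stimuli, and probabilities invented for illustration):
# each entry maps (current state, input) to weighted (behavior, next state) transitions.
machine_table = {
    ("no_pain", "tissue_damage"): [("wince", "pain", 0.9), ("stay_stoic", "pain", 0.1)],
    ("pain", "analgesic"):        [("relax", "no_pain", 0.8), ("wince", "pain", 0.2)],
    ("pain", "tissue_damage"):    [("wince", "pain", 1.0)],
}

def step(state, stimulus):
    """Sample a (behavior, next state) pair from the table's weighted transitions."""
    options = machine_table.get((state, stimulus), [("do_nothing", state, 1.0)])
    weights = [w for _, _, w in options]
    behavior, next_state, _ = random.choices(options, weights=weights)[0]
    return behavior, next_state

print(step("no_pain", "tissue_damage"))  # e.g. ('wince', 'pain')
```

On the machine-table picture, being in pain just is occupying the “pain” slot in some such table, no matter what hardware implements it.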
Why be a functionalist? Well, as discussed in our ep. 219, mind-brain identity theory seems too chauvinistic: If we say that what constitutes a pain is a particular type of brain state, then any type of creature that doesn’t have a brain like ours by definition can’t be in pain. If we define pain instead as some kind of typical behavior, like the disposition to writhe around, then that could apply to any number of creatures. But of course that sort of behaviorism is not really accurate: I could very well be in pain and not writhe around, maybe because I’ve conditioned myself not to show my pain, or because I’m exercising, receiving a needed medical treatment, or am a masochist, and so am tolerating or even welcoming the pain. Functionalism advances on behaviorism by allowing mental states to be defined not just in terms of behavior (inputs and outputs) but in terms of other mental states. The belief “it is raining” is identified not just by my disposition to carry an umbrella, because clearly there are lots of other beliefs and desires that could affect whether this behavior manifests.
David M. Armstrong’s article (which we’ll get to in part 2) specifically defines the need for functionalism in terms of making materialism make sense. How can a mental state be a brain state, when those seem like such different kinds of stuff? Well, if the mental is defined as actions that the brain does, or more precisely dispositions to action that the brain has, that provides a nice model for us to understand why these things seem so different; the mental is not “stuff” at all, but activity. This solution actually goes back to Aristotle in the De Anima, who defined the soul as the “form” of the body.
Functionalism is really a class of views, not a single view. Armstrong’s view is analytic (a.k.a. a priori) functionalism, which says that what we really mean in ordinary language by mental terms is something functional. Putnam’s machine-table functionalism is a type of empirical functionalism, a.k.a. psychofunctionalism (at least according to Ned Block’s account of these distinctions), which says that no, the functional schematic doesn’t have to be connected to what our ordinary-language mental terms mean, but is instead supposed to elaborate the best psychological theories we have. This division is comparable to the one Chalmers made (see ep. 218) between Type A materialists (who claim that materialism is true a priori, implied in the definitions of mental terms themselves) and Type B materialists (who claim that it’s instead a scientific discovery that mind = brain). So even though functionalism is a step away from mind-brain identity theory (in fact, Chalmers himself is a functionalist, even though he’s not a materialist at all), similar questions come up in saying why or how you’re a functionalist as in saying why or how you’re a materialist. Are we saying that functionalism (or mind-brain identity) captures our intuitions (at least after we’ve thought about them carefully), or is this a scientific issue, such that it doesn’t actually matter whether the theory in question is counterintuitive?
Does functionalism solve the “hard problem of consciousness” as Chalmers defined it in our ep. 218? Both Putnam and Armstrong describe functional states as exhausting what it is to be in a mental state: to be in pain, for instance, is to possess a particular functional organization. Putnam articulates the distinction we discussed with regard to Papineau between concepts and properties: mental concepts (including presumably qualia, though Putnam doesn’t use this term) and physical concepts refer to the same underlying property, just like “water” and “H2O” do. This is essentially just Frege’s sense-reference distinction.
Armstrong tries to interpret mental states in terms of their causes or “intentions,” i.e., what in the world they are aiming at. This translates into direct realism (see our ep. 138 with Searle, and yes, we know Searle is gross) with regard to perception. It’s not that we perceive a mental entity, a quale, and then have to make sense of this weird sort of entity, but that perception just connects us via causality with the world: “what it is like” to see a horse is just the horse that caused our perception of it. Armstrong recognizes that this raises “the problem of the secondary qualities,” given that colors, smells, sounds, etc., are not, strictly speaking, in the world in the way that a horse is. Armstrong also considers emotions, and concludes following William James that these are experiences of our own physiology. Armstrong considers that experiencing a color may be a matter of a systematic illusion: The sensation is caused by a certain wavelength of light, but it’s impossible for us to perceive it that way. We perceive a simple quality, possessed by the surface we’re looking at, but the object is in fact complex. Armstrong doesn’t see this discrepancy as threatening materialism, though of course Chalmers (and Block) will disagree about this.
A complicated recurring issue is whether “pain” and other mental terms pick out the same thing in all possible worlds (this is how we can think about what the word means and what’s essential about it metaphysically). If this interests you at all, you should just go back and re-listen to our previous episodes on Kripke and Putnam.
For more introductory information, take a look at the articles on functionalism in the Internet Encyclopedia of Philosophy and the Stanford Encyclopedia of Philosophy. We took as one of our introductory sources the in-depth introduction by Ned Block in an article we’ll be discussing for ep. 222: “Troubles with Functionalism” (1978). Both the Putnam and Armstrong articles, along with an abridged version of this Block article, can be found in the David Chalmers–edited compendium Philosophy of Mind: Classical and Contemporary Readings (2002). I also got a lot out of Oron Shagrir’s article “The Rise and Fall of Computational Functionalism” (2005), which lays out Putnam’s views, including why he later rejected functionalism as “utopian.”
Continues in part two. Get the full, ad-free Citizen Edition. Please support PEL!
Image by Corey Mohler.
Check out our new culture/entertainment podcast, Pretty Much Pop, at prettymuchpop.com.
A few thoughts related to this episode:
1. Pain is incredibly tricky. We use self-reporting and a ridiculous 1-10 pain scale to measure it, and most nurses will tell you that one person’s “3” is another person’s “10” on the scale. In the case of hearing or sight we can appeal to an objective measure of the stimuli; for pain that’s much harder.
2. A lot of the confusion around the meaning of words (e.g. “water” and “H2O”) seems to stem from the assumption/assertion that words do/should have a single meaning. I understand the attractiveness of that idea, particularly if you work on formalizing language. But language is obviously very context dependent. Even the seemingly straightforward case of “water” and “H2O” is complicated. If I were translating a poem or novel from “Earth 2,” where “compound XYZ” had the functional role of water, I might very well translate the alien word for compound XYZ into the word “water,” despite the fact that the two are materially different.
3. So what else might be confusing about Earth 2? Let’s say there are tree-like things there. Most of us Earth 1ers would probably just call them alien trees. An exo-botanist from Earth 1 might clarify that the Earth 2 “trees” are not actually trees in the evolutionary sense, but even she might use the word “tree” in certain functional contexts. It seems like this same logic could apply to the “pain” of the aliens on Earth 2. If you burn the Earth 2 alien and he acts like he’s in pain… then he’s probably in pain. Our exo-biologist might point out that the alien’s pain sensation comes from an underlying process that’s fundamentally different from ours, so the qualia may not be that similar… but human nurses have the same problem just dealing with their human patients.
Functionalism doesn’t seem to completely capture what we know about how neurons connect to each other (many-to-many relationships), since functions usually have only one output. Multiple inputs and multiple outputs connected together in a huge neural network are what facilitate the parallel processing that distinguishes us from computers. For instance, when we smell smoke, multiple regions of the brain are activated at once: Did we leave something on the stove? Is there a hint of burning plastic? Is it just sausages on a BBQ? Does it provoke memories of camping? Somehow, all these possibilities are weighed simultaneously and then ultimately result in multiple simultaneous actions as well (breathing, hormones, heart rate, more thoughts, and finally actual physical actions like walking).
As such, I don’t see flowcharts as a good metaphor for mental activity.
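To illustrate the many-to-many point above with a toy sketch (not a claim about actual neuroscience; the “regions,” “responses,” and weights here are invented), a single stimulus fans out to several regions at once, and each response draws on several regions simultaneously:

```python
import numpy as np

stimulus = np.array([1.0])  # "smell of smoke"

# One input fans out to four "regions" at once (invented weights).
fan_out = np.array([[0.9], [0.7], [0.4], [0.6]])
regions = fan_out @ stimulus  # alarm, memory, odor analysis, appetite

# Each response draws on multiple regions at the same time (invented weights).
fan_in = np.array([
    [0.8, 0.1, 0.3, 0.0],  # heart rate
    [0.2, 0.6, 0.0, 0.1],  # memory of camping
    [0.5, 0.0, 0.7, 0.2],  # check the stove
])
responses = fan_in @ regions

print(dict(zip(["heart_rate", "camping_memory", "check_stove"], responses.round(3))))
```

Contrast this with the one-step-at-a-time machine table sketched earlier in the post.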
Talking about states also seems likely to lead one astray, as we don’t experience consciousness as a series of states. Consciousness manifests dynamically, not statically. A film presented frame by frame in book form is a different experience from just watching the film.
The other problem with states is that it seems very likely that the neurons that need to fire for me to type the letter ‘a’ are not only in different physical locations in my brain compared to yours, but also connected in completely different patterns from yours. I can see how the functionalists may be trying to abstract away those difficulties, but in doing so they risk abstracting away key parts of the experience of consciousness.
Each mind must be unique in structure and configuration because each person has led a different life and thereby built and ‘trained’ their neural network in a different way.
Another difficulty is that our brains are constantly being built and trained. This also seems to be a feature of consciousness and not something that can be swept to the side for the convenience of creating a nice neat theory of mind.
I’m not sure these are really problems for functionalism.
What about an economic market? I feel quite comfortable with a functional definition for markets, but they are squidgy in all the ways you mention. There are multiple inputs and outputs connected in complicated ways. It’s dynamic not static. Different markets are likely connected in different ways. They are “trained” to behave differently over time.
I can’t be sure they are problems for functionalism either. That said, I’m not convinced that our models of financial markets are terribly functional.
Seth’s opening remarks here ftw! I’ve always loved the topic but found the contenders just hopelessly mired in their own muck.