On Ned Block's “Troubles with Functionalism” (1978) and David Chalmers’s “Absent Qualia, Fading Qualia, Dancing Qualia” (1995).
If mental states are functional states, there couldn't be zombies, i.e., something functionally equivalent to you that nonetheless doesn't have qualia (a sense of "what it's like" to be you... an inner life). Yet Block claims that there could be such zombies: for example, a functional duplicate of you whose components are all the citizens of China, acting on signals broadcast by satellites according to algorithmic rules. Even if the resulting system acts like you, it obviously isn't conscious.
Chalmers argues that if you buy this story about functional zombies being physically possible, you'd then need to explain the experiences of a creature halfway between you and the zombie (say, if your neurons were replaced one by one with little circuits, each of which exactly duplicates the function of that neuron), but you can't: If you have an experience of seeing red, and the zombie (even though it claims to have such an experience) doesn't, would the half-you/half-zombie have half an experience? A pale pink experience? A washed-out grey experience? A dimming experience? Or is there some point at which the lights suddenly go off, so that the halfway creature would be equivalent either to you or to the zombie, depending on which side of that point it falls? Chalmers thinks that none of these descriptions makes sense, so Block's argument doesn't work and functionalism is left standing. What do you think?
Do you, like Seth, hate weird thought experiments like these? Do you think, like Wes, that Block and Chalmers are really talking past each other: that Block is only attacking reductive functionalism, while Chalmers's argument only succeeds in defending nonreductive functionalism (i.e., function and mentality are correlated, or more specifically the mental supervenes on the functional, but the two are not actually one and the same)? Do you think, like Mark, that this is a totally separate issue from the hard problem (which Block and Chalmers both agree is a real problem that functionalism doesn't solve), and so the other guys should stop dragging the conversation back to it every single episode? Or are you like Dylan, who didn't show up to this episode?
Start with our first philosophy of mind episode in this series if you want to understand everything here.
The discussion continues in part two, or get the unbroken, ad-free Citizen Edition. Please support PEL!
Image by Solomon Grundy.
Explore why and how we take in the media we do through Mark's new Pretty Much Pop: A Culture Podcast at prettymuchpop.com.
I’m totally enjoying this series (even if I’m not sure y’all are), and your discussions helped me understand functionalism.
If an alien scientist were able to determine all of the physical inputs and outputs of human brains and formulate a fully functioning theory of mind without ever realizing that the brain actually produces subjective experience (that it’s “like something” to be conscious “on the inside”), doesn’t that put us in the same predicament with respect to computers? If the hypothetical alien scientist was wrong not to suspect that subjective experience would arise from the neural activity of the brain, then who are we to say that a computer or an AI capable of passing the Turing test wouldn’t have subjective experience arising from semiconductors?

If one thinks the nature of subjective experience is tied intrinsically to the specific type of medium through which it exists (that of neurophysiology and chemistry) and not solely to information processing, and that therefore computers and AI can in principle never have subjective experience, then he/she would have to be ready to discard the original argument about the alien scientist, and with it the whole premise that the hard problem actually exists. It would seem contradictory to me to argue that there actually is a hard problem of consciousness while simultaneously arguing that an AI is, in principle, not capable of having subjective experience by virtue of the components its mind is made of.

To me, what makes “the hard problem” so hard, and possibly unanswerable, is that even if an AI were sufficiently advanced to pass the Turing test, actually did have subjective experience, and was capable of communicating to us that it was having subjective experience, we would still not be able to take the AI’s word at face value.