On Ned Block’s “The Harder Problem of Consciousness” (2002) and David Papineau’s “Could There Be a Science of Consciousness?” (2003).
What would give us sufficient reason to believe that a non-human was conscious? Block thinks this is a harder problem than we might suspect. We can’t know for sure exactly what consciousness in us is, so we can’t know for sure what such a being would require (a brain? certain patterns of behavior?) to be enough like us that we could safely apply our experience of our own conscious states to it. Papineau diagnoses this as a fundamental vagueness in the concepts we use to describe our conscious states.
This conversation continues from ep. 218, with guest Gregory Miller from the Panpsycast Philosophy Podcast rejoining Mark, Wes, and Dylan.
As you’ll recall, the “hard” problem (per Chalmers) is that we can’t understand how first-person experience (qualia) results from brain matter. To complicate this, the same issue comes up if we assume that what is essential to a mental state is not a precise kind of brain state (which would automatically rule out robots and aliens and the like also being in this state), but instead a “functional organization.” Ep. 220 and 221 will be all about functionalism, but briefly, it’s the idea that what makes a pain a pain is that it has certain typical causes (getting hurt) and certain typical effects (pain expression behaviors), so creatures with different kinds of brain materials could nonetheless feel pain. But of course, since functional organization does not refer to private mental experiences, it still seems possible (just as for materialism) that there could be a creature that has the relevant functional organization (or, for materialism, matter) and yet lacks the pain qualia, the actual experience of pain.
Functionalism picks up on the idea from Alan Turing that if, say, Data from Star Trek or C-3PO from Star Wars can hold a convincing conversation with us, then heck, why would we not call that robot conscious? I mean, that’s how we judge whether other people are conscious, right? Block claims that when we judge another person to be conscious, we’re not just relying on their convincing behavior, but also on the fact that we know they’re the same type of creature we are. We can’t be strictly CERTAIN that any other human has qualia (the philosophical problem of other minds), but we sure do have strong rational grounds for believing this.
The same will go for, e.g., a judgment that an animal is in pain. We understand what pain behavior looks like, and we understand that other mammals at least have similar nervous systems to ours, so it’s entirely reasonable for us to believe that animals feel pain and have other qualia, even though their brains are unlike ours in certain ways.
But for a robot (or an alien, or even an octopus, whose brain developed along a very different evolutionary track from ours), we have no such assurance that the light of consciousness is really on, even if there’s a very accurate simulation of human conversation. We might want to follow Turing in classifying the system as “intelligent” or “thinking,” but whether it is conscious is simply inaccessible to us. Moreover, Block classifies it as “meta-inaccessible”: not only do we not know whether Data is conscious, but we also have no idea what would even count as rational grounds for believing that he’s conscious. This is what Block calls the Harder Problem: the Hard Problem with this Other Minds wrinkle added, giving an extra layer of confusion.
David Papineau’s paper was recommended to us as a simpler way of presenting essentially the same issue. In our own case, we know we have qualia; we know that when, say, a pain quale is going on, there’s some particular brain state involved (we might not yet know which, but brain scientists can figure it out); and this “pain” also has a particular functional organization, whose nuances one can try to chart out and maybe use to inform a computer simulation of pain. (If it seems weird to think about pain in this respect, think instead about what it is to have a belief or memory or desire; these are all typically related to each other causally, i.e., if I have a desire for an apple, a belief that there’s an apple in front of me, and other related beliefs and desires, e.g., that I can have the apple, that I don’t know from experience that apples make me sick, etc., then I typically grab for the apple.) But we don’t know which of these (the brain state or the functional organization) the quale is actually identical to (assuming it’s identical to one of them, i.e., that materialism is true, which Papineau, like Block, has independent reasons for assuming). Following Wittgenstein, Papineau thinks that words only make sense in the contexts in which they’re originally used; they’re just too vague to be applied to the crazy cases philosophers think of. For Papineau, it’s simply indeterminate whether the pain-like behavior that Data might be exhibiting really counts as “pain.” Maybe there’s no single cross-species concept of pain at all, and we’d just have to figure out for practical purposes whether, e.g., to accept their protestations of pain (and so stop hurting them) or not. Block thinks this result is intolerable: either Data is in pain (and has other qualia) or he isn’t (and doesn’t).
Papineau’s longer treatment of the hard and harder problems is Thinking about Consciousness (2002).
End song: “Mindreader” by Phil Judd from Play It Strange (2014). Hear Mark interview Phil on Nakedly Examined Music #98.
Image by Solomon Grundy.
Please support PEL and get this and every other episode ad free.
You folks should talk with philosopher R. Scott Bakker about his “blind brain” theory and how we shouldn’t expect heuristics (like intentional stances, minds, beliefs, etc.) developed through our evolutionary history to meet daily needs to be good tools for meta-cognition; science once again working against folk beliefs/rhetorics.
It doesn’t seem like this “blind brain” theory gives any insight into how we might bridge the explanatory gap, like Representational Qualia Theory does, right? (see: https://canonizer.com/topic/88-Representational-Qualia/6#statement)
Did you actually look into BBT, or are you just here promoting your business venture?
https://rsbakker.wordpress.com/
I listened to about half of it, well past the part where he clearly admitted he was an “eliminativist.” Along with this, I saw no evidence that his theory had anything to do with bridging the explanatory gap. And to be sure I wasn’t missing anything, I was asking for confirmation. It usually helps to give an example of at least one possible way to bridge the explanatory gap, as most people just assume it isn’t possible. That was my primary intent in including that link.
Should I assume, since you haven’t provided the clarification I am asking for, that you still think qualia are impossibly ineffable?
There is now an emerging consensus coalescing around what is being called “Representational Qualia Theory” at Canonizer.com. (see: https://canonizer.com/topic/88-Representational-Qualia/6#statement) More than 40 of the current 60 participants are now on board with this super camp in various supporting sub camps, including Steven Lehar (https://canonizer.com/topic/88-Panexperientialism/34#statement), John Smythies (https://canonizer.com/topic/88-Smythies-Carr-Hypothesis/14#statement), Stuart Hameroff (https://canonizer.com/topic/88-Orch-OR/20#statement), and a growing number of others. Even Dennett’s Predictive Bayesian Coding theory is now in a sub camp supporting “Representational Qualia Theory.” (see: https://canonizer.com/topic/88-Dennett-s-PBC-Theory/21#statement)
It appears that there is, after all, a lot of consensus possible around the general ideas in Representational Qualia Theory. All the disagreement appears to be just about the nature of qualia. These various camps are contained in the supporting sub camps to Representational Qualia Theory. This super camp is basically a way to falsify the various sub camps until THE ONE camp that can’t be falsified is discovered. This will enable us to bridge the explanatory gap and eff the ineffable nature of qualia, the hard problem. It also predicts how we might resolve this “harder” problem via “weak,” “stronger,” and “strongest” forms of effing the ineffable (see the paper referenced in the camp statement). It’s all about not being sloppy with the definitions of qualitative words like “red,” or not being “qualia blind.” Anyone who uses the one word “red” when discussing the perception of “red” things is qualia blind. Such models are not sufficient to include physical qualities.
OH I’M AN ISOMORPH AND I’M OK
I’M FUNCTIONAL IN EVERY WAY
I LOOK LIKE A DUCK
QUACK LIKE A DUCK
BUT I’M NOT QUITE LIKE YOU
I DON’T FEEL THAT PINPRICK
BUT I’LL TELL YOU THAT I DOOOO
OH I’M AN ISOMORPH AND I’M OK
I’M STRUCTURAL IN A PERFECT WAY
I SEE SOME RED
I STOP THE CAR
I SMELL THE DAHLIA
I SHOW BEHAVIORS PHENOMEN’LLY
BUT HAVE NO QUALIAAAAA
OH I’M AN ISOMORPH BUT YOU SHOULD KNOW
IT’S NOT LIKE WATER ‘N’ H2O
I BASK IN SUN
BUT HAVE NO FUN
WHEN I GO ON VACATION
AND THAT’S BECAUSE I’M MADE UP
OF THE CHINESE NATIONNNNN
Ha, this is great, Bill. Thank you :).
Thanks Wes, it’s a Phenomenal series!
Nice, but this fails to clearly distinguish between reality and knowledge of reality.
When you say “I see some red, I stop the car,” you are talking about reality, but you must also include in your model possible “isomorphic” diversity in knowledge of reality. For example, we both see and stop for “red,” but my knowledge of what we both call “red” could be like your knowledge of green. For more information, see this piece describing “3 robots that are functionally equivalent but qualitatively different.” (https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing)
If 3 robots answer “what is knowledge of red like for you?” very differently, are they “Functional in every way”?
Thanks Brent
Good points!