Podcast download (Duration: 50:49, 46.6MB)
For the climax and denouement of our summer philosophy of mind series, Ned Block himself comes on to help us fill in the gaps about functionalism and attributing consciousness to machines. We discuss two essays by other authors responding to Ned's work from the collection Blockheads!: Essays on Ned Block's Philosophy of Mind and Consciousness
(2019), ed. Adam Pautz and Daniel Stoljar. (Thanks to Adam for helping us figure out what to read with Ned.)
In this first half, we clarify Ned's typology of consciousness, distinguishing phenomenal consciousness from access consciousness, and continue the discussion we began in our ep. 219, covering Brian McLaughlin’s “Could an Android be Sentient?” and Ned's Blockheads! response, "Functional Role, Superficialism, and Commander Data: Reply to Brian McLaughlin." While McLaughlin believes that our biological dissimilarity from robots is sufficient justification for not attributing phenomenal consciousness to them, Ned now thinks that if a robot were a functional duplicate of a person, that would be a strong (though defeasible) reason to attribute such consciousness, especially if the isomorphism ran very deep (we also go into what "deep" might mean here).
On part two we talk with Ned about Michael Tye’s “Homunculi Heads and Silicon Chips: The Importance of History to Phenomenology." Get the ad-free, unbroken Citizen Edition. Please support PEL!
I have noticed, in the functionalist writing that I've read (which is very little!), the description of certain mental processes as “functional,” or perhaps as possible to align with, or explain only through, mechanical systems. These processes include adding, subtracting, I would guess geometry, sorting, and other types of “pure information processing.” The point is that they do not play a role in the Hard Problem of Consciousness, which is confined to explaining *true* qualia-packed experiences like ‘the smell of lavender,’ and so on.
I think the thinking goes like this:
1. We want to know whether some person or thing is experiencing, whether they are subjects.
2. Computers don’t experience qualia, and are not subjects.
3. Computers process data, add, subtract, sort and so on.
4. [2,3] Being able to process data, add, subtract, sort and so on does not constitute experiencing qualia, or being a subject.
5. ???
6. Processing data, adding, subtracting and so on cannot be qualitative or subjective.
7. Human processing of data, adding, subtracting and so on is not qualitative or subjective…
8. …and does not play a role in the Hard Problem
It's step 5 that bothers me. By laying out my imagined functionalist argument like this, I hope to make it clear, to those who know more, where my deep misunderstandings of functionalism lie.
I should have listened more first 😛
I bothered Ned via email for more information on Rafael Malach’s studies on diminished self-reporting during movies and highly focused tasks, and he responded very quickly with three papers for me. Super nice of him. I find the notion of losing one's sense of identity temporarily when highly engaged to be fascinating.
If you are curious, two of the papers are from 2006-7, “Perception without a Perceiver” and “When the Brain Loses Itself.”