What would give us sufficient reason to believe that a non-human was conscious? Block thinks this is a harder problem than we might suspect. We can't know for sure exactly what consciousness in us is, so we can't know for sure what such a being would require (a brain? certain patterns of behavior?) to be enough like us that we could safely apply our own experience of our own conscious states to it. Papineau diagnoses this as a fundamental vagueness in the concepts we use to describe our conscious states.
As you'll recall, the "hard" problem (per Chalmers) is that we can't understand how first-person experience (qualia) results from brain matter. To complicate this, the same issue comes up if we assume that what is essential to a mental state is not a precise kind of brain state (which would automatically rule out robots and aliens and the like being in that state), but instead a "functional organization." Ep. 220 and 221 will be all about functionalism, but briefly, it's the idea that what makes a pain a pain is that it has certain typical causes (getting hurt) and certain typical effects (pain expression behaviors), so creatures with different kinds of brain materials could nonetheless feel pain. But of course, since functional organization does not refer to private mental experiences, it still seems possible (just as for materialism) that there could be a creature that has the relevant functional organization (or for materialism, the relevant matter) and yet lacks the pain qualia, the actual experience of pain.
Functionalism picks up on the idea from Alan Turing that if, say, Data from Star Trek or C-3PO from Star Wars can hold a convincing conversation with us, then heck, why would we not call that robot conscious? I mean, that's how we judge whether other people are conscious, right? Block claims that when we judge another person to be conscious, we're not just relying on their convincing behavior, but also on the fact that we know they're the same type of creature we are. We can't be strictly CERTAIN that any other human has qualia (the philosophical problem of other minds), but we sure do have strong rational grounds for believing this.
The same will go for, e.g., a judgment that an animal is in pain. We understand what pain behavior looks like, and we understand that other mammals at least have similar nervous systems to ours, so it's entirely reasonable for us to believe that animals feel pain and have other qualia, even though their brains are unlike ours in certain ways.
But for a robot (or alien, or even an octopus, whose brain has developed via a very different evolutionary track than ours), we have no such assurance that the light of consciousness is really on even if there's a very accurate simulation of human conversation. We might want to follow Turing in classifying the system as "intelligent" or "thinking," but whether it is conscious is simply inaccessible to us. Moreover, Block classifies it as "meta-inaccessible": we not only don't know whether Data is conscious, but we also have no idea what would even count as rational grounds for believing that he's conscious. This is what Block calls the Harder Problem, which takes the Hard Problem and adds this Other Minds wrinkle, an extra layer of confusion.
David Papineau's paper was recommended to us as a simpler way of presenting essentially the same issue. In our own case, we know we have qualia; we know that when, say, a pain quale is going on, there's some particular brain state involved (we might not yet know which, but brain scientists can figure it out); and this "pain" also has a particular functional organization, whose nuances one can try to chart out and maybe use to inform a computer simulation of pain. (If it seems weird to think about pain in this respect, think instead about what it is to have a belief or memory or desire; these are all typically related to each other causally. E.g., if I have a desire for an apple, a belief that there's an apple in front of me, and other related beliefs and desires, e.g. that I can have the apple, that I don't know from experience that apples make me sick, etc., then I typically grab for the apple.) But we don't know which of these (the brain state or the functional organization) the quale is actually identical to (assuming it's identical to one of them, i.e., that materialism is true, which Papineau, like Block, has independent reasons for assuming).

Following Wittgenstein, Papineau thinks that words only make sense in the context in which they're originally used; they're just too vague to be applied to the crazy cases philosophers think up. For Papineau, it's simply indeterminate whether the pain-like behavior that Data might exhibit really counts as "pain." Maybe there's no single cross-species concept of pain at all, and we'd just have to decide for practical purposes, e.g., whether to accept his protestations of pain (and so stop hurting him) or not. Block thinks this result is intolerable: either Data is in pain (and has other qualia) or he isn't (doesn't).
Papineau's longer treatment of the hard and harder problems is Thinking about Consciousness (2002).
Image by Solomon Grundy.