Sam Harris got a lot of grief on our Churchland episode. Whatever the difficulties that Churchland (and allegedly Hume) may have with the is/ought distinction, Harris provides a much easier target for this kind of criticism.
Here's Harris specifically responding (starting around 1:40) to the is/ought distinction, "a firewall between facts and values in our discourse":
He doesn't explain what's wrong with the distinction, but just says that it's had terrible effects: encouraging superstition and making science into something inhuman and amoral.
Around 4:30, he gives something like Churchland's response (and incidentally, she shows up at 5:15 as an audience member, and he refers to her near 8:40): we are social beings, i.e. it is a fact that most of us want to help others. At 5:50, he describes something like the pluralism Churchland advocated, using an analogy of food: there's not just one palatable type of food, but there's an objective distinction between food and poison. Does a social practice make for happier, flourishing individuals? No? Then it's bad.
Toward the end of the clip, and into part 2, he defends consequentialism, and his own breed of it against utilitarianism (following much of what Churchland says in her book on this topic). If anything, his emphasis on moral realism is much stronger than Churchland's (as I commented on the show, I think her main influence Hume presages Nietzsche in important ways).
Not until 4:00 in the 2nd clip does Harris actually bring up neuroscience, only to quickly drop it. He then attacks what seems to me a straw man: the view that the diversity of moral opinions means that they can't be in any way objective. At around 8 minutes, he brings up scientism but doesn't respond to the charge. His concluding remarks sound much like those of Churchland: moral reasoning involves ineliminable complexity, but we still have to navigate our way through it as best we can.
P.S. I was on vacation for a while and then replacing my PC, so we're a bit behind on the episode editing; it'll likely be at least a week still until the feminism episode goes up, but hopefully the existence of God episode (which has already been recorded) will come up within a week or two after that.
Jay Jeffers says
Sam Harris seems confused, but I wonder if maybe he’s just being extremely coy.
In this piece, http://www.utilitarian.net/singer/by/197301–.htm, Peter Singer distinguishes between “neutralists” and “descriptivists.” Neutralists say that any principle can count as moral if it purports to be overriding (not that we all have to agree that it’s morally good in order for it to count as a moral principle).
Descriptivists, on the other hand, place constraints on what counts as a moral principle (e.g., that it be logically tied to suffering and happiness).
On this scale, Harris is a descriptivist, and as such, must reduce all moral disagreement to a semantic disagreement over what morality actually means, which is an incredibly impoverished way of understanding the diversity of values in the world today, if you ask me.
The reason I say Harris is possibly being coy is that he doesn’t come right out and say that if one has neutralist intuitions, then he has nothing to offer them (he might not know how to say this, but if that’s the case, his ignorance is willful. He’s a smart guy; he could have engaged with the literature more than he did). Rather, he acts as if the literal meaning of morality is *necessarily* tied to suffering and happiness, and he says that everyone holds the same moral view because religious people’s beliefs about happiness and suffering are simply extended to the afterlife. This is not the result of any deep study of cultural diversity or analytic reflection on moral disagreement, but simply a facile gloss.
Whatever the cause of moral beliefs, what are we to make of cultures that say that they are obliged to do horrible things? They put their beliefs in moral terms, and it’s not hard to imagine that loyalty to God is an intrinsic value in some cultures. Now, a person would be more likely to follow God if they expect an eternal reward, sure, but it’s not a leap to believe that the reward comes because loyalty and obedience are thought to be inherently good. So we can’t easily track all cultures’ moral beliefs such that they derive from and/or are motivated by a tangible reward like happiness. Such is the case with moral disagreement: it’s often intractable, to put it mildly.
It just seems like the level of obliviousness Harris has to maintain to continue his line of argument must take some effort.
Jay Jeffers says
Oh, and in the piece I linked to, Singer’s gloss seems a bit quick too. But it’s nevertheless much more comprehensive than what Harris offers.
Jay Jeffers says
I should moderate a bit and say that some disagreement can be left over on the appropriate course of action even if the definition Harris offers prevails. Conservatives, liberals, etc., will still disagree on what’s moral even if they can agree on the definition Harris offers. So I shouldn’t say that “all” moral disagreement will be reduced to a semantic disagreement.
But the disagreement that bothers us most is the kind where our good is someone else’s bad, and vice versa. In such cases, it seems superficial to say that the epistemological matter can be adequately explained by saying our rivals are using words incorrectly, or working with wrong definitions, rather than saying the Taliban have misapprehended moral reality (which G.E. Moore would say) or that the disagreement can’t be adjudicated because there is no moral reality to apprehend (which J.L. Mackie would say). Both are better than Harris on the matter.
Wes Alwan says
One note here (which I know Mark is aware of but I want to reiterate it): saying that there are facts about what human beings tend to value, or facts about brain structure that lead to our social behavior and moral judgments, does not erode the fact/value distinction. The question is whether normative statements have truth values, and if so whether these truth values are functionally dependent on the truth values of only non-normative statements. Saying that you’ve observed that some behavior leads to flourishing doesn’t get you anywhere in this debate. Because “one ought to flourish” is still an implied normative statement here, which itself can’t obviously be grounded in non-normative statements. (What would these be?)
Further, what flourishing means is the point of contention for most ethical debates! If that were well-defined and agreed upon, it’s hard to see how most ethical debates would continue (although the meta-ethical debate might). So pretending that the answer to the hard ethical question — what flourishing means — is clear, and then going around treating that as a well-established ruler against which you can measure behaviors … misses the point entirely.
But doesn’t knowing that we’re social creatures — knowing about our needs — tell us something about what our flourishing would require? Sure, but (apart from the fact that this doesn’t get rid of the ought/is problem, as I’ve pointed out) a) these are very general notions, and ethical dilemmas happen at a finer grain of detail; and b) we know about our sociability and that it is “good” prior to knowing about the underlying brain structure, and have to in order to make progress with the idea. There are also innate violent potentialities … underwritten by the brain. We don’t go in, pick out some structure underlying that potentiality, and call it “good” simply because it’s a fact. Brain structure trivially underlies both dysfunction and function. One can then go down the road of talking about what’s typical to a species and make other virtue ethics-style attempts at settling the problem, but these quickly become tough rows to hoe (as always, see http://plato.stanford.edu/entries/naturalism-moral/). Probably the most promising route involves a thorough analysis of the concept of function, and whether it can be a non-normative notion that grounds the normative. (But again, I haven’t seen an account that satisfies me.)
Mark Linsenmayer says
The more I think about this, the more I get the feeling that this is not “the question,” that conceiving meta-ethics linguistically like this is an interesting exercise, but involves a category mistake. If ethical statements are taken to refer to something that is in some way socially established (yet not mere arbitrary conventions), then the correspondence theory of truth that the application of truth values to these statements implies is not going to be entirely adequate. To make an ethical claim is, as Nietzsche says, to “create values,” yet not in the superficial subjectivist sense that “it’s true because I believe it.” Rather, making an ethical claim is neither a matter of describing some preexisting state of affairs independent of my thinking (in which case a truth value would apply), nor is it a performative (in the sense that, e.g. making a vow is not a description but a verbal action) or an emotional expression (in either of these cases a truth value would not apply). It’s just more complicated than that, I think, and while asking this linguistic question is a way into the issue, it’s not the most direct way, and the continual need to circle back to the linguistic question as if this is the fundamental issue to be addressed seems wrong to me.
Wes’s point about flourishing is, of course, right on, as exemplified well in our Plato and Freud episodes, and I know that Wes has a somewhat pessimistic account of the possibility of flourishing given our inherently conflicted psychology. I recall an interview with Rorty where he said that you can’t argue between the fundamental values of utilitarian happiness on the one hand and Nietzschean extremes on the other, but that nonetheless the former is pretty evidently superior to the latter and should be guiding our public policy. To clarify: Nietzsche was scared of the image of the “last man,” which seems to resemble the happiness we shoot for through social legislation: i.e. we want everyone to be safe and warm with plenty of material goods. For Nietzsche, by contrast, flourishing (probably) involves exerting your own power, taking risks, and striving to the point of suffering. While the latter ideal works well for challenging yourself and trying to live a great life, it sucks for social legislation, essentially amounting to giving up on making things better for the mass of people, because the great ones will achieve their own excellence regardless. While this duality is certainly still a living conflict (i.e. it matches up roughly with our two political parties), we still pretty clearly understand what “making things better” means as I just used it in my last sentence. Whatever we may claim in our more Nietzschean moments about human flourishing, we all have a pretty strong pre-theoretical understanding of what “the good” for someone amounts to, and utilitarianism more or less captures that. No, this doesn’t cash out (as Bentham thought) into a specific calculus that decides fine-grained moral decision-making: no rule set will come in and save us from having to be wise, i.e. experienced in navigating tough decisions. Again, whether the difficulty in these decisions is epistemological (i.e. there is an objectively optimific, correct decision but we have trouble figuring out what it is) or metaphysical (i.e. there is no objectively correct decision, but we nonetheless have useful tools to guide us in making these decisions) is pretty much beside the point for someone actually dealing with an ethical dilemma; ethics, being a practical enterprise, does not hinge upon a correct philosophical analysis of the linguistic, meta-ethical question.
Mark Linsenmayer says
One addendum: my final point above is not intended to be the Stanley Fish move that Wes referred to in the post previous to this one. Clearly there’s a connection, but I’ve not looked at the texts he refers to there and am not going to try to place myself into that debate. Obviously, I think meta-ethics is interesting, and relevant to ethical thinking itself, particularly when doing something like casting a judgment on the values of some other culture. I don’t think that contradicts the point I just made above.
Wes Alwan says
You saw my response to that coming from a mile away.
Wes Alwan says
Nicely put on the contrast between the Nietzschean and utilitarian approaches to flourishing. But even if we grant utilitarianism (and I still think all the textbook criticisms apply) I don’t think there’s as much agreement about what the good life is (and especially what legislation that should lead to) as you do. But even if I were to grant that: suppose Harris is just running around applying a rock solid ethical theory that we all accept. I know what flourishing is, he tells us, and science can help us apply that knowledge by using its tools to tell us whether behavior x or way of life y will lead to it. None of that supports any metaethical claim about science revealing values to us or collapsing a distinction between ought and is. I already have all my evaluative premises lined up (in my conception of flourishing). So if I come to the conclusion that corporal punishment is wrong because it leads to miserable adults, the standard by which we define “miserable” is the rub. Science hasn’t revealed any new values here. “If you value x, you ought not to do y” — no one ever denied this sort of hypothetical judgment plays a role in ethical reasoning. This doesn’t mean that “you ought not to do y” is a value that you’ve discovered by observation. You’ve merely noted the causal relationship between some behavior and what you value. And whether or not metaethics itself is something Harris values is not the issue here! It’s of interest to me, and when I point out that his metaethics is bad it’s not a rejoinder to say that it doesn’t matter to him because it’s airy and philosophical and not part of how we make actual ethical decisions. I don’t use physics to play basketball, but I still might conceivably be interested in the physics of basketball and have a right to call someone on their error when they call a parabola a hyperbola (or on any other form of hyperbole; see the Fish post).
So now let’s get to our hypothetical judgments — a staple of non-scientific practical reasoning as well — and see what science actually adds. In the case of corporal punishment, the idea is that we can observe the adults to see how they turned out. But think about how hopeless this project would be, given the extremely large number of variables that will play a role in that. Can you think of an experiment in which you could really scientifically assess the results of corporal punishment and control all variables? (Social science, epidemiological studies, and even drug trials suffer consistently from such problems.) Wouldn’t you have a better chance thinking about the mechanisms involved, asking yourself for instance whether you want your child motivated more by fear or understanding? Wouldn’t armchair reasoning actually be more rigorous in this case? (But again, whether such evaluations are conducted from the armchair or scientifically (if you could call it that in the proposed case) — these are all examples of hypothetical reasoning which tell us nothing about the metaethics.)
Finally, I don’t think that asking about whether ethical statements have truth values is a linguistic exercise. And it certainly doesn’t prejudice you to any theory of truth, correspondence or otherwise. Any talk of truth values merely requires the minimalist, non-metaphysical concept of truth (Tarski’s) on which we all must agree if we are even to talk to one another about it: ‘p’ is true iff p; contemporary pragmatists also accept this minimalist conception of truth — on which you can build all the various metaphysical or anti-metaphysical theories. And ultimately, before evaluating it I’d need more details about a more complicated meta-ethical theory that is neither realist nor anti-realist, cognitivist nor anti-cognitivist. (Pragmatism? Future podcast episode?) For now I’m inclined to agree with Boghossian that there’s no middle way between nihilism and moral realism (although I see the attractions of both … once again, agnosticism).
And just to hammer the ought-is points home, two useful reviews. First, by biologist Allen Orr (http://www.nybooks.com/articles/archives/2011/may/12/science-right-and-wrong/?pagination=false):
“Harris’s view that morality concerns the maximization of well-being of conscious creatures doesn’t follow from science. What experiment or body of scientific theory yielded such a conclusion? Clearly, none. Harris’s view of the good is undeniably appealing but it has nothing whatever to do with science. It is, as he later concedes, a philosophical position.”
And a sympathetic review by Russell Blackford, which nevertheless corrects the metaethics (http://jetpress.org/v21/blackford3.htm):
“At the same time, however, Harris overreaches when he claims that science can determine human values. Indeed, it’s not clear how much the book really argues such a thing, despite its provocative subtitle. Harris presupposes that we should be motivated by one very important value, namely the well-being of conscious creatures, but he does not claim that this is a scientific result (or a result from any other field of empirical inquiry). If, however, we combine this fundamental value with knowledge as to how conscious creatures’ well-being can actually be aided, we can then decide how to act. We can also criticize existing moral systems, customs, laws, political policies, and so on, if we are informed by scientific knowledge of how they affect the well-being of conscious creatures.
“While this is all coherent, Harris is not thereby giving an account of how science can determine our most fundamental values or the totality of our values. If we presuppose the well-being of conscious creatures as a fundamental value, much else may fall into place, but that initial presupposition does not come from science. It is not an empirical finding. Thus, even if we accept everything else in The Moral Landscape, it does not provide an account in which our policies, customs, critiques of policies and customs, and so on, can be determined solely by empirical findings: eventually, empirical investigation runs out, and we must at some point simply presuppose a value at the bottom of the system, a sort of Grundnorm that controls everything else.
“Harris is highly critical of the claim, associated with Hume, that we cannot derive an “ought” solely from an “is” – without starting with people’s actual values and desires. He is, however, no more successful in deriving “ought” from “is” than anyone else has ever been. The whole intellectual system of The Moral Landscape depends on an “ought” being built into its foundations.”
Ethan Gach says
Blackford gives the most charitable interpretation of Harris’ thesis.
Outside of the book-selling polemics aimed at by the title and PR campaign, Harris basically says that if you don’t think the flourishing of conscious creatures is the most important thing, at least in the abstract, you won’t believe anything else he says.
He also dodges the problem of conflicting flourishings. It’s not clear that my well-being is dependent on the well-being of my fellows. But it’s a possibility, no matter how tenuous. With 50/50 odds, the claim that flourishings ultimately don’t conflict isn’t a completely radical one.
That’s why, ultimately, Harris’ book is not very interesting. It’s a good way of contemporizing some ethical issues and getting conversations going at dinner, but it doesn’t actually contribute anything overall.
If anything, the more interesting but less explored topic is in the title “The Moral Landscape.” The proposition moves toward putting morality on a timeline continuum rather than locating it in discrete instances. So rather than being concerned with what is moral at time Z, I have to be concerned about how it will cash out at later times as well.
I’m sure other consequentialists have explored this at greater depth, but it’s an interesting problem in ethics that isn’t often touched upon, e.g. what do we owe to future generations, etc.
Jay Jeffers says
This may be splitting hairs, but it seems like the rhetoric you point to (the “book-selling polemics aimed at by the title and PR campaign”) is not isolated from the actual message Harris is delivering. He tells us that the only possible definition of morality has to do with maximizing the flourishing of conscious creatures (even here a little charity is needed, because his message is not always this clear).
He doesn’t then say those that disagree with this have incommensurate values and so there’s no use in continuing the conversation. Rather, he asserts that those that claim to disagree either covertly (or without realizing it) hold the same definition of morality as Harris does (a helluva bold empirical claim), or that those that disagree are plain wrong (because they have defined morality incorrectly. Nice and tidy ain’t it?). He doesn’t deal at all with the reality that some people take considerations that we find repulsive to provide overriding reasons for action, and we’re left with saying that those that disagree with Harris’ definition of morality are wrong by definition, like any other banal linguistic disagreement (sucking the emotional gravity out of moral disagreement in the process).
Anyway, his actual message seems polluted by the crudeness of his PR campaign.
Ethan Gach says
In the book he remarks, as he does in many of his lectures, that nothing can validate a fundamental moral principle, just as nothing can validate science in any strict epistemological sense, but that this doesn’t stop us from doing science or from pursuing what, by most accounts, seem to be progressively moral ends.
And outside of the contrarian position, who would actually argue that the “best” life isn’t the one we want? In a Socratic fashion, Harris is basically making the point that whatever you think human flourishing is in practice, people have some notion of it in theory that they seek to achieve.
As a result, if one accepts that the “best” of all possible worlds is better than the “worst,” one is committed to then figuring that out. In that way, he doesn’t engage metaethically, but simply dictates that his argument starts from a position of moral realism further down the path.
I’m not claiming that to be a legitimate analytic move. But I think the broader point is that there really is a lot of consensus on what is “good” and that a lot of the fringe debates can be figured out by applying scientific investigation.
I think his medicine/morality metaphor is important, if only because it raises another interesting question, in that he presumes it’s better to be healthy than not, and that as a result, it’s better to be psychologically “healthy” than not.
But there is an interesting divide between the two. We don’t find medicine controversial, but claims to mental medicine make people very uneasy, because they question the very legitimacy of what appears to make up our mental states and thus our personal identities, even our consciousness.
In effect, I think Harris’ view boils down to a hyper process-oriented materialist view (though I may be applying those terms incorrectly), where consciousness itself is a process that has the propensity to function a certain way built into it, and the fulfillment of those functions is “good” and would be considered “flourishing.”
I think one could construe Harris’ proposal as science being the means by which consciousness is able to realize itself.
Mark Linsenmayer says
Yep, I like that Blackford clarification.
As with his works on atheism, I think Harris’s work here is primarily political rather than philosophical. There’s a sense of the term science that is perhaps better expressed by the term “craft,” where the theoretical foundations have been established, and it becomes a matter of mechanics to then push forward the endeavor, and this is the sense of science I think Harris is pushing for. (It should be clear from the fact that I guess astrology could count as a craft in this sense that this is not the same thing as being committed to an experimental method or much else.) As Blackford says, there’s a fundamental normativity at the heart of it, but if he can get his audience to grant this as uncontroversial, then the project can proceed. What tools are available for an empirical study of what is optimific is an interesting question, and I take Wes’s point: strictly controlled experiments are a good goal but (as with the social sciences generally) can’t be the sole source of empirical evaluation. I don’t know that I have enough information to evaluate how helpful knowledge of the brain is in this process; certainly it seems like that might be useful, along with the kind of psychological modeling that Freud did and all the other kinds of sometimes ad hoc theoretical work that philosophers and psychologists do.
OK, so on this broad interpretation of the program (which is not “scientistic,” i.e. doesn’t rely strictly on empirical scientific method but still claims to be empirically grounded), I’ve taken a lot of the oomph out of Harris’s position: no longer is science taking over the role of philosophy and nuanced thinking about methodology (though this may be a caricature of Harris’s position regardless). Why have the program at all then? What’s being contributed? Again, the answer is political: he’s arguing against a) religious fundamentalists who don’t get the point of the Euthyphro and don’t accept the basic utilitarian intuition, and b) Westerners who haven’t thought a lot about it but whose basic, background orientation towards moral issues has been molded by group a.
Against a, he can’t do much more than hold up a mirror and say “Is this really what you believe? Isn’t that crazy?” One example that will hit home to many Westerners that don’t consider themselves in any way fundamentalists is male circumcision: fundamentally, Harris might argue, circumcision is unnecessary surgery that causes pain to an infant and (arguably) decreased sexual pleasure in adults. Whether or not the practice actually lowers the risk of some diseases is an empirical issue, as is the degree of risk in the surgery itself. The matters of infant pain and lifetime sexual pleasure are also empirical ones, though trickier to figure out. So all Harris can say to those who perform infant circumcision is “Is this really what you want to be doing given these empirical findings?” If the response is “no, my religion says otherwise” despite the harm of the practice (we can pretend for the sake of the example that the harm has been established), then the argument falls back to the new atheist project against faith, i.e. into shouting.
Re. group b, the political point is to awaken the non-philosophical into recognizing that they really do buy the Euthyphro argument and really do agree with the underlying utilitarian principle, and thus they should be opponents of group a rather than their unwitting puppets. It’s like when political commentators complain that one party has improperly set the terms of the debate: all of Harris’s work is about trying to shift things so that secularist ideas get serious consideration and aren’t getting marginalized by default.
Jay Jeffers says
Political polemics dressed up as a philosophically pure contribution? Ostensibly pure arguments aimed at shifting the Zeitgeist? Coming from adherents of postmodern-bashing, truth-upholding scientism? Excuse me while I go puke.
PS You may be right. Which is what I find gross about it.
Ethan Gach says
I agree that it’s really a political work, and I think that’s where most of the author’s interest is.
I think, scientifically at least, his endeavor is similar to Eagleman’s: question a lot of long-held moral and political beliefs in the face of new empirical evidence and figure out whether or not they ought to be modified. But the initial project is to get people on board with secular empiricism, or even just secular analytic reasoning.
Tom McDonald says
“what flourishing means is the point of contention for most ethical debates. If that were well-defined and agreed upon, it’s hard to see how most ethical debates would continue (although the meta-ethical debate might).”
This gets to the heart of the problem in my view. I forget where Kant says it, but at one point he concludes that moral or normative reasoning is more distinctive or expressive of the human condition than reasoning about natural patterns.
Why would he say this? Precisely because its non-objective character reveals to us the freedom in the reasoning subject that can never be empirically verified. BTW, Hegel’s whole philosophy is predicated on this Kantian insight.
And this issue of the normative versus the natural is still what demarcates the ‘fuzzy’ humanistic from the ‘techie’ scientistic philosophers.
Mark, an excellent presentation on Sam Harris. Nuff said about this rogue-thinking philosopher.
A few questions:
1. What is the difference between metaethics and ethics?
2. Doesn’t saying “p is true iff p” commit the fallacy of misplaced concreteness – mistaking an abstraction for the concrete whole? Wes acknowledges as much in discussing the difficulty of determining prison policy or drug efficacy.
While p might be true abstracted from reality, mix it up with a little q, r, s, t, and throw in some 1, 2, 3 for good measure, then assess whether ‘p is true iff p’ has any relevant value. We must deal with emergence more realistically.
Jay Jeffers says
I think we’ve run out of room on the sub-thread up top, so I’m moving down here. In your first paragraph, you point out how Harris compares science and morality. If we buy this comparison, I think we’ve already missed something important. I’m not certain you don’t know this, but the only way for me to illustrate what I’m getting at is to get to it, and wait for your reaction, so..
Harris glosses over the difference between epistemological skepticism and skepticism over reasons-for-action, or, more in line with Harris’ level of generality, skepticism over collective motivation.
Science: see, we all have phenomenal experience, and we’ve devised this incredible discipline, science, to identify patterns in the behavior of this experience that we can count on to reap returns that we mostly all enjoy and endorse. In the course of this phenomenal experience, we come to believe the experience is had by other people, and that their experience is close enough to ours that our incredible discipline, science, can help all of us. Hell, it even seems like there’s an external world that science is telling us about.
Now, a historical problem in philosophy has been to answer the radical skeptic. I mean, what if it’s all a dream? What if I’m the only conscious being? Yada yada yada…zzzz Suffice it to say, we don’t have to answer these questions to do science, or to motivate the desire to do science. We’re already doing science; it’s underway.
Morality: Meta-ethical disagreement is not a result of skepticism as radical as the kind I outline above. One can acknowledge everyday sensory experience, and still doubt that she has a (non-instrumental) reason to be good to other people. Moral skepticism is in fact not typically radical skepticism; it’s not as if moral skeptics doubt brain scans, or happiness surveys from Denmark. And it’s not as if moral skeptics necessarily lack enthusiasm for the Enlightenment project. I mean, many continentals or postmoderns may not believe in moral truth with a capital “T” but moral skepticism in the Anglo world comes mainly from people very impressed with science and its potential for technological progress and ability to show us what’s true.
Onto Collective Motivation skepticism:
As I hinted at above, reasons for action are a major hang-up in metaethical debates. Now, if someone is looking for proof that we *should* marshal our collective resources toward the endeavor of science, she’ll never get it. Thankfully, we don’t need this kind of proof to actually do science (nor do we need this kind of agreement to sign treaties, obey laws, help old ladies across the street, etc.). I understand that Harris wants to emphasize the fact that we don’t need such proof, but in the process he glosses over something he has an obligation to tackle, given not only his PR-campaign rhetoric but things he says in the book.
What Harris glosses over is the reality that from an epistemological point of view, the is/ought gap is as robust as it’s ever been, and moral disagreement between cultures, or even between different groups within cultures, is *from a rational point of view* potentially unresolvable (and often actually is unresolvable).
Of course, what we can do is shame people, or perhaps target the more tolerant of the preexisting commitments of conservative cultural groups and try to get them to buy into secular liberal democracy, or we can believe (not without evidence) that as societies become richer, they have an opportunity to maximize flourishing within a legal framework and social structure like Denmark’s. It could be that we’re witnessing the arc of economic and social development, and as societies develop and find more effective ways to flourish, intractable moral and political disagreements fade to the margins as people become inculcated with the values that sustain a liberal and thriving democracy. In such a scenario, people with the old disagreements die off, the people left behind change their disagreements, the remainder are shamed, and young people are raised to believe the new regime is natural. But this *has not solved the old moral disagreements*. I really can’t emphasize that enough.
No one in this debate is suggesting that we stop doing science or cease being moral. There are, nevertheless, skeptical arguments Harris misses. He acts as if he tackles them, but he actually just dismisses them by pretending that the moral skeptics’ argument, taken seriously and applied consistently, would cause us to doubt our resolve to do science. And since that’s ridiculous, folks are now free to dismiss moral skepticism.
It’s really very sloppy.
The question is whether the *fundamental* motivation to do science is justified by rationality (as in “logic”) or demonstrable truth (answer: no, and the same goes for morality). If my answer is even plausibly correct, then Harris’ treatment of moral disagreement is extremely facile, and fails to justify his and his (New Atheist) tribe’s belligerent dismissal of relativism, postmodernism, religion, or anything not sufficiently truth-strong.
In summary, in an everyday sense science actually is justified epistemologically, and it takes a strained entertainment of radical skepticism to doubt that. Now, we’re not *certain* of the truths of science, but certainty isn’t the same thing as epistemic justification (thank God), and it’s definitely not the case that radical skepticism about the truths science delivers is of the same form as the mainstream moral skepticism coming from English-speaking philosophy.
Whether what science offers us is something we ought to value and ought to continue doing, well, that’s not a question science can answer, and we shouldn’t talk in ways (as Harris does) that encourage a conflation of the epistemic performance of science on the one hand with our fundamental desire and motivation for the things science offers, on the other.
Jay Jeffers says
Sorry that was so long.
Ethan Gach says
There’s a lot to respond to here. I’ll just start by noting that “rational disagreement” is a concept that doesn’t really make sense unless one presumes it. To say that moral relativism or moral disagreement exists because there can be rational disagreement over these things seems to amount to looking at the existence of disagreement, presuming its rationality, and then saying, well, look, people disagree and we can’t resolve it.
The resolution of a disagreement has no direct bearing on the truth or falsity of a claim. The only point of resolving disagreements is political/social/collective action.
Harris has an interest in addressing those who disagree for the purpose of creating a political consensus, but the mere existence of disagreement is in no way damaging to a particular theory or proposition.
“What Harris glosses over is the reality that from an epistemological point of view, the is/ought gap is as robust as it’s ever been, and moral disagreement between cultures, or even between different groups within cultures, is *from a rational point of view* potentially unresolvable (and often actually is unresolvable).”
The is/ought gap is as nonsensical as mind/body dualism. For the purposes of inquiry it poses a seemingly unresolvable problem, but for practical purposes of deciding what is actually “moral” it should give others as little pause as it gave the original author. If there is an insurmountable gap between them, then everything must be either the one or the other. It would be like claiming there are two kinds of particles which can never interact with one another: the content of such a proposition excludes it from having any meaningful application. Likewise, I’m not sure what the existence of an is/ought gap would do. What is its cash-value in a pragmatic sense? It can’t be proven, and yet even its proof would not inform us of anything.
And as I’ve already claimed, the unresolvability of disagreement between dissenting groups doesn’t say anything about anything, ESPECIALLY if one subscribes to the is/ought distinction in the first place. Wouldn’t that be getting an ought from an is? There is disagreement, so there ought to be pluralism?
But on the whole, you are correct: Harris dismisses/does not address these skeptics because they aren’t his audience, because their consensus is not politically useful or meaningful.
He’s trying to speak to a group that already, as Mark notes, shares a generally “utilitarian intuition,” and to show how secular empiricism can help make those utilitarian calculations, rather than relying on rigid and often unhelpful moral principles to adjudicate between competing interests.
Jay Jeffers says
One way to find out what the is/ought gap is would be to consult an orthodox source on English speaking philosophy:
“…at least part of Hume’s concern seems to have been that no set of claims about plain matters of fact (‘is’ claims) entail any evaluative claims (‘ought’ claims). That is, he seems to have thought, that one can infer the latter from the former only if, in addition to premises concerning plain matters of fact, one has on hand as well at least one evaluative premise.”
This substantive definition from the SEP is the first offered in their treatment of the topic on their current site, which is no surprise, since it reflects common understandings. Nothing about this is nonsensical, even if you find it unhelpful or unenlightening. If people didn’t talk as if they’d overcome it, no one would have to talk about it at all. That facts and values interplay is not revolutionary, new, or a refutation of the is/ought gap.
And no, I don’t think pluralism follows from the diversity of moral opinion, because that would be a crude deduction from the fact of the matter to the moral advisability of a state of affairs, and as such would be silly. Now, often what is going on in cases like this is that an enthymeme is present, which is not actually a crude deduction, since the unstated premise is doing work even if not explicitly. In cases where there isn’t an unstated premise, you obviously don’t see this as a problem. Meaning, you obviously don’t think crudely deducing what’s right from a factual state of affairs is a problem, or happens very often, since you have no use for the is/ought gap. I don’t agree, or at least not historically, but that’s a bigger topic than we can tackle here.
Actually, most of this we can’t resolve here, but I can assure you that the thought process you outline in your first paragraph (noticing disagreement and then presuming we can’t resolve it) is not the process I’ve gone through, and is not the process gone through by those most impressed by the unresolvable nature of moral disagreement. In fact, I tried to detail how even if the disagreement dissipates, that doesn’t mean it was resolved rationally, but I guess that didn’t make the point I was hoping it would.
Anyway, I didn’t say that moral disagreement was *rational.* As a matter of fact, I insist that it isn’t. That doesn’t make it irrational, mind you, just non-rational. And I agree that the mere existence of disagreement doesn’t damage the truth of a theory or proposition; I think everyone involved in discussions on this site would agree.
Ethan Gach says
Non-rational is irrational, at least from the view point of rationality. Would someone who subscribes to the existence of rationality say that non-rationality can be anything other than irrational?
My point here is that by supposing the existence of three different approaches, i.e. rational, irrational, and non-rational, you’re treating them from a meta-approach that must itself be characterized from one of those approaches.
Someone from the point of view of moral realism wouldn’t regard amorality as a legitimate alternative; amorality would in effect be immorality, unless it resulted in morality by happenstance.
Rationality is a binary that rejects the possibility of a third option. So unless we first presume non-rationality, there’s no reason to believe that moral disagreement is anything but irrational.
Jay Jeffers says
“Non-rational is irrational, at least from the view point of rationality.”
No. That’s wrong.
And your comparison of rationality and morality is misplaced. Rationality is not binary in the sense that all objects, thoughts, impressions, tastes, and moral assertions are only either rational or irrational. Some things are non-rational.
“I love the Rolling Stones.” Is this rational or irrational?
Answer: neither. Whether or not you believe morality fits into this realm of the non-rational is an important question, but I think we can easily be rid of the notion that rationality cannot abide non-rationality. I mean, seriously.
Jay Jeffers says
I should clarify, *some* moral disagreement may be based on disagreements over science or economics or what have you. But the really problematic kind of moral disagreement, the kind Harris glosses over with glibness, is not amenable to rational adjudication.
Jay Jeffers says
And anyway, Ethan, Harris doesn’t say he’s up to what you (and Mark) say he’s up to; he gives unsuspecting readers the impression he’s up to something more straightforward. So if you’re right about what he’s up to, I would say it follows from your assessment that his work in TML is intellectually lamentable (allowing, please, that my use of the concept “follows” employs an enthymeme).
Ethan Gach says
I thought I already said above that the book is disappointing. And I already said his marketing campaign is misleading.
The book itself is not. Read the introduction: he tells you where he’s going, he goes there, and it turns out to not be a very interesting or new place.
Jay Jeffers says
I’ve read the whole book, Ethan. Based on our discussion here, I doubt we’ll ever see eye to eye on its quality (or lack thereof).
What makes you think I haven’t read it?
That Guy Montag says
I’m developing a bit of a habit of defending Sam Harris on the internet. It’s not intentional, but the issue is that his point is often missed (and I think there are good reasons for that), and I happen to share most of his intuitions, so this debate tends to help my own thinking.
I think people are right to raise the fact that his book is political, because I think that probably is the reason why so many people struggle to get to grips with his argument. So sure, he starts his argument by appealing to the moral landscape, and there seem to be two reasons for this. The first has been raised more than once in the discussion: he’s clearly aiming to remind people that even if we disagree about the specifics, if we step back enough it’s not hard to make rough judgments about better or worse states of affairs. The second is very clearly to leave a lot of space for different answers to particular moral problems. He really is trying to create a lot of space for pluralism about moral outcomes, and this in itself should have stopped all of those people accusing him of a naive utilitarianism.
The question this leaves us with, however, is what exactly the notion of happiness he’s trying to appeal to could be. A rough-and-ready interpretation is that he wants to argue that every moral state of affairs has an impact on our experience, and “happiness” simply is our experience of moral states of affairs. I think that this is the crux of what drives him to his conclusions, but what it also means is that his argument *really* starts in his chapter on belief. I don’t want to dive too much into this yet, largely because I’m at work right now. I will say that this focus on the experience of moral states of affairs is exactly what we’d expect from Harris given his stance on meditation and spirituality, his research on the neurology of belief, and little things like a multi-page footnote on Davidson in The End of Faith. Mark/Wes: could I possibly therefore suggest an episode on the Pittsburgh School or Wilfrid Sellars, which I think would do quite a lot to help us understand where Harris is coming from?
I’ve never seen someone so willing as Sam Harris is to permit educated audiences to sit at his feet while he warns them of the perils of the is/ought distinction and rehearses both the platitudinous and the unsubstantiated with such youthful brio and academic gestures. Those of you that can consistently get past the bravado and straight to the content are more disciplined than I. I never thought there was a pot of gold at the end of that rainbow anyway, and if I ever had the hankering to listen to a non-philosopher lecture me on the way it is concerning moral philosophy, I think I’d be served just as well by a clever undergraduate eager to bloviate on such matters.
I do believe, however, that Harris’ recipe for getting substantive moral claims to be entailed by our scientific theories is ingenious. It’s really quite easy, actually. Here’s the strategy: in order to get your scientific theories to entail a substantive moral claim like “We ought to act in ways that maximize human flourishing,” just add that claim as an axiom to your scientific theory on the grounds that it is obviously true. Presto! Once added as an axiom, it will be entailed by the theory. That was easy! Or if you want, add it as an axiom to the probability calculus and declare triumphantly that the moral claim is guaranteed by some of our best mathematics. Why hasn’t anyone thought of this before?
Mark Linsenmayer says
Succinctly put, Cartesius. The line between “there is a point at which inquiry must stop: basic, inarguable postulates that we have to take as self-evident” and “this is an [arbitrary] axiom of my system” can be thin.
I certainly don’t believe Harris’ utilitarian moral principle is self-evident. But even if it is, merely adding it as an axiom to a scientific theory makes it no more a scientific principle than adding it as an axiom to the probability calculus makes it a mathematical principle. Furthermore, merely adding it as an axiom to our scientific theories will by itself do nothing to make us any more confident that it is true. Finally, Harris betrays no awareness that unless adding the axiom enhances our scientific theories with respect to empirical strength, then adding the axiom will come at the cost of theoretical simplicity without any gain in theoretical strength. But what empirical data not currently explicable by our theories would be explicable were we to add the utilitarian maxim as an axiom? Somehow I doubt Harris has stumbled across empirical data for which the best explanation is, “We are morally obligated to act in ways that maximize human flourishing.”
Mark Linsenmayer says
Sure… pretty much none of my comments here should be taken to defend Harris. I think Bentham took the utility principle (pain is bad, pleasure is good) to be self-evident, and the moral sentiment folks just consider a claim like “when clear-headed and thoughtful, we morally approve of maximizing human flourishing” to be a brute fact which, coupled with their denial of other possible sources of moral knowledge to override this belief that we just find ourselves having (e.g. denying the coherence of the concept of revelation), entails that either all morality is bunk or utility is basic.
Like Wes (who I recall seemed to be a skeptic about moral realism but thought that if morality is real, it’s something like Kant’s), I tend to look on morality as a fiction, but, analyzing the content of the fiction, I think something like Mill’s utilitarianism (which, unlike Bentham’s, eschews strict calculations and is OK with, e.g., recognition of rights: rule- rather than act-based utilitarianism) is close to accurate. (Note that I’m getting this distinction between Bentham and Mill from the Churchland book; she sees Mill’s doctrine (with its “competent judges” and all) as able to approximate what Aristotle talks about in requiring wise judges instead of blind rule-followers.)
On the other hand, as I’ve likely voiced elsewhere, I also believe that moral decision making has a tragic element that utilitarianism doesn’t acknowledge, e.g. killing someone in a war, even if it’s justified by the circumstance, is still tragic… you don’t get off morally unstained whatever your rationale for violence.
Wes Alwan says
Actually, I’m an agnostic about moral realism and am more attracted to virtue ethics than anything (including attempts to naturalize ethics in this way, despite the serious problems such efforts face).
Jay Jeffers says
“Why hasn’t anyone thought of this before?”
HAHAHA!!! Nice one.
Kid Charlemagne says
I’ve been trying to educate myself a bit more about Sam Harris by listening to his interviews and talks (doing this on the cheap). Could someone explain how he would reconcile contradictory notions of happiness? The idea of maximizing utility often runs into the majority-versus-minority problem. How would he address the case where a majority of the population would be made happy by taking away the land and property of a minority? This happens from time to time in Africa, where the indigenous majority claim that everyone would be better off by taking land and property away from the colonizers. That would maximize overall “happiness” but seems inherently wrong.