It is oft said (at least when exercising our etymology muscles) that philosophy is “love of wisdom.” Like other mind-related topics such as emotion and creativity, wisdom is now getting the scientific treatment. One of our listeners pointed us to a book by Stephen S. Hall titled Wisdom: From Philosophy to Neuroscience,
which surveys a variety of answers to the question of what wisdom is and how it is cultivated, starting with figures like Socrates, Confucius, Buddha, and Jesus, then following more recent attempts in psychology and biology to address the question.
Hall is interviewed on the Leonard Lopate show about the book and his writing of it. He points approvingly to Nozick’s definition of wisdom as “knowing what matters” and characterizes the scientific work in studying wisdom as really working to try to “refine a bunch of ill-defined concepts.”
Hall sums up the conclusions from his investigation in The Eight Neural Pillars of Wisdom:
1. Emotion Regulation – Studies at Stanford University, including brain imaging experiments, have shown that older people process emotion differently than younger people on average. They are less likely to dwell on the negative, tend to value relationships more, and rebound from setbacks more quickly.
2. Compassion – Electrophysiological measurements of the brains of Buddhist monks in the midst of compassion meditation have identified a unique pattern of brain activation, known as a “gamma oscillation,” which may coordinate and synchronize mental activity in disparate parts of the brain during empathic understanding and acts of loving-kindness.
3. Moral Judgment – Cognitive neuroscientists, in a series of brain scanning experiments over the past decade, have identified a neural circuit involved in moral reasoning, and have shown that moral judgment can change depending on whether we are physically close to another person (“up close and personal” judgments) or are acting at a distance.
4. Humility – Business psychologists have shown that the combination of intense professional will and extreme personal humility are the essential traits in turning a good company into a great company; by contrast, CEOs who rank high in narcissism measures tend to be leaders—but bad ones. They put personal drama and egotism ahead of company performance.
5. Altruism – Scientists have used brain-scanning experiments to identify a tentative circuitry in the brain that monitors situations of social injustice, and seems to prompt a form of behavior known as altruistic punishment—decisions in which a person sacrifices personal gain to punish a rule-breaker.
6. Patience – A sense of imagination about the future, a capacity which resides in the brain’s prefrontal cortex, helps suppress the impulse for immediate gratification, according to brain scanning experiments, and helps people plan goals and remain optimistic about the future.
7. Sound Judgment – Building on a huge amount of neuroscience that has been investigating decision-making, scientists are now teasing apart the process of neural valuation—how the brain attaches value to various choices. This may turn out to be the neural answer to a question asked by philosophers for centuries about the central challenge of wisdom: how do we decide what is most important?
8. Dealing with Uncertainty – Scientists at Princeton University, UCLA and elsewhere have been investigating how the brain reacts when it encounters the unexpected. Animal experiments suggest that habit allows us to react more quickly when the world is unchanging, but that in an environment of great flux, habit slows down our neural ability to adapt to changing circumstances.
It’s not surprising to me that there is neural circuitry for things like “sound judgment,” and indeed I expect some interesting light to be shone on the necessary conditions for making good judgments and on how one cultivates being a “sound judge,” especially if one already knows what ought to be valued. Still (and this is without really delving into the book itself), I’m doubtful that we’ll be able to avoid the question of value itself — what we ought to value — and that seems terribly difficult to answer with a psychological study.
Lots of this overlaps with podcasts we’ve done: our discussion in episode 41 with Pat Churchland about her book Braintrust: The Neuroscience of Morality and our two episodes (53 & 54) on Owen Flanagan’s The Bodhisattva’s Brain: Buddhism Naturalized (which really has a lot of critical things to say about pillar #2 above).
-Dylan
Thanks for pointing out Hall’s interview on the Leonard Lopate show, I will check it out.
You wrote, “Still (and this is without really delving into the book itself), I’m doubtful that we’ll be able to avoid the question of value itself — what we ought to value — and that seems terribly difficult to answer with a psychological study.”
I agree. Value is what I’m interested in, especially how we get value from brains. It seems we’re jumping over a few important steps that get lost somewhere between the neurons firing in our brains and, say, something as simple as understanding the meaning of a word. Until we have a solid science of the emergence of semiosis, a book like Hall’s (though I plan to read it before saying too much more) is still a ways off from explaining the leap from neurocircuitry to, for example, compassion, let alone why the scribbles in this comment ‘mean’ something to the person reading them (I hope they do, at least).
I guess this is because I think the implications these psychological studies draw are interesting to think about, but a bit of wishful thinking. After all, even when we can isolate certain areas of the brain, say for processing aspects of language, every conscious moment is still the result of a whole brain functioning, not just parts of it. Just some thoughts, and I’m looking forward to checking out the book. Oh, and by the way, the podcasts are really great; just wanted to say you guys are doing an excellent job!
Predictably, I need to express skepticism towards this kind of project, though my skepticism is contingent on the scope of its ambitions.
The first objection is on empirical grounds: to make valid generalisations of this kind about human cognition, this would need to be an immense comparative project. That, we know from experience, is hard enough even when we are talking about a single “cultural area,” let alone the whole human race. When dealing with cross-cultural data we are immediately plunged into problems of interpretation. Anthropologists are also very good at deflating universals by playing the “not in my tribe” game. Clifford Geertz argued that “man is, in physical terms, an incomplete, unfinished, animal,” missing the thing that makes him/her human: culture.
The second objection is derived from the first: though I don’t deny the biological basis of cognition, I’m wary of making generalisations based on the “hardware” that ignore the empirically demonstrated variability of the “software.” As you point out, cultural content (“value itself”) cannot be easily reduced to psychology. This is not trivial, since socially constructed meaningful schemas dictate the rationality by which humans judge their actions (especially since, as MacIntyre seems to agree, values are extensions of practices). Content affects form. Thus what counts as wisdom or rationality can be very different for Western liberals than for, say, Island Melanesians who think democracy is anathema to basic human autonomy and dignity and inevitably leads to violence. Values depend on basic concepts like personhood, or the subject–object relationship, which vary from culture to culture.
When anthropologists can abandon the interpretative project of cultural comparison because we can just do a brain scan, I’ll yield the floor.
Did you read the book or listen to the interview?
http://blogs.plos.org/neuroanthropology/
Listened to the interview, haven’t read the book.
My criticism was directed not so much at Hall’s book as at the general notion of embracing neuroscience as the explanatory tool for a category as general as “wisdom.” There is plenty of psychological and cognitive anthropology with a neuroscientific component that I have no beef with, since it carefully takes cultural variability into account and limits its scope. There is, for example, good work on spatial cognition and on how radically spatial schemas vary across cultures. Other work, like Harvey Whitehouse’s analysis of modes of religious cognition, I find too reductionist.
Thanks for the link. Was there a specific thing that you were pointing to?
Each of these claims rests on foolishly reversing the obvious direction of causality. It’s not that the compassionate CEO is the best one: the “best” ones have already secured their respective positions by virtue of facts other than those this article chooses to explore (specifically, being maximally profitable, which is the sole task of business, rarely depends on being even remotely compassionate). What their experimental data actually show is that the best corporate CEOs exhibit certain kinds of brain states rather than others, and the apparently significant force of this utter banality leaves us only with a repugnant conclusion about the futility of the human condition. Altruism is just another chemical process like any other that happens to be occurring in the space of the brain; what absolutely appalling creatures that would prove us to be. It’s all just post-hoc ideological explanation for the success of those who presently hold hegemonic power.
http://urbanomic.com/Publications/Collapse-2/PDFs/C2_Paul_Churchland.pdf
“Demons Get Out,” an interview with Paul Churchland
Everything I said above goes doubly for this guy. He seems to have a very shallow understanding of the content of people’s brains. It did confirm though that yep, I’m not just imagining it, some neuroscientists really are that naive.
So, leaving aside that Paul is a philosopher, is there something in particular in the interview that you object to?
Yes, my bad, he is a philosopher not a neuroscientist.
Let’s just say that he seems to have a very mechanical understanding of the nuances of socially constructed meaning. And I didn’t like his AI stuff at all.
Sorry, no time for details. I’m going away, but I’ll write a better response when I come back on the weekend.
dmf:
My specific problem with Churchland, and with many neuroscientific theorists, is simply this: he does not take cultural variability and its implications into account. Granted, I’m being polemical and going by a single interview when talking about Churchland. He does mention that later in his career he has come to realize how important the surrounding culture is to the cognitive activity a person engages in. The example he gives is the obvious one: language. According to him, however, language does not reflect the thing he is most interested in: the basic structure of animal cognition. The question, then, is how far human cognition can be mechanically reduced to this basic structure and the innate workings of the brain.
In total opposition to someone like Lacan, who said that the unconscious is structured like a language, Churchland is very suspicious of language. Depending on how deep this suspicion goes, it is incompatible with the linguistic relativity (or Sapir-Whorf) hypothesis. According to the hypothesis, language affects the way in which we perceive the world, or as Benjamin Whorf puts it: “users of markedly different grammars are pointed by the grammars toward different types of observations and different evaluations of extremely similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world.” Or as Sapir puts it: “the worlds in which different societies live are distinct worlds, not merely the same world with different labels attached.” Cognitive theorists tend to hate this. As Jerry Fodor puts it: “I hate relativism. I hate relativism more than anything else, excepting, maybe, fiberglass powerboats. More to the point, I think that relativism is very probably false. What it overlooks, to put it briefly and crudely, is the fixed structure of human nature.” It is this fixed nature that Churchland seems to be talking about when he describes the basic structure of animal cognition. Steven Pinker has stated that linguistic relativity is “an intriguing hypothesis, but virtually all modern cognitive scientists believe it is false.” The weakest and strongest forms of linguistic relativity can be gauged by the criticism and defence levelled at it. Some have argued that it would make translation completely impossible, and thus must be false because we can indeed translate between languages (a straw-man argument, since hardly anyone supports so strong a version of the hypothesis). On the other hand, some defenders have argued that any problem of translation proves the hypothesis is at least to some extent true; otherwise all the world’s languages would neatly overlap.
Since the fifties and the rise of the cognitive sciences, people have been hacking away at linguistic relativity. The classic test case is colour, and Churchland indeed talks about the organization of subjective qualia according to a universal structure as if it were a problem that has been effectively solved. Not so fast! We now know that the ways in which different languages classify colour vary, as does the number of terms, though there seems to be some consistency in how the terminology builds up (if you have only two terms, they are black and white; if you add a third, it is red; and so on). Cross-cultural studies where people are shown colour charts seem to prove that, despite some cultural variation, all people have the same perceptual capability to distinguish between colours, even if they do not have specific terms for them. There is, however, a deeper problem that arises from the methodology of the studies. In order to obtain large amounts of commensurable data, they do not look at how colour terms are actually used in language, but impose an artificial context for the tests. Some have argued that colour itself, as it is presented in the tests, is an ethnocentric category, and may create arbitrary sets from the point of view of the test subjects! The terms that distinguish colour (hues) often also refer to non-colour qualities, like shininess/dullness, wetness/dryness, rawness/ripeness, etc. Thus the semantic fields do not actually overlap between languages. Why does the approach seem to work, then? From the data it looks like these universal colour systems are there. John Lucy has argued that “what is there is a view of the world’s languages through the lens of our own category, namely, a systematic sorting of each language’s vocabulary by reference to how, and how well, it matches our own. This approach might well be called the radical universalist position since it not only seeks universals, but sets up a procedure which guarantees both their discovery and their form… No matter how much we pretend that this procedure is neutral and objective, it is not. The procedure strictly limits each speaker by rigidly defining what will be labelled, which labels will count, and how they will be interpreted… Is it any wonder, really, that all the world’s languages look remarkably similar in their treatment of colour and that our system represents the telos of evolution.” [According to the evolutionist take on colour terminology, the more categories there are, the more evolved the language.]
This speaks to my point in the post above about the problem of interpreting cross-cultural data. The relations between qualia (see Churchland) are defined by the conceptual networks that make the world culturally intelligible to us (Saussure’s meaningful differences), not by the individual perceptual capabilities of human physiology. This complicates Churchland’s view that we can reduce experience to “scientific explanations” of “real physical stuff.”
There seems to be an underlying notion in Churchland’s statements that the development of human “rationality” is a kind of proto-scientific empirical project of “understanding the world,” and that we are creeping towards a more objective view; that stuff like spirit possession is really about erroneous interpretation of sense data, and that bad, “ugly” metaphors are replaced by good metaphors through the power of their empirical explanatory force. This view, which has the smell of 19th-century evolutionist thinking, is hopelessly naïve from the point of view of cultural anthropology. In most cases science, which is one cultural practice among many, cannot simply dissolve the “illusion” of things like spirit possession, since such things create meaningful worlds in very different ways. It is this collectively created meaning, which is in no way reducible to perception, that is most glaringly absent from these reductionist accounts. One could argue that it is symbolically articulated propositional attitudes that enable the kind of meaningful meta-language which separates human from animal cognition (a separation that Churchland wants to blur). For example, can we explain the meaningful content of emotions by reference to neuropeptides? The kind of medicalization of emotion that he seems to be talking about (“I’m adrenalized,” “I’m depressed because my dopamine levels are low”) does a poor job of explaining the meaningful cultural basis of emotions.
The notion that cultural meanings define human consciousness is largely absent from Churchland’s discussion of artificial intelligence (as it is from most). Churchland criticizes the way propositional language has been the starting point for programmers. I tend to agree with him, but I think his solution is also a dead end as far as simulating human cognition goes. Maybe we can build a physical model of a human brain, but how do we get it to experience the world through the networks of intersubjective meaning that we take for granted? We couldn’t program it, since most of our cultural models are opaque to us, and it couldn’t just learn by passively perceiving reality. We would have to try to gradually socialize the computer as a human.
Okay, this is a long post. My last objection has to do with scientific models. A one-to-one neuroscientific mapping of a single brain wouldn’t tell us very much about the cultural content of the brain; culture is contextual and exists between people. So we would need to scan many people across many situations. Our cultural experience of the world cannot easily be reduced to a perfect picture of a brain (as Churchland hopes), since the amount of data would be so immense that it would be counterproductive to any explanatory project.
But, as I said, when anthropologists can abandon the interpretative project of cultural comparison because we can just do a brain scan, I’ll yield the floor.