Podcast: Play in new window | Download (Duration: 57:28 — 52.7MB)
Continuing with Dave Pizarro on articles by Stanley Milgram, Philip Zimbardo, and John Doris about situationism, which entails that people’s level of morality will vary by situation, as opposed to virtue ethics, which posits that how people will act in a novel situation will be determined by the quality of their character.
We get into Doris’s article, “Persons, Situations, and Virtue Ethics” (1998), where he argues against the traditional idea that we have virtues like “honesty.” Instead, these traits are more situation-specific, so even someone who doesn’t cheat on his or her taxes or spouse might well still steal candy. Doris cites a study by Levin and Isen where people who found a (planted) dime in a phone booth were much more likely to then help someone who dropped some papers as the subject was leaving the booth. Does this really show that helpfulness isn’t a stable virtue in people, or is something else going on here and in Milgram’s experiment? Does situationism excuse bad behavior? Would any one of us do just what most of the citizens of Germany did during the Nazi regime if we were in that situation? Can we maybe train ourselves to better resist social pressure, not just in specific situations we’ve rehearsed in advance, but across the board?
Listen to part 1 first or get the ad-free Citizen Edition. Please support PEL!
End song: “Doing the Wrong Thing” by Kaki King, as heard on Nakedly Examined Music #54.
After having missed the last few months’ worth of episodes, this topic had me excited for a refresher.
First, Dave was a fantastic and informative guest, and the discussion of character was inspiring. Still, I’d hoped that the conversation would put some of this stuff into context with the rampant social engineering and data-fueled real-time experimentation that’s currently on the rise pretty much everywhere. I have to think that at least some other listeners who’ve been reading the Atlantic or the Guardian over the past couple of years might be scratching their heads at Dylan (you’re usually my hero, Dylan) “vigorously” defending gamification and leaving it at that.
Yes, incentives can be useful, but when should we have the freedom to construct or discover them organically or for ourselves, and when should larger powers be allowed to compose and arrange them at their discretion? When do incentives actually cheapen or subvert the activities they’re embedded in? When should they be organized by communities, when by corporations, and when by governments? As we become ever more connected, in which systems and in what areas is it even possible to extend or withhold such privileges? Should there be incentives or disincentives for every behavior that we can take a moral stance on or engineer a nudge for? If not, which ones? How do we know (and should we know) when we’re in someone’s experiment and when we aren’t? Most of all, at what point does mapping out general vulnerabilities to persuasion or manipulation make us complicit in those very practices?
All this is just to say that an offhand hooray for prodding the public into the good life seems uncharacteristically hasty from you bunch. Food for thought:
http://www.nybooks.com/articles/2017/04/20/kahneman-tversky-invisible-mind-manipulators/
https://www.theguardian.com/technology/2015/aug/10/internet-of-things-predictable-people
I just discovered your podcast and loved it! But I was frustrated during this episode because I kept wanting you to apply Haidt’s Moral Foundations Theory to Milgram. On that theory, people were torn because two basic moral foundations were in conflict: care for people and respect for authority (not community, as suggested in the podcast). Respect for authority is stronger among conservatives than among progressives and has been declining over time, especially among the WEIRD (Western, Educated, Industrialized, Rich, Democratic) populations. Thus, fewer people would probably obey the instructions now than when Milgram ran his experiment, especially if it were at Yale. These foundations also vary by culture; for example, both Germans and Japanese are stronger in respect for authority than Americans. Here’s the official website for Haidt’s theory.
http://www.moralfoundations.org/
I’m only on minute 19 of Part 2, but I am suspecting that Wes is biased by his training. The idea that character is inherent and immutable is still a common belief among most laity. An honest person does not cheat. Doesn’t cheat a spouse or a stranger or a passing acquaintance, and situational factors (except very extreme ones) don’t come into play. It is one of the things that makes many people so vulnerable to manipulation, because that blind spot is so pervasive. Like the idea that decisions are all conscious, rational ones, when that is, in fact, demonstrably false a great deal of the time. It’s been well known in psychological circles for ages, but it still shocks lay people with distressing frequency.
One other thing I think it is important to note in a discussion of situationism and the idea of character: societies are made up primarily of relative strangers, particularly in large cities, but also in counties, states, countries, and the globe. Learning how we treat strangers and what elements influence those interactions is *at least* as important as knowing how we treat our loved ones. And the fact that there is a difference at all is also incredibly important.
I remember first studying the Zimbardo and Milgram experiments in high school. Some interesting nuances of the historical context of both help us understand why we feel so appalled today. Milgram started his experiment contemporaneously with the famous Eichmann/“Banality of Evil” trial. At that time, the only near-codification of modern human subjects research ethics was the Nuremberg Code (1947), which was written in response to the gross and abominable excesses of Nazi scientists. The Nuremberg Code really only advises on preventing physical harm or death to a consenting subject, and it still deferred greatly to informed experts. The Declaration of Helsinki (1964) specified that subjects’ vulnerability and welfare take precedence over considerations of how valuable the research is to scientific advancement. So Milgram wasn’t necessarily operating in an ethical void, but he still had fewer ethical norms and guardrails to rely on.
Zimbardo’s experiment (1971) also occurred at a pivotal point in history. Apart from the criticism Zimbardo bears for letting the experiment carry on for as long as it did, one year later, in 1972, a whistleblower outed the infamous Tuskegee syphilis experiment, which had begun in 1932. That study sought to observe the effects of untreated syphilis specifically on impoverished black sharecroppers. The real horrors of the study were realized when the investigators allowed it to continue well after penicillin was discovered and in standard use for treating such ailments by the mid-1940s. These revelations led to the Belmont Report, issued in 1978, which lays out core requirements for research with human subjects: ensure the informed consent and autonomy of subjects; keep risk of harm to subjects at zero to minimal, and justify any exceptions; and distribute the benefits and risks of research so that it does not exploit vulnerable populations. Individual IRBs are expected nowadays to exercise a much more critical review and oversight role and really hold investigators to task, not that there isn’t more to do…