A research physicist friend of mine who works at Lawrence Berkeley National Laboratory is a bit of a global warming skeptic. When I brought up all the scientific research on the subject, he said, somewhat dismissively, "Yes, but anyone who gets a PhD in climate science goes into it with an agenda. No one goes into particle physics just to prove a point. So no, I don't always trust their research." Not being a scientist myself, I had no clever rejoinder at the time, other than to say, essentially, "Well, 50,000 climate scientists can't all be wrong!" But what if most scientists tend to be wrong most of the time? And not due to political agendas, but academic, professional, or even psychological ones?
A good New Yorker article appeared last month regarding the fallibility of scientific research as currently practiced, or perhaps as inevitably practiced. There is a lot to chew on here, once you consider the ramifications.
In one sense, the research problem described in the article doesn't fall under the ambit of "philosophy of science" as such. These are psychological and sociological problems, as much as epistemological ones. If you want to jump to the good part:
This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
-Daniel Horne
While it's good to maintain a healthy skepticism about groupthink and similar phenomena, my experience in the climate science community leads me to believe that most climate scientists are not driven by an agenda per se, but by the same sort of interests that drive all scientists – the quest for knowledge and the search for truth about the world.
The fact of the matter is that climatology is just physics and chemistry applied to the study of the earth's atmosphere/ocean/biosphere system. It's just as rigorous as particle physics and the like, and its conclusions are backed by copious raw data and measurements dating back hundreds, even thousands, of years.
A less abstruse article on mean regression. It's also not as alarmist about, or as damning of, the scientific method:
http://abcnews.go.com/print?id=12510202
See also: http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-and-medical-science/8269
And: http://rationallyspeaking.blogspot.com/2011/01/problem-of-replicability-in-science.html
@ Anh-vu:
Thanks for responding!
1. Well, I've got one professional scientist (let's call him Marco) saying one thing from his experience, and another scientist saying something else from his experience! Who is correct? 🙂 The point isn't whether or not Marco's impressions are (in)correct. I think you would agree that you could both be right, and that in any event, your experience doesn't trump Marco's, but only complements it. And even if Marco's incorrect about ideologies corrupting research (I'm agnostic on this), my point is that I was stunned to learn that _even professional scientists_ can be skeptical of widely published and widely reported scientific research.
2. I, for one, do not see a "healthy sense of skepticism" generally directed toward the scientific community. Obviously, I see all sorts of _unhealthy_ skepticism coming from the illiterati, but that's not what we're discussing here! So the Lehrer piece strikes me as news, although he may simply be exposing to the general public a debate that is already part of the established discourse within the scientific community.
3. The thesis of the New Yorker article actually _contradicts_ Marco. The article argues that bias takes many forms, and can corrupt scientific research for reasons other than explicit ideological agendas. (In other words, Marco's own research at LBNL might be just as susceptible to error!) All sorts of agendas besides blatant ideological ones can creep in. If one doesn't start out with such an agenda, one will be molded into it soon enough, or one's career will be stifled. So whether one brings an agenda into one's academic career, or has it hammered in along the way, is irrelevant if what we're talking about is the amount of faith we put in published scientific research.
4. Re: the merits of climatology as compared to other sciences. To defend climatology in the _abstract_ is to miss both Marco's point and Lehrer's. Climatology isn't inherently problematic, but neither is it inherently rigorous. No scientific discipline is. Climate science research (or any scientific research) can be performed well, or poorly, or somewhere in between. No science is "just" anything, and in any event, it's not the science that's "on trial," but the scientists. And some scientific disciplines may be more _institutionally_ susceptible to "skew" than others. (FWIW, I'm not particularly skeptical toward climate science research; my real beef is with pharmaceutical research.) The problem is that most of us trust that faulty research in any field will be "outed" over time, and that therefore the more agreement that exists between scientists on a particular point, the more reliable and credible the findings. Lehrer's article indicates that this may be – to some degree – wishful thinking. Agreement between scientists might instead reflect intellectual and institutional inertia, which makes it more difficult to correct, or even acknowledge, prior errors in scientific research.
5. I think that’s kind of a big deal; or, in any event, a point not heard often enough. Why? Because this institutional problem eventually erodes public support not only for scientific research generally, but also for the claims scientific research should rightly be allowed to make on public policy.
@ Jon:
Thanks, I was looking for some kind of rebuttal to the Lehrer article, although I fear Paulos's response struck me as unduly glib, much as Lehrer's arguments struck you as abstruse.
1. For one thing, Lehrer is not talking about mere "mean regression" as such, which is simply a statistical phenomenon. (I can see why Paulos would want to dismiss these problems as mere mean regression, although he doesn't even do that. For what the purely statistical story looks like on its own, see the toy simulation at the end of this comment.) Lehrer is talking about the various institutional incentives and disincentives within the academic and professional scientific communities that aggravate the problem.
2. Second, given that Paulos is a member of the very establishment Lehrer critiques, clearly Paulos is not an unbiased source.
3. Third, nowhere in his article does Lehrer "damn" the scientific method. Even Paulos doesn't say that. What Lehrer points out is how scientific research, as currently practiced, is subject to bias and uncorrected error, in ways subtle and not-so-subtle. Paulos acknowledges this, but dismisses the concern by saying: "A greater realization of these effects by journalists, scientists, and everyone else will lead to more caution in reporting results, more realistic expectations, and, I would guess, a decline in the decline effect (more accurately, the stat-psych effect)." I'm not so confident.
4. Anyway, Lehrer can probably clarify his arguments better than I can. Here’s his follow-up piece:
http://www.newyorker.com/online/blogs/newsdesk/2011/01/jonah-lehrer-more-thoughts-on-the-decline-effect.html
To quote Lehrer: “These questions have no easy answers. However, I think the decline effect is an important reminder that we shouldn’t simply reassure ourselves with platitudes about the rigors of replication or the inevitable corrections of peer review. Although we often pretend that experiments settle the truth for us—that we are mere passive observers, dutifully recording the facts—the reality of science is a lot messier. It is an intensely human process, shaped by all of our usual talents, tendencies, and flaws.”
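P.S. To make the "mere mean regression" point concrete, here is a minimal sketch of the purely statistical story (my own toy example, not anything from Lehrer or Paulos; the effect size, noise level, and publication threshold are made-up numbers chosen for illustration). Simulate many studies of the same modest true effect, "publish" only the striking initial results, then replicate the published studies without that filter:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2  # the assumed "real" effect size (arbitrary)
NOISE = 0.5        # per-study sampling noise (arbitrary)
N_STUDIES = 10_000
THRESHOLD = 0.8    # only "striking" initial results get published (assumption)

# Initial studies: the true effect plus sampling noise.
initial = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_STUDIES)]

# Selective publication: only the large observed effects see print.
published = [e for e in initial if e > THRESHOLD]

# Replications of the published findings: same true effect, fresh noise,
# and no selection filter this time.
replications = [random.gauss(TRUE_EFFECT, NOISE) for _ in published]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean published effect:   {statistics.mean(published):.2f}")
print(f"mean replication effect: {statistics.mean(replications):.2f}")
```

The published effects come out far larger than the true effect, and the replications regress back toward it – a built-in "decline effect" with no fraud and no bias beyond the publication filter. That much is just statistics. Lehrer's point, and mine, is that institutional incentives determine where and how aggressively that filter gets applied.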
Hmm, Daniel, on reflection, what you say is quite reasonable. Thank you for the comprehensive reply!