A research physicist friend of mine who works at Lawrence Berkeley National Laboratory is a bit of a global warming skeptic. When I brought up all the scientific research on the subject, he said, somewhat dismissively, "Yes, but anyone who gets a PhD in climate science goes into it with an agenda. No one goes into particle physics just to prove a point. So no, I don't always trust their research." Not being a scientist myself, I had no clever rejoinder at the time, other than to say, essentially, "Well, 50,000 climate scientists can't all be wrong!" But what if most scientists tend to be wrong most of the time? And not due to political agendas, but academic, professional, or even psychological ones?
A good New Yorker article appeared last month regarding the fallibility of scientific research as currently practiced, or perhaps as inevitably practiced. There is a lot to chew on here, once you consider the ramifications.
In one sense, the research problem described in the article doesn't fall within the ambit of "philosophy of science" as such. These are psychological and sociological problems as much as epistemological ones. If you want to jump to the good part:
This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
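One mechanism behind the decline effect is worth making concrete: if early journals publish only the "impressive" results, the first wave of published effect sizes will overstate the true effect, and later unfiltered replications will seem to show it shrinking. Here is a minimal simulation sketch of that selection effect; the effect size, sample sizes, and publication threshold are all hypothetical numbers chosen for illustration:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical small true effect
SIGMA = 1.0         # noise in each individual measurement
N_PER_STUDY = 30    # measurements per study
N_STUDIES = 2000    # total studies run

def run_study():
    """One study: the mean of N noisy measurements of the effect."""
    samples = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N_PER_STUDY)]
    return statistics.mean(samples)

estimates = [run_study() for _ in range(N_STUDIES)]

# Early literature: only results above an arbitrary "impressive" cutoff
# get published, so the published average overshoots the true effect.
published_early = [e for e in estimates if e > 0.4]

# Later replications: everything gets reported, so the average
# drifts back down toward the true effect -- an apparent "decline."
all_replications = estimates

print(f"true effect:                 {TRUE_EFFECT}")
print(f"mean of 'published' studies: {statistics.mean(published_early):.2f}")
print(f"mean of all replications:    {statistics.mean(all_replications):.2f}")
```

Nothing about the underlying phenomenon changes between the two averages; only the filter does. That is consistent with the article's point that the decline is often "a decline of illusion" rather than a decaying truth.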