In my last essay, I argued that it was important to distinguish science from non-science, and that a first step toward doing so was to distinguish between nomothetic (law-seeking) sciences like physics and chemistry, and idiographic (particularizing) sciences like biology and geography. I tried to show that science doesn’t have to copy methods characteristic of physics in order to count as good, authentic science. Rather, the subject matter itself determines the appropriate methods of study.
In this article, I’d like to lay out a demarcation criterion that, I hope, will be both descriptive (that is, true to what actually goes on in science) and normative (that is, can guide our expectations of what good science should look like). The challenge here is to defend some core intuitions about science, without stuffing it into a box. We need a demarcation criterion that is flexible enough to account for legitimate diversity of practice.
I believe that a “virtues” approach to this question can do this. In order to explain what I mean, I’d like to take a step back from the philosophy of science for a moment, and turn to ethical philosophy. In ethical philosophy, there are three main approaches: consequentialist, deontological, and virtue-based. A consequentialist holds that the moral quality of an action has to be assessed by its results, rather than by the nature of the person who performs it or the activity performed. A deontologist holds that the moral quality of an action is latent in the action itself, whoever performs it and with whatever results. A virtue theorist holds that the moral quality of an action is an expression of the character of the person who performs it. Good actions flow from good character. So on the consequentialist approach, the goal of moral instruction is simply to get the best outcome; on the deontological approach, to get people to do a certain thing; and on a virtue-based approach, to get them to be certain kinds of people. (I’m simplifying tremendously, of course, but it can’t be helped.)
If we look at virtue-based approaches in a little more depth, we can see that character can be described in terms of virtues. We probably all have some ideas as to what a good person should be like: honest, generous, principled, brave, and so on. We can make a list. Perhaps no two lists would be precisely the same, but the point is that composing such a list provides the basis for analysis along these lines. We need to know what virtue consists of in order to assess whether a person’s character (and thus actions, on this view) is virtuous. An important advantage of this approach is that it isn’t an all-or-nothing affair. We can acknowledge that a person has some virtues, and hence has good character to that extent, without committing to an assessment of their overall character. This leaves room for gradation without being ill-defined.
It seems to me that this approach can be usefully applied to the demarcation problem in the philosophy of science. We can hold that a research activity is scientific to the extent that it possesses certain virtues, and we can be precise in our assessment by being precise about just what those virtues are.
So what are the virtues that can inform the demarcation of science from non-science? One is empiricism, i.e., theories should grow out of observation, and eschew, as much as possible, things that haven’t been or can’t be observed. We don’t want to start invoking invisible, undetectable entities in our theories when we can help it. We want to talk about things we can see, hear, and touch. While this might seem perfectly obvious, what I’ve tried to show in earlier articles is that it really isn’t. It’s the success of science that has made it seem obvious to us, just as the success of mathematics made it seem obvious to the ancient philosophers that real knowledge was abstract rather than empirical.
Another is naturalism. We don’t want to reference supernatural agents, like ghosts or angels, God or karma, in a scientific explanation. The reason is in part definitional: the mandate of science is to study the natural world. It isn’t tasked with telling us the meaning of life, providing us with aesthetic or ethical criteria, explaining the ultimate nature of being, or anything like that. Its mandate is limited. Now I imagine that there will be an objection here, that science studies the natural world, and the natural world is all there is, hence the mandate of science is not limited—it studies what really exists, and the reason it doesn’t study supernatural agents is that these don’t exist. The “natural world” and “everything that exists” are just different words for the same thing, so science studies everything. That might be right, but it’s also a step beyond what our present discussion requires. Put another way, the kind of naturalism involved in science is methodological: it proceeds as if there were nothing but the natural world. The kind of naturalism involved in this further claim, that the “as if” yields good results because it is true, is ontological naturalism. In future articles I’ll discuss this topic in more depth, but whatever one thinks about ontological naturalism, it remains the case that methodological naturalism is integral to modern science. That is precisely why scientists (like Michael Behe) who try to invoke God as part of a scientific explanation are generally regarded by their colleagues as renegades to the profession.
A third virtue of a theory or a practice is its fruitfulness. An insight that not only solves an outstanding problem, but solves a large number of adjacent ones or leads to new and surprising insights, has the virtue of fruitfulness. If we think about Einstein’s theory of relativity, or Darwin’s of natural selection, we can see how these theories remain productive to this day. Conversely, when a theory doesn’t seem to be leading to any new and interesting insights, it starts to look like it’s deteriorating. Maybe we’ve learned everything we can learn through this one particular approach and it’s time to move on. The difficulty that relativity encounters when trying to explain the discrepancy between the predicted and observed rotational speeds of galaxies may indicate that a new approach is needed because the old approach seems to have hit a wall. It’s not as fruitful as it once was.
A fourth virtue is parsimony, or Occam’s Razor. Sometimes this is also referred to as “elegance” or “beauty.” We prefer simpler explanations to more complex alternatives, all things being equal. There has been a lot of philosophical discussion about whether the universe is actually simple in relevant respects (making our preference rational and truth-bearing) or if we’re just irrationally biased in favor of simpler theories. This, too, will be a good topic for a future article. For the moment, though, it’s enough to point out that scientists prefer simpler theories to the more complex. To give an example from physics, Nobel Laureate Steven Weinberg wrote:
I remember that, when I learned general relativity in the 1950s, before modern radar and radio astronomy began to give impressive new evidence for the theory, I took it for granted that general relativity was more or less correct. Perhaps all of us were just gullible and lucky, but I do not think that is the real explanation. I believe that the general acceptance of relativity was due in large part to the attractions of the theory itself—in short, to its beauty.
A fifth virtue is conservatism. Science is not often thought of as a conservative activity, but Thomas Kuhn has shown in his book, The Structure of Scientific Revolutions, that it really is. Conservatism in this case does not mean political conservatism, but rather a disposition to respect precedents and to prefer incremental over drastic changes. The opposite of conservatism in this case is not liberalism, but radicalism: a preference for disregarding precedents and making dramatic changes. The reason that conservatism is a virtue and a norm in science is that it’s a collaborative endeavor. In order for researchers to work together, they have to have a shared set of beliefs—about what constitutes a meaningful problem, a productive approach, and an acceptable answer. When a large group of researchers agree about these things, they can pool their efforts and move in the same direction, without being continually derailed by excursions in other directions. If each researcher has their own ideas about appropriate questions, methods, and answers, then that ability to cooperate is endangered. Instead of investigating the natural world, they’ll end up continually debating the right way to investigate the natural world. In other words, they’ll stop doing science and start doing philosophy of science. Now, there’s nothing wrong with philosophy of science, of course, and I think it’s great when scientists learn about it. But it’s a distinct activity from science itself. At some point you have to stop debating how the natural world should be investigated and go out there and actually investigate it. What conservatism does is sideline questions of the former variety, so that researchers can concentrate on their work. Big revolutionary changes do happen in science, of course, but contrary to the self-representation of some scientists, they are neither frequent nor (as a rule) eagerly embraced when they occur.
But anyone who wants to learn about this argument in more depth should look up Thomas Kuhn’s book, because it’s really a classic, and he explains things much more thoroughly and persuasively than I can.
Another virtue that is commonly cited, especially by scientists themselves, is falsifiability. The potential to be wrong is not a deficiency, but a virtue in a scientific theory because it gives researchers a way to test it. Karl Popper’s go-to example was the Eddington experiment of 1919, the first experimental test of relativity. Einstein’s account of gravity differed from Newton’s in a small but testable way, which is what the experiment set out to observe. Newton had described gravity as an attraction between massive objects, meaning that gravity could only affect the trajectory of objects with mass. Since light has no mass, it cannot be affected by gravity, on Newton’s account. Einstein described gravity as curvature in space-time, in which case anything that passes through space-time ought to be affected. Light does that, so it should be affected. Both Newton’s and Einstein’s accounts thus made a testable prediction. Newton predicted that light would not bend as it traveled near a massive object; Einstein predicted it would. The reason the Eddington experiment counted so powerfully in favor of Einstein is that it bore out his prediction, and refuted Newton’s. (The experiment was to photograph stars whose light passed close to the sun on its way to Earth, during a total solar eclipse. The eclipse was necessary because it blocked out the sun’s glare, making stars near it visible. The apparent positions of those stars were shifted slightly from where they would otherwise have been. This was taken as confirmation that the light waves had been bent as they traveled so near to the sun.)
Popper contrasted relativity favorably with Marxism and Freudianism because, he said, relativity was falsifiable, while Marxism and Freudianism were not. Marxists and Freudians, Popper complained, spent all their time trying to confirm the theory they already held, rather than trying to falsify it by putting it through demanding tests.
Falsifiability usually involves experiments, and that’s why experiments are so highly valued. As I tried to show in the previous article, however, they are not always available. Some sciences study particular, non-repeatable events. Cosmology, for instance, studies the history of the universe. It happens one time, unlike a solar eclipse, which has happened many times before, and will happen many times again. The reason I think the idiographic-nomothetic distinction is important is that I don’t think it should count against a science that it doesn’t have access to experiments (in virtue of the object of study, rather than any deficiency on the part of the researchers or their methods) the way Popper’s insistence on falsifiability as the most important demarcation criterion implies.
Finally there is precision. A good scientific theory is not muddled, confused, or ambiguous. It’s precise, so everyone with the right training knows what it means and what it doesn’t. This is where the difference between qualitative and quantitative description becomes so important. If I say that Donald Trump is a terrible President, my meaning is somewhat ambiguous. Obviously I disapprove, but is my main complaint that he’s a narcissist, an incompetent, or a traitor? And what, in turn, do those terms mean? Am I just giving a general complaint against politicians (more of the same), or am I saying that he’s uniquely terrible? I can only clarify the meaning of these words by using others, which are themselves open to a variety of interpretations. It’s very difficult to nail down what exactly is meant in a qualitative description. A lot of it hinges on tacit knowledge—not what we articulate, but what we don’t. When tacit knowledge isn’t shared, the meaning gets lost and misinterpretations easily arise. Now let’s contrast that situation with what obtains in mathematics. If I say that seven is a prime number, my meaning is not at all difficult to get at. I mean that the only whole numbers it can be divided by, with a whole number as a result, are one and itself, unlike the numbers six, eight, and ten, for instance, each of which can be divided by two as well as by one and itself. It’s the same with a mathematical equation. The terms are defined in a rigorous and precise way, so anyone who knows what the terms mean knows exactly what the equation says as well. It’s because mathematics is pure abstraction (it refers us only to concepts, not to objects) and because it is devoid of qualities (“two” is neither good nor bad, hot nor cold, etc.) that its meaning can be so precise. Quantitative descriptions are often employed in science because they eliminate unnecessary confusion (among other reasons).
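Indeed, the definition of primality given above is precise enough to be checked mechanically, which is one way to see just how little it leaves to interpretation. A minimal sketch (the function name and the choice of Python are my own, purely for illustration):

```python
def is_prime(n: int) -> bool:
    """A whole number greater than one is prime if the only whole
    numbers that divide it evenly are one and itself."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:  # d divides n evenly, so n is not prime
            return False
        d += 1
    return True

# Seven is prime; six, eight, and ten are not (each is divisible by two).
print([n for n in [6, 7, 8, 10] if is_prime(n)])  # → [7]
```

Every reader of this definition, human or machine, extracts exactly the same meaning from “seven is prime”; no comparable procedure exists for “Trump is a terrible President.”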
Well, that’s my list. I think the power of this approach is that it doesn’t try to put a straitjacket around science, saying that it has to be this or that or else it doesn’t really count. It’s more flexible than the insistence on a single demarcation criterion (like falsifiability), and it makes room for historical variability. Newton’s explanation of gravity, for instance, was decidedly non-naturalistic. He explained it as the omnipresent will of God, holding all things together. Maybe that explanation wouldn’t be well-received today, but it seems odd to say that Newton’s physics wasn’t “really” science. The conditions of science change over the centuries, and a virtue-based approach can take account of that. It can allow us to specify what time and place we’re talking about, and hence what virtues were or are appropriate (that is, widely recognized) in that time and place, rather than trying to impose one uniform standard across all of human thought.
This is only a partial list. I can think of one or two other virtues that might have been included, but this essay is already getting long. In the next essay, we’ll try to nail down a bit what is meant by the term “religion”—an even more imposing task than with the word “science,” since it has so many, and such contested, meanings!
This essay is part of a series; the previous essay can be found here.
Daniel Halverson is a graduate student studying the History of Science and Technology. He is also a regular contributor to the PEL Facebook page.