SCIENCE AS A FAITH-BASED ENTERPRISE

by Lyle Lofgren

January 3, 2011


In The Truth Wears Off (New Yorker, 12/13/10), Jonah Lehrer describes an effect I'd not heard of before: an investigator discovers an important scientific causal relationship, which is replicated by other adepts in the field. But if the experiment is repeatedly conducted for an extended time, the relationship becomes less apparent, and often disappears completely. This effect mainly shows up in such squishy areas as pharmaceutical drug testing, but can also occur in other fields, such as physics. This observation brings up the question of what a person interested in science can believe, and the related question, "Is Science no better than Religion?"

I conclude that Science is not that different from Religion: both are faith-based enterprises, and lead to trouble when the faith is placed in human interpreters.

We're taught that scientific conclusions are provisional, subject to change with new observations. That's a contrast to religion, where you're taught that the truths revealed by God are immutable. There are mysteries, as in the Book of Job, where God taunts Job that he knows nothing about the ways of God. But mostly, our Judeo-Christian religion (hereafter just called religion, because it's such a major influence on western science) is about rules for how to behave in the presence of a deity that will punish you for transgressions. I think that's because the clergy have used religion as a tool of social control -- nice for keeping the peace, but not really a crucial part of the relationship of humans to The Ineffable.

Scientists, if pressed, will agree that scientific knowledge is provisional, but will defend the Scientific Method as the way to determine the truth of a proposition. But it requires as much faith to believe unconditionally in the Scientific Method as is needed to believe in an omniscient, omnipotent God. A scientist may be religious and believe in a God with all His peccadilloes, but at the same time believe that Nature never changes the basic rules (the Laws of Physics, for example). Although you have to separate the traditional concept of God from that of Nature to hold both beliefs, the scientist has to have a (usually unarticulated) faith in the Spinozan idea that God (or Nature), being perfect, cannot change. For if the rules change, no experiment can be reproduced, and nothing of value can be learned by observation.

The Old Testament God is unpredictable: he might decide to smite you for no reason, or for a petty reason, such as God's bet with Satan in the Book of Job. A more common belief, particularly among Christian fundamentalists, is that anything that happens that we interpret as bad, even including the loss of a football stadium [1], is God's punishment for the people's wickedness — a very old idea, to be found repeatedly in such stories as Noah and his flood [2], the destruction of Sodom and Gomorrah [3], and the drowning of Pharaoh's army [4]. This sort of faith is the opposite of the scientific provisionality principle, yet it can infect scientists, too. The scientific principles we're taught in school are of this type: Conservation of Momentum, Energy, and Charge; electromagnetic field effects; gravitational attraction, etc. We learn these principles and how to apply them, but belief in them is based on faith -- faith in the authority of our teachers. When a discrepancy arises due to more precise measurements, we look for a replacement theory that encompasses the old theory as a special case. Or, as in the Theory of Relativity, one principle (space and time as independent variables) is abandoned to save another (conservation laws independent of reference frame).

There's of course no way of knowing if God is arbitrarily changing the Rules of Nature over time, any more than there's a way of divining Ultimate Reality. But there are a number of reasons why apparent causal effects can disappear over time.

One such problem, mentioned in the article, is bias towards positive results, particularly if they're first reported by a respected expert. If such a prestigious scientist reports a result, and (say) 10 other researchers try to replicate it, only some will be successful. But a negative result is seldom reported, and, if reported, seldom published. If that many experimenters are looking for a causal relationship, a significant number of them will find it. The tyranny of authority is described in Thomas Kuhn's The Structure of Scientific Revolutions [5] (No, I haven't read the book, but I've read so much about it that it sometimes seems to me as if I had), and Kuhn's Paradigm concept is relevant to the Lehrer article. As the tyrant's power wanes, more investigators dare to publish results that disagree with the renowned expert. In this way, the hard sciences are not that different from the soft "sciences," such as medicine or psychology. Part of the reason for this effect is that, contrary to what we're taught should happen, scientists transfer their blind faith from the scientific method to scientific results.
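
To put a rough number on that claim, here is a minimal back-of-the-envelope sketch (my illustration, not Lehrer's or Kuhn's). Suppose, for the sake of argument, that there is no real effect at all, and that each of 10 independent labs tests for one at the conventional 5% significance level; the chance that at least one of them "finds" the effect anyway is already about 40%.

    # Hypothetical illustration: probability that at least one of 10 labs reports
    # a false positive when each tests a nonexistent effect at the 5% level.
    p_false_positive = 0.05      # conventional significance threshold
    labs = 10
    p_at_least_one = 1 - (1 - p_false_positive) ** labs
    print(f"chance of at least one false positive: {p_at_least_one:.0%}")   # about 40%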

I'm sure there's a lot of politics involved with the peer review process, which would also help suppress publication of experimental results that contradict the prevailing paradigm. As another example, I once talked with a woman who worked for the Health Department, teaching food safety to food service people. I mentioned a report I'd read: someone in the field realized that no one had ever conducted an experiment to show that plastic cutting boards were safer than wooden ones. So they made some standardized knife cuts on both plastic and wooden boards, then smeared them with salmonella. To their surprise, the plastic boards grew the bacteria, but there was no sign of them on the wooden ones. Repeated tests showed the same results. The experimenters surmised that the wood fibers somehow trapped the bacteria and kept them from the surface and from reproducing. This woman had also heard of the report, but she said the Health Department had decided to keep recommending the use of plastic boards, because they didn't want to lose credibility by changing their message.

The original investigator became a respected expert by publishing findings that other experts agree are significant. But Metrology is not a simple field. It's common practice to conduct an error analysis on test results, including making corrections for known (also called systematic) errors and stating random error as a tolerance on the corrected mean result. But error analysis depends on faith: faith in the equipment manufacturers' specifications (which are sometimes over-optimistic — equipment manufacturers have marketing departments, too), and, beyond that, faith in the competence of the national laboratories (NIST, in the case of the US) that maintain the basic standards of measurement (the kilogram, the meter, the second) on which honest accuracy claims are based. It's a practical impossibility for an individual to check these claims, and it's very difficult to be sure that all the possible error sources, both systematic and random, have been considered — not to mention that error analysis is not a high priority in the rush to publish a result before someone else does.
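
As a toy illustration of that procedure (all numbers hypothetical), the sketch below corrects a handful of repeated readings for a known systematic offset, then states the remaining random scatter as a tolerance on the corrected mean.

    # Toy error analysis: subtract a known systematic offset, then quote the
    # random uncertainty of the corrected mean.  All values are made up.
    import statistics

    readings = [100.4, 100.7, 100.5, 100.6, 100.4]   # hypothetical instrument readings
    known_offset = 0.3                               # systematic error, assumed known from calibration

    corrected = [r - known_offset for r in readings]
    mean = statistics.mean(corrected)
    # random uncertainty of the mean, quoted here at roughly the 2-sigma level
    tolerance = 2 * statistics.stdev(corrected) / len(corrected) ** 0.5

    print(f"corrected result: {mean:.2f} +/- {tolerance:.2f}")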

In addition to error sources, there's a further confounding influence on accuracy of the results: even repeated measurements using the same equipment do not give the same results — if they do, your equipment isn't sensitive enough. This effect is described by sampling theory, which is not simple, and is based on assumptions that are not always understood. Sampling theory assumes that the unknown quantity consists of an infinite population that follows a Gaussian (Normal) Distribution, and therefore has a Mean and Standard Deviation (σ). You can only take a finite sample of this population, so sampling theory gives the probability that the Mean and σ of the sample match those of the population to within a stated tolerance. For example, even ignoring equipment errors, the result would have to be stated in terms such as, "There is a 95% probability that the correct answer is X plus or minus Y," where X is the sample mean and Y is twice the sample standard deviation, increased by factors to account for sample-size-dependent uncertainties in both mean and σ.
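
Here is a minimal sketch of one common form of such a statement: a 95% confidence interval for the sample mean (other forms, such as tolerances on individual readings, differ in detail). The readings are made up, the Student's t factor supplies the sample-size-dependent widening mentioned above, and the t quantile comes from scipy.

    # Hypothetical example: 95% confidence interval for the mean of a small sample.
    import statistics
    from scipy.stats import t

    readings = [9.98, 10.02, 10.05, 9.97, 10.01, 10.04, 9.99, 10.03]   # made-up data
    n = len(readings)
    mean = statistics.mean(readings)
    s = statistics.stdev(readings)        # sample standard deviation
    t95 = t.ppf(0.975, df=n - 1)          # two-sided 95%: 2.5% in each tail
    half_width = t95 * s / n ** 0.5       # uncertainty of the mean itself

    print(f"95% confidence interval: {mean:.3f} +/- {half_width:.3f}")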

Scientists don't always apply statistics and error analysis correctly. I read somewhere that every new measurement of the speed of light published in the twentieth century was outside the claimed error limits of the previous measurement. There are a number of subtle effects here. A single standard deviation covers only 68.3% of the total probability area under a Gaussian curve — a much more conservative claim is made using plus/minus 3σ, which covers 99.7% of the data spread. But a sampling theory confidence level of 95% means that, on average, one out of every 20 experiments will give an answer outside the published error bars. The 95% confidence level is used because one of the sampling theory pioneers (Fisher) used it -- another case of faith in an expert without thinking further about whether the choice makes sense. I've read papers where we're not even told how many standard deviations are represented by the error bars on the results graphs, much less how they were estimated.
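
Those coverage figures are easy to check; here is a short sketch, using only the Python standard library, that computes the fraction of a Gaussian distribution falling within plus or minus 1, 2, and 3 standard deviations.

    # Gaussian coverage for +/- 1, 2, and 3 standard deviations.
    from statistics import NormalDist

    z = NormalDist()                       # standard normal distribution
    for k in (1, 2, 3):
        coverage = z.cdf(k) - z.cdf(-k)
        print(f"+/-{k} sigma covers {coverage:.1%} of the distribution")
    # prints roughly 68.3%, 95.4%, and 99.7%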

It takes a certain amount of arrogance to be a respected expert, but it has the advantage that no one dares to question you. I read Robert A. Millikan's arrogant autobiography [6] years ago, so I was not surprised to learn recently that, when measuring the charge of the electron, he discarded about 60% of the data he took, then claimed he had used it all. He had no statistical justification for discarding that data, but he had pretty good intuition: his published value is within 0.6% of the presently accepted one. But his published accuracy claim was plus/minus 0.2%, whereas if he'd used all the data, his accuracy claim should have been plus/minus 2%.

There are other, more basic, questions about the nature of Nature. Does she really speak in the language of mathematics, as Galileo claimed? We place a great amount of faith in mathematics and the logic that underpins the methods we use. And there are relationships that are easily expressed mathematically that would be very difficult to describe using ordinary language without merely writing out the formula in words. And we can approximate and predict a lot of effects with it, so utility is an argument in favor of our faith in logic and mathematics. But there are aspects of nature that are forever beyond the reach of mathematics, such as water flow below a waterfall — a good Minnesota example is the Temperance River below the falls near its outlet to Lake Superior. Details of the flow pattern and eddies are completely chaotic and change unpredictably from moment to moment. We can tell ourselves that it's not an important phenomenon, but observation of any such physical behavior should shake one's faith in the ability of mathematics or physics to adequately describe nature.

We can defend our faith in logic and the scientific method by explaining how useful it is, but a follower of the Christian Science religion can similarly point to success of faith in spiritual healing. If you're sick and want to get well, you can either go to a doctor or pray. Both methods are effective, until they aren't.


NOTES:

1. "Loss of a football stadium" refers to the collapse of the inflatable roof of the Hubert H. Humphrey Metrodome in Minneapolis on Dec. 12, 2010, due to excess snow which was, of course, sent by God. Perhaps He was mad because the Stadium Commission had taken a donation to rename the football field itself Mall of America Field, and we all know how Jesus felt about money-changers in the temple. Dumping snow on everything was necessary, because there weren't any tables to tip over.

2. Genesis, Chapters 6-8

3. Genesis, Chapters 18-19

4. Exodus, Chapter 14

5. Kuhn, Thomas S., The Structure of Scientific Revolutions (University of Chicago Press, 1962)

6. Millikan, Robert A., The Autobiography of Robert A. Millikan (Prentice-Hall, 1950)

