The Scientific Method

1.1: What is the "scientific method"?

The scientific method is the best way yet discovered for winnowing the truth from lies and delusion. The simple version looks something like this:

 

1. Observe some aspect of the universe.

2. Develop a hypothesis that is consistent with what you have observed.

3. Use the hypothesis to make predictions.

4. Test those predictions by experiments or further observations.

5. Modify the hypothesis in the light of your results.

6. Go back to step 3, using the modified hypothesis to make new predictions.

7. Keep testing: challenge the hypothesis with new observations from as wide a range of conditions as possible.

8. A hypothesis that survives this process repeatedly may come to be accepted as a theory.

 

This leaves out the co-operation between scientists in building theories, and the fact that it is impossible for every scientist to independently do every experiment to confirm every theory. Because life is short, scientists have to trust other scientists. So a scientist who claims to have done an experiment and obtained certain results will usually be believed, and most people will not bother to repeat the experiment.

Experiments do get repeated as part of other experiments. Most scientific papers contain suggestions for other scientists to follow up. Usually the first step in doing this is to repeat the earlier work. So if a theory is the starting point for a significant amount of work then the initial experiments will get replicated a number of times.

Some people talk about "Kuhnian paradigm shifts". This refers to the observed pattern of the slow extension of scientific knowledge with occasional sudden revolutions. This does happen, but it still follows the steps above.

Many philosophers of science would argue that there is no such thing as "the scientific method": the steps above are an idealization, and in practice scientists work in many different ways.

 

1.2: What is the difference between a fact, a theory and a hypothesis?

In popular usage, a theory is just a vague and fuzzy sort of fact. But to a scientist a theory is a conceptual framework that explains existing facts and predicts new ones. For instance, today I saw the Sun rise. This is a fact. This fact is explained by the theory that the Earth is round and spins on its axis while orbiting the Sun. This theory also explains other facts, such as the seasons and the phases of the Moon, and allows me to make predictions about what will happen tomorrow.

This means that in some ways the words fact and theory are interchangeable. The organization of the solar system, which I used as a simple example of a theory, is normally considered to be a fact that is explained by Newton's theory of gravity. And so on.

A hypothesis is a tentative explanation of the observations. Typically, a scientist devises a hypothesis and then sees if it "holds water" by testing it against available data. If the hypothesis does hold water after repeated testing and confirmation by other researchers, the scientist declares it to be a theory.

An important characteristic of a scientific theory or hypothesis is that it be "falsifiable". This means that there must be some experiment or possible discovery that could prove the theory untrue. For example, Einstein's theory of Relativity made predictions about the results of experiments. These experiments could have produced results that contradicted Einstein, so the theory was (and still is) falsifiable.

On the other hand the theory that "there is an invisible snorg reading this over your shoulder" is not falsifiable. There is no experiment or possible evidence that could prove that invisible snorgs do not exist. So the Snorg Hypothesis is not scientific. On the other hand, the "Negative Snorg Hypothesis" (that they do not exist) is scientific. You can disprove it by catching one. Similar arguments apply to yetis, UFOs and the Loch Ness Monster. See also question 5.2 on the age of the Universe.


1.3: Can science ever really prove anything?

Yes and no. It depends on what you mean by "prove".

For instance, there is little doubt that an object thrown into the air will come back down (ignoring spacecraft for the moment). One could make a scientific observation that "Things fall down". I am about to throw a stone into the air. I use my observation of past events to predict that the stone will come back down. Wow - it did!

But next time I throw a stone, it might not come down. It might hover, or go shooting off upwards. So not even this simple fact has been really proved. But you would have to be very perverse to claim that the next thrown stone will not come back down. So for ordinary everyday use, we can say that the theory is true.

You can think of facts and theories (not just scientific ones, but ordinary everyday ones) as being on a scale of certainty. Up at the top end we have facts like "things fall down". Down at the bottom we have "the Earth is flat". In the middle we have "I will die of heart disease". Some scientific theories are nearer the top than others, but none of them ever actually reach it. Skepticism is usually directed at claims that contradict facts and theories that are very near the top of the scale. If you want to discuss ideas nearer the middle of the scale (that is, things about which there is real debate in the scientific community) then you would be better off asking on the appropriate specialist group.

 

1.4: If scientific theories keep changing, where is the Truth?

In 1687 Isaac Newton published his theory of gravitation in the Principia. This was one of the greatest intellectual feats of all time. The theory explained all the observed facts, and made predictions that were later tested and found to be correct within the accuracy of the instruments being used. As far as anyone could see, Newton's theory was the Truth.

During the nineteenth century, more accurate instruments were used to test Newton's theory, and some slight discrepancies were found (for instance, the orbit of Mercury wasn't quite right). Albert Einstein proposed his theories of Relativity, which explained the newly observed facts and made further predictions. Those predictions have now been tested and found to be correct within the accuracy of the instruments being used. As far as anyone can see, Einstein's theory is the Truth.

So how can the Truth change? Well the answer is that it hasn't. The Universe is still the same as it ever was, and Newton's theory is as true as it ever was. If you take a course in physics today, you will be taught Newton's Laws. They can be used to make predictions, and those predictions are still correct. Only if you are dealing with things that move close to the speed of light do you need to use Einstein's theories. If you are working at ordinary speeds outside of very strong gravitational fields and use Einstein, you will get (almost) exactly the same answer as you would with Newton. It just takes longer because using Einstein involves rather more math.
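To get a feel for how small the difference is at everyday speeds, here is a short sketch (in Python, which is not part of the original FAQ; the numbers are purely illustrative) comparing the Newtonian and relativistic kinetic energies of a thrown stone and of something moving at half the speed of light:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def newton_ke(m, v):
        # Classical kinetic energy: (1/2) m v^2
        return 0.5 * m * v ** 2

    def einstein_ke(m, v):
        # Relativistic kinetic energy: (gamma - 1) m c^2.
        # gamma - 1 is computed in an algebraically rearranged form
        # to avoid floating-point cancellation at everyday speeds.
        b2 = (v / C) ** 2
        root = math.sqrt(1.0 - b2)
        gamma_minus_1 = b2 / (root * (1.0 + root))
        return gamma_minus_1 * m * C ** 2

    for label, v in [("stone at 20 m/s", 20.0), ("half light speed", 0.5 * C)]:
        n, e = newton_ke(1.0, v), einstein_ke(1.0, v)
        print(f"{label}: Newton {n:.6g} J, Einstein {e:.6g} J, "
              f"relative difference {abs(e - n) / e:.3g}")

For the stone the two predictions differ by a few parts in 10^15, far below any measurement; at half the speed of light they differ by about 19%.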

One other note about truth: science does not make moral judgments. Anyone who tries to draw moral lessons from the laws of nature is on very dangerous ground. Evolution in particular seems to suffer from this. At one time or another it seems to have been used to justify Nazism, Communism, and every other -ism in between. These justifications are all completely bogus. Similarly, anyone who says "evolution theory is evil because it is used to support Communism" (or any other -ism) has also strayed from the path of Logic.

 

1.5: "Extraordinary evidence is needed for an extraordinary claim"

An extraordinary claim is one that contradicts a fact that is close to the top of the certainty scale discussed above. So if you are trying to contradict such a fact, you had better have facts available that are even higher up the certainty scale.
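One way to make this slogan quantitative (this framing is my own gloss, not part of the FAQ) is Bayes' theorem: the lower the prior probability of a claim, the stronger the evidence must be before the claim becomes credible. A minimal sketch in Python, with made-up numbers:

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        # Bayes' theorem: P(claim | evidence)
        p_evidence = (prior * p_evidence_if_true
                      + (1.0 - prior) * p_evidence_if_false)
        return prior * p_evidence_if_true / p_evidence

    # The same piece of evidence (99% chance of appearing if the claim
    # is true, 1% chance if it is false) applied to an ordinary claim
    # and to an extraordinary one:
    print(posterior(0.5, 0.99, 0.01))    # ordinary claim      -> ~0.99
    print(posterior(1e-6, 0.99, 0.01))   # extraordinary claim -> ~0.0001

Evidence that makes an ordinary claim virtually certain still leaves the extraordinary claim at odds of roughly ten thousand to one against; it needs far stronger, or much more, evidence.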

 

1.6: What is Occam's Razor?

Ockham's Razor ("Occam" is a Latinized variant) is the principle proposed by William of Ockham in the fourteenth century that "Pluralitas non est ponenda sine necessitate", which translates as "entities should not be multiplied unnecessarily". Various other rephrasings have been incorrectly attributed to him. In more modern terms, if you have two theories which both explain the observed facts then you should use the simpler until more evidence comes along. See W.M. Thorburn, "The Myth of Occam's Razor," Mind 27:345-353 (1918) for a detailed study of what Ockham actually wrote and what others wrote after him.

The reason behind the razor is that for any given set of facts there are an infinite number of theories that could explain them. For instance, if you have a graph with four points in a line then the simplest theory that explains them is a linear relationship, but you can draw an infinite number of different curves that all pass through the four points. There is no evidence that the straight line is the right one, but it is the simplest possible solution. So you might as well use it until someone comes along with a point off the straight line.

Also, if you have a few thousand points on the line and someone suggests that there is a point that is off the line, it's a pretty fair bet that they are wrong.
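The point about the four points can be checked directly. Here is a brief sketch (Python with numpy; not part of the original FAQ) showing that infinitely many curves agree with the same data, so the Razor's advice is simply to test the simplest one first:

    import numpy as np

    # Four observations that happen to lie on the straight line y = 2x + 1.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = 2.0 * x + 1.0

    # The simplest theory: a linear fit recovers slope 2 and intercept 1.
    slope, intercept = np.polyfit(x, y, 1)
    print(f"linear theory: y = {slope:.3f}x + {intercept:.3f}")

    # Rival theories: add any multiple k of a polynomial that vanishes
    # at every observed x.  All of them match the data exactly.
    def rival(t, k):
        return 2.0 * t + 1.0 + k * (t - 0.0) * (t - 1.0) * (t - 2.0) * (t - 3.0)

    for k in (1.0, -5.0, 100.0):
        assert np.allclose(rival(x, k), y)  # fits all four points perfectly

Each rival theory fits the four points exactly, yet predicts wildly different values between and beyond them; only new data (a fifth point) can distinguish the straight line from its rivals.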

The following argument against Occam's Razor is sometimes proposed:

"This simple hypothesis was shown to be false; the truth was more complicated. So Occam's Razor doesn't work."

This is a strawman argument. The Razor doesn't tell us anything about the truth or otherwise of a hypothesis, but rather it tells us which one to test first. The simpler the hypothesis, the easier it is to shoot down.

A related rule, which can be used to slice open conspiracy theories, is Hanlon's Razor: "Never attribute to malice that which can be adequately explained by stupidity". This definition comes from "The Jargon File" (edited by Eric Raymond), but one poster attributes it to Robert Heinlein, in a 1941 story called "Logic of Empire".

 

1.7: Galileo was persecuted, just like researchers into <X> today.

People putting forward extraordinary claims often refer to Galileo as an example of a great genius being persecuted by the establishment for heretical theories. They claim that the scientific establishment is afraid of being proved wrong, and hence is trying to suppress the truth.

This is a classic conspiracy theory. The Conspirators are all those scientists who have bothered to point out flaws in the claims put forward by the researchers.

The usual rejoinder to someone who says "They laughed at Columbus, they laughed at Galileo" is to say "But they also laughed at Bozo the Clown". (From Carl Sagan, Broca's Brain, Coronet 1980, p79).

Incidentally, stories about the persecution of Galileo Galilei and the ridicule Christopher Columbus had to endure should be taken with a grain of salt.

In the early days of Galileo's theory, church officials were interested and sometimes supportive, even though they had yet to find a way to incorporate it into theology. His main adversaries were the established scientists of the day: since he was unable to provide hard proof, they did not accept his model. Galileo grew more agitated, declared them ignorant fools, and publicly insisted that his model was the correct one, thus coming into conflict with the church.

When Columbus proposed to take the "Western Route", the spherical nature of the Earth was common knowledge, even though its diameter was still debated. Columbus simply believed that the Earth was much smaller than it actually is, while his adversaries claimed that the Western Route would be too long. If America hadn't been in his way, he most likely would have failed. The myth that "he was laughed at for believing that the Earth was a globe" stems from an American author (Washington Irving, in his 1828 biography of Columbus) who intentionally adulterated history.

 

1.8: What is the "Experimenter effect"?

It is unconscious bias introduced into an experiment by the experimenter. It can occur in one of two ways:

·         Scientists doing experiments often have to look for small effects or differences between the things being experimented on. When the observations require judgment, it is easy to unconsciously "see" the result you expect.

·         Experiments require many samples to be treated in exactly the same way in order to get consistent results. An experimenter who unconsciously handles some samples differently from others will skew the outcome.

 

Note that neither of these sources of bias requires deliberate fraud.

A classic example of the first kind of bias was the "N-ray", "discovered" early in the twentieth century. Detecting them required the investigator to look for very faint flashes of light on a scintillator. Many scientists reported detecting these rays. They were fooling themselves. For more details, see "The Mutations of Science" in Science Since Babylon by Derek Price (Yale Univ. Press).

A classic example of the second kind of bias was the detailed investigation, in the nineteenth century, of the relationship between race and brain capacity. Skull capacity was measured by filling the empty skull with lead shot or mustard seed, and then measuring the volume of the filling. A significant difference in the results could be obtained by ensuring that the filling in some skulls was better settled than in others. For more details on this story, read Stephen Jay Gould's The Mismeasure of Man.

For more detail see: T.X. Barber, Pitfalls in Human Research (1976); Robert Rosenthal and Lenore Jacobson, Pygmalion in the Classroom.

 [These were recommended by a correspondent. Sorry I have no more information.]

 


1.9: How much fraud is there in science?

In its simplest form this question is unanswerable, since undetected fraud is by definition unmeasurable. Of course there are many known cases of fraud in science. Some use this to argue that all scientific findings (especially those they dislike) are worthless.

This ignores the replication of results which is routinely undertaken by scientists. Any important result will be replicated many times by many different people. So an assertion that (for instance) scientists are lying about carbon-14 dating requires that a great many scientists are engaging in a conspiracy. See the previous question.

In fact the existence of known and documented fraud is a good illustration of the self-correcting nature of science. It does not matter if a proportion of scientists are fraudsters, because any important work they do will not be taken seriously without independent verification. Hence they must confine themselves to pedestrian work in which no one is much interested, and obtain only the expected results. For anyone with the talent and ambition necessary to get a Ph.D., this is not going to be an enjoyable career.

Also, most scientists are idealists. They perceive beauty in scientific truth and see its discovery as their vocation. Without this most would have gone into something more lucrative.

These arguments suggest that undetected fraud in science is both rare and unimportant.

The above arguments are weaker in medical research, where companies frequently suppress or distort data in order to support their own products. Tobacco companies regularly produce reports "proving" that smoking is harmless, and drug companies have both faked and suppressed data related to the safety or effectiveness of major products.

For more detail on more scientific frauds than you ever knew existed, see False Prophets by Alexander Kohn.

The standard textbook used in North America is Betrayers of the Truth: Fraud and Deceit in the Halls of Science by William Broad and Nicholas Wade (Oxford 1982).

 

1.9.1: Did Mendel fudge his results?

Gregor Mendel was a nineteenth-century monk who discovered the laws of inheritance (dominant and recessive genes, etc.). More recent analysis of his results suggests that they are "too good to be true". Mendelian inheritance involves the random selection of possible traits from parents, with particular probabilities of particular traits. It seems from Mendel's raw data that chance played a smaller part in his experiments than it should have. This does not imply fraud on the part of Mendel.
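To see what "too good to be true" means here, consider a quick simulation (in Python; the numbers are illustrative, not Mendel's actual data). A cross that should yield dominant and recessive traits in a 3:1 ratio still shows considerable run-to-run scatter:

    import random

    def count_dominant(n_plants=1000, p_dominant=0.75):
        # One simulated experiment: how many offspring show the dominant trait
        return sum(random.random() < p_dominant for _ in range(n_plants))

    random.seed(1)
    results = [count_dominant() for _ in range(10)]
    print("expected:", 750)
    print("observed:", results)
    # The binomial standard deviation is sqrt(1000 * 0.75 * 0.25) ~ 14,
    # so honest experiments typically miss 750 by a dozen plants or more.
    # Counts that land almost exactly on 750 time after time are
    # statistically suspicious - that is the sense in which Mendel's
    # published figures look too good.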

First, the experiments were not "blind" (see the questions about double blind experiments and the experimenter effect). Deciding whether a particular pea is wrinkled or not needs judgment, and this could bias Mendel's results towards the expected. This is an example of the "experimenter effect".

Second, Mendel's Laws are only approximations. In fact it does turn out that in some cases inheritance is less random than his Laws state.

Third, Mendel might have neglected to publish the results of "failed" experiments. It is interesting to note that all 7 of the characteristics measured in his published work are controlled by single genes. He did not report any experiments with more complicated characteristics. Mendel later began experiments with a more complex plant, hawkweed, but could not interpret the results; he became discouraged and abandoned plant science.

See The Human Blueprint by Robert Shapiro (New York: St. Martin's, 1991) p. 17.

 

1.10: Are scientists wearing blinkers? 

One of the commonest allegations against mainstream science is that its practitioners only see what they expect to see. Scientists often refuse to test fringe ideas because "science" tells them that this will be a waste of time and effort. Hence they miss ideas which could be very valuable.

This is the "blinkers" argument, by analogy with the leather shields placed over horses' eyes so that they can only see the road ahead. It is often put forward by proponents of new-age beliefs and alternative health.

It is certainly true that ideas from outside the mainstream of science can have a hard time getting established. But on the other hand the opportunity to create a scientific revolution is a very tempting one: wealth, fame and Nobel prizes tend to follow from such work. So there will always be one or two scientists who are willing to look at anything new.

If you have such an idea, remember that the burden of proof is on you. Posting an explanation of your idea to sci.skeptic is a good start. Many readers of this group are professional scientists. They will be willing to provide constructive criticism and pointers to relevant literature (along with the occasional raspberry). Listen to them. Then go away, read the articles, improve your theory in the light of your new knowledge, and then ask again. Starting a scientific revolution is a long, hard slog. Don't expect it to be easy. If it were, we would have them every week.

 

THINKING ABOUT THINKING

 

How can we know what to believe when the facts are confusing or experts disagree? As you learn about environmental science, in this book and elsewhere, you will find many issues about which the data are indecisive, leading reasonable people to disagree on how they should be interpreted. How can we choose between competing claims? Is it simply a matter of what feels good at any particular moment, or are there objective ways to evaluate arguments? Critical thinking skills can help us form a rational basis for deciding what to believe and do. These skills foster reflective and systematic analysis to help us bring order out of chaos, discover hidden ideas and meanings, develop strategies for evaluating reasons and conclusions in arguments, and avoid jumping to conclusions. Developing rational analytic skills is an important part of your education and will give you useful tools for life.

Certain attitudes, tendencies, and dispositions are essential for critical or reflective thinking. Among these are:

·         Skepticism and independence. Question authority. Don't believe everything you hear or read, including this book. Even the experts can be wrong.

·         Open-mindedness and flexibility. Be willing to consider differing points of view and entertain alternative explanations.

·         Accuracy and orderliness. Strive for as much precision as the subject permits or warrants. Deal systematically with parts of a complex whole.

·         Persistence and relevance. Stick to the main point and avoid allowing diversions or personal biases to lead you astray.

·         Contextual sensitivity and empathy. Consider the total situation, feelings, level of knowledge, and sophistication of others as you study situations. Try to put yourself in another person's place to understand his or her position.

·         Decisiveness and courage. Draw conclusions and take a stand when the evidence warrants doing so.

·         Humility. Realize that you may be wrong and that you may have to reconsider in the future.


Critical thinking is sometimes called metacognition or "thinking about thinking." It is not critical in the sense of finding fault; rather, it is an attempt to rationally plan how to think about a problem. It requires a self-conscious monitoring of the process while you are doing it and an evaluation of how your strategy worked and what you learned when you have finished. Assembling, understanding, and evaluating data are important steps, but critical thinking looks beyond simple facts to ask what reasons underlie an argument as well as what implications flow from a set of claims. Here are some steps in critical thinking.

1.        Identify and evaluate premises and conclusions in an argument. What is the basis for the claims made? What evidence is presented to support these claims, and what conclusions are drawn from this evidence? If the premises and evidence are correct, does it follow that the conclusions are necessarily true?

2.        Acknowledge and clarify uncertainties, vagueness, equivocation, and contradictions. Do the terms used have more than one meaning? If so, are all participants in the argument using the same meaning? Is the ambiguity or equivocation deliberate? Can all the claims be true simultaneously?

3.        Distinguish between facts and values. Can the claims be tested? (If so, these are statements of fact and should be verifiable by gathering evidence.) Are claims or appeals being made about what we ought to do? (If so, these are value statements and probably cannot be verified objectively.) For example, claims of what we ought to do to be moral or righteous or to respect nature are generally value statements.

4.        Recognize and interpret assumptions. Given the backgrounds and views of the protagonists in this argument, what underlying reasons might there be for the premises, evidence, or conclusions presented? Does anyone have an ax to grind or a personal agenda concerning this issue? What do they think I know, need, want, or believe? Is a subtext based on race, gender, ethnicity, economics, or some belief system distorting this discussion?

5.        Determine the reliability or unreliability of a source. What makes the experts qualified on this issue? What special knowledge or information do they have? What evidence do they present? How can we determine whether the information offered is accurate, true, or even plausible?

6.        Recognize and understand conceptual frameworks. What are the basic beliefs, attitudes, and values that this person, group, or society holds? What dominant philosophy or ethics controls their outlook and actions? How do these beliefs and values affect the way people view themselves and the world around them? If there are conflicting or contradictory beliefs and values, how can these differences be resolved?

 

In logic, an argument is made up of one or more introductory statements, called the premises, and a conclusion that supposedly follows from the premises. It is useful to distinguish between these kinds of statements. Premises usually claim to be based on facts; conclusions are usually opinions and values drawn from or used to interpret those facts. Words that often introduce a premise include as, because, assume that, given that, since, whereas, and we all know that. Words that often indicate a conclusion or statement of opinion or values include and so, thus, therefore, it follows that, consequently, the evidence shows, and we can conclude that. Remember, even if the facts in a premise are correct, the conclusions drawn from them may not be.

As you go through this book, you will have many opportunities to practice these critical thinking skills. Try to distinguish between statements of fact and opinion. Ask yourself if the premises support the conclusions drawn from them. Although I will try to present controversies fairly and evenhandedly, I, like everyone, have biases and values (some that I may not even recognize) that affect how I present arguments. Watch for areas in which you must think for yourself and use your critical thinking skills.