Don’t believe me
There are three words that make scientific investigation different from any other form of knowledge production: Don’t believe me.
The spectacular success of science at finding explanations and basing technologies on those explanations is not primarily due to its use of evidence, or its design and performance of experiments, or its willingness to risk predictions. Science’s success is largely a product of its willingness to doubt itself.
Many knowledge systems use evidence, test their ideas through experiments and make predictions, but science is the only knowledge system that consistently says, “Don’t believe me.” In most knowledge systems doubt excludes you from membership. Questioning authority, tradition and dogma is taboo. But in science doubt is the key to entry.
Our education system and popular media have created the impression that scientists have a special genius for having ideas about the nature of reality. But the truth is that the source of scientists’ ideas is no different from that of artists, philosophers, poets, mechanics, gardeners and daydreamers. Democritus’ idea that everything is made of irreducible atoms was speculation. Fleming’s discovery of penicillin was an accident. Kekulé imagined the circular structure of the benzene molecule during a dream about a snake biting its tail. The idea that led Newton to discover the law of gravity was the wild conjecture that the moon is in free fall. Mendeleev’s periodic table started out as an analogy based on the card game of patience. Einstein’s ideas about relativity began as fantastical thought experiments, like travelling on a beam of light. Wallace’s theory of natural selection came to him during a fit of malarial fever. Scientists may have access to a particular kind of data, but their ideas, just like any other new ideas, are based on clever guesses and fantasies. They don’t have a special talent for having ideas, but they have a special commitment to criticising ideas, whether their own or someone else’s.
It was the father of natural philosophy, Thales of Miletus (born in the 7th century BCE), who said something like: what if we share our ideas about the nature of reality, agree to keep criticising one another’s ideas, and then keep that conversation going for centuries? It was this kind of thinking that eventually led to the scientific and political revolution that began in 17th-century Europe. This revolution was all about challenging authority, tradition and antiquity as reliable sources of knowledge. Its attitude can be seen in the motto of the Royal Society, the oldest scientific academy in continuous existence: Nullius in verba, or, “Take no one’s word for it!” The 20th-century philosopher Karl Popper made this principle even more explicit: “the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.” Popper suggested that we cannot prove anything is absolutely true. The only thing we can do with any certainty is identify error.
Most forms of human knowledge are based on the generalisations we make from experience. Our predictions about the future are based on our experiences of the past. This is called induction. In most cases, this kind of thinking, this inductive reasoning, works well enough, but its generalisations are not always reliable. This is why in science all claims to knowledge about the nature of reality remain open to criticism and falsification, even if the corroborating evidence and experiments give us overwhelming reasons for provisional acceptance. All scientific ideas are, in principle, falsifiable. An idea’s ability to survive repeated attempts at falsification makes it the best model we have so far for explaining some phenomenon in reality.
It is easy to design an experiment to prove your hypothesis: you simply select the data that supports it. In this way you become a victim of what we call confirmation bias. This is why you need the courage to question your assumptions and design an experiment, or counter-argument, that could disprove your own hypothesis.
For falsifiability to be embraced as a method for producing knowledge, you need a courageous community of practice that is more committed to working out what is going on than they are to being right. For such a community of practice to function, there needs to be some agreement on the criteria for mutually beneficial disagreement. What constitutes a logical argument, evidence, experimental design and effective modelling? Practitioners of science would argue, for example, that a well-designed experiment can be repeated by anyone (and be used to disprove hypotheses), regardless of her prejudice, scepticism, language, beliefs, cultural assumptions or what she had for breakfast. This has led some to claim that science is a universal method for assessing knowledge, free of subjective points of view and cultural assumptions.
This perspective is poorly understood by those who claim that science is trying to force its narrow form of knowledge production onto them. Contrary to popular misconceptions, science is not an attempt to defend a collection of beliefs or truth statements, but a process that encourages creative guesses and then criticises them. When dealing with knowledge claims that cannot be submitted to a reliable process of falsification, science cannot say anything, and it must withhold judgement.
By embracing its own fallibility, science grows knowledge faster than any other knowledge system. In the short term, scientific research may often be determined by who is funding the research, which interest groups the research serves, who controls access to technology, and ideological perspectives that determine research priorities. In the long term, however, the errors of the past are identified, explanations improve and new technologies emerge. Is this some kind of cultural domination protecting a particular worldview? Or is it the opposite – a reliable process for identifying error in any argument in any context? If you disagree, and think you know of more reliable criteria for assessing knowledge and identifying error, it is the nature of that tradition of criticism we call science to welcome your suggestions.
If your need to know grows stronger than your need to be right, you’ll be willing to say about the most precious ideas you hold, “Don’t believe me.”