[Image © overexpressed.com]
I am a relative fan of science. I say 'relative' because while it can often afford considerable illumination, it's most certainly not the be-all and end-all. For one thing, plenty of things that may have merit have never been subjected to proper scientific study. For example, there are no studies (in the form of randomized controlled trials) that conclusively prove stopping smoking is beneficial to health, but that wouldn't stop me from suggesting that someone stop smoking if they asked for my advice on the matter.

Also, even when something has been subjected to systematic scientific study, the evidence base can actually give a very skewed version of reality. One way this can happen is as a result of what is known as 'publication bias'.

Imagine there's a widely held belief that, say, saturated fat causes heart disease. Studies that support this idea are viewed as 'positive' studies, while those that don't are 'negative'. There can be a tendency for medical and scientific journals to preferentially publish positive studies. In other words, studies that are in line with current thinking are more likely to make their way into the scientific literature than negative ones. In this way, existing dogma can essentially go unchallenged - something that is inherently unscientific.
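To see how this skewing works in practice, here is a minimal, purely hypothetical simulation. It assumes a treatment with no real effect at all, so every study's measured effect is just noise, and then assumes (the percentages are invented for illustration) that journals publish 'positive' studies far more often than 'negative' ones:

```python
import random

# Hypothetical simulation of publication bias (illustrative only).
# Assume the treatment has NO real effect: each study's measured
# effect is pure noise centred on zero.
random.seed(42)

n_studies = 1000
effects = [random.gauss(0, 1) for _ in range(n_studies)]

# Assumed publication rates (made up for this sketch): journals
# publish 90% of 'positive' studies but only 30% of 'negative' ones.
published = [e for e in effects
             if random.random() < (0.9 if e > 0 else 0.3)]

true_mean = sum(effects) / len(effects)
pub_mean = sum(published) / len(published)

print(f"Mean effect across all studies:       {true_mean:+.2f}")
print(f"Mean effect across published studies: {pub_mean:+.2f}")
```

Even though the full set of studies averages out to roughly zero, the published subset shows a clearly positive average effect: the literature ends up 'supporting' a treatment that does nothing.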

The problem can be compounded by scientists themselves.

It's worth bearing in mind that researchers gain from publication in two main ways. Firstly, it can enhance their reputation and standing in the scientific community. Also, the most published researchers are the ones most likely to attract funding for their work. For some scientists, regular publishing is a matter of professional survival.

Researchers can work out for themselves the sorts of studies that stand the best chance of being published. There can be, therefore, an inbuilt incentive for scientists to produce 'positive', dogma-confirming research which, in reality, does little or nothing to further our understanding of a particular subject.

Recently, researchers from the University of Edinburgh in the UK published an analysis of 4,600 research papers from a wide range of disciplines including clinical medicine, psychology, psychiatry and pharmacology. The papers were published between 1990 and 2007. What the researchers noted was a steady decline in the proportion of negative studies over this time period. In 1990, some 30 per cent of studies were negative, but by 2007 this figure had dwindled to just 14 per cent.

There is evidence, therefore, of an increasing tendency for the scientific community to churn out 'more of the same'. The pressures on researchers and scientists to produce 'meaningful' results can ultimately leave us with a very biased view of reality. From the outside, an increasing consistency of positive results can look like the evidence is becoming ever more reliable. In reality, though, the evidence may in fact be getting steadily less reliable.

This is just one example of why science is not always to be trusted implicitly, and why those who stick slavishly to it are usually not to be trusted either.

References

Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. Epub ahead of print, 11 September 2011.