Science has always struggled to sift crackpot ideas from genuine maverick genius. If it were just a matter of combining unambiguous data with flawless theories, the task would be quite simple. Unfortunately, says Harry Collins, science is an all-too-human activity, and heroes and villains come in every possible guise.

I was at a closed conference with friends and acquaintances from the gravitational wave world, a group I have studied and written about for 30 years. The dress code was informal - T-shirts and jeans, or open-necked shirts and sports jackets for the older scientists. There at the podium was a medical doctor with a halo of white hair, in a smart grey suit and red bow tie, spouting management-speak. As far as we could tell, he seemed to be telling the group that it did not understand its $250 million interferometers, and that his microscale experiments showed they had missed something vital about the interaction of the mirrors. I swapped smirks with those around me. But we were wrong. A couple of stubborn scientists extracted the important bit of sense from the packaging, and the design was changed.

If science were a matter of combining unambiguous data from perfectly conducted experiments with flawless theories, assessing the claims of "outsider" scientists and their maverick ideas would not be that hard. But the logic of science is not so far removed from the logic of ordinary life (though admittedly ordinary life lived among extraordinary ideas and amazing machines) and so fallible human judgement still determines what happens at the heart of even the hardest science.

Since the 1960s, the tension between the canonical model of science (experimentation and theory) and the everyday practice of science (pottering around and hunch) has been explored by historians, philosophers and especially sociologists. One way to get a sense of how that tension compounds the problem of dealing with outsiders and their ideas is by comparing funding policies.

Consider, for example, that the US military spends around $1 million per year on anti-gravity research. This is best understood by analogy with the philosopher Blaise Pascal's famous wager. Everyone, he argued, should believe in God because the cost of believing was small while the cost of not believing could be an eternity in hell. For the military, the cost of missing a technological opportunity, particularly if your enemy finds it first, is a trip to the hell of defeat. Thus goes the logic of the military investigating anti-gravity.

The same rationale led to the funding of some unlikely experiments by University of Maryland physicist Joe Weber. Weber founded gravitational-wave research, but failed to find the waves. By 1975, he had lost most of his credibility, yet in the 1980s, the US military paid to test an implication of a theory he had invented in his struggle to regain his former glory. The implication was that it was possible to detect neutrinos emitted by the reactor of a nuclear submarine, using only a crystal you could hold in your hand rather than vast underground tanks. If the US navy ignored Weber's idea but the Russians used it to build technology that could detect US nuclear subs, the undersea deterrent would no longer deter.

The same logic drives companies such as Pirelli, which is still pursuing Weber's idea in the hope of transmitting modulated neutrino signals through the Earth to carry messages without cables (New Scientist, 17 April 2004, p 36), and Canon and Toyota, which funded cold fusion long after the research councils would not touch it.

The logic of state funding agencies such as the US's National Science Foundation or the UK's Engineering and Physical Sciences Research Council is quite the opposite. Here the pressures of accountability can make them too conservative. A few new agencies aim to occupy the ground between the wild logic of the military and certain companies, and the worthy work of "official" science. One such is Donald Braben's Venture Research, which plans to back projects by picking promising researchers rather than wading through research applications (see his book Pioneering Research, Wiley, 2004).

But even multiple funding styles cannot address the problem that it is impossible to explore every new scientific idea to the standard set by science: there are just too many. Many scientists in the public eye are deluged by self-styled pioneers claiming to have found the fundamental flaw in relativity or a new energy-free method of transport. Discriminating between them requires a mixture of the "green-ink and no margins" test, and more sophisticated short-cuts, such as knowing that papers in Physical Review that describe new physics tend to be taken more seriously than articles in New Scientist doing the same, and that scientists from major institutions are trusted more than scientists from obscure colleges. No one has time to track everything to the bitter end.

Ironically, even exploring an idea to the bitter end may prove impossible. For example, after a hundred years, no one has absolutely proved the non-existence of extrasensory perception. If anything, the findings run very slightly in its favour. Weber's claims that his early detectors caught high fluxes of gravitational waves and that his crystals could detect neutrinos have also never been disproved to the standards of logic. The reason that the US military finally abandoned both strands of research was sociological: Weber had had a good run and it was time to move on. There is no logic that says that Pirelli is completely crazy for taking another look.

The irreducible tension in science is to maintain enough "social control" over new ideas and spending to ensure science isn't engulfed by seas of possibilities, while leaving room for new ideas. Tossed on the waves of these possibilities are people such as Martin Fleischmann (cold fusion), Eric Laithwaite (anomalous gyroscopes), Albert Einstein (relativity), Linus Pauling (vitamin C), Alfred Wegener (continental drift), Thomas Gold (origin of oil), Peter Duesberg (non-viral causes of AIDS), and Subrahmanyan Chandrasekhar (black holes). The list goes on and on. Such ideas eventually wash up on one shore or the other, but only 20:20 foresight will tell you which one.

However, sometimes what looks like "outsider science" has content of quite a different sort. Take the measles, mumps and rubella (MMR) vaccine and autism affair in the UK. Andrew Wakefield, the doctor behind the furore, published some evidence in The Lancet suggesting a link between autism and measles-related virus particles in the gut. But these particles were never linked to MMR vaccine. There was word-of-mouth testimony from some parents, but no link between MMR and autism has ever been proved. Wakefield simply speculated about a relationship at a press conference - and no one has ever gone further than to hypothesise about it.

This case was presented to the public as a genuine scientific controversy, and, to my discomfort, the MMR story as told by many social scientists is one of struggle between wise parents and uncomprehending, authoritarian medical authorities. There have been (and will be) many real struggles of that sort, but this was not one of them. The only usable scientific evidence was epidemiological - and that pointed to the safety of MMR. Because it is so hard to prove a negative, none of this shows that there is not a hidden link between MMR and autism lurking below the statistics. But there is no evidence to show there is.

The energy it took to deal with Wakefield's claims, and to persuade parents to vaccinate their children at all after the scare, could have been much better spent. Wakefield was not behaving as a scientific outsider: he was simply not providing scientific evidence at that press conference.

In addition to the difficulty of proving a negative, scientists are also very unwilling to face up to the social and financial logic that drives their choices. A tentative claim about, say, telepathy, can provoke a sort of fundamentalist zeal among some scientists refuting the claim, which in turn undermines their claims for science as an exemplar in a divided world. They should say merely this: "Well, it's not inconceivable, I can't absolutely prove you wrong, but my time is better spent doing things I judge to have more potential."

Scientists, then, are not always their own best friends when it comes to helping others navigate the loss of absolute certainty about our world. I am also not sure how it helps if they assume omniscience in the name of science, as Richard Dawkins did recently when he insisted that scientists must be atheists. And Stephen Hawking has been turned into a new kind of religious icon, with his books taking the place of the incomprehensible Latin Bible in our homes.

Here science becomes "revealed truth", obscuring the long hours of tedious work, the experiments open to reinterpretation (and failure), and theories with their infinities and arbitrary variables that can never quite be tamed. The Dawkinses and the Hawkings threaten to make the hard-won victory of science over religion a pyrrhic victory by replacing old faiths with new.

If science is essentially ordinary life albeit conducted in extraordinary circumstances, it must contradict literal interpretations of texts that clash with its findings, but it should not claim the right to address deeper questions of existence.

The biggest danger for science is that in missing its footing on the tightrope of certainty, it crashes to the ground. In the social sciences, this danger is best represented by the romantic value today placed on the instincts of the general public: the folk are said to be as wise as, or wiser than, experts. It is a political necessity and responsibility in a democratic society to take account of the technological "preferences" of the people, but this should never be confused with technological or scientific "wisdom". That road leads to a society none of us would want to inhabit.

There is no easy and sure scientific way to sift every claim, but there are good and bad judgements. That is the safety net protecting us against scientific populism. This populism is a way of evading the hard search for the grounds of knowledge by giving equal weight to everyone's frame of reference. We must keep hold of the idea that judgement, though never perfect, is generally done better by those who know what they are talking about.

From issue 2581 of New Scientist magazine, 09 December 2006, pages 46-48

Harry Collins is Distinguished Research Professor at the School of Social Sciences, Cardiff University, UK. His books include The Golem: What everyone should know about science (with Trevor Pinch, published by Cambridge University Press), and Gravity's Shadow and Dr Golem (with Trevor Pinch), both published by University of Chicago Press. His next book (with Rob Evans) will be Rethinking Expertise.