bad science, corrupted science
"Science, the pride of modernity, our one source of objective knowledge, is in deep trouble." So begins "Saving Science," an incisive and deeply disturbing essay by Daniel Sarewitz at The New Atlantis. As evidence, Sarewitz, a professor at Arizona State University's School for Future Innovation and Society, points to reams of mistaken or simply useless research findings that have been generated over the past decades.

Sarewitz cites several examples of bad science that I reported in my February article "Broken Science." These include a major biotech company's 2012 finding that only six of 53 landmark published preclinical cancer studies could be replicated. Researchers at a leading pharmaceutical company reported that they could not replicate 43 of the 67 published preclinical studies the company had been relying on to develop cancer and cardiovascular treatments and diagnostics. And in 2015, only about a third of 100 psychological studies published in three leading psychology journals could be adequately replicated.

A 2015 editorial in The Lancet observed that "much of the scientific literature, perhaps half, may simply be untrue." A 2015 report from the U.K.'s Academy of Medical Sciences suggested that the false discovery rate in some areas of biomedicine could be as high as 69 percent. And in an email exchange with me, the Stanford biostatistician John Ioannidis estimated that the non-replication rate in biomedical observational and preclinical studies could be as high as 90 percent.

Sarewitz also notes that 1,000 peer-reviewed and published breast cancer research studies turned out to have used a skin cancer cell line instead of breast cancer cells. Furthermore, when amyotrophic lateral sclerosis researchers retested more than 100 potential drugs that had been reported to slow disease progression in mouse models, none proved beneficial in the same mouse strains. A 2016 article suggested that fMRI brain imaging studies could suffer from false-positive rates as high as 70 percent. And decades of nutritional dogma about the alleged health dangers of salt, fats, and red meat appears to be wrong.

And then there is the huge problem of epidemiology, which manufactures false positives by the hundreds of thousands. In the last decade of the 20th century, some 80,000 observational studies were published; between 2001 and 2011, the number more than tripled, to nearly 264,000. S. Stanley Young of the U.S. National Institute of Statistical Sciences has estimated that only 5 to 10 percent of those observational studies can be replicated. "Within a culture that pressures scientists to produce rather than discover, the outcome is a biased and impoverished science in which most published results are either unconfirmed genuine discoveries or unchallenged fallacies," four British neuroscientists bleakly concluded in a 2014 editorial for the journal AIMS Neuroscience.

Some alarmed researchers refer to this situation as the "reproducibility crisis," but Sarewitz convincingly argues that they are not getting to the real source of the rot. The problem starts with the notion, propounded in the MIT technologist Vannevar Bush's famous 1945 report Science: The Endless Frontier, that scientific progress "results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown." Sarewitz calls this a "beautiful lie."

Why is it a lie? Because it makes "it easy to believe that scientific imagination gives birth to technological progress, when in reality technology sets the agenda for science, guiding it in its most productive directions and providing continual tests of its validity, progress, and value." He adds, "Technology keeps science honest." Basically, research detached from trying to solve well-defined problems spins off self-validating, career-enhancing publications like those breast cancer studies that were actually using skin cancer cells. Yet no patients were cured of breast cancer. The "truth test" of technology is the most reliable way to tell whether the knowledge allegedly being generated by research is valid. "The scientific phenomena must be real or the technologies would not work," Sarewitz explains.

Sarewitz points out that the military-industrial complex—the very force from which Vannevar Bush was eager to escape—generated the targeted scientific results that led to many of the technologies that have made the modern world possible, including digital computers, jet aircraft, cell phones, the internet, lasers, satellites, GPS, digital imagery, and nuclear and solar power. He's not suggesting that the Department of Defense should be in charge of scientific research. He's arguing that research should be aimed more directly at solving specific problems, as opposed to a system where researchers torture some cells and lab mice and then publish a dubious paper. An example of the kind of targeted scientific work he favors is the National Breast Cancer Coalition's Artemis project, whose goal is to develop an effective breast cancer vaccine by 2020.

"Academic science, especially, has become an onanistic enterprise worthy of Swift or Kafka," Sarewitz declares. He wants end-user constituencies—patient advocacy groups, environmental organizations, military planners—outside of academia to have a much bigger say in setting the goals for publicly funded research. "The questions you ask are likely to be very different if your end goal is to solve a concrete problem, rather than only to advance understanding," he argues. "That's why the symbiosis between science and technology is so powerful: the technology provides focus and discipline for the science."

And there's a bigger problem. In his 1972 essay "Science and Trans-Science," the physicist Alvin Weinberg noted that science is increasingly being asked to address issues such as the deleterious side effects of new technologies, or how to deal with social problems like crime and poverty. These are questions that, "though they are, epistemologically speaking, questions of fact and can be stated in the language of science, they are unanswerable by science; they transcend science." Such trans-scientific questions inevitably involve values, assumptions, and ideology. Consequently, Weinberg wrote, attempting to answer trans-scientific questions "inevitably weaves back and forth across the boundary between what is known and what is not known and knowable."

"The great thing about trans-science is that you can keep on doing research," Sarewitz observes, "You can...create the sense that we're gaining knowledge...without getting any closer to a final or useful answer." Some contemporary trans-scientific questions: "Are biotech crops necessary to feed the world?" "Does exposure to synthetic chemicals deform penises?" "Do open markets benefit all countries?" "What will the costs of man-made global warming be in a century?" "What can be done about rising obesity rates?" "Does standardized testing improve educational outcomes?" All of these depend on debatable assumptions or are subject to confounders that make it impossible to be sure that the correlations uncovered are actually causal.

Consider climate change. "The vaunted scientific consensus around climate change," notes Sarewitz, "applies only to a narrow claim about the discernible human impact on global warming. The minute you get into questions about the rate and severity of future impacts, or the costs of and best pathways for addressing them, no semblance of consensus among experts remains." Nevertheless, climate "models spew out endless streams of trans-scientific facts that allow for claims and counterclaims, all apparently sanctioned by science, about how urgent the problem is and what needs to be done."

Vast numbers of papers have been published attempting to address these trans-scientific questions, Sarewitz observes. They provide anyone engaged in these debates with overabundant supplies of "peer-reviewed and thus culturally validated truths that can be selected and assembled in whatever ways are necessary to support the position and policy solution of your choice." It's confirmation bias all the way down.

The advent of big data also worries Sarewitz. Dredging the massive new datasets generated by an already badly flawed research enterprise will produce huge numbers of meaningless correlations. Since the integrity of the output depends on the integrity of the input, big data science risks becoming a flood of garbage in, garbage out (GIGO). Sarewitz warns, "The scientific community and its supporters are now busily creating the infrastructure and the expectations that can make unreliability, knowledge chaos, and multiple conflicting truths the essence of science's legacy."
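To make that data-dredging dynamic concrete, here is a minimal, hypothetical simulation of my own (not from Sarewitz's essay): test enough purely random variables against a purely random outcome at the conventional p < 0.05 threshold, and a steady stream of spurious "discoveries" appears anyway. All of the names and dataset sizes below are made up for illustration.

```python
# Hypothetical illustration (mine, not Sarewitz's): dredging pure noise at scale
# still produces a steady stream of "statistically significant" correlations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_features = 500, 2000           # made-up sizes: 500 people, 2,000 measured variables
outcome = rng.normal(size=n_subjects)        # an outcome unrelated to anything
features = rng.normal(size=(n_subjects, n_features))  # "exposures" that are pure noise

false_positives = sum(
    stats.pearsonr(features[:, j], outcome)[1] < 0.05   # conventional significance cutoff
    for j in range(n_features)
)

# By construction none of these associations are real, yet roughly 5 percent of the
# tests (about 100 here) clear the bar: garbage in, garbage out at dataset scale.
print(f"{false_positives} spurious 'discoveries' out of {n_features} noise variables")
```

The point of the sketch is only that the baseline false-positive rate scales with the number of comparisons, so the bigger the dataset being dredged, the larger the raw supply of publishable-looking but meaningless correlations.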

Ultimately, science can be rescued if researchers are directed more toward solving real-world problems rather than pursuing the beautiful lie. Sarewitz argues that in the future, the most valuable scientific institutions will be those that are held accountable and give scientists incentives to solve urgent, concrete problems. The goal of such science will be to produce useful new technologies, not useless new studies. In the meantime, Sarewitz has made a strong case that contemporary "science isn't self-correcting, it's self-destructing."

Ronald Bailey is a science correspondent at Reason magazine and author of The End of Doom (July 2015).