
Biomedical Odyssey

Life at the Johns Hopkins School of Medicine


Solving the Scientific Reproducibility Crisis

In an ideal world, reproducibility would be a cornerstone of scientific research: the scientific method should yield conclusions that are as close to the truth as possible. In reality, the reproducibility of results is a constant worry.


According to a recent survey by Nature, scientific irreproducibility has reached crisis levels. The inability to reproduce published results is much more than a headache for the graduate student working on the next big question. It wastes a great deal of time and money in labs attempting to build on a foundation of irreproducible data. In the long term, it delays the progression of scientific knowledge and, in a translational setting, could postpone better treatments and cures for patients.

How commonplace is irreproducibility? About half of Nature’s survey respondents had been unable to reproduce another group’s results, and a slightly smaller proportion had been unable to reproduce even their own.

Few attempts have been made to quantify the actual proportion of experiments that fail to be reproduced. In 2015, Science published a paper that quantified the reproducibility of 100 psychology experiments published in high-ranking journals in 2008; only about 40 percent of the experiments produced results close to those that were originally published. A similar cancer biology study published in Nature found that only about 10 percent of experiments could be reproduced with results representative of the original findings. Yet when asked, most scientists claimed that at least half of the published studies in their respective fields were correct.

If irreproducibility is really a crisis, how can science correct itself? In Nature’s survey, respondents were asked to tick, from a list of measures, those most likely to keep irreproducible results from being published. The responses were overwhelmingly positive for every suggestion. Even for the lowest-ranked measure, “more time checking notebooks,” about three-quarters of respondents thought it likely to boost reproducibility. The highest-ranked measures included “better understanding of statistics,” “better mentoring” and “more robust experimental designs.”

Nature provides a quote from mathematical biologist Irakli Loladze that hints at this broad agreement: “Reproducibility is like brushing your teeth. It is good for you, but it takes time and effort. Once you learn it, it becomes a habit.” Most of us do a minimum of brushing automatically, but probably fewer of us supplement it with flossing, mouthwash and frequent checkups, even though we would all agree these are good practices.

What researchers need is for their institutions to step in and make sure labs are developing good reproducibility habits. Reviewing data before they are submitted for publication, and verifying a clear link from hypothesis to conclusion, would catch cherry-picked significant results and statistical errors, the most common causes of irreproducibility, before they ever left the institution. A greater emphasis on quality over flashiness or speed may also reduce the pressure on principal investigators to get papers out the door before they have been properly scrutinized. Trainees, in turn, may feel less compelled to confirm their mentors’ expectations and more free to follow the data in an unbiased, hypothesis-driven manner.

