What's Going on With Reproducibility?

February 9th, 2017, Alexandria Denis, Tim Errington


You may have heard that the Reproducibility Project: Cancer Biology (RP:CB) team just released the results of its attempts to reproduce 5 major cancer biology studies. This is an early step in a much larger project that will revisit over 25 studies before it is done, but it has already been interesting. The results were mixed, and the press coverage was quite enthusiastic about what that might imply. We've put up a summary page here if you'd like to dive in.

It’s an important topic, and one that most folks in the scientific community are interested in. Here at COS, we think these early results say more about the research process itself than about any single finding. We are strong believers that the more open, available, and collaborative research is, the more efficient it will be. Our primary goal in undertaking this project is to find out how challenging it is to reproduce published results, and what factors contribute to success or failure, so we can share that information with the research community.

Another goal of the project is to help clarify what "reproducible" means. The tendency is to answer the question, “Was the study replicable?” with a binary “yes” or “no,” but the reality is much more nuanced. One definition of “replicate” is to reproduce the original processes and results, but within what margin of error? Multiple, complex factors determine that margin. Another measure of replication is subjective assessment: as part of the RP:CB, we are gathering this information through a formal survey of independent scientists, and we will analyze these data at the end of the project along with other aspects of the process.

So what do these initial results tell us?

  1. Replication is hard, but critically important. The variation in our results shows that: replication is messy, inconsistent, siloed, and challenging. Science demands collaboration; the more minds that tackle a research hypothesis, the more parameters can be tested, controlling for unexpected biases and ultimately yielding a more accurate and efficient accumulation of knowledge.

  2. This project allows us to dive deeper, look harder at how we do things, and improve. The RP:CB studies, though still preliminary, are helping to reveal exactly why replication studies aren't done more often, why they take so long, how much they cost, and what we can do to make them easier.

  3. Getting information about the original work can be extremely difficult and time-consuming. Improving reproducibility requires a shift in the incentives that drive researchers’ behavior, the infrastructure that supports research, and the business models that dominate scholarly communication. A key challenge is that the decentralized nature of the scholarly community creates a coordination problem. Cooperation is hard to get, but it’s healthy for the practice of science.

The challenge of reproducibility has commonalities across disciplines. The causes of differing outcomes between original and replication studies have implications for understanding the phenomenon itself. For example, the conditions necessary to obtain a result may not yet be understood. There may be many reasons for this: poor data quality, incomplete instructions or processes, or missing components.

  4. Openness and transparency are critical to moving research forward. Publication is a starting point, not an end point. Science is connected, and every discovery is part of a bigger story. The point of replication is to increase the likelihood that shortcomings in the process will be resolved in advance through better transparency, planning, and communication. The more we know about how something was done and what tools were used, the easier it is to try it again.

  5. Preregistration is critical to the replication process. Preregistration is important because it increases the credibility of hypothesis testing by confirming in advance what will be analyzed and reported. If we don't commit in advance to what we are going to test, then it's really difficult to know if we succeeded.

It's still really early in this project, and we will undoubtedly learn many more valuable lessons as more results come in. But the preliminary look already tells us we have plenty to work on.

