Editorial

Publishing confirming and non-confirming data

[version 1; peer review: not peer reviewed]
PUBLISHED 04 Feb 2016

This article is included in the Preclinical Reproducibility and Robustness gateway.

Abstract

This editorial introduces the Preclinical Reproducibility and Robustness channel on F1000Research, which has been created to encourage and facilitate open and transparent publication and discussion of confirmatory and non-confirmatory studies in biomedical research.

Keywords

Reproducibility, Robustness, Amgen

Editorial

In 2012 Begley and Ellis shocked the academic community by reporting that scientists at Amgen, a major biotech company, could not replicate the findings of nearly 90% of 53 high-profile oncology publications1. This study followed other, less publicized disclosures questioning the reliability of conclusions in the biomedical literature2–5. Today, scientific leaders, patient advocates—even economists—are coming forward in greater numbers to challenge the quality and efficiency of medical research. New efforts have begun to explicitly repeat a sample of the research reported in high-profile publications. For the psychology field, a recent paper in Science magazine reports that "a large portion of replications produced weaker evidence for the original findings"6, and a contract research organization has been funded to begin a systematic effort to reproduce experiments from 50 high-impact oncology publications7. We write to promote an additional effort to improve scientific standards, one aimed at strengthening the self-correcting nature of science through the widespread, rapid publication of the failures (as well as the successes) of attempts to reproduce published scientific findings.

Scientific publication has long been the critical mechanism for conveying scientific data to the world, allowing the conclusions of one team of scientists to be tested by others and, if confirmed, to be extended into the unknown. The model of nature that science has erected is thereby built up detail by detail, layer upon layer, on a foundation of self-consistent logic and experiment. Because science depends on observations that are verifiable, science is at its core self-correcting. But the process of self-correction in science must be improved. Today, many intriguing, but non-robust conclusions that remain unchallenged in the biomedical literature create opportunity costs for drug development, forcing both the biopharmaceutical industry and academic scientists to devote major resources to validating, rather than extending, results.

The vast majority of scientists are well-intentioned, and they want to achieve, discover and invent. They strive to uncover the truth about nature and to get the credit for doing so. But science is a human endeavor. Although intellect, diligence, aspiration and passion drive inexorable progress, ego and unconscious bias are also involved. All scientists have encountered non-robust findings in the literature. We know that deliberate fraud is rare, but wishful thinking is common, and potentially dangerous.

Writers, readers, reviewers, granters, and editors all prefer a good story. This automatically introduces a bias into the publication process favoring positive results. Non-confirming data are communicated much less frequently and effectively than are provisional, positive results. For this reason, energetic steps must be taken to make the publication of scientific results more balanced. Scientists must create new paths and rewards that lead to much more efficient, rapid publication of failures to reproduce published results. The scientific community urgently requires unobstructed visibility of non-confirming datasets, with clear expositions of the materials used and experiments conducted. Scientists can then compare the datasets to form their own conclusions. We cannot require perfection prior to publication, which would be at best impractical, and at worst disastrous. But we can make the inevitable imperfections easier to identify.

Industry can enhance the self-correcting nature of science by organizing a robust effort to publish both its non-confirming and its confirming results, while simultaneously encouraging publication of such validation experiments by academic scientists. This editorial announces the creation of a special online channel on the open science platform F1000Research that will be dedicated to publications of non-confirming and confirming results. The Preclinical Reproducibility and Robustness channel will focus on thorough reporting of the methods used, and ensure access to the source data underlying the findings. Amgen scientists have just published three such research articles on this channel as an initial effort to stimulate this process of science self-evaluation.

Each submission to this channel will undergo a fully transparent post-publication peer review following the F1000Research publication model. Referee reports from invited named experts will be posted alongside the article, allowing readers to get a full picture of the soundness of the validation experiments. The original authors can provide signed comments on the article, or publish their own full Correspondence article (for peer review) in the channel if they have further evidence (published or new) that adds to the discussion. It is our hope that, both through this format and others, a vigorous new publishing culture can be established to enhance the crucial self-correcting feature of science.

Comments on this article (2)

Version 1 (published 04 Feb 2016)

  • Reader Comment, 07 Mar 2016
    C. Glenn Begley, TetraLogic, USA
    Congratulations!
    This is a great initiative!
    The problem of sloppy science is evident in almost every issue of one of the "top tier journals".
    The authors, Amgen and F1000 are to be congratulated …
  • Reader Comment, 10 Feb 2016
    Jasper Rine, University of California, Berkeley, USA
    Kudos to Kamb and Alberts for launching this mechanism of communicating problems with published data sets to the scientific community. In my experience, studies that provide non-confirmatory results run the …

How to cite this article
Alberts B and Kamb A. Publishing confirming and non-confirming data [version 1; peer review: not peer reviewed]. F1000Research 2016, 5:135 (https://doi.org/10.12688/f1000research.7847.1)
