Opinion Article

Make researchers revisit past publications to improve reproducibility

[version 1; peer review: 3 approved, 1 approved with reservations]
PUBLISHED 21 Sep 2017

Abstract

Scientific irreproducibility is a major issue that has recently attracted increased attention from publishers, authors, funders and other players in the scientific arena. Published literature suggests that 50-80% of all science performed is irreproducible. While various solutions to this problem have been proposed, none of them are quick and/or cheap. Here, we propose one way of reducing scientific irreproducibility: asking authors to revisit their previous publications and provide a commentary after five years. We believe that this measure will alert authors not to oversell their results and will help with better planning and execution of their experiments. We invite scientific journals to adopt this proposal immediately as a prerequisite for publishing.

Keywords

Scientific irreproducibility, revisit past publications, reflection on past publications, improve scientific reproducibility, inflated research, bias, accountability

Introduction

Hardly a day goes by without a screed against perverse incentives in research. It goes like this: Scientists get better rewards for announcing breakthroughs than for producing solid work. The achievements needed to win grants, jobs, and publications, combined with researchers’ (often noble) ambitions, encourage them to build castles in the air.

After that comes a plea for large-scale change. One recent proposal would require scientists to complete rigid, time-consuming confirmation studies before publishing a single paper [1].

We propose something that is quicker, cheaper, and simpler: Require researchers to write post-publication reflections five years after their papers appear.

In these self-reviews, researchers would assess how their claims held up. They should describe whether an invention or discovery was translated or commercialized, and how (or whether!) others could build on their work. The practice would provide a straightforward, non-stigmatized way to identify errors, misinterpretations, and other roadblocks.

For many, these self-reviews would be a welcome opportunity for clarification, celebration, and even self-promotion. But the main advantage is that self-reviews would encourage scientists to think in advance about how they might be wrong.

Causes of irreproducibility

How might this work? Let’s consider the sources of irreproducibility. We put this down to a half-dozen causes, and often several occur together in the same paper. Fraud captures the most attention, but is rare. Self-deception, or bias, occurs aplenty. It is easier to attribute an observation to a hoped-for reason than to imagine trivial causes. Who wants to believe that a test result depends on the brand of test tube or the day of the week rather than the earliest detectable sign of disease?

Then there are unrecognized technical deficiencies: researchers who know how to operate a machine but lack enough experience to recognize artifacts and infelicities. They enter the wrong parameters or use the wrong pipette tips without realizing that they have rendered their data meaningless. Similarly, big data and data crunchers readily produce false interpretations. In 2007, one crystallographer had to retract five prominent papers after discovering a small computer glitch [2].

All of these problems are exacerbated by fragmented science. Projects are now executed in pieces in various laboratories, and the results are knitted together without anyone knowing exactly what happened at each site, so no one is able to bring sufficient scrutiny to bear.

In each of these cases, the problems are clear with hindsight. If post-publication self-review were commonplace, some of these problems would become apparent as experiments were being planned and conducted.

In our own lab, we have made a habit of reflecting on our papers (though not necessarily with a strict five-year timeframe). Though several papers led to work taken up by biotech companies and other scientists, others proved much less valuable than we had hoped. Bias and technical deficiencies are the most prominent reasons behind our papers that did not ‘succeed.’ That realization has made one of us a better mentor and supervisor over time. It has also led to several publications pointing out flaws in common reagents and lab practices.

Work by the psychologists Philip Tetlock and Jennifer Lerner suggests that simple steps meant to hold people accountable for their judgment calls actually improve their judgment [3]. People become more accurate in their thinking and more objective when they evaluate evidence.

Accountability in science is ad hoc. Researchers get credit for a publication well before enough time has passed for the scientific community to really know whether the paper has made a valuable contribution. No wonder that researchers bent on submitting a paper are obsessed with making the best possible case for its acceptance rather than illustrating its limitations. If researchers are forced to consider how well their paper will stand up five years hence, they will be more careful when doing the work and more critical in their analysis.

About ten years ago, one of us came up with the idea of a new journal, tentatively titled Reflections in Medicine, in which authors of prominent papers could publish their post-publication thoughts. We contacted about 20 prospective authors, all of whom ignored or refused the request. We believe some did not want to revisit problematic results.

With the advent of electronic publishing, it is now possible for journals (or funders or other platforms, such as PubMed) to create a space for these five-year reflections and to connect them with the original paper.

Self-evaluation, based on strict criteria and instructions, can be revealing even if the authors try to inflate the impact of old work. For example, the boldest claims in a scientific paper should be annotated and addressed directly in authors’ reflections. Researchers could also be asked a series of straightforward yes/no questions about whether the results of a paper have changed clinical or scientific practice.
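
To make this concrete, here is a minimal sketch of how such a structured self-evaluation might be recorded. The checklist questions, field names, and example answers are our own illustrative assumptions, not an existing journal or PubMed schema.

```python
from dataclasses import dataclass

# Hypothetical yes/no checklist for a five-year self-review.
# The questions below are illustrative examples, not a published standard.
QUESTIONS = (
    "Have the paper's boldest claims held up?",
    "Have the results changed clinical or scientific practice?",
    "Was an invention or discovery translated or commercialized?",
    "Have others been able to build on the work?",
)

@dataclass
class SelfReview:
    original_doi: str        # DOI of the paper being revisited
    answers: tuple           # one bool per entry in QUESTIONS
    commentary: str          # free-text reflection on the boldest claims

    def __post_init__(self) -> None:
        if len(self.answers) != len(QUESTIONS):
            raise ValueError("one answer is required per question")

# Example: a fictitious five-year self-review of this very article.
review = SelfReview(
    original_doi="10.12688/f1000research.12715.1",
    answers=(True, False, False, True),
    commentary="Widely discussed; journal uptake remains limited.",
)
for question, answer in zip(QUESTIONS, review.answers):
    print(f"{'yes' if answer else 'no':>3}  {question}")
```

Keeping the answers machine-readable is what would let journals and literature databases aggregate them, which matters for the flagging idea discussed next.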

Journals, funders, or research institutions could oblige scientists to write self-reflections. Failing to do so would be a red flag. One can imagine a system in which publications in reference lists or literature databases could be annotated as lacking self-review, and so taken less seriously.
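
Such a red flag could be computed mechanically. The function and record layout below are a hypothetical sketch, not any real literature database's API.

```python
from datetime import date
from typing import Optional

def lacks_self_review(published: date,
                      reflection_date: Optional[date],
                      today: Optional[date] = None) -> bool:
    """Flag a paper once five years have passed with no linked self-review."""
    today = today or date.today()
    # Clamp the day to 28 so a Feb 29 publication date cannot break the math.
    due = date(published.year + 5, published.month, min(published.day, 28))
    return today >= due and reflection_date is None

# Example: this article (published 21 Sep 2017) with no reflection on file
# would carry the flag in any search run after 21 Sep 2022.
print(lacks_self_review(date(2017, 9, 21), reflection_date=None,
                        today=date(2023, 1, 1)))  # prints: True
```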

With luck, care, and enthusiasm, this simple, inexpensive step would counter perverse incentives. Instead of being stigmatized for correcting a paper, researchers would be stigmatized for failing to do so. Junior scientists would learn by example how to read papers critically and design more-rigorous experiments. The public would learn that a paper is not a definitive statement, but a single contributor to a gradually emerging picture of how nature works.

In short, self-reflections could demote scientific papers to their rightful place and turn a vicious cycle into a virtuous one.

How to cite this article
Fiala C and Diamandis EP. Make researchers revisit past publications to improve reproducibility [version 1; peer review: 3 approved, 1 approved with reservations]. F1000Research 2017, 6:1717 (https://doi.org/10.12688/f1000research.12715.1)

Open Peer Review

Key to reviewer statuses:
Approved - The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.
Approved with reservations - A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.
Not approved - Fundamental flaws in the paper seriously undermine the findings and conclusions.
Version 1 (published 21 Sep 2017)
Reviewer Report 13 Nov 2017
Jake Cosme, Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada 
Approved
This review article brings forward a recommendation for mitigating the current issues of reproducibility in academic research by encouraging authors to perform post-publication self-review of their studies after 5 years.

The authors of the review article suggest …
How to cite this report
Cosme J. Reviewer Report For: Make researchers revisit past publications to improve reproducibility [version 1; peer review: 3 approved, 1 approved with reservations]. F1000Research 2017, 6:1717 (https://doi.org/10.5256/f1000research.13773.r27534)
Reviewer Report 07 Nov 2017
Morley D. Hollenberg, University of Calgary, Calgary, Alberta, T2N 4N1, Canada
Approved with Reservations
SYNOPSIS:
This overview deals with an enlarging key issue related to the reproducibility of published data in journals of all stripes, ranging from ‘high’ to ‘low’ impact quality. A ‘checkpoint’ process is proposed requiring authors to …
How to cite this report
Hollenberg MD. Reviewer Report For: Make researchers revisit past publications to improve reproducibility [version 1; peer review: 3 approved, 1 approved with reservations]. F1000Research 2017, 6:1717 (https://doi.org/10.5256/f1000research.13773.r26215)
Reviewer Report 06 Nov 2017
Edward W. Randell, Department of Laboratory Medicine, Eastern Health Authority & Faculty of Medicine, Memorial University, St. John's, NL, Canada
Approved
The long-term relevance of our published works is a point of both secret pride and disappointment for many continuing in careers with active engagement in research. This opinion article suggests that compelling researchers to reflectively evaluate their past publications …
How to cite this report
Randell EW. Reviewer Report For: Make researchers revisit past publications to improve reproducibility [version 1; peer review: 3 approved, 1 approved with reservations]. F1000Research 2017, 6:1717 (https://doi.org/10.5256/f1000research.13773.r27535)
Reviewer Report 25 Sep 2017
Georgios Pampalakis, Department of Pharmacy, School of Health Sciences, University of Patras, Patras, Greece 
Approved
Scientific irreproducibility is indeed a serious problem nowadays. In the current opinion article, the authors state the reasons that govern irreproducibility and, for the first time, they provide a potential method to treat such results. Importantly, their suggested method is …
How to cite this report
Pampalakis G. Reviewer Report For: Make researchers revisit past publications to improve reproducibility [version 1; peer review: 3 approved, 1 approved with reservations]. F1000Research 2017, 6:1717 (https://doi.org/10.5256/f1000research.13773.r26217)
