Editorial

Should there be greater use of preprint servers for publishing reports of biomedical science?

[version 1; peer review: not peer reviewed]
PUBLISHED 03 Mar 2016

This article is included in the Research on Research, Policy & Culture gateway.

Abstract

Vitek Tracz and Rebecca Lawrence declare the current journal publishing system to be broken beyond repair. They propose that it should be replaced by immediate publication followed by transparent peer review as the starting place for more open and efficient reporting of science. While supporting this general objective, we suggest that research is needed both to understand why biomedical scientists have been slow to take up preprint options and to assess the relative merits of this and other alternatives to journal publishing.

Keywords

preprint, publishing, peer review, journals, biomedicine

Editorial

Vitek Tracz (Chairman of F1000, which includes F1000Research) and Rebecca Lawrence (Managing Director of F1000) declare the current journal publishing system to be broken beyond repair (Tracz & Lawrence, 2016). They propose much greater use of immediate publication of articles as the starting place for open, interactive dialogues between study authors and commentators on reports of research (whether or not commentators are designated formally as ‘peer reviewers’). Logically, over the course of one or more iterations, this interaction should promote the evolution of more ‘stable’, trustworthy and useful records of research. Furthermore, new technology provides a much more efficient system for dealing with submitted reports of research than that offered by most current journals using traditional forms of peer review.

As Tracz and Lawrence recognise, although the use of preprints for reporting research was introduced by particle physicists decades ago, it has not attracted much tangible support from life scientists. Why is this, and what might be the downsides of their proposed preprint-led revolution?

Part of the explanation seems likely to reflect reasonable concern about the danger that preprints reporting flawed research may prompt unwarranted changes in clinical practice that harm patients. Tabor (2016), commenting on a recent call for a prepublication culture in clinical research like that established in some areas of physics (Lauer et al., 2015), noted that “clinical studies of poor quality can harm patients who might start or stop therapy in response to faulty data, whereas little short-term harm would be expected from an unreviewed astronomy study”.

Reasonable people can identify with this concern about the potential dangers of an increased use of preprints in biomedicine. However, other reasons for the poor uptake of preprint opportunities seem likely to be less worthy, often perverse: financial or academic conflicts of interest, or simple acquiescence in sloppy science.

Despite their recognition of the increasingly acknowledged importance of collaboration, the model proposed by Tracz and Lawrence focuses on “individual researcher’s scientific output”, and they believe that the model they propose would work only if driven by authors “within a scientific framework that facilitates self-regulation”. This judgement ignores the abundant evidence that self-regulation by researchers cannot alone deal adequately with the many systemic problems, beyond journals themselves, that are leading to waste, bad science, bad ethics and harm to patients.

Although Tracz and Lawrence introduce their proposals with an allusion to “the need to remove the waste in the current system”, they don’t address sources of research waste other than current publishing processes, such as waste resulting from avoidable design flaws, and incomplete or misleading reporting (www.rewardalliance.net; Paul Glasziou and Iain Chalmers: Is 85% of health research really “wasted”?). The most clear-cut example of these systemic problems is biased underreporting of research: around half of all research goes unpublished, implying that tens of billions of dollars are wasted annually.

Although underreporting can cause avoidable suffering and death, appeals to the biomedical research community beginning in the mid-1980s to deal with the problem had no discernible impact until this century (https://clinicaltrials.gov/ct2/resources/trends). Change began when the International Committee of Medical Journal Editors required researchers to register controlled trials at inception (for example, at www.clinicaltrials.gov and www.isrctn.com), reinforced by the substantial public awareness of the scandal generated by publication of Ben Goldacre’s popular science book Bad Pharma and the launch of the AllTrials campaign (www.alltrials.net). After these direct appeals to the public to call the research community to account, increasing numbers of research funders and regulators began to require researchers to behave more responsibly (Moher et al., 2015). But public recognition that self-regulation by researchers has failed has eroded trust in the biomedical research enterprise more generally.

As far as reporting of clinical trials is concerned, there has been slow progress towards the ideal of ‘threaded documents’, beginning with publication of the protocol, later publication of summary results, and going through to deposition of the final dataset (Chalmers & Altman, 1999). But post-publication corrections, updates, simple linkages to similar studies and systematic reviews are also important for those trying to use, apply, or replicate studies.

We know remarkably little, formally, about why researchers do and don’t do the things that they do and don’t do. Some efforts to secure research funding to investigate why researchers don’t publish reports of their research have not been successful (Professor Mary Dixon-Woods, personal communication). If the attractive vision of a more efficient publishing model for the life sciences is to be promoted effectively, research is needed to find answers to the questions raised by Tracz and Lawrence themselves: why are researchers reluctant to post preprints, and will sufficient other researchers post useful and critical comments on them to make the effort worthwhile?

The current journal-based publication system is one of the important weak links leading to current waste from non-reporting and poor reporting of research. There is no doubt that new models are needed, but these publishing experiments also warrant formal evaluation and comparison to assess which of them delivers most effectively the advances needed. As far as clinical trials are concerned, methods and summary results are increasingly being posted on trials registries. More broadly, the National Institute for Health Research in England now requires publication of detailed reports for all the health technology assessments it supports. Among journals, PLOS ONE’s decisions to publish a report are now based on whether the research has used valid methods, regardless of the results.

Tracz and Lawrence and F1000 are well placed to foster the exploratory and evaluative research needed to inform future developments in science publishing, and we very much hope they will do so.

How to cite this article
Chalmers I and Glasziou P. Should there be greater use of preprint servers for publishing reports of biomedical science? [version 1; peer review: not peer reviewed]. F1000Research 2016, 5:272 (https://doi.org/10.12688/f1000research.8229.1)
NOTE: If applicable, it is important to ensure the information in square brackets after the title is included in all citations of this article.