Opinion Article

FAIRness in scientific publishing

[version 1; peer review: 1 approved, 2 approved with reservations]
PUBLISHED 05 Dec 2016

This article is included in the Research on Research, Policy & Culture gateway.

This article is included in The Future of Scholarly Publishing collection.

Abstract

Major changes are afoot in the world of academic publishing, exemplified by innovations in publishing platforms, new approaches to metrics, improvements in our approach to peer review, and a focus on developing and encouraging open access to scientific literature and data. The FAIR acronym recommends that authors and publishers should aim to make their output Findable, Accessible, Interoperable and Reusable. In this opinion article, I explore the parallel view that we should take a collective stance on making the dissemination of scientific data fair in the conventional sense, by being mindful of equity and justice for patients, clinicians, academics, publishers, funders and academic institutions. The views I represent are founded on oral and written dialogue with clinicians, academics and the publishing industry. Further progress is needed to improve collaboration and dialogue between these groups, to reduce misinterpretation of metrics, to reduce inequity that arises as a consequence of geographic setting, to improve economic sustainability, and to broaden the spectrum, scope, and diversity of scientific publication.

Keywords

Academic publishing, peer review, impact factor, metrics, data visualization, open access

Introduction

Substantial and positive changes are currently underway in academic publishing; now is an important time to capitalize on the opportunity to explore the many potential benefits that can stem from new ways to share and disseminate scientific data1. Despite the improvements that are emerging, it remains the case that discussions in academia frequently focus on the pitfalls, frustrations and difficulties of publishing. Managing a piece of work from conception to publication can indeed be a long and complicated journey, and elements of the process can often feel ‘unfair’.

Advocates of data dissemination suggest that we should aspire to the principles enshrined in the ‘FAIR’ acronym; work should be Findable, Accessible, Interoperable and Reusable2. However, as well as endorsing these attributes of any work, I here represent the view that we should also develop a collective responsibility to make data sharing fair in the conventional sense; the way we generate, represent, review, share and use data should also be underpinned by justice. This means our handling of the whole process is fair to everyone involved, including academic institutions, funders, authors, reviewers, publishers, research participants and patients.

As well as being driven by ethical and moral imperatives to improve our approaches to publishing scientific research, questions around data sharing have to be set in the context of the exponential increases in the volume of data generated; a responsible and robust approach to archiving, cataloguing and managing access to such datasets is crucial to allow optimum, equitable and collaborative approaches to Big Data.

I embarked on this journey as a result of investigating the best way to publish a database. Rather than seeking output via a conventional ‘paper’, I was keen to produce something live, open access, creative, evolving, promoting new collaborations, and linking to other relevant resources. These discussions around the dissemination of my own data led to presentations at a meeting hosted by the University of Oxford’s Interactive Data Network (https://idn.web.ox.ac.uk/event/data-visualisation-and-future-academic-publishing) and subsequently at a conference of publishers (Annual meeting of the Association of Learned and Professional Society Publishers, http://www.alpsp.org/). In order to represent a wider cross-section of academic medicine, I collected opinions from my peers and colleagues within academic medicine using an online questionnaire (https://www.surveymonkey.co.uk/), and then followed this up with a parallel approach to seek feedback from the publishing industry.

This piece is a representation of some of the key themes that arose as a result of the two-pronged questionnaire, and the ongoing discussions between the medical research community and the publishing industry. The feedback that I present is intended to represent individual and collective opinion, to prompt and challenge further discussion, to build bridges between publishing and academia, and to help us move forward with constructive dialogue.

Questionnaire results

Details of the questionnaires, together with the entire dataset collected from each of the two surveys used to gather quantitative and qualitative feedback from 102 academics and 37 representatives of the publishing industry, are available to view and download as PPT files from references 3 and 4, respectively.

The feedback I have collated represents individual opinion, collective discussion, and sometimes emotion, and the resulting work is my own personal synthesis of this experience. This does not aspire to be a formal or scientific study, but rather to represent views on some important themes in academic publishing, and to underpin further dialogue.

Domains for discussion in academic publication

Timelines

Delays in the conventional routes to publication commonly amount to weeks and months consumed by submission, peer-review, editorial decisions, potential corrections and resubmission, followed not infrequently by a re-initiation of the whole process5. Among academic survey respondents, over 70% agreed or strongly agreed with the statement ‘I am frustrated by the length of time it takes to publish my work’3, and over 80% of publishers agreed that reducing the timelines involved in academic publication should be a ‘crucial priority’4.

Delays suppress and stifle scientific progress in a variety of ways. Over the long time courses of publication, data frequently decay such that they are out of date before they go to press, and it is impossible for authors to provide a written context for their work that reflects the very latest literature or advances in the field6,7. Delay also leads to academic paralysis: until their work is published, academics may refrain from presenting or discussing their work publicly, thereby limiting its impact, reducing the possibility of developments and collaborations, and allowing flaws and discrepancies to go unchallenged. There is also personal paralysis – delays can cause difficulty in progressing with the next phase of an existing body of work, moving on to a new project, recruiting a team, or applying for an academic post or funding3,7.

Reducing delays is clearly an important aspiration but one that comes with practical caveats. One publisher says: ‘Timeliness is important. So is quality control. The latter negatively impacts the former’4. In conventional models of publishing this may have been the case, but we should now strive to dismantle the view that long delays are an inevitable consequence of producing work that is robust, high quality, and endorsed by expert peer review. Happily, this framework is shifting as a result of parallel improvements in allowing academics to post their own work online, and in new approaches to post-publication peer review (discussed in more detail in the section below).

Peer review

Asked to respond to the statement ‘peer review functions equitably and contributes to improving the quality of my work’, 58% of academics agreed or strongly agreed3. This seems to reflect a general consensus that the process remains valuable and fit for purpose, though evidently tempered by background ambivalence and anxieties, and not endorsed by everyone.

Peer review is intended to provide quality assurance, a principle that is of universal importance to authors, readers, publishers, funders and academic institutions. However, no-one doubts the potential pitfalls of such a process: a reviewer may not be impartial, may be less expert than the authors of the work for which they are providing critique, may not give the task the time that it deserves, and may – on occasion – just get it wrong8. There can also be concern, as stated by one academic, that ‘creativity is stifled in this process’. On these grounds, peer review has continued to be accepted as the ‘least worst’ model8, only persisting for lack of a better alternative.

However, many new approaches to peer review are evolving, with support and enthusiasm from both academics and publishers3,4. These include:

  • Making peer reviews open access (e.g. F1000, https://f1000research.com and PeerJ, https://peerj.com/), or providing double-blind peer review8;

  • Using structured formats or templates for critical review, and developing collaborative peer review so that a consensus opinion is provided by a team (e.g. Frontiers, http://home.frontiersin.org/);

  • Promoting a model that seeks online feedback from the entire scientific community (now a component of many open access review systems, including those at https://f1000research.com);

  • Asking reviewers to suggest additional experiments only when these are deemed essential to the work and can be conducted within an agreed time frame (e.g. eLife, https://elifesciences.org/);

  • Improving editorial oversight and influence to ensure the process is conducted fairly and to arbitrate in cases where there is conflict of opinion.

Adjustments to the timeline that put publication ahead of review can also have substantial influence on the process. Authors have the potential to disseminate their work through pre-publication archives (e.g. BioRxiv, http://biorxiv.org/) or on data-sharing platforms (e.g. Figshare, https://figshare.com/). Alternatively, post-publication peer review has been adopted as an inventive compromise that reduces delays and promotes data sharing, without sacrificing a quality assurance framework, for example by the F1000Research and Wellcome Open Research platforms (https://f1000research.com, https://wellcomeopenresearch.org/)1,9.

Recognising and rewarding the substantial contribution made by reviewers is also crucial, and strides are being made in providing formal acknowledgement of the body of work undertaken by reviewers; this includes the potential to log reviewing activity in a systematic way (e.g. using Publons, https://home.publons.com/). Reviews themselves are becoming independently accredited pieces of scientific work that are a recognised part of a formal academic portfolio (including visibility on ORCID, http://orcid.org/), can be ranked and rated, are published with a DOI to make them accessible and citable, and can lead to the award of CME points10,11.

Barriers to communication

Much of the communication between academia and publishers flows in one direction, through rigid online portals. True open dialogue frequently seems to be lacking, potentially leading to frustrations on both sides. Only 23% of academic respondents agreed or strongly agreed that they would feel ‘comfortable and confident contacting editors and publishers to discuss work before submitting for publication’3. In response to the same question about dialogue with academics, publishers fared slightly better, with over half being comfortable pursuing dialogue4.

Only one in three academic respondents reported having experienced positive interactions with editors and publishers to help them present and disseminate their work in the best way. Interestingly, academics’ views on this point also reflect a degree of uncertainty about whether discussion with editors and publishers is appropriate at all: they raise concerns that this amounts to ‘coercion’ or is in some way ‘cheating’ the system3.

Collective suggestions for improving communication include more formal and public interdisciplinary discussion at workshops, conferences and seminars, as well as the more personal request from academics that editors and publishers provide a reliable and named point of contact for authors. There is also a collective responsibility for both publishers and academics to promote and participate in communication, to recognize the ways in which appropriate dialogue can improve the content or accessibility of our work, and to promote an environment in which we work in partnership.

Metrics

The impact factor, the most widely quoted metric, has disproportionate influence over the behaviour of academics, despite never being designed as a measure of the quality of any given piece of work5. To quote one publisher, impact factor is ‘embedded in researcher culture’4. Although it can still exert a very potent effect, there has been increasing recognition that the metrics of any individual piece of work should be of more importance than the metrics of the journal in which it is published, and that we should move away from assessing ourselves, or each other, based on this criterion7,12. It is also important to be mindful that citations can be relatively easy to amass for articles written on major topics, while work published in a niche field may be of equal scientific rigour and quality but reach a much smaller audience.
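
For context (an addition of mine, not drawn from the survey data), the two-year journal impact factor is conventionally calculated as a journal-level average, which underlines why it says little about any individual article:

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

Because the numerator is dominated by a small number of highly cited papers, the same journal-level figure can sit above a manuscript that is rarely cited and below one that is widely used.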

‘The impact factor is broken’ stated one academic medic3. Only 19% of publishers disagreed with this statement, and others added their own descriptions of the impact factor as ‘misused and outdated’, ‘obsolete’ and ‘a horrible obsession for editors and authors’4. We should collectively be encouraged to assess output using a much broader approach, for which an increasing number of tools are becoming available, including online resources such as Google Analytics (https://analytics.google.com/) or Google Scholar (https://scholar.google.com/), Altmetric (https://www.altmetric.com/), author-specific metrics such as the h-index, and, most importantly, the application of common sense to viewing and interpreting metrics in the right context12–14.
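
To illustrate one of the author-specific metrics mentioned above, the h-index is the largest number h such that an author has h publications with at least h citations each. The following is a minimal sketch of that calculation, written here for illustration only and not taken from the article or the survey data:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    # Rank citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, count in enumerate(ranked, start=1):
        # The paper at this rank supports an h-index of `position`
        # only if it has at least `position` citations.
        if count >= position:
            h = position
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

Even this simple metric shows why context matters: it rewards sustained citation across a body of work, and so, like the impact factor, it disadvantages authors working in smaller fields.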

Open access

Open access publication offers a system that should be inherently fair in promoting free access to published resources. However, the challenge to equity here is an economic one15. In a traditional, non-open-access model, the fees required for access to a journal or individual manuscript are frequently prohibitive for individuals; access therefore depends on institutional subscriptions. In the open access model, in order to make the work freely accessible to their readers, the publisher passes the costs on to their authors. Both systems discriminate strongly against those in less affluent settings.

Unsurprisingly, open access publication can influence article metrics, as those articles that are freely available may be more frequently accessed and cited16. So authors from wealthy institutions can potentially feed their own personal metrics by publishing their work in open access fora. In reality, the situation is more complicated, as the open access citation advantage is not consistent across studies17, many publishing houses waive fees for authors from under-resourced settings, and there are now increasing options for free data sharing (including those discussed above, such as self-publishing, archiving in online repositories, or pre-print publication).

Formatting requirements

Insisting on consistency in the presentation of scientific work can be one way in which individual publishers or journals contribute to quality control and maintain their unique identity through preservation of a ‘house style’. However, academics often see the process as an array of trivial but time-consuming formatting obligations, demanded of them before the work has even been accepted for publication, and without any appreciable benefit to quality3. In addition to manuscript formatting, multiple journal-specific details are frequently requested for online submission. Publishers hold a more diverse range of views, with an equal split between those in favour of relaxing (or unifying) formatting requirements, those with no strong opinion, and those who do not feel any change is required4.

Boundaries

The conveyor-belt process of conventional publication can be very constraining. An academic manuscript usually has to be assembled into a standardised package that meets strict formatting requirements, most obviously with respect to manuscript layout, length, and the number of figures, tables and references that can be included. This dates from the – now bygone – era in which a paper was indeed just that, printed across several pages of a glossy journal into whose binding it needed to be neatly fitted. Online publication should be providing an escape route from these constraints – albeit not one that has been consistently deployed or accepted.

However, there is also a broader boundary in operation which may be less immediately apparent – one that governs so strictly the fundamental nature of a piece of work, and that inhibits (or even prohibits) publication of a work-in-progress, an unproved hypothesis, or results that are negative, unexplained or in conflict with previous data. Only 9% of academics agreed with the statement ‘the process of publication is flexible, supports innovation, and allows me to be creative’, and none strongly agreed3.

This should be of significant concern in an era in which there is ever better recognition of the risks and costs associated with the suppression of negative results18,19. When new ideas and novel approaches underpin so much true scientific progress, why are such tight restraints imposed on the nature, style, content and substance of academic output? We should move towards a system that welcomes the publication of a diversity of innovation and ideas: there is much for us all to gain from encouraging dissemination of a wider body of work. This might include new concepts, methods and strategies, diverse commentary and critique, approaches that have been tried and failed, negative results, unfinished projects, protocols and registries for clinical trials, and live datasets that evolve over time.

The traditional publication of an academic ‘paper’ makes it impossible to add incremental advances or updates, and the only way to correct inconsistencies that emerge post-publication is to submit and publish a formal erratum. This is a substantial missed opportunity for quality improvement. The version control option offered by newer publishing platforms allows authors to maintain their work in its best possible form, adding updates, corrections and refinements, while preserving records of the original work. This is the approach I have ultimately been able to pursue for my own data, via the Wellcome Open Research platform (https://wellcomeopenresearch.org/)20.

Caveats to this work

The discussions represented here took place over a short time frame and are based on opinions collected from a small section of academia3 and from an even smaller slice of the publishing fraternity4. Taking the opportunity to share feedback from academic clinicians does not mean that I represent all academic clinicians, or that the views of other sectors of academia are congruent. Although I have engaged in productive and interesting discussions with publishers, as well as seeking written anonymous feedback, it is not possible for me to represent this sector cohesively, and further commentary is undoubtedly needed.

Future challenges

Despite the marked improvements, new ideas, and increased flexibility emerging around data sharing, there are still some substantial challenges to be addressed around the publication of academic data.

A publishing process perceived as equitable by one individual or institution may not operate in the best interests of another. In particular, we have a crucial collective responsibility to be mindful of the resource gap between different settings. Generating high quality scientific output, and publishing and disseminating this appropriately, is significantly influenced by access to library services, IT infrastructure, institutional access to online resources, funding, manpower and skills. Real fairness means reallocation of resources, waivers for institutions unable to pay access or publishing fees, better sharing of skill sets, balanced review, and capacity building in resource-poor settings21.

Diminishing or diluting quality is a potential concern as we enter an era in which a greater number of authors release a more diverse pool of work without pre-publication review. However, experts in the dissemination of open access literature have argued that market forces will tend to operate to maintain quality, and that the overall benefits of increasing data availability substantially outweigh any potential risk to quality22.

Change can be difficult; old habits die hard and new approaches to data sharing can be met with suspicion or opposition5. Many authors are either overtly or subliminally wedded to the idea of a journal based impact factor and to blind peer review. Some authors also express anxiety arising from the potential conflict between wanting to share their output yet needing to retain ownership of the work. Substantial power is still held by a small subset of traditional journals and editorial boards; the undue influence of the publishing industry on science output has even been described as ‘toxic’23. It will take time for confidence in the newer publishing systems and models to grow. Vigilance is required for ‘predatory’ journals that often send unsolicited emails trying to entice authors with offers including rapid and open access publication, but that may not deliver on their promises, fail to provide suitable peer review, or publish the work only on receipt of a substantial fee9,21,24.

I have not set out to include detailed discussion of economic cost, but it is clear that a substantial financial investment is crucial to support innovative approaches to publishing, to develop new metrics, to support accredited peer review, and to maintain publishing platforms ranging from journals to internet sites. Academia has to be willing to accept and underwrite these costs, and the publishing industry to develop a system that is lean and competitive, and that offers value for money.

Conclusions

We are in an era in which the process of disseminating scientific work is becoming quicker and more flexible, in which we can retain ownership while gaining the benefits of public sharing, in which metrics are more about our own output than the collective assessment of the journal that publishes our work, and in which a ‘paper’ no longer has to be a carbon-copy manuscript of a pre-specified length and format.

There is still much progress to be made. We should continue to be flexible, creative and open-minded in developing the best ways to present and share scientific work. The process has to be underpinned by good communication between academia and publishing, and significant effort is required to dismantle taboos around communication, particularly the view that open dialogue is in some way ‘cheating’ the system. We should be more discerning about metrics, using them appropriately and in context, and not allowing impact factor to drive behaviour, stifle creativity or delay output. Careful thought is required to support, develop and sustain output from under-resourced settings, and to ensure that diverse options for data dissemination and access are not confined to wealthy institutions in rich countries.

As well as promoting the FAIR principles, changes in the way we publish scientific output are increasingly moving towards a process that is genuinely fair – something that is timely, that we can all access and judge for ourselves but that can still be scrutinized by a process of equitable peer review, that demands rigour and scrutiny while at the same time making efforts to minimise delays, that can be shared, reproduced and collectively applied for the advancement of understanding.
