Keywords
Academic publishing, peer review, impact factor, metrics, data visualization, open access
This article is included in the Research on Research, Policy & Culture gateway.
This article is included in The Future of Scholarly Publishing collection.
Substantial and positive changes are currently underway in academic publishing; now is an important time to capitalize on the opportunity to explore the many potential benefits that can stem from new ways to share and disseminate scientific data1. Despite the improvements that are emerging, it remains the case that discussions in academia frequently focus on the pitfalls, frustrations and difficulties of publishing. Managing a piece of work from conception to publication can indeed be a long and complicated journey, and elements of the process can often feel ‘unfair’.
Advocates of data dissemination suggest that we should aspire to the principles enshrined in the ‘FAIR’ acronym; work should be Findable, Accessible, Interoperable and Reusable2. However, as well as endorsing these attributes of any work, I here represent the view that we should also develop a collective responsibility to make data sharing fair in the conventional sense; the way we generate, represent, review, share and use data should also be underpinned by justice. This means our handling of the whole process is fair to everyone involved, including academic institutions, funders, authors, reviewers, publishers, research participants and patients.
As well as being driven by ethical and moral imperatives to improve our approaches to publishing scientific research, questions around data sharing have to be set in the context of the exponential increases in the volume of data generated; a responsible and robust approach to archiving, cataloguing and managing access to such datasets is crucial to allow optimum, equitable and collaborative approaches to Big Data.
I embarked on this journey as a result of investigating the best way to publish a database. Rather than seeking output via a conventional ‘paper’, I was keen to produce something live, open access, creative, evolving, promoting new collaborations, and linking to other relevant resources. These discussions around the dissemination of my own data led to presentations at a meeting hosted by the University of Oxford’s Interactive Data Network (https://idn.web.ox.ac.uk/event/data-visualisation-and-future-academic-publishing) and subsequently at a conference of publishers (Annual meeting of the Association of Learned and Professional Society Publishers, http://www.alpsp.org/). In order to represent a wider cross-section of academic medicine, I collected opinions from my peers and colleagues within academic medicine using an online questionnaire (https://www.surveymonkey.co.uk/), and then followed this up with a parallel approach to seek feedback from the publishing industry.
This piece is a representation of some of the key themes that arose as a result of the two-pronged questionnaire, and the ongoing discussions between the medical research community and the publishing industry. The feedback that I present is intended to represent individual and collective opinion, to prompt and challenge further discussion, to build bridges between publishing and academia, and to help us move forward with constructive dialogue.
Details of the questionnaires, and the entire dataset collected from each of the two questionnaires used to collect quantitative and qualitative feedback from 102 academics and 37 representatives of the publishing industry, are available to view and download as PPT files from references 3 and 4 respectively.
The feedback I have collated represents individual opinion, collective discussion, and sometimes emotion, and the resulting work is my own personal synthesis of this experience. This does not aspire to be a formal or scientific study, but rather to represent views on some important themes in academic publishing, and to underpin further dialogue.
Delays in the conventional routes to publication commonly amount to weeks and months consumed by submission, peer-review, editorial decisions, potential corrections and resubmission, followed not infrequently by a re-initiation of the whole process5. Among academic survey respondents, over 70% agreed or strongly agreed with the statement ‘I am frustrated by the length of time it takes to publish my work’3, and over 80% of publishers agreed that reducing the timelines involved in academic publication should be a ‘crucial priority’4.
Delays suppress and stifle scientific progress in a variety of ways. Over the long time courses of publication, data frequently decay such that they are out of date before they go to press, and it is impossible for authors to provide a written context for their work that reflects the very latest literature or advances in the field6,7. Delay also leads to academic paralysis: until their work is published, academics may refrain from presenting or discussing their work publicly, thereby limiting its impact, reducing the possibility of developments and collaborations, and allowing flaws and discrepancies to go unchallenged. There is also personal paralysis – delays can cause difficulty in progressing with the next phase of an existing body of work, moving on to a new project, recruiting a team, or applying for an academic post or funding3,7.
Reducing delays is clearly an important aspiration but one that comes with practical caveats. One publisher says: ‘Timeliness is important. So is quality control. The latter negatively impacts the former’4. In conventional models of publishing this may have been the case, but we should now strive to dismantle the view that long delays are an inevitable consequence of producing work that is robust, high quality, and endorsed by expert peer review. Happily, this framework is shifting as a result of parallel improvements in allowing academics to post their own work online, and in new approaches to post-publication peer review (discussed in more detail in the section below).
Asked to respond to the statement ‘peer review functions equitably and contributes to improving the quality of my work’, 58% of academics agreed or strongly agreed3. This seems to reflect a general consensus that the process remains valuable and fit for purpose, though evidently tempered by background ambivalence and anxieties, and not endorsed by everyone.
Peer review is intended to provide quality assurance, a principle that is of universal importance to authors, readers, publishers, funders and academic institutions. However, no-one doubts the potential pitfalls of such a process: a reviewer may not be impartial, may be less expert than the authors of the work for which they are providing critique, may not give the task the time that it deserves, and may – on occasion – just get it wrong8. There can also be concern, as stated by one academic, that ‘creativity is stifled in this process’. On these grounds, peer review has continued to be accepted as the ‘least worst’ model8, only persisting for lack of a better alternative.
However, many new approaches to peer review are evolving, with support and enthusiasm from both academics and publishers3,4. These include:
Making peer reviews open access (e.g. F1000, https://f1000research.com and PeerJ, https://peerj.com/), or providing double-blind peer review8;
Using structured formats or templates for critical review, and developing collaborative peer review so that a consensus opinion is provided by a team (e.g. Frontiers, http://home.frontiersin.org/);
Promoting a model that seeks online feedback from the entire scientific community (now a component of many open access review systems, including those at https://f1000research.com);
Asking reviewers to suggest additional experiments only when these are deemed essential to the work and can be conducted within an agreed time frame (e.g. eLife, https://elifesciences.org/);
Improving editorial oversight and influence to ensure the process is conducted fairly and to arbitrate in cases where there is conflict of opinion.
Adjustments to the timeline that put publication ahead of review can also have substantial influence on the process. Authors have the potential to disseminate their work through pre-publication archives (e.g. BioRxiv, http://biorxiv.org/) or on data-sharing platforms (e.g. Figshare, https://figshare.com/). Alternatively, post-publication peer review has been adopted as an inventive compromise that reduces delays and promotes data sharing, without sacrificing a quality assurance framework, for example by the F1000Research and Wellcome Open Research platforms (https://f1000research.com, https://wellcomeopenresearch.org/)1,9.
Recognising and rewarding the substantial contribution made by reviewers is also crucial, and strides forward are afoot in providing formal acknowledgement of the body of work undertaken by reviewers; this includes the potential for logging this in a systematic way (e.g. using Publons, https://home.publons.com/). Reviews themselves are becoming independently accredited pieces of scientific work that are a recognised part of a formal academic portfolio (including visibility on ORCID, http://orcid.org/), can be ranked and rated, are published with a DOI to make them accessible and citable, and can lead to the award of CME points10,11.
Much of the communication between academia and publishers flows in one direction, through rigid online portals. True open dialogue frequently seems to be lacking, potentially leading to frustrations on both sides. Only 23% of academic respondents agreed or strongly agreed that they would feel ‘comfortable and confident contacting editors and publishers to discuss work before submitting for publication’3. In response to the same question about dialogue with academics, publishers fared slightly better with over half being comfortable pursuing dialogue4.
Only one in three academic respondents reported having experienced positive interactions with editors and publishers to help them present and disseminate their work in the best way. Interestingly, academics’ views on this point also reflect a degree of uncertainty about whether discussion with editors and publishers is appropriate at all: they raise concerns that this amounts to ‘coercion’ or is in some way ‘cheating’ the system3.
Collective responses to how communication should be improved include the need for improving formal and public interdisciplinary discussion at workshops, conferences and seminars, as well as the more personal view from academics who ask editors and publishers to provide a reliable and named point of contact for authors. There is also a collective responsibility for both publishers and academics to promote and participate in communication, to recognize the ways in which appropriate dialogue can improve the content or accessibility of our work, and to promote an environment in which we work in partnership.
The impact factor, the most widely quoted metric, has disproportionate influence over the behaviour of academics, despite never being designed as a measure of the quality of any given piece of work5. To quote one publisher, impact factor is ‘embedded in researcher culture’4. Although it can still exert a very potent effect, there has been increasing recognition that the metrics of any individual piece of work should be of more importance than the metrics of the journal in which it is published, and that we should move away from assessing ourselves, or each other, based on this criterion7,12. It is also important to be mindful that citations can be relatively easy to amass for articles written on major topics, while work in a niche field may be of equal scientific rigour and quality but reach a much smaller audience.
‘The impact factor is broken’ stated one academic medic3. Only 19% of publishers disagreed with this statement, and others added their own descriptions of the impact factor as ‘misused and outdated’, ‘obsolete’ and ‘a horrible obsession for editors and authors’4. We should collectively be encouraged to assess output using a much broader approach, for which an increasing number of tools is becoming available, including online resources such as Google Analytics (https://analytics.google.com/) or Google Scholar (https://scholar.google.com/), Altmetric (https://www.altmetric.com/), author-specific metrics such as h-index, and – most importantly - the application of common sense to viewing and interpreting metrics in the right context12–14.
Open access publication offers a system that should be inherently fair in promoting free access to published resources. However, the challenge to equity here is an economic one15. In a traditional, non open access model, the fees required for access to a journal or individual manuscript are frequently prohibitive for individuals; access therefore depends on institutional subscriptions. In the open access model, in order to make the work freely accessible to their readers, the publisher passes the costs on to their authors. Both systems discriminate strongly against those in less affluent settings.
Unsurprisingly, open access publication can influence article metrics, as those articles that are freely available may be more frequently accessed and cited16. So authors from wealthy institutions can potentially feed their own personal metrics by publishing their work in open access fora. In reality, the situation is more complicated, as the open access citation advantage is not consistent across studies17, many publishing houses waive fees for authors from under-resourced settings, and there are now increasing options for free data sharing (including those discussed above, such as self-publishing, archiving in online repositories, or pre-print publication).
Insisting on consistency in the presentation of scientific work can be a way that individual publishers or journals contribute to quality control and maintaining their unique identity through preservation of a ‘house style’. However, academics often see the process as an array of trivial but time-consuming formatting obligations, demanded of them before the work has even been accepted for publication, and without any appreciable benefit to quality3. In addition to manuscript formatting, multiple journal-specific details are frequently requested for online submission. Among publishers, a more diverse body of opinion is reflected, with an equal split between those who are in favour of relaxing (or unifying) formatting requirements, those who have no strong opinion, and those who do not feel any change is required4.
The conveyor-belt process of conventional publication can be very constraining. An academic manuscript usually has to be assembled into a standardised package that meets strict formatting requirements, most obviously with respect to manuscript layout, length, and the number of figures, tables and references that can be included. This dates from the – now bygone – era in which a paper was indeed just that, printed across several pages of a glossy journal into whose binding it needed to be neatly fitted. Online publication should be providing an escape route from these constraints – albeit not one that has been consistently deployed or accepted.
However, there is also a broader boundary in operation which may be less immediately apparent – that which governs so strictly the fundamental nature of a piece of work, that which inhibits (or even prohibits) publication of a work-in-progress, or an unproved hypothesis, or results that are negative, unexplained or in conflict with previous data. Only 9% of academics agreed with the statement ‘the process of publication is flexible, supports innovation, and allows me to be creative’, and none strongly agreed3.
This should be of significant concern when new ideas and novel approaches are so crucial to our collective progress, and in an era in which there is ever better recognition of the risks and costs associated with the suppression of negative results18,19. Furthermore, when new ideas and novel approaches underpin so much true scientific progress, why are such tight restraints imposed on the nature, style, content and substance of academic output? We should move towards a system that welcomes the publication of a diversity of innovation and ideas: there is much for us all to gain from encouraging dissemination of a wider body of work. This might include new concepts, methods and strategies, diverse commentary and critique, approaches that have been tried and failed, negative results, unfinished projects, protocols and registries for clinical trials, and live datasets that evolve over time.
The traditional publication of an academic ‘paper’ makes it impossible to add incremental advances or updates, and the only way to correct inconsistencies that emerge post-publication is to submit and publish a formal erratum. This is a substantial missed opportunity for quality improvement. The version control option offered by newer publishing platforms allows authors to maintain their work in its best possible form, adding updates, corrections and refinements, while preserving records of the original work. This is the approach I have ultimately been able to pursue for my own data, via the Wellcome Open Research platform (https://wellcomeopenresearch.org/)20.
The discussions represented here took place over a short time frame and are based on opinions collected from a small section of academia3 and from an even smaller slice of the publishing fraternity4. Taking the opportunity to share feedback from academic clinicians does not mean that I represent all academic clinicians, or that the views of other sectors of academia are congruent. Although I have engaged in productive and interesting discussions with publishers, as well as seeking written anonymous feedback, it is not possible for me to represent this sector cohesively, and further commentary is undoubtedly needed.
Despite the marked improvements, new ideas, and increased flexibility emerging around data sharing, there are still some substantial challenges to be addressed around the publication of academic data.
A publishing process perceived as equitable by one individual or institution may not operate in the best interests of another. In particular, we have a crucial collective responsibility to be mindful of the resource gap between different settings. Generating high quality scientific output, and publishing and disseminating this appropriately, is significantly influenced by access to library services, IT infrastructure, institutional access to online resources, funding, manpower and skills. Real fairness means reallocation of resources, waivers for institutions unable to pay access or publishing fees, better sharing of skill sets, balanced review, and capacity building in resource-poor settings21.
Diminishing or diluting quality is a potential concern as we enter an era in which a greater number of authors release a more diverse pool of work without pre-publication review. However, experts in the dissemination of open access literature have argued that market forces will tend to operate to maintain quality, and that the overall benefits of increasing data availability substantially outweigh any potential risk to quality22.
Change can be difficult; old habits die hard and new approaches to data sharing can be met with suspicion or opposition5. Many authors are either overtly or subliminally wedded to the idea of a journal based impact factor and to blind peer review. Some authors also express anxiety arising from the potential conflict between wanting to share their output yet needing to retain ownership of the work. Substantial power is still held by a small subset of traditional journals and editorial boards; the undue influence of the publishing industry on science output has even been described as ‘toxic’23. It will take time for confidence in the newer publishing systems and models to grow. Vigilance is required for ‘predatory’ journals that often send unsolicited emails trying to entice authors with offers including rapid and open access publication, but that may not deliver on their promises, fail to provide suitable peer review, or publish the work only on receipt of a substantial fee9,21,24.
I have not set out to include detailed discussion of economic cost, but it is clear that a substantial financial investment is crucial to support innovative approaches to publishing, to develop new metrics, to support accredited peer review, and to maintain publishing platforms ranging from journals to internet sites. Academia has to be willing to accept and underwrite these costs, and the publishing industry to develop a system that is lean and competitive, and that offers value for money.
We are in an era in which the process of disseminating scientific work is becoming quicker and more flexible, in which we can retain ownership while gaining the benefits of public sharing, in which metrics are more about our own output than the collective assessment of the journal that publishes our work, and in which a ‘paper’ no longer has to be a carbon-copy manuscript of a pre-specified length and format.
There is still much progress to be made. We should continue to be flexible, creative and open-minded in developing the best ways to present and share scientific work. The process has to be underpinned by good communication between academia and publishing, and significant effort is required to dismantle taboos around communication, particularly the view that open dialogue is in some way ‘cheating’ the system. We should be more discerning about metrics, using them appropriately and in context, and not allowing impact factor to drive behaviour, stifle creativity or delay output. Careful thought is required to support, develop and sustain output from under-resourced settings, and to ensure that diverse options for data dissemination and access are not confined to wealthy institutions in rich countries.
As well as promoting the FAIR principles, changes in the way we publish scientific output are increasingly moving towards a process that is genuinely fair – something that is timely, that we can all access and judge for ourselves but that can still be scrutinized by a process of equitable peer review, that demands rigour and scrutiny while at the same time making efforts to minimise delays, that can be shared, reproduced and collectively applied for the advancement of understanding.
PCM was an invited speaker at the Association of Learned and Professional Society Publishers (ALPSP) annual conference in September 2016.
PCM is funded by a Wellcome Trust Intermediate Fellowship Grant, Ref. 110110/Z/15/Z.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
This work is founded on support from several individuals and agencies, who have provided me with expert feedback and discussion, opportunities to speak at publishing meetings, and direct input into the design and distribution of questionnaires. In particular, I would like to acknowledge Robert Kiley (Wellcome Trust), Howard Noble (Academic IT, University of Oxford), Louise Page (PLOS), Juliet Ralph (Bodleian Library, University of Oxford), and Isabel Thompson (Oxford University Press). The questionnaires were distributed with the support of Oxford University Clinical Academic Graduate School (OUCAGS), the Peter Medawar Building for Pathogen Research, and the Association of Learned and Professional Society Publishers (ALPSP). I am grateful to all those individuals within academia and publishing who contributed generously to completing questionnaires in order to develop and inform the discussions represented here.
Competing Interests: No competing interests were disclosed.
Alongside their report, reviewers assign a status to the article:
| | Reviewer 1 | Reviewer 2 | Reviewer 3 |
|---|---|---|---|
| Version 2 (revision), 17 Jan 17 | read | read | |
| Version 1, 05 Dec 16 | read | read | read |
Reviewer comments are reproduced below, with specific author responses in italics.
Reviewer 1: Gustav Nilsonne, Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
Recommendation: Approved with Reservations
This paper is an opinion article, which discusses several areas where unresolved questions exist in the transition to more open scientific publication practices. The discussion is underpinned by survey data, although a full description of the survey and its results are not within the scope of the paper. Areas covered in this paper are publication delays in scientific publishing, peer review, communication between scientists and publishers, metrics such as the impact factor, models for open access publication, journals' formatting requirement, and boundaries imposed by traditional publishing on paper, which need not persist in a time with online publishing, but still do. The paper provides a timely discussion, based on survey data, and on relevant and valid arguments.
The abstract promises to explore the notion of fairness from the points of view of many different stakeholders. This point of departure lacks clear justification. In particular, it is not obvious why changes in scientific publishing should need to be perceived as fair by publishers. Also, not all of the stakeholder perspectives are explicitly addressed in the main text.
> In order to address this, while at the same time being mindful about the concerns over the length of the article expressed by Reviewer 2, I have added a summary table to the conclusions of the article. This lists each of the stakeholders in the process and lists the key points regarding how fairness is pertinent to this group, and what steps are needed to work towards greater fairness.
The survey data are available in two linked slide presentations. I recommend that the data be made available as a data frame in a non-proprietary file format. This will facilitate re-use and further exploration of the data set. Best practice is to use a repository that provides access to data in a format that is time-stamped, immutable, and permanent, and with a persistent identifier and an open licence. Documentation including metadata that describes how the survey was performed can be provided with the data or in this paper.
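The reviewer's recommendation of a non-proprietary, re-usable format could be sketched as follows: one row per respondent per question, in a "long" (tidy) layout that re-imports cleanly as a data frame in any analysis environment. This is a minimal illustration only; the field names and responses below are hypothetical, not the actual schema of the questionnaires described in the article.

```python
import csv
import io

# Hypothetical survey records in long (tidy) format: one row per
# respondent per question, so the file re-imports cleanly as a data frame.
rows = [
    {"respondent_id": 1, "group": "academic",
     "question": "frustrated_by_delays", "response": "agree"},
    {"respondent_id": 2, "group": "publisher",
     "question": "timeliness_crucial", "response": "strongly agree"},
]

FIELDS = ["respondent_id", "group", "question", "response"]

def to_csv(records):
    """Serialise survey records to CSV, a non-proprietary, widely re-usable format."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()       # column names as the first row
    writer.writerows(records)  # one line per record
    return buf.getvalue()

print(to_csv(rows))
```

A CSV like this, deposited in a repository that issues a persistent identifier (as the reviewer suggests), satisfies the "findable, accessible, interoperable, reusable" aims far better than slide decks.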
> I have uploaded the metadata to Oxford University Research Archive; this record can be accessed using the following link: https://doi.org/10.5287/bodleian:J5aekGAMy.
In the current movement towards more open publication practices, it is important to find out how scientists and other stakeholders perceive barriers and possibilities. This paper makes a valuable contribution in gathering scientists' views and arguments surrounding publication practices. I am happy to approve it with a reservation about the format of openly published survey data.
> Thank you for the helpful and positive feedback.
Reviewer 2: Dragan Pavlović, Department of Anesthesia, Pain Management & Perioperative Medicine, Dalhousie University, Halifax, NS, Canada
Recommendation: Approved with Reservations
General description:
This is a well written “opinion” article where the author examined the parallel view that “we should take a collective stance on making the dissemination of scientific data fair in the conventional sense, by being mindful of equity and justice for patients, clinicians, academics, publishers, funders and academic institutions.” The views are based on oral and written dialogue (including 2 online questionnaires, with 102 academics and 37 representatives of the publishing industry) with clinicians, academics and the publishing industry. Parts of the work were presented earlier at 2 meetings. It is concluded that further progress is needed to improve collaboration and dialogue between these groups, to reduce misinterpretation of metrics, to reduce inequity that arises as a consequence of geographic setting, to improve economic sustainability, and to broaden the spectrum, scope, and diversity of scientific publication.
Major comments:
This is in some way a relatively “short” text, mostly presented as a letter or a superficial comment, yet as such it appears to be quite long. There is no precise analysis of the announced results of the 2 online questionnaires, with 102 academics and 37 representatives of the publishing industry. The text remains just, as indeed announced, an opinion. It looks to me that the text could be much shorter and much more focused on the acute problems, like expertise of the peer reviewers, negligence of the editors and the editorial boards of the journals, co-authorship, and commercialization of the open access journals (‘predatory’ journals). Probably also the last paragraphs (Future challenges and Conclusions) could be substantially shortened. Or, if the questionnaires were appropriate, it would be possible to develop a much more relevant and informed study. It is hard to see how such a study would look, since the questionnaires are not available.
>Thank you for this feedback. In line with these comments, I have made the following revisions:
(i) I have substantially shortened the article (cutting it by approx. 25% compared to the original text);
(ii) I have sub-divided the ‘future challenges’ section with subheadings for additional clarity;
(iii) A summary table provides clarity for the conclusions section without adding extra words to the main text of the manuscript;
(iv) As per my response to reviewer 1, in addition to the full metadata within existing references to F1000 slides, I have also uploaded this into a formal research repository, where it can now be accessed (https://doi.org/10.5287/bodleian:J5aekGAMy).
Although it is an attractive idea to present questionnaire data as a ‘study’ (and I did consider this approach), the questionnaires were designed to capture a body of opinion, and did not set out to be a robust study. A full ‘analysis’ can be found within the linked powerpoint slides (references 3 and 4), and the resource is also improved upon by the new URL for the complete metadata (as above).
The explosion of the number of the journals worldwide in the last decade or so was not discussed and there is no mention of the problem with the printed journals that are facing their slow disappearance.
> The significance of the increase in journal numbers to this particular topic is mainly as a result of the increase in numbers of predatory journals. I have expanded upon this point as follows: ‘An apparent explosion in the numbers of such enterprises is a threat to bona fide publishers, exploits authors and funders, diminishes the quality of published science. All publishing stakeholders should seek to avoid interaction with these unscrupulous publishers and remove them from the academic record’. I have added new references to support this additional point (refs 25 and 27). The issue around economic viability for printed journals is now included in the summary table as well as in the final paragraph of the ‘future challenges’ section.
The discussion does not reach deep enough to provide more concrete solutions to the problems that are presented in the paper.
> Finding solutions is not a ‘concrete’ process to be defined by a single author – it is a dynamic process that evolves over time as a consequence of input and innovation from a wide number of sources, both within academia and publishing (this point now added as the second sentence within the conclusions section). However, the article does emphasise areas where I strongly endorse a particular outcome or change, e.g. ‘we should now strive to dismantle the view that long delays are an inevitable consequence of producing high quality output’; ‘Collective responses to how communication should be improved include...’.
In the absence of any absolute solutions, the article is instead intended to highlight the way that current developments are indeed offering incremental improvements; each section outlines a problem or hurdle, followed by a solution or potential solution(s); for example:
(i) Section headed ‘timelines’ outlines the reasons for delays, the adverse consequences of delays, and concludes with the emerging solution that ‘this framework is shifting as a result of parallel improvements in allowing academics to post their own work online, and in new approaches to post-publication peer review’;
(ii) Section headed ‘peer review’ represents the anxieties that surround this process before moving on to a number of solutions and improvements presented as a list of bullet points.
(iii) Section headed ‘barriers to communication’ outlines some of the difficulties before concluding that we should be ‘encouraging routine and transparent dialogue between publisher and academic.’
Minor comments
Peer review
Insisting on the expertise of the reviewers is justified, although the existing methods - some are mentioned in the text, do not guarantee it. It should be mentioned that the journals should have some more secure methods to choose the relevant experts for the peer review. May be the reviewers should supply some evidence what kind of the expertise they have in the relation to the paper that they give an opinion and the journals should be obliged to respect it.
> Thank you, I have added this to the list of bullet points in the peer review section, to the section on predatory journals, and to the new summary table.
Metrics
Problem of co-authorship and possible unjustified benefits for the co-authors was not mentioned.
> Authorship is indeed a valid question to raise, and I have therefore added this as an additional short section. As well as the point raised here about the potential for unjustified benefits, I have also taken the opportunity to add comment about team authorship, and to add a relevant reference (‘Improving recognition of team science contributions in biomedical research careers’; https://www.acmedsci.ac.uk/viewFile/56defebabba91.pdf: Academy of Medical Sciences; 2016.)
Open access
Problem of commercialization (of the ‘predatory’ journals) could be more elaborated.
> As per my response to the previous comment regarding elaboration of predatory journals, I have expanded upon this section and added two new references.
Formatting requirements
Probably some negative comments are not fully justified. I personally find impossible to review an article that, even if well written, is badly formatted. Badly presented text, even if it is of high quality, inevitably loses its impact. Please revise if you agree that your judgment was not carefully measured.
> I have made every effort to address concerns around formatting and clarity by shortening the manuscript, condensing the sections about ‘formatting requirements’ and ‘boundaries’, adding subheadings to the ‘future challenges’ section, including a summary table, and making the conclusion more punchy and concise.
My opinion is based on every effort to be ‘carefully measured’; it is this concern that prompted me to seek a wide body of opinion through questionnaires. This does not necessarily make my views representative of the entire community, and this is highlighted explicitly within the article, e.g. ‘taking the opportunity to share feedback from academic clinicians does not mean that I represent all academic clinicians, or that the views of other sectors of academia are congruent’.
Reviewer 1: Gustav Nilsonne, Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
Recommendation: Approved with Reservations
This paper is an opinion article, which discusses several areas where unresolved questions exist in the transition to more open scientific publication practices. The discussion is underpinned by survey data, although a full description of the survey and its results are not within the scope of the paper. Areas covered in this paper are publication delays in scientific publishing, peer review, communication between scientists and publishers, metrics such as the impact factor, models for open access publication, journals' formatting requirements, and boundaries imposed by traditional publishing on paper, which need not persist in an era of online publishing, but still do. The paper provides a timely discussion, based on survey data, and on relevant and valid arguments.
The abstract promises to explore the notion of fairness from the points of view of many different stakeholders. This point of departure lacks clear justification. In particular, it is not obvious why changes in scientific publishing should need to be perceived as fair by publishers. Also, not all of the stakeholder perspectives are explicitly addressed in the main text.
> In order to address this, while at the same time being mindful about the concerns over the length of the article expressed by Reviewer 2, I have added a summary table to the conclusions of the article. This lists each of the stakeholders in the process and lists the key points regarding how fairness is pertinent to this group, and what steps are needed to work towards greater fairness.
The survey data are available in two linked slide presentations. I recommend that the data be made available as a data frame in a non-proprietary file format. This will facilitate re-use and further exploration of the data set. Best practice is to use a repository that provides access to data in a format that is time-stamped, immutable, and permanent, and with a persistent identifier and an open licence. Documentation including metadata that describes how the survey was performed can be provided with the data or in this paper.
> I have uploaded the metadata to Oxford University Research Archive; this record can be accessed using the following link: https://doi.org/10.5287/bodleian:J5aekGAMy.
In the current movement towards more open publication practices, it is important to find out how scientists and other stakeholders perceive barriers and possibilities. This paper makes a valuable contribution in gathering scientists' views and arguments surrounding publication practices. I am happy to approve it with a reservation about the format of openly published survey data.
> Thank you for the helpful and positive feedback.
Reviewer 2: Dragan Pavlović, Department of Anesthesia, Pain Management & Perioperative Medicine, Dalhousie University, Halifax, NS, Canada
Recommendation: Approved with Reservations
General description:
This is a well written “opinion” article in which the author examines the parallel view that “we should take a collective stance on making the dissemination of scientific data fair in the conventional sense, by being mindful of equity and justice for patients, clinicians, academics, publishers, funders and academic institutions.” The views are based on oral and written dialogue (including two online questionnaires completed by 102 academics and 37 representatives of the publishing industry) with clinicians, academics and the publishing industry. Parts of the work were presented earlier at two meetings. It is concluded that further progress is needed to improve collaboration and dialogue between these groups, to reduce misinterpretation of metrics, to reduce inequity that arises as a consequence of geographic setting, to improve economic sustainability, and to broaden the spectrum, scope, and diversity of scientific publication.
Major comments:
This is in some ways a relatively “short” text, mostly presented as a letter or a superficial comment, yet as such it appears to be quite long. There is no precise analysis of the announced results of the two online questionnaires, completed by 102 academics and 37 representatives of the publishing industry. The text remains, as indeed announced, just an opinion. It seems to me that the text could be much shorter and much more focused on the acute problems, such as the expertise of peer reviewers, negligence of editors and the editorial boards of journals, co-authorship, and the commercialization of open access (‘predatory’) journals. Probably the last sections (Future challenges and Conclusions) could also be substantially shortened. Alternatively, if the questionnaires were appropriate, it would be possible to develop a much more relevant and informed study. It is hard to see how such a study would look, since the questionnaires are not available.
> Thank you for this feedback. In line with these comments, I have made the following revisions:
(i) I have substantially shortened the article (cutting it by approx. 25% compared to the original text);
(ii) I have sub-divided the ‘future challenges’ section with subheadings for additional clarity;
(iii) A summary table provides clarity for the conclusions section without adding extra words to the main text of the manuscript;
(iv) As per my response to reviewer 1, in addition to the full metadata within existing references to F1000 slides, I have also uploaded this into a formal research repository, where it can now be accessed (https://doi.org/10.5287/bodleian:J5aekGAMy).
Although it is an attractive idea to present the questionnaire data as a ‘study’ (and I did consider this approach), the questionnaires were designed to capture a body of opinion, and did not set out to be a robust study. A full ‘analysis’ can be found within the linked PowerPoint slides (references 3 and 4), and the resource is further improved by the new URL for the complete metadata (as above).
The explosion of the number of the journals worldwide in the last decade or so was not discussed and there is no mention of the problem with the printed journals that are facing their slow disappearance.
> The significance of the increase in journal numbers to this particular topic is mainly as a result of the increase in numbers of predatory journals. I have expanded upon this point as follows: ‘An apparent explosion in the numbers of such enterprises is a threat to bona fide publishers, exploits authors and funders, and diminishes the quality of published science. All publishing stakeholders should seek to avoid interaction with these unscrupulous publishers and remove them from the academic record’. I have added new references to support this additional point (refs 25 and 27). The issue around economic viability for printed journals is now included in the summary table as well as in the final paragraph of the ‘future challenges’ section.
The discussion does not reach deep enough to provide more concrete solutions to the problems that are presented in the paper.
> Finding solutions is not a ‘concrete’ process to be defined by a single author – it is a dynamic process that evolves over time as a consequence of input and innovation from a wide range of sources, both within academia and publishing (this point is now added as the second sentence of the conclusions section). However, the article does emphasise areas where I strongly endorse a particular outcome or change, e.g. ‘we should now strive to dismantle the view that long delays are an inevitable consequence of producing high quality output’; ‘Collective responses to how communication should be improved include...’.
In the absence of any absolute solutions, the article is instead intended to highlight the way that current developments are indeed offering incremental improvements; each section outlines a problem or hurdle, followed by a solution or potential solution(s); for example:
(i) Section headed ‘timelines’ outlines the reasons for delays, the adverse consequences of delays, and concludes with the emerging solution that ‘this framework is shifting as a result of parallel improvements in allowing academics to post their own work online, and in new approaches to post-publication peer review’;
(ii) The section headed ‘peer review’ sets out the anxieties that surround this process before moving on to a number of solutions and improvements presented as a list of bullet points;
(iii) Section headed ‘barriers to communication’ outlines some of the difficulties before concluding that we should be ‘encouraging routine and transparent dialogue between publisher and academic.’
Minor comments
Peer review
Insisting on the expertise of reviewers is justified, although the existing methods (some are mentioned in the text) do not guarantee it. It should be mentioned that journals should have more secure methods for choosing relevant experts for peer review. Perhaps reviewers should supply some evidence of the expertise they have in relation to the paper on which they give an opinion, and journals should be obliged to respect it.
> Thank you, I have added this to the list of bullet points in the peer review section, to the section on predatory journals, and to the new summary table.
Metrics
Problem of co-authorship and possible unjustified benefits for the co-authors was not mentioned.
> Authorship is indeed a valid question to raise, and I have therefore added this as an additional short section. As well as the point raised here about the potential for unjustified benefits, I have also taken the opportunity to add a comment about team authorship, and to add a relevant reference (‘Improving recognition of team science contributions in biomedical research careers’; https://www.acmedsci.ac.uk/viewFile/56defebabba91.pdf: Academy of Medical Sciences; 2016).
Open access
Problem of commercialization (of the ‘predatory’ journals) could be more elaborated.
> As per my response to the previous comment regarding elaboration of predatory journals, I have expanded upon this section and added two new references.
Formatting requirements
Probably some negative comments are not fully justified. I personally find it impossible to review an article that, even if well written, is badly formatted. Badly presented text, even if it is of high quality, inevitably loses its impact. Please revise if you agree that your judgment was not carefully measured.
> I have made every effort to address concerns around formatting and clarity by shortening the manuscript, condensing the sections about ‘formatting requirements’ and ‘boundaries’, adding subheadings to the ‘future challenges’ section, including a summary table, and making the conclusion more punchy and concise.
My opinion is based on every effort to be ‘carefully measured’; it is this concern that prompted me to seek a wide body of opinion through questionnaires. This does not necessarily make my views representative of the entire community, and this is highlighted explicitly within the article, e.g. ‘taking the opportunity to share feedback from academic clinicians does not mean that I represent all academic clinicians, or that the views of other sectors of academia are congruent’.