The effects of an editor serving as one of the reviewers

Background: Publishing in scientific journals is one of the most important ways in which scientists disseminate research to their peers and to the wider public. Pre-publication peer review underpins this process, but peer review is subject to various criticisms and is under pressure from growth in the number of scientific publications. Methods: Here we examine an element of the editorial process at eLife, in which the Reviewing Editor usually serves as one of the referees, to see what effect this has on decision times, decision type, and the number of citations. We analysed a dataset of 8,905 research submissions to eLife since June 2012, of which 2,747 were sent for peer review. This subset of 2,747 papers was then analysed in detail. Results: The Reviewing Editor serving as one of the peer reviewers results in faster decision times on average, with the time to final decision ten days faster for accepted submissions (n=1,405) and five days faster for papers that were rejected after peer review (n=1,099). Moreover, editors acting as reviewers had no effect on whether submissions were accepted or rejected, and a very small (but significant) effect on citation rates. Conclusions: An important aspect of eLife's peer-review process is shown to be effective, given that decision times are faster when the Reviewing Editor serves as a reviewer. Other journals hoping to improve decision times could consider adopting a similar approach.


Background
Although pre-publication peer review has been strongly criticised for its inefficiencies, lack of speed, and potential for bias (for example, see 1 and 2), it remains the gold standard for the assessment and publication of research 3. eLife was launched to "improve [...] the peer-review process" 4 in the life and biomedical sciences, and one of the journal's founding principles is that "decisions about the fate of submitted papers should be fair, constructive, and provided in a timely manner" 5. However, peer review is under pressure from the growth in the number of scientific publications, which increased by 8-9% annually from the 1940s to 2012 6, and growth in submissions to eLife would inevitably challenge the capacity of its editors and procedures.
eLife's editorial process has been described before 7,8 . In brief, each new submission is assessed by a Senior Editor, usually in consultation with one or more members of the Board of Reviewing Editors, to identify whether it is appropriate for in-depth peer review. Traditionally, editors recruit peer reviewers and, based on their input, make a decision about the fate of a paper. Once a submission is sent for in-depth peer review, however, the Reviewing Editor at eLife has extra responsibility. First, the Reviewing Editor is expected to serve as one of the peer reviewers. Second, once the reviews have been submitted independently, the Reviewing Editor should engage in discussions with the other reviewers to reach a decision they can all agree with. Third, when asking for revisions, the Reviewing Editor should synthesise the separate reviews into a single set of revision requirements. Fourth, wherever possible, the Reviewing Editor is expected to make a decision on the revised submission without re-review. At other journals, the Reviewing Editor may instead be known as an Academic Editor or Associate Editor.
Since editors have extra responsibility in eLife's peer-review process, here we focus our analysis on the effect of the Reviewing Editor serving as one of the peer reviewers, and we examine three outcomes: 1) the effect on decision times; 2) the effect on the decision type (accept, reject or revise); and 3) the citation rate of published papers. The results of the analysis are broken down by the round of revision and the overall fate of the submission. We do not consider the effect of the discussion between the reviewers or the effect of whether the Reviewing Editor synthesises the reviews or not.

Methods
We analysed a dataset containing information about 9,589 papers submitted to eLife since June 2012 in an anonymised format. The dataset contained the date each paper was first submitted, and, if it was sent for peer review, the dates and decisions taken at each step in the peer-review process. Information about authors had been removed, and the identity of reviewers and editors was obfuscated to preserve confidentiality.
As a pre-processing step, we removed papers that had been voluntarily withdrawn, or where the authors appealed a decision, as well as papers where the records were corrupted or otherwise unavailable. After clean-up, our dataset consisted of a total of 8,905 submissions, of which 2,750 were sent for peer review. For the rest of the paper, we focus our analysis on this subset of 2,750 papers, of which 1,405 had been accepted, 1,099 had been rejected, and the rest were still under consideration. The article types included are Research Articles (MS type 1), Short Reports (MS type 14), Tools and Resources (MS type 19), and Research Advances (MS type 15). Registered Reports are subject to a slightly different review process and have not been included.
Before discussing the results, we introduce a few definitions: the "eLife Decision Time" is the amount of time taken by eLife from that version of the submission being received until a decision has been reached for a particular round of review. The "Author Time" is the amount of time taken by the authors to revise their article for that round of revision. The "Total Time" is the time from first submission to acceptance, or amount of time taken for eLife to publish a paper from the moment it was first received for consideration. By definition, the "Total Time" is equal to the sum of the "eLife Decision Time" and the "Author Time" across all rounds, including the initial submission step. "Revision Number" indicates the round of revision. We distinguish between Reviewing Editors who served as one of the reviewers during the first round of review and Reviewing Editors who did not serve as one of the reviewers (i.e., those who undertook more of a supervisory role during the review process) with the "Editor_As_Reviewer" variable (True or False).
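The relationship between these definitions can be made concrete with a short sketch. The round structure and all numbers below are hypothetical, for illustration only; they are not taken from the dataset:

```python
# Each round contributes an "eLife Decision Time" (journal side) and an
# "Author Time" (revision side). Hypothetical numbers for illustration only.
rounds = [
    {"revision": 0, "elife_decision_days": 35, "author_days": 0},   # initial review
    {"revision": 1, "elife_decision_days": 10, "author_days": 45},  # first revision
    {"revision": 2, "elife_decision_days": 4,  "author_days": 20},  # second revision
]

elife_decision_time = sum(r["elife_decision_days"] for r in rounds)
author_time = sum(r["author_days"] for r in rounds)

# "Total Time" is by definition the sum of both quantities across all rounds.
total_time = elife_decision_time + author_time
print(elife_decision_time, author_time, total_time)  # 49 65 114
```

In other words, for each version of a submission the clock is split between the journal (review and decision) and the authors (revision), and the "Total Time" is simply the sum over rounds.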
We illustrate the variables with a real example taken from the dataset (Table 1).
The example submission from Table 1 was received as an "initial submission" (MS TYPE 5) on 20 June 2012. One day later, the authors were encouraged to submit a "full submission" (MS TYPE 1) that would be sent for in-depth peer review. The full submission was received on 27 June 2012, when the Reviewing Editor was assigned and reviewers were contacted. In this example, the Reviewing Editor also served as one of the reviewers (indicated by the "Editor_As_Reviewer" variable). Since we are focusing on the role of the editors in the peer-review process, in the rest of the paper we will ignore the time spent in the pre-review stage.
All of the statistical analyses were performed using R and Python.
On the Python side, we used statsmodels, scipy, numpy, and pandas for the data manipulation and analysis. To plot the results we used bokeh, matplotlib, and seaborn. Details of all the analyses, together with code to reproduce all images and tables in the paper, are available in the companion repository of this paper: https://github.com/FedericoV/eLife_Editorial_Process.
To obtain the citation numbers, we used BeautifulSoup to scrape the eLife website, which provides detailed information about citations for each published paper.
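As a rough illustration of the scraping step, the sketch below extracts a citation count from a metrics page. The markup and the `scopus-citations` class are hypothetical (the real eLife pages differ), and for self-containment this uses Python's built-in `html.parser` rather than BeautifulSoup, which the actual pipeline used:

```python
from html.parser import HTMLParser

class CitationParser(HTMLParser):
    """Pull a citation count out of a (hypothetical) metrics page."""

    def __init__(self):
        super().__init__()
        self._in_citations = False
        self.citations = None

    def handle_starttag(self, tag, attrs):
        # Assume the count lives inside <span class="scopus-citations">.
        if tag == "span" and ("class", "scopus-citations") in attrs:
            self._in_citations = True

    def handle_data(self, data):
        if self._in_citations:
            self.citations = int(data.strip())
            self._in_citations = False

page = '<div>Cited by <span class="scopus-citations">42</span> (Scopus)</div>'
parser = CitationParser()
parser.feed(page)
print(parser.citations)  # 42
```

In practice, the scraper would also fetch each article's metrics page over HTTP and handle missing counts; none of that is shown here.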

Results and discussion
First, we examined the effect of the Reviewing Editor serving as one of the reviewers on the time from submission to acceptance or from submission to rejection after peer review (Total Time). When the Reviewing Editor served as a reviewer (Editor_As_Reviewer = True), the total processing time was 10 days faster in the case of accepted papers and more than 5 days faster in the case of papers rejected after peer review (Figure 1). Both differences are statistically significant (see Table 2 for details). Intuitively, regardless of the role of the Reviewing Editor, rejection decisions are typically much faster than acceptance decisions, as they go through fewer rounds of revision, and are not usually subject to revisions from the authors.
One possible reason why submissions reviewed by the Reviewing Editor have a faster turnaround is because fewer people are involved (e.g., the Reviewing Editor in addition to two external reviewers, rather than the Reviewing Editor recruiting three external reviewers), and review times are limited by the slowest person. To test this, we built a linear model to predict the total review time as a function of editor type (whether the Reviewing Editor served as a reviewer or not), decision (accept or reject), and the number of unique reviewers across all rounds (see Table S1). Indeed, the total review time did increase with each reviewer (7.4 extra days per reviewer, p < 0.001) and the effect of a Reviewing Editor serving as one of the reviewers remained significant (-9.3 days when a Reviewing Editor served as one of the reviewers, p < 0.0001).
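A self-contained sketch of this kind of model is shown below. The data are synthetic, generated so that the true effects match the reported ones, and the fit uses plain least squares via numpy rather than statsmodels (which would additionally provide standard errors and p-values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for the predictors (not the real eLife data):
editor_as_reviewer = rng.integers(0, 2, n)  # 1 if the editor also reviewed
accepted = rng.integers(0, 2, n)            # 1 if accepted, 0 if rejected
n_reviewers = rng.integers(2, 5, n)         # unique reviewers across all rounds

# Simulate total review times consistent with the reported effects:
# -9.3 days when the editor reviews, +7.4 days per extra reviewer.
total_time = (60.0 - 9.3 * editor_as_reviewer + 30.0 * accepted
              + 7.4 * n_reviewers + rng.normal(0.0, 10.0, n))

# Ordinary least squares via numpy; statsmodels' OLS would give the same
# point estimates plus standard errors and p-values.
X = np.column_stack([np.ones(n), editor_as_reviewer, accepted, n_reviewers])
beta, *_ = np.linalg.lstsq(X, total_time, rcond=None)
print(beta.round(1))  # roughly [60., -9.3, 30., 7.4]
```

With a few thousand observations, the fit recovers the simulated editor and per-reviewer effects to within a fraction of a day, which is why both terms can remain significant in the same model.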
Next, we examined this effect across all rounds of review (rounds 0, 1, and 2) and decision types (accept, reject and revise). The results are shown in Figure 2 and summarized in Table 2. Again, we see that processing times are consistently faster across almost every round when the editor serves as one of the peer reviewers, except in cases where the sample size was very small.
Interestingly, when the Reviewing Editor serves as one of the peer reviewers, the eLife Decision Time is reduced, but the time spent on revisions (Author Time) does not change. This suggests that the actual review process is more efficient when the Reviewing Editor serves as a reviewer, but the extent of revisions being requested from the authors remains constant.
We next examined the chances of a paper being accepted, rejected or revised when a Reviewing Editor served as one of the reviewers. We found no significant difference when examining the decision type on a round-by-round basis (chi-squared test, p = 0.33; Table 3).
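The round-by-round comparison of decision types amounts to a chi-squared test of independence on a contingency table of counts. The counts below are illustrative only (they are not the actual values from Table 3):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Decision counts split by whether the Reviewing Editor served as a reviewer.
# Illustrative counts only, not the actual values from Table 3.
#                   accept  reject  revise
table = np.array([[310,    240,    520],    # Editor_As_Reviewer = True
                  [150,    130,    280]])   # Editor_As_Reviewer = False

chi2, p, dof, expected = chi2_contingency(table)

# (rows - 1) * (cols - 1) = 2 degrees of freedom; with similar row
# proportions the p-value is large, so we fail to reject independence,
# mirroring the paper's p = 0.33 result.
print(dof, round(p, 2))
```

A large p-value here means the decision mix (accept/reject/revise) looks statistically indistinguishable between the two editor roles.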
To test whether eLife's acceptance criteria changed over time, we built a logit model including as a predictive variable the number of days since eLife began accepting papers and whether the Reviewing Editor served as one of the reviewers. The number of days since publication had a very small (-0.003) but significant effect (p < 0.02) while the effect of the Reviewing Editor serving as a reviewer was not significant (see Table S2). We also tested whether a Reviewing Editor serving as a reviewer had an effect on the number of rounds of revision before the final decision and found no significant effect (see Table S3).
The final outcome we examined was the number of citations (as tracked by Scopus) received by papers published by eLife. Papers accumulate citations over time, and, as such, papers published earlier tend to have more citations (Figure 3).
We examined this effect using a generalized linear model. As variables, we considered whether the Reviewing Editor served as a reviewer (Editor_As_Reviewer, true or false), as well as the number of days between eLife publishing its first manuscript and the day the Scopus database was queried. The presence of a Reviewing Editor serving as a reviewer had no significant effect on the number of citations (see Table S4). Papers with longer total review times tended to be cited less (this effect is small but significant).

One of the most noticeable effects of a Reviewing Editor serving as one of the peer reviewers at eLife is the faster decision times. However, serving as a Reviewing Editor and one of the reviewers for the same submission is a significant amount of work. As the volume of papers received by eLife has increased, the fraction of editors willing to serve as a reviewer has decreased. While in 2012 almost all editors also served as reviewers, that percentage decreased in 2013 and 2014. There are signs of a mild increase in the percentage of editors willing to serve as reviewers in 2015 (Figure 4).
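As a sketch of how such a citation model can be fitted, the code below runs a Poisson regression (a common GLM choice for count data; the paper does not state which family was used) on synthetic data, implementing the standard iteratively reweighted least squares (IRLS) algorithm directly in numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000

# Synthetic predictors (illustrative only, not the real eLife data):
editor_as_reviewer = rng.integers(0, 2, n).astype(float)
age_days = rng.uniform(100, 1200, n)  # paper age when citations were counted

# Simulate citation counts with a log link: older papers accumulate more
# citations; the editor effect is set to zero, matching the paper's finding.
true_beta = np.array([0.5, 0.002, 0.0])  # intercept, age, editor
X = np.column_stack([np.ones(n), age_days, editor_as_reviewer])
citations = rng.poisson(np.exp(X @ true_beta))

# Fit the Poisson GLM by iteratively reweighted least squares (IRLS),
# starting from a rough log-scale least-squares initialisation.
beta = np.linalg.lstsq(X, np.log(citations + 1), rcond=None)[0]
for _ in range(25):
    mu = np.exp(X @ beta)                 # current fitted means
    z = X @ beta + (citations - mu) / mu  # working response
    W = mu                                # Poisson working weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta.round(3))  # close to the true values [0.5, 0.002, 0.0]
```

statsmodels' `GLM` with a log link would be the standard route in practice; the point of the IRLS loop is only to show what such a fit does under the hood.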

Conclusions
Due to an increasingly competitive funding environment, scientists are under immense pressure to publish in scientific journals, yet the peer-review process remains relatively opaque at many journals. In a systematic review from 2012, the authors conclude that "Editorial peer review, although widely used, is largely untested and its effects are uncertain" 9. Recently, journals and conferences (e.g., 10) have launched initiatives to improve the fairness and transparency of the review process. eLife is one such example. Meanwhile, scientists are frustrated by the time it takes to publish their work 11. We report the analysis of a dataset consisting of articles received by eLife since launch and examine factors that affect the duration of the peer-review process, the chances of a paper being accepted, and the number of citations that a paper receives. In our analysis, when an editor serves as one of the reviewers, the time taken during peer review is significantly decreased. Although there is additional work and responsibility for the editor, this could serve as a model for other journals that want to improve the speed of the review process.
Journals and editors should also think carefully about the optimum number of peer reviewers per paper. With each extra reviewer, we found that an extra 7.4 days are added to the review process. Editors should of course consider subject coverage and ensure that reviewers with different expertise can collectively comment on all parts of a paper, but where possible there may be advantages, certainly in terms of speed and easing the pressure on the broader reviewer pool, of using fewer reviewers per paper overall.
Insofar as the editor serving as a reviewer is concerned, we did not observe any difference in the chances of a paper being accepted or rejected, but we did notice a modest increase in the overall number of citations that a paper receives when an editor serves as one of the reviewers, although this effect is very small. An interesting result from our analysis is that a longer peer-review process or more referees does not lead to an increase in citations, so this is another reason for journals and editors to carefully consider the impact of the number of reviewers involved, and to strive to communicate the results presented in a timely manner for others to build upon. As eLife is a relatively young journal, it will be possible to verify whether the citation trend we observe holds over longer periods as different papers accumulate citations.

Data and software availability
All code for the analysis, as well as the datasets, is available at: https://github.com/FedericoV/eLife_Editorial_Process. To reproduce Figure 4, we pre-processed the raw dataset that contained the identity of the editors to avoid disclosing any information about the identity of reviewers.

Competing interests
Andy Collings is Executive Editor at eLife. The other authors declare that they have no competing interests.

Grant information
Andy Collings is employed by eLife Sciences Publications Ltd. eLife is supported by the Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Table S2. Logit model for the chances of a paper being accepted. We used logit regression to estimate the chances of a paper being accepted as a function of whether the Reviewing Editor served as one of the reviewers (Editor_As_Reviewer), the number of unique reviewers, and the number of days between when a paper was published and the first published paper by eLife. The only significant variable is the number of days since eLife started accepting papers for publication (although the effect on the chances of a paper being accepted is very small).

Thank you for the opportunity to review this manuscript. This is a well-done study, and the conclusions follow from the results. We would recommend accepting the article once all clarifications and revisions have been made, or the lack of doing so adequately justified.
A. While brevity is generally to be admired, we would recommend a bit more detail about the statistical analyses. These are critical, but are reduced to 3 sentences and a referral to the programming language through an external link. We would suggest that the main text include the (brief) discussion of the analyses done, and rationale for them, rather than have those relegated to the external link.
B. The interpretation of the findings seems to be attributing causal factors (an "A leads to B" consideration) for which the control of variables is too limited. We believe that interpreting these as associations would be more consistent with the findings.
Consider the statement: "Journals and editors should also think carefully about the optimum number of peer reviewers per paper. With each extra reviewer, we found that an extra 7.4 days are added to the review process." Given that there appeared to be no inclusion of either article quality or complexity in the evaluation, is it not possible that issues within the article itself required the use of additional reviewers (i.e., a B leads to A perspective)? Perhaps extra reviewers with specific expertise were required, or concerns about potential problems in the manuscript led to consultations with other reviewers. It does not seem safe to assume that it was the addition of the reviewer that added extra days.
Similarly, the study centers around the role of the editor in the reviewing process, and the discussion suggested that the involvement of the reviewing editor as a peer reviewer expedited the process. There was little discussion of other factors that could have accounted for the statistical results. For example, perhaps the reviewing editor selected articles that piqued his or her interest, or were more clearly presented. Perhaps the reviewing editor selected to review at times more convenient to his or her workload, while other reviewers did not have such an option. The reviewing editor might select to review articles perceived to be of greater or timelier value to the journal itself, which may increase the speed of the review.
Specific questions: A. According to their Methods section, the authors state that they began with an initial N=9,589. After purging other articles they had an N=8,905. They then isolated a total of 2,750 articles subjected to the peer review process for the study: "For the rest of the paper, we focus our analysis on this subset of 2,750 papers, of which 1,405 had been accepted, 1,099 had been rejected, and the rest [which would equal 246] were still under consideration." Looking at the Excel spreadsheet for citation counts, there are 1,407 lines with entry numbers. For peer-reviewed papers, the Excel spreadsheet has 2,747 entries (after removing duplicate entries based on the MS NO column) for manuscripts numbered up to 12621. The Excel spreadsheet for unique reviewers has 2,747 entries, with a final MS NO of 12621.
The numbers do not appear to match, and there is no explanation for that in methods. Exactly how many manuscripts were reviewed, how many rejected and why, and how many were tracked?
B. In the Excel spreadsheet for citations, the second column was titled "Citations," but these figures do not appear to have any relation to the Scopus citation numbers. What numbers were used for the actual citation counts?
We also note that we find the suggestions by other reviewers compelling, and would be happy to review a revision of this manuscript should that be considered useful.
Competing Interests: AA is an employee of The Center For Scientific Integrity, which operates Retraction Watch. IO is executive director of The Center For Scientific Integrity.
We confirm that we have read this submission and believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, we have significant reservations, as outlined above.

Author Response 30 Sep 2016
Federico Vaggi, Fondazione Edmund Mach, San Michele, Italy We thank the reviewers for a very detailed examination of our manuscript. They raise important issues, in particular, that certain findings which were exclusively correlations were treated as causative. They also caught a minor mistake in the reported number of papers.
We are currently in the process of submitting a revised manuscript that we believe addresses most of the issues they raise.
For the generic comments:

A)
We tried to expand in more detail on the variables and models used for the different analyses. However, we believe that detailed descriptions are not as useful as mathematical formulas and the computer code that allows anyone to reproduce the analysis. As a companion to the paper, we made a literate programming document (an IPython notebook) that shows and reproduces all the statistical analyses in the paper.

B)
We agree completely. In the revised version of the paper, we tried to better explain the process through which an editor decides whether or not to serve as a reviewer. Unfortunately, as this is a purely observational study without direct intervention, we cannot identify causal factors.
We now address the specific questions:

A) As the reviewers correctly point out, the correct number of papers in the dataset is 2,747, not 2,750. There were 3 other corrupted papers that were discarded that we accidentally included in the original count. The papers that were dropped were those for which the database entries were corrupted (the date of resubmission was prior to the date on which the original decision was made) or where data was missing.

For the citation file, we only added rows for papers that received citations (in Scopus or otherwise). When we merge the citation data with the other information about the paper, we implicitly treat all missing values as zeros (this can be seen in our script). We did not imagine that people would try to manually reproduce the analysis using Excel, so we apologize if this caused additional difficulties.

B)
We downloaded all the different metrics that eLife makes available for all published papers (Citations, Likes, en.search.wordpress.com, en.wikipedia.org, europepmc.org, f1000.com, scholar.google.com, …). The column we used for all the analysis in the paper is Scopus. We were surprised to find out that the different citation sources (Scopus, PubMed, CiteULike, etc.) can have significantly different values. This is important to take into account, as, unless discussed, this gives researchers a significant number of degrees of freedom to pick the metric that best supports their hypothesis.

Bernd Pulverer
European Molecular Biology Organization (EMBO), Heidelberg, Germany
Giordan et al. analyzed 2,750 manuscripts sent out for peer review at the journal eLife (of which 1,405 ended up published). The authors compare papers in which the editor functions as 'reviewing editor', that is, as one of three referees. Globally, and at almost every decision stage, the process is accelerated significantly if the reviewing editor functions as one of the referees, with no or very small impact on author revision time and citation rates, respectively.
The authors calculate that every additional external referee adds 7.4 days to the process and suggest that journals strive to balance the need for covering all the required expertise carefully with the negative effect on the speed of evaluation.
The quality and speed of the peer review process are topics of active debate. Despite widespread criticism, publication in certain peer-reviewed journals continues to directly impact research assessment by both funders and institutions. The quality and fairness of the process is therefore paramount not only to assure the reliability of the literature, but also to inform research assessment in a balanced manner. Notwithstanding the slow delivery of this particular referee report, speed matters in particular in fast-moving and highly competitive research areas like the biosciences.
Quantitative evidence that well-defined aspects of an editorial process have a positive effect on quality and/or speed is therefore of significant importance.
The authors have carefully analyzed a decent sized dataset and report a statistically significant effect of a well defined change in the editorial process, while also showing evidence that this change has no detrimental effect on the quality of the editorial assessment, at least as far as the outcome is analyzed (here, in terms of two parameters: revision time and citation rate).
While this manuscript makes a significant contribution, I have a number of suggestions I would invite the authors to consider in revision.

Textual: Abstract/main text; Background: It is not merely the growth in the number of publications that puts the system under pressure (after all, in principle the editorial/peer review process may well be able to scale with increased research output), but rather the increased pressure to publish in a small number of high Impact Factor journals in an effort to optimize chances of a positive impact on research assessment.

1. Please introduce the journal eLife, including the scientific scope, as different communities have widely different peer review and citation cultures and this will likely affect the findings reported here.

2. […] the individuals selected by eLife, or due to policing or incentives provided by the journal? After all, similar strategies could be applied to outside referees. On a related point, it would be useful to quantify if the reports by the reviewing editors were qualitatively different (e.g., length). One assumes the ultimate decision on the manuscript was also much better correlated with the reports by reviewing editors than those of the outside referees.

Non-essential further-reaching analysis (suggestions):

1. It would have been useful to measure and present the acceptance/rejection rates of manuscripts assessed by three outside referees compared with two referees plus the reviewing editor.

2. It would have been useful to quantify the % of agreement between the reviewing editor and the outside referees, compared with the agreement between the outside referees.

Competing Interests: BP is head of scientific publications at EMBO and chief editor of The EMBO Journal.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.

Author Response 30 Sep 2016
Federico Vaggi, Fondazione Edmund Mach, San Michele, Italy We thank the reviewer for some very constructive comments, and for appreciating the manuscript. We are now in the process of submitting a revised draft that we hope addresses all the major concerns. Speaking as the corresponding author, I also completely agree that the current publishing system causes very major distortions in the behaviour of scientists who seek high-impact publications for grants/tenure. Unfortunately, given the current economic climate and the funding situation for science, this is unlikely to change in the near future.
By publishing this report, as well as making all the data available, we hope to at least make the process a bit more transparent, and give authors more information on how decisions are carried out.
7. Papers that were rejected had significantly fewer rounds of revision, so that's likely to be a significant cause of the shorter review time. We also break down the data in detail in Figure 2, as well as Table 2.

8. As there is a lot of data condensed in those figures, we include a table in the text with all the numbers that are available. Putting all the numbers inside the figure legend would make the text very hard to read.
What we do not know from this paper is whether or not two or more of the 282 Reviewing Editors sometimes choose to review the same paper. At the eLife website, the following is noted: "The Reviewing editor usually reviews the article him or herself, calling on one or two additional reviewers as needed". Are the additional reviewers always from the outside? If not, how would this change the authors' hypothesis related to the 'effects of an editor serving as one of the reviewers'?
The methods used for the data analysis are explained very well, with the exception of one detail: How did the authors acquire the initial dataset of 9,589 papers? This information is presented in the 'Acknowledgements' section, but could have also been added to the Methods section, for more clarity.
The graphs related to the authors' findings are clear and present interesting information, but I am not sure how the citation data were collected from Scopus for the peer-reviewed papers in eLife and whether or not 'citation windows' were used for the papers depending on the year in which they were published. Essentially the authors are correct in saying that "papers accumulate citations over time, and, as such, papers published earlier tend to have more citations", hence citation windows are used to correct for this. The highest rates of citation (especially in the life sciences and biomedicine) will appear within three to five years following an article's date of publication. For this reason, bibliometricians usually count citations within this three-to-five-year time-frame to determine an article's initial impact. Since the articles used in this study had been "submitted to eLife since June 2012", the authors should have focused on three things: 1) the involvement of a Reviewing Editor as a peer reviewer or not, 2) the number of days between the submitted paper's acceptance and publication, and 3) the papers' citation rate in the three to five years following final publication.

Competing Interests:
No competing interests were disclosed.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.