Keywords
Altmetrics, bibliometrics, publication impact
The EMPIRE (EMpirical Publication Impact and Reach Evaluation) Index is a value-based, multi-component metric framework to assess the impact of medical publications in terms of relevance to different stakeholders. It comprises three component scores (social, scholarly and societal impact), each incorporating related altmetrics that indicate a different aspect of engagement with the publication. Here, we present an exploratory investigation of whether publication types or disease indications influence EMPIRE Index scores.
Article-level metrics were extracted and EMPIRE Index scores were calculated for 5825 journal articles published from 1 May 2017 to 1 May 2018, representing 12 disease indications (chosen to reflect a wide variety of common and rare diseases with a variety of aetiologies) and five publication types.
There were significant differences in scores between article types and disease indications. Median (95% CI) social and scholarly impact scores were 1.2 (0.3–1.6) and 4.8 (3.1–6.6), respectively, for phase 3 clinical trials, and 0.3 (0.3–0.4) and 2.3 (1.9–2.6), respectively, for observational studies. Social and scholarly impact scores were highest for multiple sclerosis publications and lowest for non-small cell lung cancer publications. Systematic reviews achieved greater impact than regular reviews. Median trends in the social impact of different disease areas matched the level of public interest as assessed through Google search interest. Although most articles did not register societal impact, mean societal impact scores were highest for migraine publications.
The EMPIRE Index successfully identified differences in impact by disease area and publication type, which supports the notion that the impact of each publication needs to be evaluated in the context of these factors, and potentially others. These findings should be considered when using the EMPIRE Index to assess publication impact.
Article-level measures of publication impact (ALMs), including alternative metrics, play a crucial role in providing a more comprehensive understanding of the impact of research publications compared with traditional bibliometrics like the journal impact factor (JIF) and CiteScore. While JIF and similar metrics are useful for identifying journals with high readership, they fall short in assessing the quality and impact of individual research articles1–3. This limitation arises from the fact that JIF primarily measures the average number of citations to articles in a journal, which may not reflect the significance or reach of a specific article within that journal4. ALMs offer a more nuanced view by considering various factors beyond citations, such as social media mentions, downloads, and online discussions, providing insights into the broader impact of research across different audiences and contexts2.
To date, most uses of alternative metrics to assess medical publications have looked at their correlation with conventional metrics, especially citations. For example, the correlation between alternative and traditional bibliometrics has been examined in publications in total joint arthroplasty research5, neurology6, COVID-197, critical care medicine8 and medical professionalism9. Other uses of alternative metrics include the creation of an ‘altmetric impact factor’ for medical journals10. They have also been used to identify trending topics in breast cancer11, to explore the impact of multidisciplinary research groups12, and to assess the convergence of altmetrics and societal impact assessments using data from the UK Research Excellence Framework13. These studies have used either individual ALMs (Tweets, Mendeley saves, news mentions etc.) or the Altmetric Attention Score (AAS), an aggregated, weighted score that unifies multiple ALMs into a single metric. The AAS provides a convenient summary of multiple article-level metrics, but it excludes some key metrics (Mendeley saves and citations), the methodology by which it is calculated is not fully transparent, and it is not open to adaptation by users depending on their intended use.
We have previously described a novel and holistic approach to summarising ALMs, the EMPIRE (EMpirical Publication Impact and Reach Evaluation) Index, which uses weighted article-level metrics to assess the impact of medical publications in terms of relevance to different stakeholders14. The EMPIRE Index provides component scores for scholarly, social and societal impact, as well as a total impact score and predictive reach metrics (Figure 1). The scholarly component correlates weakly with CiteScore, while the social score correlates closely with the AAS. The societal component, comprising citations in guidelines, policy documents and patents, represents a distinct score.
HCP, healthcare provider; NEJM, New England Journal of Medicine.
The EMPIRE Index’s ability to correlate scholarly components with CiteScore and social components with AAS showcases its alignment with established metrics while providing a more nuanced evaluation of impact14. This nuanced evaluation is particularly valuable in the medical field, where research impact transcends academic citations to societal influence and practical applications in healthcare settings.
For researchers, the EMPIRE Index provides a detailed evaluation of publication value by offering component scores for scholarly, social, and societal impact, in addition to total impact scores and predictive reach metrics14. This multifaceted assessment allows researchers to gain insights into the broader impact of their work beyond traditional citation-based metrics, enabling them to comprehend the reach and significance of their research across various dimensions.
Policymakers can derive value from the EMPIRE Index as it includes a societal impact component that takes into account citations in guidelines, policy documents, and patents14. This feature is crucial for policymakers as it offers valuable insights into how research publications influence policy decisions and contribute to the development of guidelines and patents, assisting policymakers in making informed decisions based on the societal relevance and impact of research findings.
Healthcare providers can utilize the EMPIRE Index to evaluate the impact of medical publications in terms of societal implications and reach, allowing them to better grasp the practical implications of research in guiding clinical practice, policy development, and healthcare innovations14. By considering societal impact metrics, healthcare providers can improve patient care and outcomes by staying abreast of impactful research in their field.
The EMPIRE Index uses randomized controlled clinical trials published in the NEJM as a benchmark; these studies are selected by the journal editors as likely to be of the highest impact for the practice of medicine. However, this standard cannot be uniformly applied to all types of studies across different disease areas and development stages of a therapy. It is widely recognised that publication metrics vary by discipline, and to facilitate the comparison of publication impact across different fields, field-normalised citation impacts are frequently calculated15. Although evidence is limited, some research suggests that ALMs can also vary by publication type. For example, review articles in pharmacology journals can receive twice as many citations as original articles16, and differences between quantitative and qualitative articles published in the BMJ have been found for altmetric scores, but not citations17.
Therefore, the nuances of each field and publication type should be considered in order to understand what constitutes a typical score. This additional context is an important consideration as it allows users to compare scores with a relevant frame of reference and so to accurately interpret and utilise publication metrics and to derive meaningful insights.
It is likely that EMPIRE Index scores vary by disease area and by publication type. However, the scale and nature of these variations are unknown, which complicates efforts to compare scores of individual publications. This in turn limits the utility of the scale to identify ‘high impact’ publications, since what counts as an atypically high score will vary across therapy areas and publication types.
In this brief report we present an exploratory investigation of the interaction between disease indications and publication types and typical EMPIRE Index scores. We sought to explore the variations that may exist in typical (i.e. average and median) EMPIRE Index scores across different therapy areas and publication types, and to explore the magnitude of these variations. This will allow the metrics of individual publications to be placed in a richer context of potentially comparable publications to facilitate interpretation of individual publication scores.
We also sought to provide additional context for the observed typical scores in two ways. CiteScore is a journal-level metric based on the average number of citations per paper in that journal18 and is an indicator of journal prominence in the academic community. Journal CiteScore is modestly correlated with publication EMPIRE Index scores, especially the scholarly impact component14. We therefore compared mean EMPIRE Index scores across therapy areas and publication types with the mean CiteScores of the journals in which the papers were published, to see if journal prominence could explain any variations seen.
Google search trends indicate the level of search engine activity around a topic and have been used as a measure of the prominence of health topics among the general public19. We therefore compared EMPIRE scores with search trends for each of the therapy areas we examined, to see if public interest in a therapy area was related to engagement with publications measured via article-level metrics, especially with regard to the EMPIRE Index social impact component.
This exploratory study investigated 12 disease indications, purposefully chosen to reflect a variety of common and rare diseases with a variety of aetiologies. Six of these were rare diseases, selected as a convenience sample of disease indications with which the authors were most familiar. No formal statistical power analysis was undertaken. However, we aimed for disease samples from approximately 1000 publications, which would enable publication type sub-analyses. The six rare disease samples were, therefore, pooled.
Relevant publications were identified for each disease by the appearance of the disease name in the publication title. We limited the search period to items with publication dates between 1 May 2017 and 1 May 2018, to give sufficient time for metrics to accumulate while also minimising the time-dependent variation in metrics.
The searches were conducted on PubMed between 22 June 2020 and 3 July 2020, using the following search string:
We chose five publication types that are commonly used in medical research and communication: clinical trials, phase 3 clinical trials, observational studies, reviews, and systematic reviews. Definitions of each publication type are provided by US National Library of Medicine20. Clinical trials are experimental studies that test the efficacy and safety of new interventions or compare different interventions in human participants. Phase 3 clinical trials are the last stage of clinical research of new drug treatments and are typically large scale and provide evidence intended to guide treatment practice. Observational studies are non-experimental studies that examine the associations between exposures and outcomes in natural settings, without manipulating any variables. Reviews are articles that summarize and synthesize the existing literature on a specific topic, providing an overview of the current state of knowledge and identifying research gaps. Systematic reviews are a type of review that follow a rigorous and predefined methodology to search, select, appraise, and synthesize the evidence from multiple primary studies on a specific question.
For each disease, we conducted secondary searches for each publication type using PubMed tags for those of interest (i.e. the search string above and either “review”, "systematic review", "clinical trial, phase iii", “clinical trial” or "observational study"). PubMed publication types are metadata supplied by PubMed and derive originally from publisher submissions.
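As an illustration of how such type-restricted queries can be composed (the disease term and date range below are placeholders, not the study's actual search strings; `[Title]` and `[Publication Type]` are standard PubMed field tags):

```python
# Illustrative only: compose PubMed queries that combine a title search for a
# disease term with each publication-type tag of interest.
base = ('"multiple sclerosis"[Title] AND '
        '("2017/05/01"[Date - Publication] : "2018/05/01"[Date - Publication])')

pub_types = ["review", "systematic review", "clinical trial, phase iii",
             "clinical trial", "observational study"]

# One secondary query per publication type, as described above
queries = {pt: f'({base}) AND "{pt}"[Publication Type]' for pt in pub_types}
```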
PubMed IDs were entered into the Altmetric Explorer and PlumX dashboards and ALMs downloaded over the period 23 June 2020 to 11 July 2020. ALMs were assumed to be zero for any publication for which Altmetric Explorer did not return a result. We also obtained the journal CiteScore for all publications18.
EMPIRE Index scores were calculated for all publications as described previously14. Briefly, selected ALMs that compose the EMPIRE Index were weighted and aggregated to form three component scores (social impact, scholarly impact and societal impact), which were then summed to form a total impact score.
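The aggregation step can be sketched as follows. The metric names and weights below are illustrative placeholders only, not the published EMPIRE Index weights, which are defined in the original methodology paper (reference 14):

```python
# Illustrative weights only -- NOT the published EMPIRE Index weights
WEIGHTS = {
    "social":    {"news_mentions": 10.0, "tweets": 0.5},
    "scholarly": {"citations": 1.0, "mendeley_saves": 0.2},
    "societal":  {"guideline_citations": 20.0, "policy_citations": 15.0,
                  "patent_citations": 10.0},
}

def empire_scores(alms: dict) -> dict:
    """Aggregate weighted article-level metrics into component and total scores."""
    scores = {
        component: sum(weight * alms.get(metric, 0)  # missing metrics count as zero
                       for metric, weight in metrics.items())
        for component, metrics in WEIGHTS.items()
    }
    scores["total"] = sum(scores.values())
    return scores

scores = empire_scores({"citations": 12, "tweets": 40, "news_mentions": 2})
```

Treating absent metrics as zero mirrors the handling of publications for which Altmetric Explorer returned no result.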
Each disease area comprised a different mixture of publication types, which we expected could confound the analysis; multivariate analysis on such a heterogeneous, non-normal and zero-inflated data set is problematic. Therefore, we opted to create standardised samples through random sampling.
A sample was created for each disease area with a standardised mix of publication types chosen to maximise the total number of publications retained (the standardised publication types [SPT] set). First, the two least common publication types (phase 3 clinical trials and systematic reviews) were excluded because of the high variation between disease areas and because they are largely subsets of other publication types (clinical trials and reviews, respectively). Although the observational studies publication type was only slightly more common than systematic reviews, it was retained as it was considered to be functionally very different from clinical trials and reviews. The proportions of each of the remaining three publication types were calculated for each disease set, as well as for the overall set. Publications were then trimmed from each disease set by random sampling, as needed, to match the proportions in the overall set. The trimmed publication sets formed the SPT set.
Similarly, each publication type comprised a different mix of diseases. A standardised disease areas (SDA) set was created by random sampling using a similar approach that ensured each publication type included the same mix of diseases, while maximising the total number of publications retained.
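A minimal sketch of this proportion-matching step, assuming a simple round-to-nearest scheme (the function name, rounding and proportions are our own illustration; the study's exact procedure is described in the supplementary material):

```python
import random

def match_proportions(pubs_by_type, target_props, seed=0):
    """Randomly trim each group so retained counts match target proportions,
    keeping the total number of publications as large as possible."""
    rng = random.Random(seed)
    # Largest total N such that round(N * p) publications exist in every group
    n_total = min(int(len(pubs) / target_props[t])
                  for t, pubs in pubs_by_type.items() if target_props[t] > 0)
    return {t: rng.sample(pubs, round(n_total * target_props[t]))
            for t, pubs in pubs_by_type.items()}

# Hypothetical counts and overall-set proportions
pubs = {"clinical trial": list(range(60)),
        "review": list(range(30)),
        "observational": list(range(10))}
overall = {"clinical trial": 0.5, "review": 0.25, "observational": 0.25}
matched = match_proportions(pubs, overall)
```

Here the smallest group (observational studies, at 25% of the target mix) limits the total: 40 publications are retained, split 20/10/10 to match the overall proportions.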
We downloaded weekly Google Trends data on relative interest over time for the period of interest for these diseases (May 1 2017 to May 1 2018). Google Trends measures the relative frequency of searches for a given term on Google, normalized by the total number of searches in a given time and region. A score of 100 indicates the maximum interest in any week over the search period. We used each therapy area as the target term (e.g. asthma, multiple sclerosis) adding each to a comparative dataset so that the interest in each term is expressed relative to the maximum interest across any of the search terms of interest. The year averages presented here are expressed relative to that maximum score.
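The year-average calculation can be illustrated with hypothetical weekly values on the shared comparative scale (100 = the single highest-interest week across all terms in the period):

```python
import pandas as pd

# Hypothetical weekly comparative Google Trends scores for two terms;
# 100 marks the highest-interest week across all terms in the period.
weekly = pd.DataFrame({
    "asthma":   [40, 55, 100, 60],
    "migraine": [20, 25, 30, 25],
})

# Year average per term, still expressed relative to that overall maximum week
year_avg = weekly.mean()
```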
As these analyses were exploratory and primarily intended to assess the magnitude of impact scores observed for different publication types and disease areas, we primarily provide descriptive statistics, supported by some statistical analysis of comparative differences. Intra-group differences were assessed using the Kruskal–Wallis one-way analysis of variance, a non-parametric test of whether samples originate from the same distribution (a significant result indicates that the population median of at least one group differs from that of at least one other group).
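Applied as above, the test might look like the following (the scores are invented for illustration):

```python
from scipy.stats import kruskal

# Hypothetical total impact scores for three publication types
phase3        = [12.1, 8.4, 15.0, 9.9, 22.3]
reviews       = [5.2, 3.1, 7.8, 4.4, 6.0]
observational = [1.0, 0.5, 2.2, 1.8, 0.9]

h_stat, p_value = kruskal(phase3, reviews, observational)
# A significant result says only that at least one group's population median
# differs from another's; it does not identify which groups differ.
```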
Exploratory analyses were conducted using a two-part modelling approach, chosen to account for the zero-inflated and skewed nature of our impact measures. The model comprised a logistic regression to predict the probability of a publication having any impact (non-zero impact), and a generalized linear model (GLM) with a log-link function to analyse the magnitude of impact for publications with non-zero impact. Both components of the model included disease area, publication type, and CiteScore as predictors. Asthma and clinical trials were chosen arbitrarily as the reference categories for disease area and publication type, respectively.
To assess the relationship between public interest in a disease area and publication impact, we conducted correlation analyses at the disease area level. We calculated both simple correlations and partial correlations controlling for average CiteScore, to account for potential confounding effects of publication quality across disease areas.
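The partial correlation can be computed via the residual method, regressing both variables on the covariate and correlating the residuals; the disease-level numbers below are invented for illustration:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical disease-level values: social impact, public interest, mean CiteScore
social    = [2.1, 1.8, 0.9, 1.5, 0.7, 2.4]
interest  = [60, 55, 10, 40, 8, 70]
citescore = [5.0, 4.2, 6.1, 4.8, 5.5, 4.0]
r = partial_corr(social, interest, citescore)
```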
All analyses were performed using Python (version 3.10.5) with the statsmodels library (version 0.14.4). Detailed methodology and code are provided in the supplementary materials21.
In total, 20 577 publications were identified across the 12 disease areas21, of which 5825 (28%) were tagged with one of the publication types of interest (Table 1). Table 1 also shows the Google search interest for each of these diseases.
Google search interest is the average weekly interest across the search period and is a relative score 0–100 where 100 is the maximum score for any disease in any individual week.
The numbers of publications retained in the SDA set used for publication type comparisons (i.e. with the same disease indication composition for each publication type) are shown in Table 2.
Median EMPIRE Index scores and CiteScores for each disease in the SDA set are shown in Figure 2 and Table 3. Mean EMPIRE Index scores, shown in Figure 3, broadly reflect the median scores. Statistical analysis indicated that there was some significant variation in the medians of each component as well as the total impact score and journal CiteScore. In general, the ranking of publication type is relatively consistent across different types of impact. Notably, phase 3 clinical trials had the highest median and mean scores, while observational studies had the lowest. Systematic reviews had higher impact than reviews. Most articles across all publication types had no societal impact, and significant differences in societal impact were driven by outliers. Of note, eight of the ten publications with the highest societal impact were clinical trials, and six of those were in non-small cell lung cancer (NSCLC).
CI, confidence interval.
The interactive version (online only, accessible here: https://s3.eu-west-2.amazonaws.com/ox.em/webflow/p29ieu21/chart1.html) also shows mean EMPIRE Index scores for each disease by publication type (full set).
Results from the mixed model and regression analysis are presented in the supplemental material21. Compared with non-phase 3 clinical trials, systematic reviews and phase 3 publications had higher social, scholarly, societal and total impact, although this was not consistent across the logistic regression and linear models. In the logistic regression, systematic reviews consistently showed higher odds of having any impact across all impact types. Phase 3 clinical trials showed higher impact in the GLM analysis for scholarly and total impact, but this was not consistent across other impact types. In the GLM, observational studies had lower social, scholarly and total impact.
The numbers of publications retained in the SPT set used for disease comparisons (i.e. with the same publication type composition for each disease indication) are shown in Table 4.
Median EMPIRE Index scores and journal CiteScores for each disease in the SPT set are shown in Figure 4 and Table 5. Kruskal–Wallis testing indicated at least one significant pairwise difference in the total scores, each component score and journal CiteScore. Migraine and multiple sclerosis (MS) had the highest impact across social and scholarly component scores as well as the total impact score, while NSCLC and psoriasis had the lowest. Most articles across all diseases had no societal impact, with significant differences in societal impact driven by outliers. The eight publications with the highest societal impact were all important clinical outcomes trials (three in type 2 diabetes, three in NSCLC and one each in migraine and asthma).
MS, multiple sclerosis; NSCLC, non-small cell lung cancer; T2D, type 2 diabetes.
Mean EMPIRE Index scores for each disease in the SPT set are shown in Figure 5. The interactive version of Figure 5 (online publication only) also shows the mean EMPIRE Index scores by disease for each publication type (full data set). Mean scores do not show clear trends for differences between disease indications, although societal impact appears to be lower for asthma and MS, and higher for migraine than other diseases. The high societal impact for migraine was driven by review articles; 16 of the 23 migraine articles with societal impact scores above zero were review articles. The scholarly impact for rare diseases appears to be higher than for other disease areas, albeit with low confidence owing to small numbers of publications included.
The interactive version (online only, accessible here: https://s3.eu-west-2.amazonaws.com/ox.em/webflow/p29ieu21/chart2.html) also shows mean EMPIRE Index scores for each disease by publication type (full set). MS, multiple sclerosis; NSCLC, non-small cell lung cancer; T2D, type 2 diabetes.
Results from the mixed model and regression analysis are presented in the supplemental material21. Compared with the reference indication (asthma), other disease areas tended to have a lower probability of any total impact but, among publications with at least some impact, the average impact was higher with other diseases compared with asthma. This pattern was mostly consistent across the impact subscales (social, scholarly and societal).
Google search interest (public interest) did not correlate with any of the impact scores in the simple correlation. However, adjusted for CiteScore, the correlation between social impact and public interest was moderately strong and marginally significant (r=0.69, p=0.0894).
This analysis found that typical EMPIRE Index scores vary across both disease indications and publication types. These results provide valuable context for interpreting the EMPIRE Index scores of individual publications, and publication metric findings in general. For example, they can help determine whether a particular publication has notably high (or low) metrics.
We found considerable differences between disease areas, which broadly reflected public interest in the disease as assessed through Google search interest (Table 1). The variations in impact across disease areas highlight the complex nature of research dissemination and engagement. While publications in areas other than asthma tended to have lower odds of any impact, those that did have impact often showed higher magnitudes. This suggests that certain disease areas may have fewer but more impactful publications, possibly due to factors such as disease prevalence, research funding, or public health priorities.
We found that the three diseases with the highest median EMPIRE Index scores, especially social impact, were migraine, MS and asthma (they also tended to have the greatest effect sizes in the model). These also had the highest public interest. These differences were not observed in journal CiteScores, meaning that the disease areas with higher EMPIRE Index impact were not necessarily published in ‘high impact’ journals. NSCLC had low public interest (‘lung cancer’ as a general term was higher, but still lower than any of the other five major disease areas examined). Publications in NSCLC also had low median total impact scores, particularly in terms of social impact scores, despite being published in journals with higher median CiteScores. Of the three impact scores, only social impact correlated meaningfully with public interest. Overall, this suggests that publications in some disease areas attract higher social impact scores (driven by citations in news articles and social media) because these disease areas are of greater interest to the general public. Although this suggests distinct differences between diseases in terms of publication impact, it should be noted that the period of interest was only a single year. The findings could therefore have been influenced by the completion of important clinical studies, which can vary from year to year across disease areas.
A clear picture emerges for publication types, with phase 3 trials demonstrating much higher metrics than other types. Phase 3 clinical trials, the last stage of clinical research of new drug treatments, are typically large scale and provide evidence intended to guide treatment practice22. The higher impact observed for this publication type likely relates to higher public interest, higher scholarly interest, and a greater likelihood of citations in guidelines and policy documents. Systematic reviews had higher impact than general reviews; interestingly, this was despite being published in journals with similar median CiteScores, and likely reflects the rigorous, predefined methodology with which systematic reviews synthesise evidence. Observational studies had the lowest impact, suggesting observational analyses are still generally regarded as being of lower interest, despite the fact that they are increasingly regarded as valuable complementary research to interventional studies23. These findings from simple comparisons were also supported by results from the exploratory model.
In general, across both publication types and disease indications, median scores were higher for scholarly impact than for social or societal impact, while mean and maximal scores were broadly similar (or lower). This suggests that score distribution is more skewed for social and societal impact, with many papers generating little interest despite some scholarly impact.
A key strength of this study is the use of an automated approach to identify a large pool of publications for analysis. However, the automated process used depends on the reliability of the underlying data. For example, disease areas were identified through a PubMed search on article titles, which may have excluded some relevant articles or included irrelevant ones. The PubMed search engine uses automatic term mapping, which usually makes the search more inclusive but can introduce inconsistencies24. Publication types were identified by metadata tags, but these can be inconsistently applied or missing. Tagging can also result in duplication; for example, some phase 3 clinical trial publications in our sample were also classified as clinical trials.
A further limitation of this study is that it only includes publications from a single year (May 2017 to May 2018), which may not reflect the long-term impact of publications over time. We acknowledge that some of the publications may have been published towards the end of the period, which could affect the relationship with Google Trends search data. Moreover, the EMPIRE Index relies on data sources that may have incomplete or inaccurate information, such as missing citations, delayed updates, or misclassified publications. Furthermore, the definition or interpretation of publication impact varies based on the stakeholder, intended audience and channel of dissemination, so other aspects, such as clinical utility, policy influence, or educational value, may require more qualitative or specific indicators. Therefore, the findings of this study should be complemented by other methods of publication evaluation. Despite these potential limitations, it is important to remember that the original aim of the EMPIRE Index was to give users the flexibility to define publication impact in terms of which individual ALMs to emphasise through greater weighting, thereby customising the measurement to the intended, pre-defined objectives.
The findings of this study have several implications for publication planning and evaluation in academia and in the pharmaceutical industry. First, they highlight the importance of tailoring communication strategies to different disease areas and publication types, as different audiences may have different preferences and expectations for the content and format of publications. For example, publications on rare diseases may benefit more from sharing on social media or patient engagement, while publications on oncology may benefit from explication of study results in clinical contexts.
Second, they demonstrate the value of using a multidimensional index that captures various aspects of publication impact, rather than relying on a single metric such as citations or Altmetric Attention Score. The EMPIRE Index provides a comprehensive and balanced view of how publications perform across scholarly, social and societal domains, and can help identify strengths and weaknesses of different publications and inform future improvement.
Third, they suggest that publication impact is not solely determined by the quality and novelty of the research, but also by the relevance and timeliness of the topic, the appropriateness and effectiveness of the dissemination channels, and the engagement and influence of the stakeholders. Therefore, publication planning and evaluation should consider not only the scientific merit of the publication, but also the context and the audience of the publication.
In conclusion, the EMPIRE Index successfully identified differences in impact by disease indication and publication type. This supports the notion that there is no universal gold standard metric for publications, and instead the impact of each publication needs to be evaluated in the context of the type of publication, disease area and potentially other factors. These findings should be considered when using the EMPIRE Index to assess publication impact.
Figshare: EMPIRE Index disease and publication type analysis. https://doi.org/10.6084/m9.figshare.17072435.v221
This project contains the following underlying data:
SMA metrics unlinked 11Jul20.xlsx
Psoriasis metrics unlinked 11Jul20.xlsx
NSCLC metrics unlinked 5Jul20.xlsx
NET metrics unlinked 11Jul20.xlsx
NASH metrics unlinked 11Jul20.xlsx
MS metrics unlinked 5Jul20.xlsx
Migraine metrics unlinked 5Jul20.xlsx
Google search interest (30Jul21).xlsx
DLBCL metrics unlinked 11Jul20.xlsx
Asthma metrics unlinked 5Jul20.xlsx
TSC metrics unlinked 11Jul20.xlsx
TNBC metrics unlinked 11Jul20.xlsx
T2DM metrics unlinked 5Jul20.xlsx
Mixed mode analysis supplemental.docx
All_data_integrated_7Oct24_deduplicated.csv
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
Version 5 (revision): 30 Oct 24
Version 4 (revision): 16 Sep 24
Version 3 (revision): 10 Mar 23
Version 2 (revision): 12 Apr 22
Version 1: 27 Jan 22