Keywords
Promotion and tenure, researcher assessment, incentives and rewards
Several open science-promoting initiatives have been proposed to improve the quality of biomedical research, including initiatives for assessing researchers’ open science behaviour as criteria for promotion or tenure. Yet there is limited evidence to judge whether the interventions are effective. This review aimed to summarise the literature, identifying open science practices related to researcher assessment, and map the extent of evidence of existing interventions implemented to assess researchers and research impact.
A scoping review using the Joanna Briggs Institute Scoping Review Methodology was conducted. We included all study types that described any open science practice-promoting initiatives proposed or implemented to assess researchers and research impact, in health sciences, biomedicine, psychology, and economics. Data synthesis was quantitative and descriptive.
Among 18,020 identified documents, 27 articles were selected for analysis. Most of the publications were in the field of health sciences (n = 10) and were indicated as research culture, perspective, commentary, essay, proceedings of a workshop, research article, world view, opinion, research note, editorial, report, or research policy articles (n = 22). The majority of studies proposed multi-modal recommendations (n = 20), targeting several open science practices, to address threats to research rigour and reproducibility. Some of the studies based their proposed recommendations on further evaluation or extension of previous initiatives. Most of the articles (n = 20) did not discuss implementation of their proposed intervention. Of the 27 included articles, 10 were cited in policy documents, with The Leiden Manifesto being the most cited (104 citations).
This review provides an overview of proposals to integrate open science into researcher assessment. The more promising ones need evaluation and, where appropriate, implementation.
Registration: https://osf.io/ty9m7
There have been several initiatives to improve the quality of biomedical research through open science practices. Some approaches have evidence supporting their effectiveness.1 However, sometimes the best available evidence is based on data that lack transparency, have high uncertainty, or unknown reproducibility.2 Adopting Open Data “Badges” is one example of an intervention designed to promote open science (and improve research quality). Badges are simple ways of signalling and incentivizing desirable behaviour.3 Journals can offer badges to authors who meet their criteria for open science practices, thereby signalling that the journal values transparency, and that authors have met the transparency standards for their research.3 Open Data Badges developed by the Center for Open Science (USA) to acknowledge open science practices were associated with an increase in data sharing (from 1.5% to 39%) at the journal Clinical Psychological Science.3 Hundreds of journals used this evidence and endorsed Open Data Badges. However, in a randomized controlled trial, Open Data Badges did not noticeably improve data sharing by researchers (odds ratio 0.9, 95%CI: 0.1 to 9.0).4 In addition to badges for data sharing, other signals of transparency and openness5 are also promoted.
Before encouraging broad scaling and adoption of open science practice-promoting initiatives, including for researcher assessment, it is prudent to evaluate whether proposed and/or implemented open science interventions are effective; are they ‘fit for purpose’? Furthermore, should one assume that pursuing such initiatives will immediately and automatically lead to effective implementation? Could other factors (i.e., quality of research, certainty of its conclusions, and generalizability2) boost or hinder the effects of interventions to promote open science practices? There is a need to evaluate if the hypothesised benefits are met, and if there are unintended consequences when the proposed interventions are implemented.
We aimed to provide an overview of literature identifying criteria for the assessment of researchers (for example for career advancement) that relate to signals and/or practices of open science, such as badges, preprints, registrations, data sharing. We also aimed to map the extent of evidence of existing interventions implemented to assess researchers and research impact (i.e., improvements to society, such as more transparent research and research that is shared to help improve the speed of knowledge).
Scoping reviews are an appropriate research method to provide an overview of a topic or broader domain using a systematic and rigorous method. We used the Joanna Briggs Institute Scoping Review Methodology to conduct this scoping review. Prior to conducting the review, the protocol was registered. We restate our study methods here, in brief, taking large sections of the methods directly from the original protocol. Any discrepancies or deviations from the study protocol have been explained. We followed the PRISMA-ScR statement to guide our reporting of this scoping review.
Inclusion criteria
We included studies from the health sciences, biomedicine, psychology, and economics fields, as our impression is that open science developments are more prevalent in these disciplines, which are also closest to our knowledge and experience. We defined each discipline.6
We included studies that described any open science proposals or practices implemented by researchers, funders, academic institutions, or societies (e.g., The Royal Society) to assess researchers and research impact, and that focused on the following issues: researcher assessment (e.g., the Declaration on Research Assessment, DORA); interventions (e.g., reporting guidelines7,8) or incentives to increase the speed of access to new knowledge (e.g., preprints) and the availability of all knowledge (e.g., registration, open access publication); improving the quality of published research (e.g., reporting guidelines7,8); improving the reproducibility of research (e.g., money-back incentives9); and increasing the sharing of data, code, and materials (e.g., open science badges3,4).
Our focus was open science proposed or implemented signals/practices, as defined by the United Nations Educational, Scientific and Cultural Organization.10 Based on the Framework for Open and Reproducible Research Training (FORRT)11 and the Transparency and Openness Promotion (TOP) guidelines,12 open science topics include: reproducibility [crisis] and/or replication; design, methods, data [code] material transparency; registration and/or preregistration; publishing of research and publication models; conceptual and statistical knowledge; and academic life and culture.
Examples of studies that met our eligibility criteria include: studies evaluating the use of badges to increase data sharing3,4; studies evaluating practices such as pre-registration5; the use of reporting guidelines7,8 and changes in editorial policy13 to improve reporting and compliance in published articles; and new publication models14 or proposals of other models9 as solutions to the reproducibility crisis in research.
We included original research reported in English. We also included policy documents from funders (e.g., Wellcome Trust, National Institutes of Health (NIH), Canadian Institutes of Health Research (CIHR)), academic institutions (e.g., Utrecht University), and researchers (e.g., open science Minerva8,14). We included all article types, including but not limited to conference proceedings, commentaries, and editorials.
Exclusion criteria
We excluded literature not published in English and, due to logistical and resource constraints, studies published before the year 2000.
For our detailed search strategy, please see Extended data.63 An experienced medical information specialist (APA) developed and tested the search strategy in consultation with the project leads (DM, CA, and MG). The search strategy was developed in OVID MEDLINE and peer-reviewed by another librarian.15 We searched a range of databases to achieve wide cross-disciplinary coverage: OVID MEDLINE, OVID EMBASE, OVID APA PsycINFO, EBSCO Cumulative Index to Nursing and Allied Health Literature (CINAHL), and EBSCO EconLit. The search strategies were translated using each database platform’s command language, controlled vocabulary, and appropriate search fields. Medical subject headings (MeSH), EMTREE terms, American Psychological Association thesaurus terms, CINAHL headings, and text words were used to search the concepts and synonyms of “open science”, “data sharing”, “tenure”, and “academic promotion”. Language limits were applied to capture articles in English. We performed all searches on October 3, 2022.
We also searched the grey literature to help identify proposed and/or implemented interventions to promote open science as part of researcher assessment. The grey literature search was limited to sources familiar to the research team. Purposive searching ensured we had the resources to complete the project in a timely manner. Supplemental search sources were: Open Science Centre; Research on Research Institute; METRICS and METRICB (Berlin); RECOGNITION & REWARDS; Open Scholarship Knowledge Base; and DORA. We performed forward citation searching of the included studies using citationchaser,16 Scopus, and Web of Science Core Collection. Additionally, we reviewed the bibliography of included studies. All search strategies are reported using PRISMA-S.17
Due to the volume of literature search results, bibliographic records were identified for screening using the Continuous Active Learning® (CAL®) tool, which uses supervised machine learning to rank titles and abstracts from most to least likely to be of interest. The CAL® tool was chosen due to its prior use by some of our team members.18
The results of the database searches were imported into the CAL® tool, without removing duplicates. The CAL® tool learns from the results of manual screening by reviewers to identify and rank titles and abstracts most likely to meet the inclusion criteria.
For manual screening, a screening form based on the review’s eligibility criteria was prepared to aid reviewers in making consistent judgements on article relevance. To calibrate, two reviewers independently screened the first 100 titles/abstracts using the CAL® tool, and discrepancies were identified. A third reviewer selected a pilot set of 10 titles/abstracts similar to the discrepant abstracts; the two reviewers screened the pilot set and discussed discrepancies. Subsequently, abstract screening was conducted in the CAL® tool by one reviewer.
Several hundred titles/abstracts were screened until the yield of relevant records diminished. After screening 1000 records, we began monitoring each successive batch of 100 records that we screened, and we stopped when the yield from successive batches dipped below 5 relevant records (per 100). After stopping, the results from the title/abstract screening were de-duplicated prior to full-text retrieval. For the screening questions, see Extended data.63
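The CAL® tool itself is proprietary, but the general approach (a supervised ranker retrained on reviewers’ decisions, combined with the batch-yield stopping rule described above) can be illustrated with a minimal sketch, assuming a simple TF-IDF plus logistic-regression ranker. The function and parameter names (prioritised_screening, screen_record, min_yield) are hypothetical and are not part of the CAL® tool.

```python
# Minimal sketch of prioritised (active-learning) screening with a batch-yield stopping rule.
# Assumptions: seed_labels comes from the calibration exercise and contains both relevant (1)
# and irrelevant (0) judgements; screen_record() stands in for a human reviewer's decision.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def prioritised_screening(records, screen_record, seed_labels, batch_size=100, min_yield=5):
    """records: list of title+abstract strings; seed_labels: dict index -> 0/1."""
    vectoriser = TfidfVectorizer(stop_words="english")
    X = vectoriser.fit_transform(records)
    labels = dict(seed_labels)                      # index -> relevance judgement
    while True:
        train_idx = list(labels)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[train_idx], [labels[i] for i in train_idx])
        unscreened = [i for i in range(len(records)) if i not in labels]
        if not unscreened:
            break
        # Rank unscreened records from most to least likely to be relevant.
        scores = model.predict_proba(X[unscreened])[:, 1]
        ranked = [i for _, i in sorted(zip(scores, unscreened), reverse=True)]
        batch_yield = 0
        for i in ranked[:batch_size]:
            labels[i] = screen_record(records[i])   # human decision: 1 = relevant, 0 = not
            batch_yield += labels[i]
        if batch_yield < min_yield:                 # yield has diminished; stop screening
            break
    return [i for i, relevant in labels.items() if relevant == 1]
```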
We developed and piloted data extraction forms prior to data extraction. The extracted data included: (1) Corresponding author; (2) Discipline; (3) Publication year; (4) Journal or platform where the study was published; (5) The proposed and/or implemented interventions to assess researchers or research impact; (6) The stakeholders of the proposed/implemented intervention (e.g., academic institution, funder, publisher, and researcher); (7) Intervention type (single or multi-modal); and (8) Funding information of the study. Details of the forms used are provided in Extended data.63
Our data analysis included both quantitative (i.e., frequencies and percentages) and qualitative (i.e., narrative interpretation) methods. Data collection was conducted by MG, DM, AA, and JYN. As an additional step, MG and AA discussed and verified 20 of the 27 full-text articles (open science interventions proposed/implemented). Discussions focused on how best to categorize whether a proposed intervention was single or multi-modal, and which stakeholders were targeted. Multi-modal interventions targeted several open science practices, although some recommendations can conceptually be linked to each other; single-modal interventions targeted a single open science practice. Stakeholders were those who were actors in conducting the proposed interventions. As a measure of impact, we entered the digital object identifier of each included article into the BMJ Impact Analytics tool to determine whether the article was cited in policy documents.
We created a flow diagram to detail how articles captured in our search were screened. We also calculated frequencies (and percentages), described narratively the characteristics of all data extracted from included documents, and present the results in summary tables. Our data extraction and presentation plan copied relevant text verbatim from the included studies; each extract is marked with quotation marks at its beginning and end. We reasoned this approach was more efficient and would report what the individual authors stated more accurately than any paraphrasing on our part.
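As an illustration of this quantitative synthesis, the minimal sketch below tabulates frequencies and percentages from a toy extraction dataset; the column names and values are hypothetical and are not taken from our extraction forms.

```python
# Minimal sketch: frequency (n) and percentage summaries of extracted study characteristics.
import pandas as pd

# Hypothetical extraction table (illustrative values only).
extraction = pd.DataFrame({
    "discipline": ["health sciences", "biomedicine", "health sciences", "multidisciplinary"],
    "intervention_type": ["multi-modal", "multi-modal", "single-modal", "multi-modal"],
})

for column in ["discipline", "intervention_type"]:
    counts = extraction[column].value_counts()
    summary = pd.DataFrame({"n": counts, "%": (100 * counts / len(extraction)).round(1)})
    print(summary, "\n")
```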
The literature search retrieved 18,020 records (Figure 1). We screened 1512 titles and abstracts until the yield of relevant ones fell below 2 (per 375). Two hundred and eighty-nine records were considered eligible for full-text screening; after duplicates were removed in EndNote, 208 records remained for full-text screening. We identified 21 papers that potentially described open science proposals or practices evaluated and/or implemented to assess researchers and research impact; one record was later excluded during the data extraction phase. A forward citation check of these 21 studies yielded 3901 records. Applying the search terms “research assessment” and “open science and research assessment” resulted in 111 records for title and abstract screening and 19 relevant records for full-text screening, respectively. Of these, 7 reports met our inclusion criteria. In total, we included 27 studies19–45 in our scoping review.
The 27 articles included for epidemiological data extraction were published between 2013 and 2022, with the number of studies describing open science practices for research assessment increasing gradually in more recent years (Table 1 and Table 2 – underlying data).63 The corresponding authors were from 8 countries, with most based in North America (USA, n = 12; Canada, n = 6). Most of the publications were indicated as research culture, perspective, commentary, essay, proceedings of a workshop, research article, world view, opinion, research note, editorial, report, or research policy articles (n = 22). Only three records19,31,33 in our scoping review were empirical research. The majority of the records did not report any funding information (n = 15); of those that mentioned funding, seven records (53%) reported receiving funding and six records (47%) reported receiving no funding.
The majority of the studies proposed recommendations for researcher assessment that were multi-modal (n = 20), targeting several open science practices, although some recommendations can conceptually be linked to each other. To this end, several proposals have been developed, such as the San Francisco Declaration on Research Assessment (DORA),40,46 The Leiden Manifesto for research metrics,42 The Metric Tide,45 and the Hong Kong Principles for Assessing Researchers.37 Some of the studies in our scoping review based their proposed recommendations on further evaluation or extension of previous initiatives. For example, Gagliardi et al.25 identified and prioritized 10 measures (grouped through consensus into five measures of high importance and five of moderate importance) for assessing research quality and impact (DORA-compliant measures). Hatch et al.27 formulated “practical guidance for research institutions looking to embrace reform … for driving institutional change at a meeting convened by DORA and the Howard Hughes Medical Institute”. Another study, by Schmidt et al.,34 discussed the implementation of DORA and anecdotally described practices used for hiring junior faculty members in their department. Additionally, Hatch et al.35 presented “a tool called SPACE that institutions can use to gauge and develop their ability to support new approaches to assessment, … built on design principles released by DORA, and a five-step approach to responsible evaluation called SCOPE47”. The authors claim the tool “can be adapted to different institutional contexts, geographies, and stages of readiness for reform, thus enabling universities to take stock of the internal constraints and capabilities that are likely to impact their capacity to reform how they assess research and researchers”.35 It proposes criteria such as “diversifying career paths, focusing on research quality and academic leadership, and stimulating open science”. The relationship between SPACE and SCOPE is not immediately obvious.
Other multi-modal recommendations did not mention any of the previously developed statements described above. One study20 proposed ensuring data sharing and registration/pre-registration, and not assessing researchers based on traditional metrics. Another study,21 authored by “The Committee on Developing a Toolkit for Fostering Open Science Practices”, drafted elements for a toolkit, discussing topics on communicating the benefits of open science (e.g., compiling a database of research articles that describe the ways open science is beneficial, and resources to signal an organization’s interest in open science activities). Another24 anecdotally argued that faculty promotion must assess reproducibility through new assessment practices that ensure “researchers use proper experimental design and analysis”, and that institutions must incentivize data sharing and transparency. A further author26 discussed several ways to reward research transparency at various stages of a researcher’s career, encompassing the hiring process, promotion and tenure evaluations, and “society and organization honors and awards”.
Aiming to develop and evaluate “more appropriate, non-traditional indicators of research” that “facilitate changes in the evaluation practices for rewarding researchers”, Rice and colleagues33 conducted a cross-sectional study “to determine the presence of a set of pre-specified traditional and non-traditional criteria used to assess scientists for promotion and tenure in faculties of biomedical sciences” among 170 randomly selected universities worldwide. Moher et al.,36 “completed a selective literature review of 22 key documents critiquing the current incentive system” prior to convening a 22-member (e.g., academic leaders, funders, and scientists) expert panel workshop asking “how the authors perceived the problems of assessing science and scientists, the unintended consequences of maintaining the status quo for assessing scientists, and details of their proposed solutions. The resulting table was used as a seed for participant discussion”, which “resulted in six principles for assessing scientists and associated research and policy implications”.
The majority of included studies were in the field of health sciences (n = 10), followed by biomedicine and/or health sciences and multidisciplinary studies (n = 7 and n = 6, respectively). One study32 proposed “open access for measuring research assessment in Earth and Natural Sciences”. Several studies targeted their initiatives mainly towards redesigning or modifying researcher assessments via new metrics, measures, methods, and incentives. One study19 designed a new metric for evaluating researchers (the ‘ε-index’), while another23 conceptualized “data-level metrics (DLMs) as indicators of scientific merit related to the production and (re-)use of datasets, … to capture and make data-sharing efforts visible”. Individualized research-philosophy statements, annotated curricula vitae (CVs), and “the use of a formal evaluative system that captures behaviors that support reproducibility” were also proposed22 as methods to improve the transparency of scholarly reporting. The PQRST index (research that is productive, high-quality, reproducible, shareable, and translatable) was proposed by Ioannidis et al.28 for appraising and rewarding research. Nicholson et al.30 provided “a brief tutorial about impact metrics and how to use these metrics to document scientific impact, … detailing six steps to documenting the scientific and clinical impact of one’s research. Steps include: establishing an ORCID (Open Researcher and Contributor ID) account, creating research profiles on academic platforms, engaging in broad dissemination activities, harvesting bibliometric and altmetric data, adding bibliometric and altmetric information to one’s curriculum vitae, and submitting one’s promotion and tenure portfolio with confidence”. Lundwall et al.29 discussed how their department modified its incentive structure to support open science and replication work as well as innovation, describing the design of “incentive changes to support faculty in both quantity (i.e., publication counts) and quality (i.e., … the journal impact factors, journal rejection rates, and newly added methods that support sound science) efforts … using a self-rating rubric … to guide faculty to succeed”. Strintzel et al.44 proposed “ways of improving academic CVs for fairer research assessment”. Another study29 proposed “a novel policy to provide incentives for open science … by offering open-source (OS)-endowed professorships”. Fischman et al.41 outlined “an alternative proposal in which trustworthiness and usability of research would complement traditional metrics of scholarly relevance”,41 aimed mainly at institutions.
The research promotion and tenure ecosystem has significant societal ramifications. The policy document by the European Commission43 on the “evaluation of research careers acknowledging open science practices” also speaks to the recognition by international bodies of the need to change current practices. Moher et al.38 highlight the importance of the proposed interventions for open science practices in relation to COVID-19.
The role of stakeholders is evident in a recent publication by Bonn et al.,39 which captured the perspectives of research stakeholders regarding implementing changes over a five-year period. The authors translated their findings into four actions needed for fostering better research, one of which relates to research assessment.
For the studies included in our scoping review, we extracted data on whether any of the proposed interventions were implemented (Table 3 – underlying data).63 Most of the studies (n = 20) did not discuss implementation of their proposed intervention. The remaining studies (n = 7) discussed ‘near’ implementation strategies, such as changes in how junior faculty members are identified and recruited at the author’s department, piloting and an audit-and-feedback process for their proposed open science intervention, and initiatives of various stakeholders for more responsible assessment of higher education (scholarship) for society. Of the 27 articles included in our scoping review, 10 were cited in policy documents, with The Leiden Manifesto being cited most often (n = 104; Table 4 – underlying data).63
We identified 27 proposals, across several disciplines, to integrate open science practices when assessing researchers. Most of these proposals are in the form of reports or journal publications; some are editorials, opinions, narrative reviews, and other publication categories. Few of the proposals have been evaluated, and those that have been evaluated have not been tested with randomized trials or systematic reviews, often considered the highest level of evidence for testing interventions. Medicine, and perhaps other disciplines, has a long history of demonstrating that when promising interventions are subjected to randomized trials, the effect of the intervention dissipates.48,49
Open science is a relatively new movement more commonly endorsed in Europe and the United Kingdom’s (UK) research ecosystem. For example, the UK government has made a strong commitment to embracing the importance of reproducibility by funding the establishment of the UK Reproducibility Network (UKRN).50 Similarly, the European Union’s Horizon program is a major funder of several long-term programs addressing open science issues; for example, a recently funded three-year project on sharing and re-using clinical trial data to maximize impact will train early career researchers to implement data sharing and data re-use.51 In the United States, the Center for Open Science has made substantial contributions to open science, such as the Transparency and Openness Promotion (TOP) guidelines.12 Similarly, there are emerging developments aiming to integrate open science into a reimagined research assessment framework, such as the Higher Education Leadership Initiative for Open Scholarship (HELIOS).52 Finally, the Coalition for Advancing Research Assessment (CoARA) is an exciting initiative that is generating considerable attention. It was not formally included in our review as it is still in the early stages of development. The 2022 agreement, and its associated 10 commitments, is available on the CoARA website and has been signed by more than 350 international organizations. Like HELIOS, CoARA is aimed at senior university administrators able to make decisions and change culture.
From our broad sweep of the literature, it is gratifying to see the emergence of several imaginative proposals to integrate open science practices into researcher assessments. For example, several articles included in our scoping review proposed innovative ways to harmonize academic CVs so that they reward researchers for open science practices, such as data sharing and transparency. Similarly, moving away from traditional research metrics and modifying the incentive structure to reward open science was mentioned in several articles we assessed. There may be some merit in agreeing on a smaller number of core practices for all proposals to work towards evaluating and implementing. Such an approach might provide a larger evidence base for using these practices as part of a researcher’s assessment. Similarly, dissemination and training opportunities focused on a smaller core number of practices might be easier to achieve.
Currently, most researcher assessments are based on traditional quantitative metrics that are of limited usefulness (e.g., the number of publications), not evidence-based (e.g., journal impact factors), and do not incorporate other ways of assessing a researcher’s impact (e.g., registering research prior to its initiation). One measure of researcher impact might be the ability of others to reproduce the methods and/or results of a researcher’s published research. This does not appear possible in the current climate, given the substantive problems with reproducibility across disciplines.6,49 For reproducibility to be examined, it is important that researchers share their data and the analytical code underpinning their results; neither behaviour is common in medicine or other disciplines.53 To improve the use and impact of these and other open science practices, universities and other research organizations might want to integrate data sharing into researcher assessment. This practice is in keeping with what patients have requested.54 Similarly, to make any real sense of reproducibility, it is essential that authors provide a comprehensive and transparent description of what the research team did and found, namely the methods and results. To help researchers better report their research, reporting guidelines have been developed, and for some of them there is evidence that their use is associated with improved quality of reporting.7 Traditional assessments of researchers do not ask about the use of reporting guidelines, even though there is an extensive body of evidence indicating that the quality of research reporting is suboptimal; it is wasteful.55–61
Few proposals for researcher assessment have been evaluated. This may be because few funders have explicit requirements for this type of research. Another possibility is that most universities are not particularly interested in incorporating open science practices into their researcher assessments. In Canada, only 47 organizations have signed the Declaration on Research Assessment; only two are universities. Universities may see little benefit from investing in this type of research. It might be that the Horizon Europe funding program, which has already invested in open science research, and other national funders, such as France’s L’Agence Nationale de la Recherche, will fund research that focuses on the integration of open science into researcher assessment. There has been a recent call for more dialogue between universities and journals to address research misconduct.62 Perhaps something similar between funders, universities, and journals might result in advancements in research assessments.
Our results did not identify proposals for integrating open science into researcher assessment from the Global South. While universities in some countries, such as Singapore (e.g., Nanyang Technological University), have implemented open science practices, there is little data about the uptake of open science in the Global South (e.g., African Open Science Platform). It could be that those responsible for researcher assessment in some countries feel compelled to use traditional metrics, allowing them to compare themselves against other regional, national, or international institutions in the university rankings game. It is also possible that proposals that promote qualitative metrics as part of researcher assessment cannot be used in some parts of the Global South. There are typically insufficient numbers of people trained in qualitative methods regardless of jurisdiction. There are likely unique issues that warrant hearing from the Global South. That said, there may be enough proposals from the Global North to advocate for a pause on any new ones. It may be more productive to evaluate some of the more promising ones, perhaps focusing on a small core set of open science practices.
While the United Nations Educational, Scientific and Cultural Organization recommendation on open science is commendable and includes the importance of research integrity and inclusivity, we found none of the included proposals make any explicit mention of equity, diversity, and inclusion (EDI) as part of researcher assessment. For example, how diverse are research teams and do they integrate EDI as part of their research program? Do research team leaders provide EDI training to early career members of their research teams? We think that EDI principles fit nicely with UNESCO’s recommendation on open science. As such, current proposals should reconsider how they can integrate EDI as part of open science when assessing researchers.
One issue not discussed is how the introduction of open science practices into researcher assessment will affect faculty members moving from an institution that has integrated such practices into its assessments to an institution that has not yet developed an open science matrix for assessing researchers. How competitive will these researchers be when seeking a new position? These issues, and others, are beyond the scope of this review. Nevertheless, they likely need a deeper discussion.
Our research is not without limitations. It is possible that we missed eligible studies. Given our resource limitations, we only searched for eligible studies from 2000 to the present; relevant publications prior to this date might have influenced our results. Similarly, we only included papers in English; relevant work likely exists in other languages and may influence the current results. That said, we believe our search strategy was broad and comprehensive. While we used an extensive sample of papers to test and inform the artificial intelligence (AI) screening, it is possible that the AI approach was imperfect. From a face validity perspective, the papers we were familiar with were screened in by the CAL® tool. We did not search or examine all disciplines; we focused on disciplines in which we have some expertise and credibility. That said, we think the results are relevant to other disciplines. Finally, the open science proposals table (Table 3 – underlying data)63 was not precisely described in our protocol, and the categorization of single-modal and multi-modal practices was somewhat subjective in its development. Still, the table provides a complete overview of the practices we retrieved, and the distinction between single-modal and multi-modal may be a starting point for further investigation into the complexity of these assessment criteria.
We aspire for this scoping review to promote discussion across different communities. While there are many proposals to integrate open science into researcher assessment, few of these have been implemented and evaluated. Strong evaluation methods include randomized stepped wedge designs and cluster randomized trials, among other approaches. These methodologies will require teamwork. As these evaluations increase in the coming years, it is likely important to consider how individual efforts can be used to provide a cumulative evidence base of their effects. Such meta-analytical approaches can help provide a ‘living’ view of the cumulative evidence as to the benefits and challenges of using open science as part of researcher assessment.
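As an illustration of how such cumulative evidence could be pooled, the sketch below applies a standard DerSimonian-Laird random-effects model to entirely hypothetical effect estimates (e.g., log odds ratios from future cluster randomized trials of assessment interventions); it is not an analysis of any data from this review.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of effect estimates.
import numpy as np

def random_effects_pool(estimates, standard_errors):
    y, se = np.asarray(estimates, float), np.asarray(standard_errors, float)
    w = 1.0 / se**2                                   # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance (DL estimator)
    w_star = 1.0 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = 1.0 / np.sqrt(np.sum(w_star))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical log odds ratios and standard errors from three evaluations.
print(random_effects_pool([0.10, 0.35, -0.05], [0.20, 0.25, 0.15]))
```

Re-running such a pooling step as each new evaluation reports would provide the ‘living’ cumulative summary described above.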
OSF: Open science interventions proposed or implemented to assess researcher impact, https://doi.org/10.17605/OSF.IO/GXY5Z. 63
This project contains the following underlying data:
OSF: Open science interventions proposed or implemented to assess researcher impact, https://doi.org/10.17605/OSF.IO/GXY5Z. 63
This project contains the following extended data:
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
Are the rationale for, and objectives of, the Systematic Review clearly stated?
Yes
Are sufficient details of the methods and analysis provided to allow replication by others?
Partly
Is the statistical analysis and its interpretation appropriate?
Not applicable
Are the conclusions drawn adequately supported by the results presented in the review?
Partly
If this is a Living Systematic Review, is the ‘living’ method appropriate and is the search schedule clearly defined and justified? (‘Living Systematic Review’ or a variation of this term should be included in the title.)
Not applicable
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: research evaluation, science policy, scientometrics
Are the rationale for, and objectives of, the Systematic Review clearly stated?
Partly
Are sufficient details of the methods and analysis provided to allow replication by others?
Yes
Is the statistical analysis and its interpretation appropriate?
Yes
Are the conclusions drawn adequately supported by the results presented in the review?
Partly
References
1. Claesen A, Gomes S, Tuerlinckx F, Vanpaemel W: Comparing dream to reality: an assessment of adherence of the first generation of preregistered studies. R Soc Open Sci. 2021; 8(10): 211037.

Competing Interests: While I believe that I am objective and unbiased, I certainly critiqued the authors' description of activities such as open science badges, which is an initiative that we support: https://cos.io/badges. It is difficult for me to confirm that my opinion was not affected by that fact and I would rather leave it to the reader to assess my critique and affiliation and decide for themselves if it was unbiased and objective.
Reviewer Expertise: Open science, policy, meta-science, ecology, citizen science
Are the rationale for, and objectives of, the Systematic Review clearly stated?
Yes
Are sufficient details of the methods and analysis provided to allow replication by others?
Yes
Is the statistical analysis and its interpretation appropriate?
Not applicable
Are the conclusions drawn adequately supported by the results presented in the review?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: meta-research, open science, neuroimaging, neurology, mental health, statistics