Keywords
Open Science, Data Sharing, Audit, Survey, Facilitators and barriers
Open science refers to the practice of making research outputs openly available for others to use and build upon. Open science can be seen as an alternative to the typically ‘closed’ model of science, which tends to stress concern over intellectual property and controlling access to scientific knowledge and data.1 Benefits of open science include increased transparency, the ability to replicate research, equity in access to research information, reputational gains, and increased chances of publication.2 Recognizing the value of open science, many stakeholders have begun to implement policy mandates to ensure that open science practices are adopted in the community. These include federal roadmaps, organizational policies, and funder policies, such as Canada’s Roadmap to Open Science,3 the second French national plan for open science,4 the United Nations Educational, Scientific and Cultural Organization’s open science policy,5 the European Union’s open science policies,6 and others.
Beyond these stakeholders, academic institutions have an important role in implementing open science.1 Despite this seemingly obvious role, academic institutions have tended not to feature in discussions around implementing open science; indeed, they have previously been criticized for not playing more of a role in addressing issues of research reproducibility,7 which is conceptually related to open science. The reality is that few academic institutions have strong, transparent processes in place to encourage open science. Yet research institutions are uniquely positioned to help define incentives in research and to value open science practices. Shifting researchers’ behavior towards openness will require education and training, as well as a culture shift. Research institutions could provide the environment for this training, ensuring a common understanding of open science and helping to shift culture. If research institutions fail to play their role in implementing open science, there will be downstream consequences for how research is disseminated.
In Canada, a notable exception is the Montreal Neurological Institute-Hospital (henceforth ‘the Neuro’), which has committed to becoming an open science institute. Having made the scientific and cultural decision to become ‘open’ after a significant consultation and buy-in process,8 the Neuro is now positioned to implement open research practices. The Neuro has made a structural decision to run a focused implementation program in which a small set of open science practices is addressed first and additional practices are then added incrementally.
At present the focus is on data sharing, namely making the data underlying the results reported in a given publication publicly available for others to use and build upon, or to verify the work reported. The rationale for choosing data sharing from among the potential open research practices was: (1) few researchers have the skills required to share their data, and most researchers are not trained in data sharing9; (2) data sharing has the potential to lead to novel discovery and enhances the transparency of disseminated research findings; (3) the largest government health funder in Canada, the Canadian Institutes of Health Research, recently announced new policies on data management.10 By adopting data sharing now, the Neuro stands to be ahead of incoming mandates and may offer insight for others to follow. Finally, while some data sharing infrastructure already exists, a significant investment is being made at the federal level to build capacity for Canadian researchers.11,12
This article describes the results of two studies examining data sharing practices at the Neuro. In the first study, we audited all publications produced by Neuro researchers in 2019. In the second study, we surveyed Neuro-based researchers about barriers and facilitators to data sharing. The results will provide the Neuro with a better understanding of barriers and facilitators to data management and sharing, and will identify educational needs related to data sharing that can be addressed to reduce barriers. Further, the audit of publications can be used to benchmark improvements over time and to monitor change.
The study received ethical approval from the Ottawa Health Science Research Network Research Ethics Board (OHSN-REB #20210514-01H).
We used the STROBE guideline to inform our reporting of the audit13 and the CHERRIES guideline to inform reporting of the survey.14
Search strategy
We identified Neuro publications produced in 2019 by searching Web of Science (WoS)15 and using its metadata to capture the Neuro’s output, and by searching two preprint servers, medRxiv16 and bioRxiv.17 The search strategy was developed by trained information specialists and librarians (Sanam Ebrahimzadeh and Alex Amar-Zifkin). For the full search strategy, please see Extended data.
Patient and public involvement
Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research.
Eligibility criteria
We aimed to include all research papers disseminated by researchers based at the Neuro in 2019, within any field. We used the last listed date of publication on each paper to determine whether the publication fell within this timeframe. We included all research publications irrespective of the role of the Neuro-based author (e.g., trainees, graduate and postdoctoral students, early career researchers, and more established researchers) and irrespective of where that author was named on the author byline. We included publications in English only, in traditional peer-reviewed journals as well as on preprint servers.
Screening and extraction
DistillerSR18 was used to manage study records retrieved by the search; this process could also have been accomplished manually, but DistillerSR helped with file organization. We obtained all full-text documents, and two reviewers independently screened these records against our inclusion criteria. We then extracted basic epidemiological information from each included study, including the names of the Neuro-based author(s) and the outlet of publication, and we classified articles by study design and content area. In addition, we extracted information related to data sharing practices: whether the publication contained a data sharing statement, whether data sharing occurred, and, if so, what format and tools were used for this sharing. We also extracted information on a range of other open science practices, as per Table 1.
Data analysis
Once all extracted data were in agreement (i.e., discrepancies between assessors were resolved), the complete dataset for all included articles was exported from DistillerSR into SPSS 28,19 where data were cleaned. We present the total number of included articles and basic descriptive analyses for all extracted items using counts and percentages. We also used Unpaywall20 to check the open access status of the articles captured in our sample, by inserting the DOIs extracted from the included articles into the Unpaywall tool.
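As an illustration of this lookup step, the minimal sketch below queries the public Unpaywall REST API for one DOI. It is not the exact tooling used in the study: the contact email and the example DOI are placeholders, and only the two response fields most relevant to this audit are retained.

```python
# Minimal sketch: look up the open access status of one DOI via Unpaywall.
# The API endpoint is https://api.unpaywall.org/v2/{doi}?email=... ;
# the contact email and example DOI below are placeholders.
import requests

UNPAYWALL_URL = "https://api.unpaywall.org/v2/{doi}"
CONTACT_EMAIL = "researcher@example.org"  # Unpaywall requires a contact email

def open_access_status(doi: str) -> dict:
    """Return the open access status of one article as reported by Unpaywall."""
    resp = requests.get(
        UNPAYWALL_URL.format(doi=doi),
        params={"email": CONTACT_EMAIL},
        timeout=30,
    )
    resp.raise_for_status()
    record = resp.json()
    return {
        "doi": doi,
        "is_oa": record.get("is_oa"),          # True if any OA copy was found
        "oa_status": record.get("oa_status"),  # gold / hybrid / bronze / green / closed
    }

if __name__ == "__main__":
    # Hypothetical DOI standing in for those extracted from the included articles.
    print(open_access_status("10.1371/journal.pone.0000000"))
```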
Sampling
The survey was closed (i.e., only open to those we invited) and was administered using SurveyMonkey software.21
Survey items
After providing informed consent, participants were presented with a 14-item survey (see Extended data). The survey was custom-built but draws on items previously reported by Van Panhuis.22 We presented participants with a series of questions regarding their willingness to share data; these were developed using the 14 domains of the Theoretical Domains Framework (TDF) to structure the items pertaining to barriers and facilitators to data management and actual data sharing. The TDF is an integrated theoretical framework created to better understand health professional behavior.23 Participants were asked to indicate their level of agreement with each statement on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The Neuro ‘Open Science Grassroots committee’, a group of Neuro-based researchers targeting improvements in open science at the institution and beyond,24 reviewed the survey and provided feedback, which we incorporated. The survey was also piloted for clarity by three researchers prior to dissemination.
Survey recruitment
We included all of the Neuro’s currently employed graduate students, postdoctoral candidates, research support staff, and independent investigators. The list of researchers was provided to us by the Neuro’s human resources department. Participants were invited to complete the survey by the Director of the Neuro, via email, using a standard recruitment script, and were asked to complete informed consent online. The consent form described the aims of the study, specified that data collected would be anonymous, and detailed our data management plan, which includes making data openly available. Completion of the survey was taken as implied consent.
Participants were originally sent the survey on September 20th, 2021, with a standardized reminder sent after one week (September 27th, 2021). Some targeted internal email strategies were also implemented to help maximize the response rate. We closed the survey on November 3rd, 2021.
Data analysis
Data were analyzed using IBM SPSS 28. We report the completion rate for individual items, along with frequencies and percentages, or means and standard deviations, for each of the survey items. We report comparative data by researcher category and gender using chi-square tests.
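For illustration, the sketch below runs an equivalent chi-square test of independence in Python. The authors used SPSS for this analysis; the contingency table of counts here is hypothetical and only shows the shape of the comparison by researcher category.

```python
# Minimal sketch: chi-square test of independence, as described above.
# The observed counts are hypothetical stand-ins for the survey data.
from scipy.stats import chi2_contingency

# Rows: researcher categories; columns: e.g. "shared data" vs "did not share".
observed = [
    [30, 12],  # investigators (hypothetical counts)
    [22, 15],  # trainees
    [18, 17],  # staff
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```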
For open-ended survey questions, we conducted a thematic analysis25 of the responses provided. Two researchers familiarized themselves with the qualitative responses and then independently coded them. Codes were discussed and iteratively reviewed until consensus was reached. Key themes were then identified, and codes were grouped within them. Themes were defined and are described in the Results section.
Sample
We initially identified 623 publications from our search and removed 15 duplicates. We screened a total of 608 unique references, of which 313 publications met our inclusion criteria (see Figure 1); of the retrieved records, about half were not research outputs. Epidemiological characteristics of the included publications are provided in Table 2.
In about half of the included publications (52%; n=162), the Neuro-based author was the corresponding author. Neurology (2.9%; n=9), NeuroImage (2.9%; n=9), PLOS One (2.6%; n=8), Brain (2.6%; n=8), and Proceedings of the National Academy of Sciences (2.6%; n=8) were the journals in which Neuro authors published most often (see Table 2). The number of authors on the included papers ranged from 1 to 374, with a median of eight. Studies with more than 100 authors were typically systematic analyses, such as the Global Burden of Disease Study 2016 analysis of traumatic brain injury and spinal cord injury.26 In addition, we investigated the number of authors with a Neuro affiliation: 31% (n=104) of the publications had a Neuro-affiliated author, and the number of Neuro-affiliated authors ranged from 1 to 12, with a median of two per publication.
Seventy percent of the included papers reported clinical research (n=220), while 18.2% (n=57) reported non-human data and 11.5% (n=36) used another form of data, such as literature reviews, protocols, combined clinical and non-human data, or simulation analyses.
When we examined the type of clinical data shared, we found that biological data (e.g., implementation of an antibody) were used in 18.2% (n=55) of publications, DNA in 11.2% (n=34), brain MRI in 32.2% (n=97), non-MRI imaging in 9.6% (n=29), behavioral data in 18.2% (n=55), sequencing data in 4.0% (n=12), and clinical charts in 3% (n=9). Sleep studies, web-based surveys, stem cells, brain tissue from autopsies, OCT testing, interviews, and EEG recordings were used in the remaining ‘other’ publications (3.6%; n=11).
We also examined data sharing in non-human research publications. Animal data were used in 91.2% (n=52) of these publications; five (8.8%) used other data (e.g., radiology technology, models). Mice (48.1%; n=25) and rats (34.6%; n=18) were used in most of these publications. Sixty-seven percent (n=210) of all publications reported quantitative data, 10.9% (n=34) reported qualitative data, and twenty-two percent (n=69) reported both. Forty percent of publications (n=138) were observational studies.
For full details of types of data, please see Table 2.
Data sharing audit findings
Complete results of our data sharing audit are reported in Table 3; here we report key findings. Two-thirds of the publications (66.5%; n=208) included a data sharing statement. Two-thirds of these statements (66.0%; n=139/208) appeared at the end of the paper in a ‘declarations’ section. Others were found in the methods section (19.7%; n=41), the results section (2.9%; n=6), or in other sections (26.0%; n=54) such as the front page, the abstract, Dryad, or the Supplementary Materials. Of the papers with a data sharing statement, we were able to download the data directly for 74.5% (n=155/208). Across the entire sample of included articles, 49.5% (n=155/313) of papers had openly available data. Among publications with data sharing statements but no openly available data, 58.7% (n=27 of 46) specified contacting the author to gain access to the data, 11% (n=5) identified other ways to access the data (e.g., registration to a site, or a request through the Vivli platform, which gives access to anonymized individual participant-level data (IPD) or the cleaned raw data collected during a clinical trial27), and 30% (n=14) did not specify a way to gain access.
When data were shared openly, this was done on journal websites (66.0%; n=120), the Open Science Framework (4.4%; n=8), institutional repositories (9.3%; n=17), Figshare (3.8%; n=7), or other platforms (16.5%; n=30). Data were shared mostly in PDF (30.0%; n=61), Word (29.0%; n=59), and Excel (14.2%; n=29) formats. Most shared data (66.0%; n=103) were published without a unique identifier; just 29.0% (n=45) of publications provided a Digital Object Identifier (DOI), and for 2.6% (n=4) it was unclear whether a DOI was used.
Open scripts and materials
We found that most publications (75.7%; n=237) indicated no available analysis script and/or statistical analysis plan (SAP). Twenty percent (n=64) of the publications had analysis scripts/SAPs available, four (1.3%) stated that analysis scripts and statistical data were available upon request, and eight (2.6%) fell into other categories (e.g., available from the corresponding author on reasonable request). The analysis scripts were available via personal or institutional webpages (11.5%; n=9), a journal webpage (20.5%; n=16), supplementary information hosted by the journal or a preprint server (20.5%; n=16), or an online third-party repository (e.g., OSF, Figshare) (33.4%; n=26).
When publications shared an analysis script, the vast majority (93.7%; n=60/64) could be downloaded directly. There was only one publication (1.6%; n=1/64) for which we could not download the analysis script.
We defined “materials availability” as sharing the materials used to conduct the study (e.g., videos of Cognitive Behavior Therapy for psychological interventions, surveys, cell lines, reporting checklists, supplementary files, gene banks). We found that 43.8% (n=137) of the publications indicated that study materials were available; less than one percent (n=2) indicated that the materials were available upon request. Materials were made available via the journal publication webpage (30.9%; n=51), supplementary information hosted by the journal (30.9%; n=51), personal or institutional webpages (9.2%; n=15), or an online third-party repository (e.g., OSF, Figshare) (14%; n=23). A small number of articles indicated that materials were available by request (4.4%; n=6). Six percent of materials (n=9) could not be downloaded directly, 92% (n=126) could be downloaded, and two publications (1.5%) did not use materials; both were reviews.28,29
All of these results are included in Table 4.
Transparency and open science practices
Results from our audit of transparency and open science practices are reported in Table 5. Fifteen percent (n=47) of the publications did not include a conflict-of-interest statement, 15% (n=47) indicated one or more conflicts of interest, and 68.8% (n=215) indicated no conflict of interest. In addition, we investigated whether the publications reported funding and, if so, the funding sources. Most of the publications (88%; n=296) reported funding statements. The primary sources of funding were the federal public funder, the Canadian Institutes of Health Research (CIHR; n=149), and the provincial funder, the Fonds de Recherche du Québec – Santé (FRQS; n=71).
Most publications (94.9%; n=297) did not link to an accessible protocol; only 2.9% (n=9) did. A further 1.6% (n=5) reported that a protocol was available upon request, and two publications (0.6%) were themselves published protocols.
Most publications (95.8%; n=300) did not include a study registration statement. Only 2.6% (n=8) of the publications indicated registration, and 1.6% (n=5) explicitly indicated that there was no registration. Most registrations (87.5%; n=7) were for clinical trials registered on ClinicalTrials.gov; one (12.5%) was on the International Prospective Register of Systematic Reviews, the PROSPERO database.
For studies in which ethics approval was applicable, 66.6% (n=208) reported this information; ethics approval was not relevant or not required for 20.4% (n=64) of publications. Most publications (96.5%; n=302) did not report using a reporting guideline to improve the completeness and transparency of the completed research; only 3.5% (n=11) explicitly mentioned adherence to a reporting guideline.
We found that 70.0% (n=219) of the articles were open access, while 29.0% (n=91) were not openly available; data were missing for 1.0% (n=3) of the journals. Among the openly available studies, 37.9% (n=83 of 219) were in ‘gold’ open access journals (i.e., the final published version of the article, or Version of Record, is immediately, permanently, and freely available online). Thirteen percent (n=30 of 219) were published in hybrid journals, 20.1% (n=44 of 219) were ‘bronze’ (i.e., the final published version is free to read on the publisher page but carries no Creative Commons license), and 28.3% (n=62 of 219) were made available as ‘green’ open access (i.e., a version of the manuscript is placed in a repository, often after an embargo, making it freely accessible for everyone) via a repository (see Table 5).
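For readers combining this classification with the Unpaywall lookup sketched in the Methods, the short sketch below tallies articles per Unpaywall `oa_status` category (gold, hybrid, bronze, green, or closed). The example records are stand-ins, not the study data.

```python
# Minimal sketch: tally Unpaywall oa_status categories across a set of
# article records; `records` is a hypothetical list of Unpaywall JSON records.
from collections import Counter

def oa_breakdown(records: list[dict]) -> Counter:
    """Count articles per Unpaywall oa_status (gold/hybrid/bronze/green/closed)."""
    return Counter(r.get("oa_status", "unknown") for r in records)

# Example with stand-in records:
records = [{"oa_status": "gold"}, {"oa_status": "green"}, {"oa_status": "closed"}]
print(oa_breakdown(records))  # Counter({'gold': 1, 'green': 1, 'closed': 1})
```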
Participants
Of the 553 individuals who were emailed the survey, 22.4% (n=124) completed it. We removed 10 participants from the analysis as they had answered only the demographic questions.
Demographic details of these participants are provided in Table 6. Forty-seven participants (41.1%) were female and 66 (57.9%) were male; one participant (0.9%) did not respond to the question concerning gender. Most participants were between 35 and 44 years of age, and 61.4% (n=70 of 114) had a PhD degree. Of the 114 participants who answered the question about their research role, slightly more than a third (36.8%; n=42) were staff (managers, associates, assistants) and approximately one in three (32.5%; n=37 of 114) were trainees (e.g., MSc and PhD students, postdoctoral fellows).
Data sharing practices and training in data sharing
We asked participants two questions about their data sharing practices in the past 12 months (see Table 7). Over half of those who responded (61.4%; n=70 of 114) reported having published a first- or last-authored paper in the last 12 months. Of these, 70.0% (n=49) indicated that they openly shared research data related to one of their publications.
We asked one question about past and future engagement in training about data sharing; the results are presented in Table 7. Approximately 40% of respondents (n=49 of 113) indicated that they had never engaged in training related to data sharing (e.g., online webinars, workshops, or a course). Approximately a third (38%; n=43) reported having engaged in some form of such training within the last 12 months, twenty (17.8%) reported having done so within the last three years, and a few (3.5%; n=4) indicated they engaged in training around data sharing more than three years ago.
Participants indicated a preference for an online training video that they could return to and for a series of several modules, each lasting about 10 minutes; a live webinar was less preferred. These results are reported in Table 8.
Participants showed a preference for an online handbook walking through the practical steps of data sharing. Next, they indicated they would value access to a data sharing expert (hired by the Neuro) to consult directly when questions arise about data sharing. The remaining resources, in order of preference, were: an online interactive learning module; an online video walking through the practical steps of data sharing (why, where, how); a central data sharing expert who facilitates data sharing by working directly with project teams; and a collection of best-practice case study examples (see Table 9).
Perceptions about data sharing
Participants were most familiar with patient privacy considerations when sharing data (Mean=3.14, SD=1.091), the ethical considerations when sharing their data (Mean=3.13, SD=1.001), and the practical steps involved in sharing their data (Mean=2.91, SD=1.085). Respondents were less familiar with concepts including the First Nations principles of Ownership, Control, Access, and Possession (OCAP)7 (Mean=1.45, SD=1.710) and new metrics to measure data sharing contributions (Mean=1.86, SD=0.872). See Table 10 for full results.
Most respondents agreed that data sharing helps to stimulate new hypotheses (Mean=4.38, SD=0.671), that they want to help others use their study results (Mean=4.34, SD=0.691), that data sharing helps advance their research by allowing additional investigators to access the data for future research (Mean=4.33, SD=0.730), that they want to help others reproduce their study (Mean=4.31, SD=0.767), that they want to help others transparently assess their study (Mean=4.28, SD=0.720), and that they are optimistic that efforts to adhere to data sharing best practices will help support greater reproducibility and transparency of research (Mean=4.27, SD=0.862). Respondents tended to disagree that the benefit of data sharing is delayed and uncertain (Mean=2.51, SD=0.989), that there is sufficient financial support to help them adhere to data sharing best practices in the coming year (Mean=2.57, SD=1.016), and that they feel stressed when they think about how to adhere to best practices regarding data sharing for their studies in the coming year (Mean=2.72, SD=1.014). For full results, see the Extended data.
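As an illustration of how these item-level means and standard deviations can be computed, the minimal sketch below assumes the anonymized responses are stored in a CSV with one row per respondent and one 1-to-5 column per Likert item; the filename and column names are hypothetical.

```python
# Minimal sketch: descriptive summary of Likert items (mean, SD, n).
# The CSV filename and item column names are hypothetical placeholders.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # one row per respondent

items = ["stimulates_new_hypotheses", "benefit_delayed_uncertain"]
summary = responses[items].agg(["mean", "std", "count"]).T  # items as rows
print(summary.round(2))
```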
Thematic analysis
For the item “What incentives do you think the Montreal Neurological Institute-Hospital could introduce to recognize data sharing?” we classified text-based responses into six themes. The themes were: 1) financial support, 2) recognize and incentivize data sharing, 3) provide infrastructure to support data sharing, 4) enforcement of specific and clear data sharing standards, 5) change research and recognition priorities, and 6) provide educational support for data sharing (see Table 11).
For the question “Is there anything else you want to share about data sharing?” we classified responses into eight themes. The themes were: 1) lack of resources for data sharing, 2) barriers to data sharing, 3) recommendations for standards surrounding data sharing, 4) relevancy of data sharing, 5) provide training for data sharing, 6) privacy considerations when sharing data, 7) technical requirements for data sharing, 8) data sharing requires more resources (see Table 12).
The aim of the first study was to audit all publications produced by Neuro researchers in 2019. We found that 66.5% (n=208) of publications had a data sharing statement and that 61% (n=31) of publications specified contacting the author to gain access to the data. Sharing by request creates burden and barriers for other researchers seeking to access and use data, and should be used only where open data sharing is not possible. Most publications did not include a registration statement or make a protocol available. We note that authors were more willing to share their study materials than their analysis scripts. Most publications did not report using a reporting guideline to maximize the transparency and completeness of their research.
In the second study, to identify barriers and facilitators to data sharing, we found a preference for training in the form of an online video that respondents could return to, a series of several modules each lasting about 10 minutes, or a single module lasting about 2 hours. A key finding was that more than a third of respondents (40.7%) had never engaged in training around data sharing. We identified learning gaps in some key areas (e.g., the OCAP principles, metrics to measure data sharing contributions). Respondents noted that the barriers they faced when sharing data included lack of financial support, training, and technical support.
We recommend an educational and training intervention devoted to data sharing practices to further normalize and support this practice, including training on sharing statistical analysis plans and other materials related to data sharing. This training could then be followed by additional sessions on further open science practices. Further research is needed to examine universities’ data sharing practices, to track needs and preferences over time, and to develop clear data sharing standards for staff, researchers, and managers.
It is also important to repeat the audit over time to track changes in the Neuro’s publications and in researchers’ perceptions, barriers, and facilitators. We therefore suggest repeating the audit biennially with the same survey questions, supplemented with additional questions to gain a better understanding of specific data sharing practices at universities and scientific institutes.
The results of the two studies will provide the Neuro with a better understanding of its open access status and of the barriers and facilitators to data management and sharing, and will identify educational needs related to data sharing that can reduce barriers. We hope this report also provides other organizations wanting to engage their communities about data sharing with a valid approach to the topic and some comparative data. Further, the audit of publications can serve as a baseline against which to benchmark improvements in data sharing and other open science practices and to measure progress over time. Decision-makers in government and universities can also use these findings to structure future open access efforts by providing training for their researchers.
We acknowledge certain limitations. First, as less than a quarter of the Montreal Neurological Institute-Hospital’s staff completed the survey, the results of the study cannot be generalized. Second, it is difficult to measure changes in the community unless two or more surveys are conducted at different time points; hence, we recommend annual surveys of the Neuro community to track changes in needs and preferences over time.
This study was registered a priori (i.e., before any data collection) using the Open Science Framework (https://osf.io/3tafc).
All study data and materials have been made publicly available.
OSF: Openness and data sharing at The Neuro (Montreal Neurological Institute-Hospital), https://doi.org/10.17605/OSF.IO/3TAFC.30
This project (https://osf.io/mx6rp/) contains the following underlying data:
‐ Raw survey data (individual participant survey responses).
‐ Audit data (information extracted from publications).
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
OSF: Supplementary materials and extended data, https://doi.org/10.17605/OSF.IO/3TAFC.30
This project (https://osf.io/mx6rp/) contains the following extended data:
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
We thank Alex Amar-Zifkin for helping us design the search strategy; Dylan Roskams-Edris for providing information on the Montreal Neurological Institute-Hospital staff lists and for facilitating circulation of the survey among its researchers; and the open science grassroots committee of the Montreal Neurological Institute-Hospital for providing feedback on our extraction form and the survey. A previous version of this article was published on a preprint server and can be accessed via https://doi.org/10.1101/2022.08.03.22278384.