Innovative strategies to fight antimicrobial resistance: crowdsourcing to expand medical training [version 1; peer review: 1 approved, 1 approved with reservations]

Background: Antimicrobial resistance is a serious public health concern across the world, but public awareness is low, few educational resources on diagnostics exist and professional interest in infectious diseases is waning. This project aimed to spur interest in infectious disease, emphasize the role of diagnostics in the management of resistant infections, and develop educational resources to support antimicrobial stewardship.
Methods: We employed crowdsourcing methods, using an open challenge contest to solicit clinical cases on antimicrobial resistance and clinical diagnostics.
Results: We received 25 clinical cases from nine countries. After screening, 23 cases were eligible for judging. Three cases emerged as the top finalists and were further developed into an open access learning module on diagnostics and antimicrobial resistance.
Conclusions: Crowdsourcing methods are useful for generating interest in infectious disease and for developing educational resources to support antibiotic stewardship.


Introduction
Antimicrobial resistance (AMR) is a major public health threat. Reports from the World Health Organisation (WHO) global surveillance and the Global Point Prevalence Survey show high rates of inappropriate antibiotic use 1, suggesting the need for enhanced antimicrobial stewardship. Yet public awareness of AMR remains low 2, and the pipeline from medical school into careers relating to infectious diseases is shrinking: the number of US trainees entering infectious disease training decreased by 41% between 2009 and 2017 3.
There is a need to respond to the lack of interest in infectious disease training through various methods, including expanding medical training on infectious diseases 3. Most medical schools provide limited teaching on AMR and even less on the use of diagnostics to reduce inappropriate antibiotic use 4. A European study reported that in all but one of seven medical schools studied, the majority of students wanted further education on antibiotic prescribing, an essential aspect of AMR management 5. A US study also suggested that physicians receive inadequate training on the interpretation of antibiograms 6. Other studies from the UK 7, China 8 and Ethiopia 9 suggest the need for more AMR and diagnostics content in medical curricula. The WHO recently highlighted the importance of undergraduate training in prudent prescribing, and research highlights the need for more AMR and diagnostics training for healthcare workers around the world 10.
In response to this need, we organised a crowdsourcing project, soliciting clinical cases on diagnostics/AMR from medical students and physicians. Crowdsourcing is a bottom-up approach that allows many individuals to attempt to solve a problem and then shares the solutions with the public 11. Crowdsourcing contests typically convene a steering committee, engage citizens or a particular group to participate, evaluate entries, recognize finalists, and share solutions with the public (Table 1). Crowdsourcing has been used in medical research and piloted as a tool to develop medical education materials 12.
Our crowdsourcing approach focused on developing an educational resource for medical students and trainees on infectious disease diagnostics/AMR. The overall goal of the challenge contest was to spur enthusiasm for infectious diseases and increase infectious disease diagnostics/AMR knowledge to support antibiotic stewardship. The aim of the call was to encourage medical students, trainees, physicians, and others to collect or write clinical cases to develop educational materials for AMR.

Crowdsourcing for clinical AMR cases
We set up a global steering committee comprising 19 individuals from six WHO regions (seven women and 12 men). The steering committee included experts in laboratory science, medicine, public health, health communications, and medical education. Members were selected based on sex, geographic region, and expertise. We accepted clinical cases focused on the use of diagnostics in AMR from mid-March until May 1, 2018. The call was disseminated through social media channels (including Facebook, Twitter and YouTube), partner organizations, professional association mailing lists, and in-person events. The call for participation was translated into the six official languages of the WHO, but only entries in English were accepted. Details of the call 13 were also printed and displayed in some hospitals and medical schools in London to create awareness about the contest.

All cases were screened for eligibility by two steering committee members and then sent to four physician judges for evaluation using pre-specified criteria. Judges included infectious disease fellows and residents identified by the steering committee; the selected judges were neither part of the steering committee nor involved in the design of the call. Eligibility criteria, published alongside the details of the open contest, required a clinical case focused on AMR, written in English, of fewer than 2000 words, and including at least one image. Participants were also required to obtain consent from patients to share cases. At this stage, each case was scored between 0 and 10 by the judges. The judging criteria included focus on diagnostics and AMR, relevance to medical teaching, and capacity to enhance appropriate antibiotic use. The judging rubric is available as Extended data 13.

Table 1. Steps in a crowdsourcing challenge contest.

Step: Organize a steering committee
Purpose: To build strong community buy-in from the start that resonates with local language, culture and preferences
In this contest: A steering committee of 19 experts from six WHO regions

Step: Engage the community
Purpose: To clarify the rules/guidelines of the contest for community members and encourage participation
In this contest: The call for entries was promoted through a website, social media and in-person events; 25 entries were received from nine countries

Step: Evaluate entries
Purpose: To convene a group of judges from the crowd, steering committee, or others to evaluate entries against pre-specified criteria
In this contest: Two steering committee members screened all cases for eligibility and four independent physician judges scored them
Individual scores were collated, and the mean score for each case was calculated. At the end of the judging process, the finalists (cases that achieved a mean score of 7/10 or greater) were announced. These clinical cases then underwent a second round of review by three independent judges, who gave detailed feedback on specific areas to improve the overall quality and clarity of the cases presented.
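The scoring and finalist-selection step can be sketched in a few lines. The case names and judges' scores below are hypothetical, but the mean-score calculation and the 7/10 finalist threshold follow the process described above.

```python
# Illustrative sketch of the judging tally (hypothetical scores).
from statistics import mean

# Each case maps to the 0-10 scores given by the four physician judges.
scores = {
    "case_A": [8, 7, 9, 8],
    "case_B": [6, 5, 7, 6],
    "case_C": [7, 8, 7, 7],
}

# Collate individual scores into a mean score per case.
mean_scores = {case: mean(s) for case, s in scores.items()}

# Finalists are cases with a mean score of 7/10 or greater.
finalists = [case for case, m in mean_scores.items() if m >= 7]

print(mean_scores)
print(finalists)  # cases meeting the 7/10 threshold
```

In this hypothetical example, case_A (mean 8) and case_C (mean 7.25) would advance to the second round of review, while case_B (mean 6) would not.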
All case authors received certificates of commendation for participation. The finalists received individualized feedback and were supported to attend a multidisciplinary diagnostics/AMR-focused symposium in London. Finalist cases were disseminated through Partners ID Images, an open access online library focused on infectious diseases.

Delphi approach
We used a modified Delphi method, an iterative survey technique in which experts respond to successive rounds of structured questions, with summarized feedback between rounds, until consensus is reached. We applied it to identify and prioritize key learning objectives for the AMR learning module developed from the finalist cases 14. The Delphi survey was conducted as part of a one-day AMR symposium held in London on November 1, 2018, attended by experts in AMR research and practice. The symposium featured a line-up of AMR research presentations and a panel discussion on diagnostics and AMR. A total of 30 participants, including physicians, medical microbiologists, clinical researchers and postgraduate medical students, attended the symposium and were asked to participate in the survey. Participants were briefed about the survey contents and informed that participation was voluntary. The initial round in the morning session had 30 participants and the second round in the afternoon had 21, as some participants attended only the morning session. After two rounds, there was consensus to include 12 objectives (see Extended data, supplementary file 3) in the AMR learning module.
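As a minimal sketch, the consensus tally in a Delphi round can be expressed as follows. The 70% agreement threshold and the example objectives are assumptions for illustration only; the article does not report the threshold used.

```python
# Hedged sketch of a Delphi-round consensus tally.
# The 70% threshold is an assumption, not a figure from the article.
def consensus(round_votes, threshold=0.7):
    """Return objectives endorsed by at least `threshold` of voters."""
    included = []
    for objective, votes in round_votes.items():
        if sum(votes) / len(votes) >= threshold:
            included.append(objective)
    return included

# Votes: 1 = include the learning objective, 0 = exclude (hypothetical data).
round2 = {
    "interpret antibiograms": [1, 1, 1, 0, 1],  # 80% agreement -> included
    "order empiric cultures": [1, 0, 0, 1, 0],  # 40% agreement -> excluded
}
print(consensus(round2))  # → ['interpret antibiograms']
```

Objectives that fail to reach the threshold in one round would be revised or dropped before the next round, which is how the 12 final objectives would emerge after two rounds.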

Crowd voting
A crowd voting platform was set up on Partners ID Images for the general public to select a crowd favourite from the three finalists. The voting page was open for two weeks and received 334 votes, mostly from the US, Peru, Australia and the UK. A random number generator was used to select 17 voters (5% of total votes) to receive a free digital Sanford Guide app.
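The prize draw described above amounts to sampling without replacement from the pool of voters. A sketch, with placeholder voter IDs standing in for the real voting records:

```python
# Sketch of the prize draw: selecting 5% of 334 voters (17 winners)
# at random without replacement. Voter IDs are placeholders.
import random

voters = [f"voter_{i}" for i in range(1, 335)]  # 334 votes received
n_winners = round(len(voters) * 0.05)           # 5% of total votes -> 17
winners = random.sample(voters, n_winners)      # draw without replacement

print(n_winners)     # 17
print(len(winners))  # 17 distinct winners
```

`random.sample` guarantees that no voter is drawn twice, matching a fair one-prize-per-person draw.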

Results
We received 25 clinical case entries from nine countries. Peru had the highest number of entries, seven (28%), followed by the US with six (24%). Other submissions were received from Nigeria (n=4), China (n=3), Australia (n=1), Canada (n=1), Ethiopia (n=1), Paraguay (n=1), and Zambia (n=1). After screening, 23 cases were eligible for the next stage of judging (see Underlying data 13); two entries were excluded as not relevant to the subject of the call. In total, 19 were original cases written by participants, and four were adapted from previously published literature. All cases were submitted with a signed declaration of patient consent from the authors. The cases described clinical presentations of drug-resistant organisms and the role of diagnostics in their management. An overview is given on the SESH Global website.
Out of the 23 eligible cases, three cases emerged as finalists with mean scores of seven or above. Most cases achieved scores greater than five; only six entries (26%) had mean scores less than five. Underlying data shows scores for each case 13 .
The crowd favourite was a clinical case on pyelonephritis caused by metallo-beta-lactamase-producing Pseudomonas aeruginosa. All three finalist cases (see Extended data, Supplementary file 1) 13 were further reviewed with medical education experts from partner organizations and integrated into an online diagnostics/AMR interactive learning module (see Extended data, Supplementary file 2) 13. The module's learning objectives were informed by the results of the Delphi exercise (see Underlying data for results) 13.

Discussion
Crowdsourcing provides several advantages in the development of medical training materials compared to conventional approaches, including duration of time needed, cost, and global coverage. Our challenge was able to solicit over 20 cases in less than two months, faster than most medical education development. The total cost of the challenge was less than having experts develop clinical cases. Finally, the global composition of our steering committee and case contributors allowed us to develop a resource that may be relevant in diverse global settings.
However, our crowdsourcing contest had several limitations that warrant consideration. The contest received fewer clinical cases than other crowdsourcing contests. The low number of cases may reflect the lack of official partnerships with a larger conference or professional associations, the relatively demanding submission requirements (including the need to obtain patient consent), and the timing of the call, which coincided with a similar call for clinical cases on infectious diseases from the Infectious Diseases Society of America. Despite the overall low number of submissions, we had sufficiently strong cases to create the learning module and develop new case content for the database.
One gap identified during the challenge was the limited strategic priorities within AMR medical education. The purpose of the learning module is to increase awareness and understanding of diagnostics and AMR in adult and paediatric medicine. This could be used as Continuing Medical Education (CME) or integrated into undergraduate medical curricula.
While we have created an open access learning resource on diagnostics and AMR (supplementary file 2), there is still a need for more educational resources related to AMR. Crowdsourcing may be a useful adjunct in the development of medical education materials. Crowdsourcing could be used in a variety of settings to encourage interest in infectious diseases, decrease unnecessary antibiotic use and promote antibiotic stewardship.

Consent
Written informed consent for publication of their clinical details and/or clinical images was obtained from each of the patients.

Open Peer Review

Reviewer Report 1
The authors of the manuscript describe a crowdsourcing approach to identifying best practice in antimicrobial therapy and infectious disease care to develop new training tools for infectious disease specialists. The design of the study is sound and the aims of the study are very relevant considering the threat of AMR in pathogens and the spread of COVID-19 globally.
The main issue I have with the study is that the results and discussion are lacking in detail and do not give the reader a full idea of the relevance of the results. For the most part the writing is fine, however, there are some grammatical errors that should be addressed. Unfortunately, as the manuscript was not provided to me with each line numbered I could not provide the authors with specific corrections. However, I strongly suggest the authors re-read the manuscript and make corrections where needed as some sentences were difficult to understand due to poor grammar and/or sentence structure.

Abstract;
I would say that interest in infectious diseases and diagnostics is at the forefront of the public's mind thanks to the COVID-19 pandemic.
Last sentence in the background section does not address the purpose of the study.
The results section doesn't really give the reader an informed idea of what was found in the study besides saying that a new learning module was created. Likewise, the conclusion is also vague and doesn't explain how these results are novel.

Introduction;
Second-last sentence of first paragraph: Describing the pipeline as "weak" does not give the reader a clear idea of what the author is referring to.

Methods;
Fourth sentence of first paragraph: "Used diagnostics" does not make sense in this context.

Results;
Please provide more information on how the top three studies were scored the highest. Was it just based off the criteria described in the methodology or were there aspects of the cases that were of particular merit? As identifying these traits seems to be one of the main aims of this study, it may be useful to include more specifics of the cases in the results section.

Discussion;
In the first paragraph no references are provided to support the authors statements that the crowdsourcing approach is quicker and cheaper than other methods.
The discussion focused on the limitations of the study but did not describe the relevance of the results of the study. For example, it would be insightful for the reader to know how the authors used the data to develop their learning module, how best practice methodologies were identified and how these compared with standard treatment, how the data could be used to encourage more medical graduates to specialise in infectious diseases and the relevance of the project during the COVID-19 pandemic. It would also be interesting to discuss how the crowdsourcing model could be used to identify inappropriate, ineffective, or harmful practices being used by the medical community to draw attention to areas in medical training that need to improve.

Is the work clearly and accurately presented and does it cite the current literature? Partly
Is the study design appropriate and is the work technically sound? Yes

If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? Yes

Reviewer Report 2
However, I have provided a few suggestions below:

Introduction:
In second paragraph, you mentioned: "There is a need to respond to the lack of interest in infectious disease training through various methods…" Could there exist other reasons for this lack of interest in infectious disease, for example, the fear of contracting and then dying from infectious diseases, etc.?

Methods:
Did you seek an approval from any ethics committee for this study? If so, please state this in your paper.
You said, "The call for participation was translated into the six official languages of the WHO, but only entries in English were accepted." Was there any particular reason why the call for participation was translated into six official languages, but entries were accepted only in English? Why do you think this was the most appropriate method to use for this study, and how can this be justified? To what extent did this affect the final selection of entries and the final results? These could be reflected in your limitations, as well as discussing any potential bias. Also, what did you do to enhance the credibility of your results?
Delphi approach: please provide readers with a brief description of the Delphi method & process.
What advantage does the Delphi have over other methods such as brainstorming, the analytic hierarchy process, data-intensive decision making, etc.?

In the same paragraph: please change "…AMR symposium and were asked to participate in a survey." to "…AMR symposium were asked to participate in a survey."

Crowd voting: here you stated that, "A random number generator was used to select 17 voters (5% of total votes) to receive a free digital Sanford Guide app." How did you arrive at the decision to select only 5% of voters, instead of the entire 100%?