Keywords
Artificial Intelligence (AI), Medical Education, Healthcare Transformation, AI Ethics, Medical Students, Curriculum Development, Clinical Practice Integration
This article is included in the AI in Medicine and Healthcare collection.
Artificial intelligence (AI) is swiftly emerging as a core component in the transformation of global healthcare systems, with its effectiveness contingent upon the readiness of the workforce, especially future physicians. The incorporation of AI into medical education must uphold essential principles, including ethical considerations, the preservation of the physician-patient relationship, and the primacy of human judgment. Preparing medical students for this evolution equips them to effectively leverage emerging technologies in clinical practice.
This policy brief aims to establish a framework for preparing medical students for the future of AI in healthcare, assisting policymakers in universities, governments, and health authorities in developing effective educational programs. It presents prompt engineering as an innovative skill for medical students, facilitating personalized AI interactions in clinical simulations and ethical decision-making, while addressing existing gaps in current curricula.
Incorporating findings from a 2025 cross-sectional study involving 1,619 medical students, this brief indicates a moderate level of AI readiness (61.34/100), with cognition identified as the weakest domain. This underscores the necessity for targeted curricula to bridge gaps in AI knowledge and cultivate practical skills such as prompt engineering for clinical simulations.
The integration of artificial intelligence (AI) and related technologies into various aspects of human life has become increasingly prevalent, significantly influencing the healthcare sector.1 Sophisticated AI algorithms now analyze diverse health data, including clinical, behavioral, environmental, and pharmaceutical information, by leveraging patient records and the biomedical literature.2 However, the incorporation of these technologies into medical practice requires a workforce skilled in the technical, ethical, and practical aspects of AI.3 With enhanced access to health data and substantial investments by technology companies in AI, its applications in medicine are becoming increasingly valuable. For example, AI systems now assist healthcare professionals in fields such as radiology, pathology, and precision oncology.4 Moreover, AI is instrumental in improving patient care through innovations such as remote patient monitoring, telemedicine, and virtual support systems.5
Artificial Intelligence (AI) is a prominent and rapidly evolving topic within technological advancements,6 with considerable potential to impact the healthcare industry, particularly in medical education. AI has the capacity to transform medical education by providing personalized and adaptive learning experiences, enhancing diagnostic accuracy, and facilitating data-driven decision-making processes.7 In contrast to traditional approaches that often employ a uniform, rote-learning model for all students, AI allows for the customization of learning processes to meet individual needs, enabling students to concentrate on areas requiring further practice.8 Modern medicine generally adopts a forward-looking perspective toward such technological change, which enhances the appeal of AI applications in healthcare, a field into which they appear increasingly integrated.9 As futurist Eric Topol asserts, “Virtually every physician in the future—from specialists to paramedics—will utilize AI technologies, particularly deep learning.” This statement underscores the extensive scope of AI’s application in medicine.2 Furthermore, AI can assist educators in designing individualized curricula, continuously monitoring learners’ progress, and providing immediate feedback. Prior studies, however, have focused primarily on examining specific types of AI and their effectiveness in medical education.10
Numerous studies have examined the essential knowledge that medical students should gain concerning artificial intelligence (AI) in medicine. Additionally, some research has highlighted the importance of integrating health AI ethics education into medical school curricula. Students view AI as a promising enhancement to the future of medicine and argue that it should be considered a collaborator rather than a competitor. Furthermore, they believe that training in AI can significantly impact their career trajectories.11,12
Emerging skills such as prompt engineering, the craft of writing precise inputs for large language models (LLMs) such as ChatGPT, are critical for future physicians. Prompt engineering can enhance medical education by generating realistic patient scenarios, multiple-choice questions for assessments, or personalized explanations of complex concepts, thereby bridging the gap between theoretical knowledge and practical application. For instance, in decision support systems, well-engineered prompts can optimize AI for accurate diagnosis and ethical considerations, reducing algorithmic biases. However, current curricula often overlook this skill, leading to suboptimal AI utilization in healthcare.13–15
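To make this concrete, a structured prompt for a clinical simulation can be assembled programmatically so that role, task, constraints, and output format are stated explicitly. The sketch below is purely illustrative: the function name, field labels, and scenario text are hypothetical and do not come from any of the cited curricula; the role/task/constraints structure reflects common prompt-engineering practice, and the resulting string would be sent to whichever chat-style LLM an institution uses.

```python
def build_patient_scenario_prompt(condition: str, level: str = "third-year") -> str:
    """Assemble a structured prompt asking an LLM to act as a standardized patient.

    The role/task/constraints/output-format sections mirror common
    prompt-engineering practice; all wording here is illustrative only.
    """
    return (
        f"Role: You are a standardized patient simulator for {level} medical students.\n"
        f"Task: Present a realistic history consistent with {condition}, "
        "revealing findings only when the student asks for them.\n"
        "Constraints: Use lay language, never state the diagnosis outright, "
        "and flag any ethically sensitive disclosures for instructor review.\n"
        "Output format: first-person patient responses, one conversational turn at a time."
    )

# Example: generate the prompt for a respiratory case.
prompt = build_patient_scenario_prompt("community-acquired pneumonia")
print(prompt)
```

Making each section explicit is what lets instructors audit and reuse prompts across cases, rather than relying on ad hoc free-text queries.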
Despite these advancements, the implementation of artificial intelligence (AI) education in medical schools globally remains inconsistent. Key deficiencies include the absence of standardized curricula, inadequate practical application of AI tools, a shortage of faculty with expertise in AI, limited knowledge and skills among students, and varied attitudes and levels of preparedness among learners.1,16 Moreover, critical aspects such as the ethical implications and policy considerations of AI are often insufficiently addressed in current educational programs.17,18 These deficiencies underscore the urgent need for the development of comprehensive and standardized educational frameworks to adequately prepare future physicians for integrating AI into healthcare systems. Medical students, as a vital stakeholder group, are central to discussions about the future of healthcare, and their perspectives on AI applications are significant. Research indicates that, in many instances, medical students believe they understand the concept of AI; however, when asked to define it, the majority are unable to do so.11,12 The existing literature emphasizes the necessity of incorporating AI application training into medical curricula, highlighting that current education in this area is neither sufficient nor satisfactory.11,12,19 Although students anticipate that AI will transform and revolutionize healthcare, they recognize that current training in this domain is inadequate.11 The objective of this policy brief is to establish a framework for AI readiness in medical education, assess current gaps, and evaluate the future outlook for AI in healthcare. This framework will assist policymakers in universities, governments, and health authorities in designing effective educational programs.
This policy brief presents the findings of a study conducted in 2024 involving 1,916 medical students from years one to five at Kermanshah University of Medical Sciences. The study utilized a census sampling method. The instrument employed was the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS), developed by Karaca et al., which consists of 22 questions across four subscales: cognition, ability, vision, and ethics. All participants provided written informed consent before completing the questionnaire. The consent process included comprehensive information regarding the study’s purpose, procedures, potential risks and benefits, and the voluntary nature of participation. Participants were informed of their right to withdraw at any time without facing any consequences. Since all participants were medical students aged 18 years or older, no minors were involved, and therefore, parental consent or assent was not required. Responses were measured using a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree).20 The overall mean score for the scale in this study was 3.45 ± 0.40. Among the subscales, “vision” received the highest score, while “cognition” received the lowest (Figure 1). The validity and reliability of the Persian version of this questionnaire were previously established by Ghalibaf et al. (2023) among medical students at Mashhad University of Medical Sciences, with Cronbach’s alpha coefficients of 0.886, 0.905, 0.865, and 0.856 for the cognition, ability, vision, and ethics subscales, respectively, and an overall Cronbach’s alpha of 0.944 for the entire scale.21 Similarly, a study by Rezazadeh et al. among medical students in Kerman confirmed the questionnaire’s validity and reliability, reporting an overall Cronbach’s alpha of 0.94.22 In the present study, the Cronbach’s alpha coefficient was 0.762.
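The reliability coefficients reported above follow the standard Cronbach's alpha formula, which compares the sum of per-item variances to the variance of respondents' total scores. The snippet below is a minimal sketch of that formula; the demonstration matrix of Likert responses is invented for illustration and is not data from the study.

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items: list[list[int]]) -> float:
    """Cronbach's alpha for a matrix with rows = respondents, columns = items."""
    k = len(items[0])                                    # number of items
    item_vars = [variance(col) for col in zip(*items)]   # per-item sample variance
    total_var = variance([sum(row) for row in items])    # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented 5-point Likert responses (6 respondents x 4 items), illustration only.
demo = [
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(demo), 3))
```

Higher alpha values indicate that items move together across respondents, which is why the subscale coefficients near 0.9 reported by Ghalibaf et al. are read as strong internal consistency.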
The findings indicate that the mean medical artificial intelligence readiness score was 3.45, with a standard deviation of 0.40, suggesting a moderate level of preparedness. Building upon the original findings (mean: 3.45 ± 0.40), a larger cohort analysis conducted in 2025 reaffirms moderate readiness (61.34 ± 10.13 on a 0–100 scale). ANOVA results indicate significant differences across academic years (p < 0.001), with third-year students exhibiting the highest readiness levels at 64.85 ± 8.01. Additionally, regression analysis identifies prior exposure to AI as a predictor of cognition and ability (β = 0.092, p = 0.020; β = 0.113, p = 0.004, respectively). Gender differences were found to be non-significant for AI subscales (p > 0.05). The highest score in the vision subscale reflects optimism regarding AI’s potential to enhance diagnostic accuracy and improve patient outcomes. Scores in the ability and ethics subscales demonstrate moderate confidence in the utilization of AI tools, along with an awareness of the ethical challenges associated with them. Conversely, cognition was identified as the weakest subscale, highlighting a limited understanding of AI concepts such as machine learning. The variability in scores suggests disparities in access to AI education. These findings are consistent with global trends, as only 10% of medical schools worldwide have integrated AI into their curricula.23
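The 0–100 readiness figure sits close to what a min-max rescaling of the 1–5 Likert mean would give, a convention sometimes used to ease comparison across scales. The sketch below assumes that convention; it is not a claim about the study's exact computation, and the small gap between 61.25 and the reported 61.34 is expected since the 2025 cohort differs from the original sample.

```python
def likert_to_percent(mean_score: float, low: float = 1.0, high: float = 5.0) -> float:
    """Rescale a Likert-scale mean onto 0-100 via min-max normalization.

    Assumed convention for illustration: (score - low) / (high - low) * 100.
    """
    return (mean_score - low) / (high - low) * 100

print(round(likert_to_percent(3.45), 2))  # 61.25, close to the reported 61.34
```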
The cross-sectional design limits the ability to establish causality, while the use of a single-institution sample restricts generalizability. Future directions include longitudinal tracking of AI readiness over multiple years and integrating interventions such as prompt engineering workshops to assess improvements in cognitive outcomes.
The moderate readiness score of 3.45 is consistent with global findings, highlighting the necessity for targeted interventions in cognitive skills, where prompt engineering could be instrumental. By equipping students with the ability to formulate effective prompts, educators can enhance AI literacy, facilitating the improved integration of large language models (LLMs) in clinical practice. This strategy not only improves diagnostic accuracy but also tackles ethical challenges, including bias mitigation through optimized inputs. Incorporating prompt engineering into medical curricula has the potential to transform medical education, rendering it more adaptive and personalized.
The World Health Organization (WHO) guidelines provide a comprehensive framework for preparing medical students for the future of artificial intelligence (AI) in medicine. These guidelines emphasize ethical, technical, and practical training and align with the findings of the policy brief, which identified gaps in knowledge, ethical education, and the practical application of AI in medical education. The proposed policy recommendations, grounded in WHO principles, aim to empower future physicians to integrate AI into healthcare safely, effectively, and ethically.
Medical schools should mandate the incorporation of training on the fundamentals of artificial intelligence (AI), including technical concepts such as machine learning, as well as its capabilities and limitations, into the curriculum. This education should emphasize the interpretation of AI outputs and their application in clinical decision-making, equipping future physicians to utilize this technology safely and effectively.24,25 For example, curricula should incorporate modules on prompt engineering to equip students with the skills necessary to design AI interactions for clinical simulations, including the generation of patient histories and ethical dilemmas utilizing large language models (LLMs).26
Educational programs must focus on addressing ethical challenges, including algorithmic bias and data privacy, while also enhancing communication skills necessary for conveying AI-driven recommendations to patients. Specialized courses should be developed to strengthen ethical analysis and facilitate the effective communication of AI-based recommendations, thereby equipping physicians to navigate complex decision-making scenarios.24 These training sessions may incorporate workshops that address the management of sensitive situations, including the handling of adverse algorithmic predictions.
In light of the global disparities in AI education, it is advisable to establish standardized curricula in partnership with international organizations, such as the World Health Organization (WHO). These programs should integrate practical training with AI tools and promote the cultivation of critical thinking skills necessary for assessing emerging technologies.7
Medical schools should collaborate with experts in data science, ethics, and technology to develop interdisciplinary educational programs. These programs should be integrated into lifelong learning initiatives to ensure that physicians stay informed about advancements in artificial intelligence.27 For instance, annual refresher courses could be implemented to introduce the latest AI tools and their applications.
Curricula should integrate practical training with AI tools, including decision-support systems and remote monitoring technologies. The implementation of case studies and simulations can effectively bridge the gap between theoretical knowledge and practical application.27 Furthermore, students should receive training in evaluating the evidence that supports AI tools to ensure their safety and effectiveness.
Governments and universities should prioritize investment in the development of faculty expertise in artificial intelligence while fostering partnerships with private sector organizations and technology institutes.27 Such collaborations may encompass knowledge exchange programs or the establishment of AI research centers within medical schools.
• Mandatory AI education in medical curricula
Medical schools should incorporate mandatory training on the fundamentals of artificial intelligence (AI), including technical concepts such as machine learning, along with an understanding of its capabilities and limitations. This education should prioritize the interpretation of AI outputs and their application in clinical decision-making to ensure safe and effective utilization by future physicians, with a particular emphasis on prompt engineering for the development of adaptive learning tools.
• Development of standardized global curricula
In light of global disparities in AI education, standardized curricula should be developed in collaboration with international organizations such as the World Health Organization (WHO) and global medical associations. These programs should include practical training and foster critical thinking skills necessary for evaluating emerging technologies.
• Establishment of interdisciplinary educational programs in AI
• Practical training with AI tools
• Ethics and communication training
Educational programs should address ethical challenges such as algorithmic bias, data privacy, and accountability. Specialized courses should be developed to enhance ethical analysis and improve the ability to effectively communicate AI-driven recommendations to patients. This can include prompts designed to simulate bias scenarios and promote equitable AI usage.
• Prompt engineering workshops
Hands-on workshops should be conducted to equip students with the skills to engineer prompts for large language models (LLMs), thereby enhancing their capabilities in personalized education and clinical decision-making.
• Investment in faculty development
• Collaboration with technology institutions and the private sector
• Evidence-based evaluation training
• Continuous professional development programs
• Promoting positive attitudes toward AI
Educational programs should incorporate activities designed to alleviate students’ concerns regarding AI, such as fears of physician replacement, and foster a positive perception of AI as a supportive tool. Interactive workshops and discussion sessions can play a crucial role in achieving this objective.
AZ conceptualized and designed the survey, conducted the investigation, analyzed the data, revised the manuscript, and performed grammatical editing. AZ has reviewed and approved the final manuscript.
The data collection in the present study was conducted after approval by the Ethics Board of Kermanshah University of Medical Sciences (approval number IR.KUMS.REC.1402.472). We confirm that all methods used in this study were carried out in accordance with relevant guidelines and regulations. The participation of students was completely voluntary, and informed consent was obtained from all participants.
The anonymised survey data are not publicly available due to ethical restrictions. Data may be shared with qualified researchers upon reasonable request to the corresponding author (arashziapoor@gmail.com) in accordance with the confidentiality policies of Kermanshah University of Medical Sciences (Ethics approval ID: IR.KUMS.REC.1402.472).