Policy Brief: AI Readiness in Medical Education: Assessing Current Gaps and Future Outlook

[version 1; peer review: awaiting peer review]
PUBLISHED 10 Nov 2025

This article is included in the AI in Medicine and Healthcare collection.

Abstract

Artificial intelligence (AI) is swiftly emerging as a core component in the transformation of global healthcare systems, with its effectiveness contingent upon the readiness of the workforce, especially future physicians. The incorporation of AI into medical education must uphold essential principles, including ethical considerations, the preservation of the physician-patient relationship, and the primacy of human judgment. Preparing medical students for this evolution equips them to effectively leverage emerging technologies in clinical practice.

This policy brief aims to establish a framework for preparing medical students for the future of AI in healthcare, assisting policymakers in universities, governments, and health authorities in developing effective educational programs. It presents prompt engineering as an innovative skill for medical students, facilitating personalized AI interactions in clinical simulations and ethical decision-making, while addressing existing gaps in current curricula.

Incorporating findings from a 2025 cross-sectional study involving 1,619 medical students, this brief indicates a moderate level of AI readiness (61.34/100), with cognition identified as the weakest domain. This underscores the necessity for targeted curricula to bridge gaps in AI knowledge and cultivate practical skills such as prompt engineering for clinical simulations.

Keywords

Artificial Intelligence (AI), Medical Education, Healthcare Transformation, AI Ethics, Medical Students, Curriculum Development, Clinical Practice Integration

The policy problem

The integration of artificial intelligence (AI) and related technologies into various aspects of human life has become increasingly prevalent, significantly influencing the healthcare sector.1 Sophisticated AI algorithms have been developed to analyze diverse health data, including clinical, behavioral, environmental, and pharmaceutical information, by leveraging patient data and biomedical literature.2 However, the incorporation of these technologies into medical practice requires a workforce skilled in the technical, ethical, and practical aspects of AI.3 With enhanced access to health data and substantial investments by technology companies in AI, its applications in medicine are becoming increasingly valuable. For example, AI systems now assist healthcare professionals in fields such as radiology, pathology, and precision oncology.4 Moreover, AI is instrumental in improving patient care through innovations such as remote patient monitoring, telemedicine, and virtual support systems.5

Artificial intelligence (AI) is a prominent and rapidly evolving topic within technological advancements,6 with considerable potential to impact the healthcare industry, particularly in medical education. AI has the capacity to transform medical education by providing personalized and adaptive learning experiences, enhancing diagnostic accuracy, and facilitating data-driven decision-making processes.7 In contrast to traditional approaches that often employ a uniform, rote-learning model for all students, AI allows for the customization of learning processes to meet individual needs, enabling students to concentrate on areas requiring further practice.8 Modern medicine generally adopts a forward-looking perspective toward these identified challenges. This future-oriented approach enhances the appeal of AI applications in healthcare, which appear increasingly integrated into the medical field.9 As futurist Eric Topol asserts, “Virtually every physician in the future—from specialists to paramedics—will utilize AI technologies, particularly deep learning.” This statement underscores the extensive scope of AI’s application in medicine.2 Furthermore, AI can assist educators in designing individualized curricula, continuously monitoring learners’ progress, and providing immediate feedback. Prior studies, however, have primarily focused on examining specific types of AI and their effectiveness in medical education.10

Numerous studies have examined the essential knowledge that medical students should gain concerning artificial intelligence (AI) in medicine. Additionally, some research has highlighted the importance of integrating health AI ethics education into medical school curricula. Students view AI as a promising enhancement to the future of medicine and argue that it should be considered a collaborator rather than a competitor. Furthermore, they believe that training in AI can significantly impact their career trajectories.11,12

Emerging skills such as prompt engineering, the art of crafting precise inputs for large language models (LLMs) such as ChatGPT, are critical for future physicians. Prompt engineering can enhance medical education by generating realistic patient scenarios, multiple-choice questions for assessments, or personalized explanations of complex concepts, thereby bridging the gap between theoretical knowledge and practical application. For instance, in decision support systems, well-engineered prompts can optimize AI for accurate diagnosis and ethical considerations, reducing algorithmic biases. However, current curricula often overlook this skill, leading to suboptimal AI utilization in healthcare.13–15
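To make the idea concrete, the sketch below shows how a structured clinical-simulation prompt for an LLM might be assembled. This is a hypothetical illustration, not an example from any cited curriculum; the function name and template fields are assumptions.

```python
# Hypothetical sketch: assembling a structured prompt for a clinical
# simulation. Explicitly stating the model's role, the constraints, and the
# output format is the core of prompt engineering.

def build_patient_scenario_prompt(condition: str, learner_level: str) -> str:
    """Assemble a prompt asking an LLM to act as a simulated patient."""
    return (
        "You are a standardized patient for a medical education exercise.\n"
        f"Learner level: {learner_level}.\n"
        f"Portray a patient presenting with {condition}. Include a chief "
        "complaint, a brief history of present illness, and two relevant "
        "negatives.\n"
        "Do not reveal the diagnosis; respond only to questions the student "
        "asks, in the first person and in plain text."
    )

prompt = build_patient_scenario_prompt(
    "community-acquired pneumonia", "third-year medical student"
)
print(prompt)
```

The same template pattern extends naturally to the other uses named above, such as generating multiple-choice questions or ethics dilemmas, by swapping the role and output-format lines.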

Despite these advancements, the implementation of artificial intelligence (AI) education in medical schools globally remains inconsistent. Key deficiencies include the absence of standardized curricula, inadequate practical application of AI tools, a shortage of faculty with expertise in AI, limited knowledge and skills among students, and varied attitudes and levels of preparedness among learners.1,16 Moreover, critical aspects such as the ethical implications and policy considerations of AI are often insufficiently addressed in current educational programs.17,18 These deficiencies underscore the urgent need for the development of comprehensive and standardized educational frameworks to adequately prepare future physicians for integrating AI into healthcare systems. Medical students, as a vital stakeholder group, are central to discussions about the future of healthcare, and their perspectives on AI applications are significant. Research indicates that, in many instances, medical students believe they understand the concept of AI; however, when asked to define it, the majority are unable to do so.11,12 The existing literature emphasizes the necessity of incorporating AI application training into medical curricula, highlighting that current education in this area is neither sufficient nor satisfactory.11,12,19 Although students anticipate that AI will transform and revolutionize healthcare, they recognize that current training in this domain is inadequate.11 The objective of this policy brief is to establish a framework for AI readiness in medical education, assess current gaps, and evaluate the future outlook for AI in healthcare. This framework will assist policymakers in universities, governments, and health authorities in designing effective educational programs.

The study

This policy brief presents the findings of a study conducted in 2024 involving 1,916 medical students from years one to five at Kermanshah University of Medical Sciences. The study utilized a census sampling method. The instrument employed was the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS), developed by Karaca et al., which consists of 22 questions across four subscales: cognition, ability, vision, and ethics. All participants provided written informed consent before completing the questionnaire. The consent process included comprehensive information regarding the study’s purpose, procedures, potential risks and benefits, and the voluntary nature of participation. Participants were informed of their right to withdraw at any time without facing any consequences. Since all participants were medical students aged 18 years or older, no minors were involved, and therefore, parental consent or assent was not required. Responses were measured using a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree).20 The overall mean score for the scale in this study was 3.45 ± 0.40. Among the subscales, “vision” received the highest score, while “cognition” received the lowest (Figure 1). The validity and reliability of the Persian version of this questionnaire were previously established by Ghalibaf et al. (2023) among medical students at Mashhad University of Medical Sciences, with Cronbach’s alpha coefficients of 0.886, 0.905, 0.865, and 0.856 for the cognition, ability, vision, and ethics subscales, respectively, and an overall Cronbach’s alpha of 0.944 for the entire scale.21 Similarly, a study by Rezazadeh et al. among medical students in Kerman confirmed the questionnaire’s validity and reliability, reporting an overall Cronbach’s alpha of 0.94.22 In the present study, the Cronbach’s alpha coefficient was 0.762.
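The scoring just described can be sketched as follows. The item-to-subscale groupings shown are placeholders for illustration, not the official MAIRS-MS scoring key.

```python
import numpy as np

# Illustrative scoring of a 22-item, 5-point Likert instrument.
# The subscale item ranges below are placeholders, NOT the official
# MAIRS-MS key.
SUBSCALES = {
    "cognition": slice(0, 8),
    "ability": slice(8, 16),
    "vision": slice(16, 19),
    "ethics": slice(19, 22),
}

def score_mairs(responses: np.ndarray) -> dict:
    """Return the overall mean and per-subscale means for one respondent."""
    assert responses.shape == (22,), "expected 22 item responses"
    assert responses.min() >= 1 and responses.max() <= 5, "Likert range is 1-5"
    scores = {name: float(responses[idx].mean()) for name, idx in SUBSCALES.items()}
    scores["overall"] = float(responses.mean())
    return scores

example = np.array([3, 4, 3, 3, 4, 3, 4, 3, 4, 3, 4, 4, 3, 4, 3, 4, 5, 4, 5, 3, 4, 3])
print(score_mairs(example))
```

Averaging such per-respondent scores across the cohort yields the subscale and overall means reported here, such as the 3.45 ± 0.40 overall mean.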


Figure 1. Mean scores of medical students on the artificial intelligence readiness subscales.

The findings indicate that the mean medical artificial intelligence readiness score was 3.45, with a standard deviation of 0.40, suggesting a moderate level of preparedness. Building upon the original findings (mean: 3.45 ± 0.40), a larger cohort analysis conducted in 2025 reaffirms moderate readiness (61.34 ± 10.13). ANOVA results indicate significant differences across academic years (p < 0.001), with third-year students exhibiting the highest readiness levels at 64.85 ± 8.01. Additionally, regression analysis identifies prior exposure to AI as a predictor of cognition and ability (β = 0.092, p = 0.020; β = 0.113, p = 0.004, respectively). Gender differences were found to be non-significant for AI subscales (p > 0.05). The highest score in the vision subscale reflects optimism regarding AI’s potential to enhance diagnostic accuracy and improve patient outcomes. Scores in the ability and ethics subscales demonstrate moderate confidence in the utilization of AI tools, along with an awareness of the ethical challenges associated with them. Conversely, cognition was identified as the weakest subscale, highlighting a limited understanding of AI concepts such as machine learning. The variability in scores suggests disparities in access to AI education. These findings are consistent with global trends, as only 10% of medical schools worldwide have integrated AI into their curricula.23
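The group comparison and regression reported above can be reproduced in outline with standard statistical tools. The sketch below uses synthetic data with invented effect sizes, chosen only to mirror the shape of the analysis, not the study's actual dataset. Note also that a mean of 3.45 on a 1-5 scale rescales linearly to (3.45 - 1) / 4 x 100 = 61.25, consistent with the reported 61.34/100.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic readiness scores (0-100 scale) for three academic years;
# the group means are invented for illustration only.
year2 = rng.normal(60.0, 10.0, 200)
year3 = rng.normal(64.9, 8.0, 200)  # third-years highest, as in the brief
year4 = rng.normal(61.0, 10.0, 200)

# One-way ANOVA across academic years.
f_stat, p_anova = stats.f_oneway(year2, year3, year4)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Simple linear regression: prior AI exposure (0/1) predicting a
# cognition score; the coefficient and noise level are invented.
exposure = rng.integers(0, 2, 600).astype(float)
cognition = 55.0 + 3.0 * exposure + rng.normal(0.0, 10.0, 600)
res = stats.linregress(exposure, cognition)
print(f"regression: beta = {res.slope:.2f}, p = {res.pvalue:.4f}")
```

The study's standardized coefficients (beta = 0.092 and 0.113) would come from a multivariable model on standardized variables; the bivariate version above shows only the basic mechanics.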

Limitations and future directions

The cross-sectional design limits the ability to establish causality, while the use of a single-institution sample restricts generalizability. Future directions include longitudinal tracking of AI readiness over multiple years and integrating interventions such as prompt engineering workshops to assess improvements in cognitive outcomes.

Discussion

The moderate readiness score of 3.45 is consistent with global findings, highlighting the necessity for targeted interventions in cognitive skills, where prompt engineering could be instrumental. By equipping students with the ability to formulate effective prompts, educators can enhance AI literacy, facilitating the improved integration of large language models (LLMs) in clinical practice. This strategy not only improves diagnostic accuracy but also tackles ethical challenges, including bias mitigation through optimized inputs. Incorporating prompt engineering into medical curricula has the potential to transform medical education, rendering it more adaptive and personalized.

World Health Organization guidelines on artificial intelligence in medical education

The World Health Organization (WHO) guidelines provide a comprehensive framework for preparing medical students for the future of artificial intelligence (AI) in medicine. These guidelines emphasize ethical, technical, and practical training and align with the findings of the policy brief, which identified gaps in knowledge, ethical education, and the practical application of AI in medical education. The proposed policy recommendations, grounded in WHO principles, aim to empower future physicians to integrate AI into healthcare safely, effectively, and ethically.

  • Integration of AI education into medical curricula

Medical schools should mandate the incorporation of training on the fundamentals of artificial intelligence (AI), including technical concepts such as machine learning, as well as its capabilities and limitations, into the curriculum. This education should emphasize the interpretation of AI outputs and their application in clinical decision-making, equipping future physicians to utilize this technology safely and effectively.24,25 For example, curricula should incorporate modules on prompt engineering to equip students with the skills necessary to design AI interactions for clinical simulations, including the generation of patient histories and ethical dilemmas utilizing large language models (LLMs).26

  • Development of ethics and communication skills training pertaining to artificial intelligence

Educational programs must focus on addressing ethical challenges, including algorithmic bias and data privacy, while also enhancing communication skills necessary for conveying AI-driven recommendations to patients. Specialized courses should be developed to strengthen ethical analysis and facilitate the effective communication of AI-based recommendations, thereby equipping physicians to navigate complex decision-making scenarios.24 These training sessions may incorporate workshops that address the management of sensitive situations, including the handling of adverse algorithmic predictions.

  • Development of standardized educational programs and international collaboration

In light of the global disparities in AI education, it is advisable to establish standardized curricula in partnership with international organizations, such as the World Health Organization (WHO). These programs should integrate practical training with AI tools and promote the cultivation of critical thinking skills necessary for assessing emerging technologies.7

  • Establishment of interdisciplinary and continuous educational programs

Medical schools should collaborate with experts in data science, ethics, and technology to develop interdisciplinary educational programs. These programs should be integrated into lifelong learning initiatives to ensure that physicians stay informed about advancements in artificial intelligence.27 For instance, annual refresher courses could be implemented to introduce the latest AI tools and their applications.

  • Enhancing practical and evidence-based training

Curricula should integrate practical training with AI tools, including decision-support systems and remote monitoring technologies. The implementation of case studies and simulations can effectively bridge the gap between theoretical knowledge and practical application.27 Furthermore, students should receive training in evaluating the evidence that supports AI tools to ensure their safety and effectiveness.

  • Promoting inter-institutional collaboration and investment in faculty development

Governments and universities should prioritize investment in the development of faculty expertise in artificial intelligence while fostering partnerships with private sector organizations and technology institutes.27 Such collaborations may encompass knowledge exchange programs or the establishment of AI research centers within medical schools.

Policy recommendations for integrating artificial intelligence into medical education

  • Mandatory AI education in medical curricula

    • Medical schools should incorporate mandatory training on the fundamentals of artificial intelligence (AI), including technical concepts such as machine learning, along with an understanding of its capabilities and limitations. This education should prioritize the interpretation of AI outputs and their application in clinical decision-making to ensure safe and effective utilization by future physicians, with a particular emphasis on prompt engineering for the development of adaptive learning tools.

  • Development of standardized global curricula

    • In light of global disparities in AI education, standardized curricula should be developed in collaboration with international organizations such as the World Health Organization (WHO) and global medical associations. These programs should include practical training and foster critical thinking skills necessary for evaluating emerging technologies.

  • Establishment of interdisciplinary educational programs

    • Medical schools should collaborate with experts in data science, computer engineering, and ethics to create interdisciplinary educational programs. These programs should be integrated into lifelong learning frameworks to ensure that physicians remain aligned with advancements in AI.

  • Practical training with AI tools

    • Curricula should feature hands-on training with AI tools, including decision-support systems and remote monitoring technologies. The utilization of clinical simulations and case studies can effectively bridge the gap between theoretical knowledge and practical application.

  • Ethics and communication training

    • Educational programs should address ethical challenges such as algorithmic bias, data privacy, and accountability. Specialized courses should be developed to enhance ethical analysis and improve the ability to effectively communicate AI-driven recommendations to patients. This can include prompts designed to simulate bias scenarios and promote equitable AI usage.

    • Prompt Engineering Workshops: Conduct hands-on workshops designed to equip students with the skills to engineer prompts for large language models (LLMs), thereby enhancing their capabilities in personalized education and clinical decision-making.

  • Investment in faculty development

    • Governments and universities should invest in the training and development of faculty members with expertise in AI. Faculty training programs should encompass workshops on data science and the clinical applications of AI.

  • Collaboration with technology institutions and the private sector

    • Universities should establish partnerships with technology institutions and private sector entities to secure the educational resources and technological infrastructure necessary for AI training. Such collaborations may involve the establishment of AI research centers within medical schools.

  • Evidence-based evaluation training

    • Students should be trained to critically assess the scientific evidence supporting AI tools to ensure their safety and effectiveness. This training should include the analysis of clinical studies and real-world data.

  • Continuous professional development programs

    • To keep pace with rapid advancements in AI, annual refresher courses or online programs should be developed to introduce physicians to the latest AI tools and their applications.

  • Promoting positive attitudes toward AI

    • Educational programs should incorporate activities designed to alleviate students’ concerns regarding AI, such as fears of physician replacement, and foster a positive perception of AI as a supportive tool. Interactive workshops and discussion sessions can play a crucial role in achieving this objective.

Author contributions

AZ conceptualized and designed the survey, conducted the investigation, analyzed the data, revised the manuscript, and performed grammatical editing. AZ has reviewed and approved the final manuscript.

Ethics and consent

Data collection in the present study was conducted after approval by the Research and Publication Ethics Board of Kermanshah University of Medical Sciences (approval number IR.KUMS.REC.1402.472). We confirm that all methods used in this study were carried out in accordance with relevant guidelines and regulations. Participation was completely voluntary, and informed consent was obtained from all participants.

How to cite this article: Ziapour A. Policy Brief: AI Readiness in Medical Education: Assessing Current Gaps and Future Outlook [version 1; peer review: awaiting peer review]. F1000Research 2025, 14:1233 (https://doi.org/10.12688/f1000research.171529.1)