Keywords
Artificial Intelligence, Chatbot, Students, Mental Health & Well-Being
The primary barriers to effective and comprehensive treatment of mental disorders are a shortage of resources and of competent health and medical personnel, alongside social discrimination, stigma, and marginalization. Artificial intelligence-enabled technologies, most notably mobile-based therapy chatbots, are emerging as a promising response to these longstanding difficulties.
This quantitative, descriptive comparative study aimed to identify the relationship between the utilization of artificial intelligence chatbots and stress, anxiety, and depression levels among health sciences students at a university in the United Arab Emirates. The sample was recruited from four health sciences colleges using a stratified random sampling technique (n = 298).
Three tools were used for data collection. The results revealed that 206 participants (69.1%) reported having interacted with an AI chatbot, with the most used applications being Snapchat (76.9%), followed by ChatGPT and Bard (23.4% each). Forty percent of the participants reported that the chatbots understood them well, while 16% found that the chatbots helped to reduce their stress. Participants who used AI chatbots were significantly more likely to suffer from moderate to extremely severe depression (63.5%) compared to those who had not used AI chatbots (36.7%, p<0.001). The multivariate regression analysis indicated that higher levels of depression (OR=1.022, 95% CI: 1.01-1.085, p<0.001) and anxiety (OR=1.05, 95% CI: 1.01-1.21, p<0.001) were strong predictors of increased AI chatbot usage.
Stress levels did not significantly predict AI chatbot usage. Early intervention and support, including university student counselling, are recommended, as they can significantly alleviate the burden of mental health issues and contribute to the overall well-being and academic success of students. AI chatbots present a promising adjunct to nursing interventions in mental health care; nonetheless, their implementation must be meticulously regulated to guarantee safe and practical assistance, akin to the regulatory rigor imposed on registered healthcare practitioners.
The global mental health care system is currently facing significant challenges and a need to explore new treatment approaches, inclusive of digital healthcare strategies. The World Health Organization states that one in four individuals may experience mental illness at some stage in their lives (Pontes et al., 2021). With growing concerns about the increased vulnerability of adolescent mental health after the COVID-19 pandemic (Wright et al., 2024), there is a pressing need to identify new modes of treatment that are accessible and familiar to this generation. Indeed, the World Health Organisation (WHO, 2017) explains that an overreliance on hospital-based care has the potential to create barriers which may impede the identification of, and recovery from, mental illness. The authors seek to expand the knowledge base around AI chatbot use within the Middle East, given the growing need for comparative analyses with global studies. The authors highlight the research of Dattani (2021), which reports that mental health conditions in the Middle East have remained relatively consistent over the past two decades, albeit while increasing as a share of the total disease burden. Worryingly, Al Habeeb et al. (2023) report that the Middle East and North Africa (MENA) region carries a disproportionately high share of the total disease burden attributable to mental health disorders. In support, Dattani (2021) explains that in Kuwait, Jordan, Oman, and Qatar the percentage of reported mental health conditions as a share of the total disease burden is more than double the global average of 5%. The study's authors argue that the post-COVID-19 era has made it necessary for healthcare leaders to continue examining innovative strategies, inclusive of AI, for a population with lived experience of a global humanitarian crisis. Al Habeeb et al. (2023) support this view, noting that adolescents are at risk of mental illness and carry a higher burden of noncommunicable diseases. Indeed, mental disorders remain the primary source of health-related economic distress globally (Ransome et al., 2022). Depression and anxiety are the predominant causes, impacting around 322 million (depression) and 264 million (anxiety) individuals worldwide (Levant et al., 2022). The primary barriers to effective and comprehensive treatment are a shortage of resources and of competent medical personnel, alongside social discrimination, stigma, and marginalization. However, there is a beacon of hope. Information technology tools, particularly AI-enabled technologies, are emerging as a promising solution to longstanding difficulties such as societal stigma. These technologies are expected to provide more accessible, cost-effective, and potentially less stigmatizing alternatives to traditional mental health treatment models (Williamson et al., 2022). It is theorized that, by reducing the stigma associated with mental health, AI has the potential to pave the way for a more supportive and encouraging environment for those in need and for those who prefer digital health approaches. The authors' aim was to conduct research that expands the knowledge of AI chatbot use to support mental well-being within the Middle East and specifically in the United Arab Emirates.
Artificial intelligence (AI) has had a significant impact on our daily lives, and Gupta et al. (2023) attribute many of these enhancements to the advancement of AI in recent years. Conversational agents, or chatbots, are software systems featuring a conversational user interface. They can be classified as open-domain if they engage with users on any topic, or task-specific if they assist with a particular activity. The following concepts are fundamental to chatbot technology. Chatbots are AI-driven software systems capable of engaging in natural language communication with individuals through text or voice interactions (Lee et al., 2024; Paay et al., 2022). This technology has continuously evolved and is presently employed in digital assistants such as Apple's Siri, Yandex's Alice, Amazon's Alexa, and other virtual assistants, in addition to consumer interfaces in electronic commerce and online banking (Nirala et al., 2022).
Depression, anxiety, and stress are prevalent among university students, impact the lives of many during their academic journey, and can lead to poor academic performance, unhealthy interpersonal relationships (Lee et al., 2020), and, sadly, a low quality of life (Zhong et al., 2019). Mobile-based therapy chatbots are increasingly being used to help young adults who suffer from depression (Guo et al., 2020; Sheldon et al., 2021). As more and more people interact with computers, chatbots are becoming increasingly popular. Major tech firms, including Microsoft, Google, Amazon, and Apple, have all released “personal digital assistants” or “smart speakers” that serve as platforms for chatbots (also known as voicebots); 2016 was accordingly dubbed “the rise of the chatbot”. Compared with more traditional means of human-computer interaction, chatting with a chatbot is likely to feel more natural and intuitive because it mimics human contact.
As artificial intelligence (AI) technology has advanced rapidly over the past decade, more and more publications have begun to acknowledge AI’s importance in internet-based psychological interventions. Gratzer and Goldbloom (2020) and Vaidyam et al. (2019) found that AI chatbots can closely mimic human therapists. Even though most universities offer free therapy for students, many students refuse to seek help when they are suffering from mental health issues because of low perceived need (Andrade et al., 2014), attitudinal barriers (Andrade et al., 2014; Neathery et al., 2020), and a lack of mental health education (Neathery et al., 2020). Chatbots could be a scalable solution that provides an interactive means of engaging users in behavioral health interventions driven by artificial intelligence. Although some chatbot platforms have shown promising early efficacy results, there is limited information about how people utilize these systems. Understanding the usage patterns of chatbots for depression, anxiety, and stress among medical and health sciences students represents a crucial step towards improving chatbot design and clarifying chatbots’ strengths and limitations. Therefore, this study aimed to identify the relationship between the utilization of artificial intelligence chatbots and stress, anxiety, and depression levels among medical and health sciences university students within the United Arab Emirates.
RQ1. How frequently do medical and health sciences university students use AI chatbots?
RQ2. What are the reasons for using AI chatbots to cope with depression, anxiety, and stress among medical and health sciences students?
RQ3. Is there a relationship between the usage of AI chatbots and depression, anxiety, and stress among medical and health sciences university students?
RQ4. Is there a difference in depression, anxiety, and stress levels between students who use AI chatbots and those who do not?
The sample was recruited from four colleges: the College of Medical Sciences (MBBS), the College of Dental Sciences (BDS), the College of Pharmacy (B Pharm), and the College of Nursing (BSN). The sample size was calculated based on the total number of students in the four colleges (530, 298, 123, and 358, respectively; 1309 students in total). A stratified random sampling technique was applied, with each stratum allocated proportionally using the formula ([sample size/population size] x stratum size), giving 120 students from MBBS, 68 from BDS, 28 from B Pharm, and 82 from BSN (n = 298); the allocation is sketched in the code below. The inclusion criterion was undergraduate students who agreed to participate in the study.
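For illustration, the proportional allocation above can be reproduced with a few lines of Python. This is a minimal sketch rather than the authors' procedure; the college names and counts are taken from the text, and the final rounding to whole students is an assumption.

```python
# A minimal sketch of the proportional stratified allocation described above.
# College names and student counts are taken from the text; the rounding step is ours.
strata = {"MBBS": 530, "BDS": 298, "B Pharm": 123, "BSN": 358}
total_population = sum(strata.values())  # 1309
total_sample = 298

for college, size in strata.items():
    # (sample size / population size) x stratum size
    share = total_sample / total_population * size
    print(f"{college}: {share:.1f} -> ~{round(share)} students")

# Raw shares: MBBS 120.7, BDS 67.8, B Pharm 28.0, BSN 81.5.
# The published allocation (120, 68, 28, 82) reflects the authors' rounding to
# whole students while preserving the total of 298.
```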
A face-to-face survey was carried out to collect the data. The participants took approximately 10-15 minutes to complete the questionnaire, and the duration of data collection was two months. Three tools were used to collect the data. The corresponding author is a licensed mental health practitioner within the United Arab Emirates and was able to ensure rigor within the data collection process.
Tool I : Socio-Demographic Characteristics Questionnaire: This questionnaire includes questions on college, gender, age, nationality, and year in the university.
Tool II : AI Chatbot Usability Questionnaire: The researcher created this questionnaire to assess the students’ usage of AI chatbots, their reasons for usage, and the time they spent on chatbots.
Tool III : Depression Anxiety Stress Scale 21 (DASS-21). The DASS-21 (Lovibond and Lovibond, 1995) is a well-established instrument for measuring depression, anxiety, and stress, with good reliability and validity reported in Hispanic American, British, and Australian adults. Lovibond and Lovibond (1995) designed this tool to measure the emotional states of depression, anxiety, and stress through a set of three self-report scales. Each scale comprises seven items, and the three scales together allow the DASS-21 to assess mental well-being. The first scale focuses on depression and assesses dysphoria, hopelessness, devaluation of life, self-deprecation, lack of interest/involvement, anhedonia, and inertia. The second scale focuses on anxiety and assesses anxious affect, subjective and situational anxiety, muscle effects, and autonomic arousal. The third scale assesses difficulty relaxing, impatience, agitation, irritability, and signs of over-reactivity; it has been reported to be sensitive to levels of chronic non-specific arousal (Lovibond and Lovibond, 1995). Finally, scores for depression (scale one), anxiety (scale two), and stress (scale three) are calculated as cumulative subscale scores, and these form the basis of a holistic assessment before data analysis.
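As an illustration of the scoring logic (not reproduced from the authors' materials), the sketch below assumes the standard published DASS-21 procedure and severity cutoffs: each item is rated 0-3, the seven items per subscale are summed, and the sum is doubled before the cutoffs are applied.

```python
# A minimal DASS-21 scoring sketch, assuming the standard published procedure
# and conventional severity cutoffs (an assumption, not the authors' code).
CUTOFFS = {
    "depression": [(9, "normal"), (13, "mild"), (20, "moderate"), (27, "severe")],
    "anxiety":    [(7, "normal"), (9, "mild"), (14, "moderate"), (19, "severe")],
    "stress":     [(14, "normal"), (18, "mild"), (25, "moderate"), (33, "severe")],
}

def dass21_score(items: list[int], scale: str) -> tuple[int, str]:
    """Return the doubled subscale score and its severity label."""
    assert len(items) == 7 and all(0 <= i <= 3 for i in items)
    score = 2 * sum(items)  # double the 7-item sum for DASS-42-comparable scores
    for upper, label in CUTOFFS[scale]:
        if score <= upper:
            return score, label
    return score, "extremely severe"

# Example: one participant's seven depression items.
print(dass21_score([2, 1, 2, 1, 2, 1, 1], "depression"))  # (20, 'moderate')
```

The binary grouping used later in the analysis collapses these labels into "normal to mild" versus "moderate to extremely severe".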
Data analysis was done using SPSS software, version 28 for Windows. Potential associations between the DASS scores and demographic variables were examined using chi-square tests. To examine the association between the DASS scores and AI usage, a binary outcome variable was created to classify participants into two distinct groups, “normal to mild” and “moderate to extremely severe,” using predefined cutoff points based on the DASS scores. A logistic regression analysis was performed to assess the association between the categorized DASS scores and AI chatbot usage while adjusting for potential confounding factors. Odds ratios (OR) and corresponding 95% confidence intervals (CI) were calculated to estimate the strength and direction of the association. All statistical tests were two-tailed, and a p-value of less than 0.05 was considered statistically significant.
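The analysis was performed in SPSS; purely for illustration, an equivalent chi-square test and logistic regression could be sketched in Python as below. The column names (depression_group, depression_score, anxiety_score, stress_score, used_chatbot, gender, age) and the file name are hypothetical.

```python
# An illustrative re-creation of the analysis steps described above.
# The study used SPSS; this pandas/statsmodels sketch uses hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

df = pd.read_csv("survey.csv")  # hypothetical data file

# Chi-square test: DASS severity group versus a demographic variable.
table = pd.crosstab(df["depression_group"], df["gender"])
chi2, p_value, dof, _ = chi2_contingency(table)

# Binary logistic regression: AI chatbot use (0/1) on DASS scores plus covariates,
# reported as odds ratios with 95% confidence intervals.
X = sm.add_constant(df[["depression_score", "anxiety_score", "stress_score", "age"]])
model = sm.Logit(df["used_chatbot"], X).fit()
summary = pd.concat([np.exp(model.params).rename("OR"), np.exp(model.conf_int())], axis=1)
summary.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(summary, model.pvalues, sep="\n")
```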
To assess the internal consistency and reliability of the Depression, Anxiety, and Stress Scale (DASS) scores and Tool II (AI usage), a reliability analysis was conducted using Cronbach’s alpha coefficient. Higher values of this coefficient (above 0.7, and particularly above 0.8) indicate stronger internal consistency among the items, a crucial factor in the reliability of the tools. The validity of Tool II was checked by a bilingual specialist.
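For readers unfamiliar with the statistic, Cronbach's alpha is k/(k-1) × (1 − sum of item variances / variance of the total score). The sketch below is illustrative only; it uses random demo data rather than the study's responses.

```python
# A minimal sketch of Cronbach's alpha for checking internal consistency;
# `items` is an (n_respondents x n_items) array of item responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical demo with random 0-3 responses (real survey data is required for a
# meaningful estimate; values above 0.7/0.8 are usually read as acceptable/good).
demo = np.random.default_rng(0).integers(0, 4, size=(50, 7))
print(round(cronbach_alpha(demo), 2))
```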
Table 1 presents the demographic characteristics of the study participants. Most participants were female (N = 236, 79.2%), with a mean age of 20.9 ± 2.5 years. The largest proportion of participants was from the College of Medicine (N = 120, 40.3%), followed by the College of Nursing (N = 82, 27.5%), Dental (N = 68, 22.8%), and Pharmacy (N = 28, 9.4%). Regarding the year of study, the highest percentage was in the first year (N = 107, 35.9%), followed by the third (N = 84, 28.2%), fourth (N = 73, 24.5%), second (N = 19, 6.4%), and fifth (N = 15, 5.0%) years ( Table 1).
Table 2 presents the usage of AI chatbots among the students. A total of 206 participants (69.1%) reported having ever spoken with an artificially intelligent chatbot. The most used AI chatbot applications were Snapchat (N = 230, 76.9%), followed by ChatGPT and Bard (N = 70, 23.4% each), and Copilot (N = 10, 3.3%) ( Table 2, Figure 1).
Two fifths of the study’s participants (40%) mentioned using AI chatbots because they are familiar with the platform’s interface and feel that the chatbot understands them, while 25.7% reported that they can access them anytime and 20.8% valued that the AI chatbot is always available to them. A further 17.4% described a relationship akin to the platform being a friend, and 16% found that it had a positive impact on reducing their stress levels. Figure 2 outlines the engagement with the AI platforms.
Overall, the assessment indicated that more than half of the participants, 170 (57.0%), had moderate to extremely severe depression, 204 (68.5%) had moderate to extremely severe anxiety, and 100 (33.6%) had moderate to extremely severe stress. Figure 3 presents the results of the Depression, Anxiety, and Stress Scale (DASS).
Table 3 shows the association between DASS scores and demographic characteristics. There were no significant differences in depression and anxiety levels between genders. However, a significant association was found for stress, with 35.6% of females experiencing moderate to extremely severe stress compared to 25.8% of males (p < 0.001). Students from the Dental College had the highest rates of moderate to extremely severe anxiety (75.0%) and stress (32.4%) compared to other colleges (p < 0.001 for both). First-year students had the highest prevalence of moderate to extremely severe depression (60.7%), anxiety (69.2%), and stress (26.2%) across all years of study.
Table 4 illustrates the association between AI chatbot usage and DASS scores. Participants who had spoken with an AI chatbot were more likely to have moderate to extremely severe depression (N = 125, 63.5%) compared to those who had not used an AI chatbot (N = 45, 36.7%, p < 0.001). Additionally, 153 participants (75.0%) who used AI chatbots had moderate to extremely severe anxiety, while only 51 non-users (55.0%) had this level of anxiety (p < 0.001). However, no significant association was found between AI chatbot usage and stress levels (p = 0.236).
The results of the multivariate regression analysis identify factors predicting AI chatbot usage. After adjusting for covariates, students from the College of Medicine were more likely to use AI chatbots than those from the College of Nursing (OR = 3.094, 95% CI: 1.057-3.059, p = 0.039). Additionally, higher levels of depression (OR = 1.022, 95% CI: 1.01-1.085, p < 0.001) and anxiety (OR = 1.05, 95% CI: 1.01-1.21, p < 0.001) were significantly associated with increased AI chatbot usage ( Table 5).
Alowais et al. (2023) explain that the rapid advancement of AI has ushered in a new era of digital communication tools, including AI-powered chatbots. These chatbots are increasingly employed across various domains, including education and healthcare, to provide information, support, and interaction (Bajwa et al., 2021). As such, understanding the factors that influence the usage of AI chatbots, particularly among university students, is crucial. This demographic often faces unique academic and social pressures, which may drive their interaction with technological aids (Tian et al., 2024). Herein, we aimed to investigate the relationship between mental health issues among students in medical and health sciences disciplines and AI chatbot interaction.
This is timely, as the World Health Organisation (2022) estimates that the majority of individuals with mental illness do not seek treatment, citing concerns about damage to their family’s reputation, proposals for marriage, and social status, as well as encountering discrimination, exclusion from communities, and stigma. Consequently, these individuals can experience poor academic achievement (Bruffaerts et al., 2018) and diminished self-esteem (Stuart et al., 2019). The findings of this study corroborate this earlier research and underscore a significant prevalence of mental health challenges, including depression, anxiety, and stress, among students in medical and health sciences disciplines, with more than half of the participants reporting moderate to extremely severe symptoms. The findings indicate that AI chatbot usage was associated with higher levels of depression and anxiety. Specifically, students who had interacted with AI chatbots exhibited a greater likelihood of experiencing moderate to extremely severe depression, although no significant correlation was found with stress levels.
The study revealed that the prevalence of mental health issues among university students, particularly in the medical and health sciences fields, is consistent with a substantial body of existing literature outside of the Middle East. Numerous studies (Rtbey et al., 2022; Agyapong-Opoku et al., 2023; Ibrahim et al., 2024; Nair et al., 2023) have highlighted the high rates of depression, anxiety, and stress experienced by students in these disciplines, often attributed to the rigorous academic demands and intense competition inherent in healthcare education. The students experiencing these stressful life events, so often a consequence of rigorous academic demands, sought support from AI chatbots. This process demonstrates evidence of salutogenesis. Originally developed by Antonovsky, salutogenesis explains how some individuals utilize the resources available to them to survive and thrive in adverse social conditions (Antonovsky, 1979; Mottershead et al., 2024). The adoption of AI chatbot platforms appears to demonstrate that health and well-being cannot be conceptualized in the narrowest sense as a biological function. The students appear to be adopting this technology in an attempt to enhance their quality of life and to support them within the adverse circumstances of life in higher education. The authors therefore emphasize that salutogenesis has an important role in creating insight into the mental well-being of students and in furthering understanding of AI use within their lives.
This understanding is furthered by the study’s observation that participants perceived medical chatbots as possessing numerous advantages, such as anonymity, convenience, and expedited access to relevant information. The participants appeared to be as inclined to share emotions and information with a chatbot as they would with a human counterpart. Intriguingly, interactions with chatbots and humans exhibited similar degrees of perceived understanding, intimacy of disclosure, and cognitive reappraisal, indicating that users engage psychologically with chatbots as they do with humans. The study’s participants mentioned using AI chatbots because they felt understood by them and could access them at any time, which appeared to enhance a sense of familiarity, as the chatbot conveniently met their immediate needs. This appeared to foster a sense of belonging towards the chatbot and a social cohesion mirroring relations identified as ‘friendship’. This relationship was able to alleviate their feelings of stress, which in turn enhanced a bond of trust with the AI chatbot, as the participants did not feel that they could or would be judged by it.
Regarding AI chatbot usage and its association with mental health and well-being, our findings are consistent with research by Klos et al. (2021), which suggested a potential link between excessive digital technology use and an adverse impact on mental health. Interestingly, this study’s findings indicate a significant association between AI chatbot usage and higher levels of depression and anxiety among students. This may be explained by students experiencing ill-health seeking support from AI chatbots, rather than the digital exposure itself having an adverse effect on their mental health. There appeared to be no significant relationship between stress levels and AI chatbot use, which contrasts with findings from studies such as that by Klos et al. (2021). This discrepancy may be attributed to variations in study methodologies, sample characteristics, and the specific platforms or types of AI chatbots examined. It does, however, underscore the need for further research to build an understanding of the nuanced interactions between technology use and mental health outcomes in higher educational settings.
The study found no significant association between gender and AI chatbot usage, whereas other studies outside the Middle East have reported gender differences in technology adoption patterns (Truong et al., 2023). Moreover, the lack of a significant association between age and AI chatbot usage in our study contrasts with findings by Truong et al. (2023), which identified age as a moderating factor in the adoption of medical mobile applications. These discrepancies may stem from variations in sample characteristics, cultural contexts, or the specific types of technology examined, highlighting the need for further investigation into the nuanced factors influencing technology adoption among different global populations.
The high prevalence of AI chatbot engagement aligns with studies indicating increased acceptance and utilization of digital mental health interventions among young adults. Specifically, the popularity of platforms like Snapchat for accessing AI chatbots resonates with research demonstrating the widespread use of social media for mental health-related activities, including seeking support and sharing personal experiences (Klos et al., 2021). This suggests that integrating AI chatbots into familiar social media platforms may enhance accessibility and acceptability among students, potentially addressing barriers to traditional mental health services. Whilst this may be favorable, the authors note the need for rigor in the data provided by AI chatbots and in its interpretation. In line with the findings of Ahmed et al. (2025), it is recommended that a training program on AI usage in healthcare be conducted and that students be made aware of the limitations of AI chatbots. Such a training program could enhance the effectiveness of AI chatbot platforms whilst ensuring supportive mental health strategies. However, as highlighted by Nawaz et al. (2024), whilst there is indeed evidence of how digital systems can support mental health through enhanced social support and reduced stigma and isolation, challenges remain. Indeed, Mottershead and Ghisoni (2021) demonstrate that opportunities exist for non-pharmaceutical interventions; however, the current healthcare landscape appears unprepared for their implementation, and clearly there is a need for more explorative studies.
Despite the high prevalence of AI chatbot usage, our study also revealed alarming rates of moderate to extremely severe depression, anxiety, and stress among students, highlighting the mental health challenges faced by university populations in the 21st century. The continued presence of mental health problems raises questions about the effectiveness of AI chatbots in mitigating mental health symptoms among students when usage is so high. The authors believe that future research should explore the integration of AI chatbots with other forms of validated and accredited mental health support to optimize outcomes and ensure comprehensive care for this vulnerable population entrusted with our society’s future healthcare needs.
Integrating a well-structured demographic and psychological assessment enhances the reliability of our findings. However, there are limitations to consider. The study’s cross-sectional design restricts our ability to establish causality between mental health issues and AI chatbot usage. Undoubtedly, the authors’ own lived experience and subjectivity may have influenced the interpretation of these findings, as highlighted by Blaikie (2007); however, precautions were taken to limit the impact of this bias, where possible, by adhering to a clear and robust methodological framework. The sample is limited to a single institution, which may affect the generalizability of the results to broader university populations; however, the data add a new cultural context from the United Arab Emirates, contributing to global knowledge of this topic. The authors recommend that future studies adopt longitudinal designs and broader demographic sampling to overcome these noted limitations.
Most of the participants experienced moderate to extremely severe symptoms. Notably, students who had used AI chatbots were more likely to have higher levels of depression and anxiety compared to non-users. Factors such as being a medical student and having a higher academic year were also associated with increased AI chatbot usage. These findings underscore the need for comprehensive mental health interventions and support services tailored to the unique needs of this population, which may include the judicious integration of AI-powered chatbots as part of a broader mental health strategy. In terms of relevance for clinical practice, AI chatbots hold great potential in identifying and treating mental health issues such as anxiety, depression, and stress in students and adolescents. Clinical nurses may recommend these technologies as primary support for clients who may not seek in-person support. College-based counselling services could feasibly utilize AI chatbots, letting users monitor their symptoms in real time and guiding them through evidence-based and accredited cognitive behavioral therapy (CBT) treatment. The availability of chatbots twenty-four hours a day, seven days a week could have a significant positive impact on mental health care within universities and wider society, as AI chatbots meet this generation’s demand for instant responses and rapid assistance. University counsellors, as well as wider healthcare professionals, could feasibly incorporate chatbots into treatment plans, offering enhanced patient and family involvement and, therefore, hope and optimism for holistic care and enhanced outcomes.
The study was conducted in accordance with the relevant ethical guidelines and regulations, including the Declaration of Helsinki. After approval was obtained from the RAK College of Nursing REC (RAKCON-REC-01-2023/24-F-M), written informed consent was obtained from the participants. The privacy of the participants and the confidentiality of the collected data were assured.
In adherence to regulatory practices on the sharing of confidential student data, readers are requested to direct requests for access to the corresponding author – rmottershead@sharjah.ac.ae. Data will be shared upon request.
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: My area of research is higher education, in particular qualitative studies of lived experience. I have published several studies relating to AI.
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Community emotional intelligence, psychiatric mental health, psychological, educational, addiction.