Research Article

Responsible practices and ethical behaviors in the use of artificial intelligence among university students

[version 1; peer review: awaiting peer review]
PUBLISHED 06 Jan 2026


Abstract

Background

The growing incorporation of artificial intelligence (AI) in higher education presents significant ethical challenges, particularly regarding the critical and responsible use of these technologies by students. In this context, the present study aimed to analyze responsible practices and ethical behaviors in the use of AI among students at the State University of Milagro.

Methods

A quantitative, explanatory-level approach was applied, employing a non-experimental and cross-sectional design, with a validated questionnaire administered to 716 participants from various academic programs. The analysis was conducted through structural equation modeling to explore the influence of affective, behavioral, and cognitive learning dimensions on the ethical dimension.

Results

The affective and cognitive dimensions exert a significant and positive impact on the development of learning ethics, whereas the behavioral component did not show statistically significant effects. Additionally, the study evidenced high reliability and validity of the instrument, as well as a medium level of AI appropriation among students, with predominant use of tools such as ChatGPT.

Conclusions

Ethical training cannot be dissociated from emotional attitudes and technical knowledge about AI. The study acknowledges as limitations the non-probabilistic nature of the sample and the cross-sectional design.

Keywords

Ethics, educational technology, artificial intelligence, higher education, student behavior, educational assessment, pedagogical innovation, assisted learning.

Introduction

The integration of artificial intelligence (AI) in higher education has generated growing academic interest in recent decades, driven by its potential to profoundly transform teaching, learning, assessment, educational personalization, and institutional management processes (Alotaibi, 2024; Katsamakas et al., 2024; Murdan & Halkhoree, 2024). This transformation is grounded in the development of intelligent systems capable of analyzing large volumes of data, detecting patterns of student behavior, and generating automated interventions (George & Wooden, 2023), thereby opening new possibilities for a more efficient, adaptive, and student-centered education.

In this context, recent studies such as those by Jiang et al. (2024) have shown that the technological dimension carries the greatest weight in the processes of incorporating AI into higher education, far surpassing other dimensions such as pedagogical, ethical, and cultural. This technological predominance has favored an instrumental adoption of AI, focused on infrastructure, availability of tools, and task automation, while the formative and axiological components have been addressed only in a fragmented or secondary manner. Several investigations have warned that this reductionist orientation generates multiple tensions within university environments (Moreira-Choez et al., 2025; Nishant et al., 2020; Raisch & Krakowski, 2021).

On the one hand, it has been reported that the uncritical use of AI-based systems, without adequate pedagogical mediation, can generate inequalities in access to knowledge, algorithmic biases, and technological dependency (Arnold, 2021; Bracci, 2023). On the other hand, the lack of ethical training in the use of these technologies has given rise to questionable academic practices, such as the excessive delegation of cognitive tasks to automated tools, the inappropriate use of generative models for content production, and the lack of transparency in assessment criteria (Liehner et al., 2022; Munoko et al., 2020; Parker & Grote, 2022). Furthermore, a significant gap has been identified between technological development and students’ capacity to critically understand the social, cultural, and regulatory impacts of AI, which limits their active and reflective participation in digital environments (Chan et al., 2025).

Despite these advances, several studies have highlighted the existence of persistent challenges in the adoption of AI in university contexts (Alhosani & Alhashmi, 2024; Naseer et al., 2025; Ragolane & Patel, 2024; Sabando-García et al., 2025). These include limited teacher training in advanced digital competencies, insufficient curricular adaptation, and the absence of clearly defined ethical-legal regulatory frameworks (Gabriel et al., 2022; Moreira-Choez et al., 2024d). Moreover, recent research emphasizes the need to consider sociocultural factors in the design and implementation of AI-based strategies, given that cultural perceptions directly influence the acceptance, use, and appropriation of these technologies. However, a theoretical and empirical gap persists regarding the interaction between these approaches, as well as a lack of integrative studies simultaneously addressing the technological, pedagogical, ethical, and cultural dimensions.

In this context, it becomes essential to develop research that enables a systemic understanding of the factors influencing the responsible and ethical incorporation of AI in higher education. The present study is justified both by the academic relevance of the phenomenon and by its potential impact on the transformation of training processes, the consolidation of a critical digital culture, and the promotion of a more equitable, inclusive, and ethically guided education. Despite the growing interest in technological deployment within universities, gaps remain in understanding the role played by the affective, behavioral, and cognitive dimensions of AI learning in shaping students’ ethical practices. Based on this problem, the following research question is posed: What are the responsible practices and ethical behaviors in the use of artificial intelligence among students at the State University of Milagro (UNEMI)? From this question, the following hypotheses are formulated:

H1. The affective learning of AI significantly influences learning ethics.

H2. The behavioral learning of AI has a significant impact on learning ethics.

H3. The cognitive learning of AI significantly contributes to learning ethics.

H4. AI learning has a significant impact on learning ethics among university students.

Accordingly, the general objective of this study is to analyze responsible practices and ethical behaviors in the use of artificial intelligence among students at the State University of Milagro (UNEMI), with the aim of identifying the predominant factors influencing their ethical development and of proposing guidelines for a contextualized and sustainable implementation in higher education environments.

Methods

The present study was conducted under a quantitative and explanatory approach, aimed at identifying and analyzing the factors that influence the responsible and ethical use of artificial intelligence in the university context. A non-experimental, cross-sectional design was adopted, as data collection was carried out at a single point in time without manipulating the independent variables. The research was of a correlational-causal type, as it sought to establish significant relationships between the dimensions of AI learning (cognitive, affective, and behavioral) and learning ethics among university students.

The study was conducted at UNEMI, a higher education institution located in the city of Milagro, in the province of Guayas, Ecuador. The study population consisted of undergraduate students enrolled in various degree programs. A sample of 716 students was selected using non-probabilistic convenience sampling, considering their accessibility and willingness to participate in the study. For data collection, a structured questionnaire was administered through digital means, which included scales previously validated in studies on digital competencies, learning ethics, and the academic use of emerging technologies. Table 1 presents the demographic and academic characteristics of the participants, as well as their preferences and frequency of use of artificial intelligence (AI) applications.

Table 1. Distribution of the sample by sex, age group, semester, preference for artificial intelligence applications, and frequency of AI use.

Variable                                     Category             Frequency   Percentage
Sex                                          Male                 267         37.3
                                             Female               449         62.7
                                             Total                716         100.0
Age group                                    17 to 19 years       212         29.6
                                             20 to 25 years       421         58.8
                                             Over 25 years        83          11.6
                                             Total                716         100.0
Semester                                     Second               152         21.2
                                             Third                54          7.5
                                             Fourth               193         27.0
                                             Fifth                60          8.4
                                             Sixth                85          11.9
                                             Seventh              89          12.4
                                             Eighth               73          10.2
                                             Ninth                9           1.3
                                             Tenth                1           0.1
                                             Total                716         100.0
Preference for AI applications               ChatGPT              442         61.7
                                             Gemini               111         15.5
                                             Siri                 57          8.0
                                             Sora                 1           0.1
                                             Deepseek             4           0.6
                                             Copilot              6           0.8
                                             Google Bard          39          5.4
                                             Claude               4           0.6
                                             DALL·E               6           0.8
                                             Midjourney           8           1.1
                                             Monica               3           0.4
                                             Fireflies            5           0.7
                                             Perplexity           6           0.8
                                             Sider                1           0.1
                                             Venice               1           0.1
                                             Poe                  1           0.1
                                             Meta                 1           0.1
                                             GAMMA                1           0.1
                                             None                 19          2.7
                                             Total                716         100.0
Frequency of AI use in academic activities   Not frequent         18          2.5
                                             Slightly frequent    153         21.4
                                             Occasionally         298         41.6
                                             Frequent             180         25.1
                                             Very frequent        67          9.4
                                             Total                716         100.0

Table 1 presents the distribution of the sample according to sex, age group, academic semester, preference for artificial intelligence (AI) applications, and frequency of AI use in academic activities. The sample distribution revealed a higher female participation rate (62.7%) compared to male (37.3%), suggesting a predominance of women in the study, possibly reflecting the overall composition of the university population. Regarding age, most participants were between 20 and 25 years old (58.8%), with a mean age of 21.6 years (SD = 3.7), representing a typically young cohort corresponding to the formative stage of higher education. The distribution by semester showed heterogeneity, with a predominance in the fourth (27.0%) and second (21.2%) semesters, which may be associated with greater availability or motivation among students in intermediate academic stages. Concerning preferences for artificial intelligence applications, ChatGPT emerged as the most frequently used tool (61.7%), followed by Gemini (15.5%), reflecting a trend toward the use of conversational models in educational settings. Finally, the frequency of AI use in academic activities showed that 41.6% of students use it occasionally and 25.1% use it frequently, indicating a moderate but expanding adoption of these technologies in university learning processes.

To ensure the validity and reliability of the instrument used to measure the use and ethics of artificial intelligence in the university context, fundamental statistical tests were applied prior to the factorial and structural analyses. Table 2 presents the results of the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test of sphericity, which evaluate the adequacy of the sample for factor analysis.

Table 2. KMO and Bartlett’s test for measuring AI use and ethics.

Kaiser-Meyer-Olkin Measure of Sampling Adequacy              0.987
Bartlett's Test of Sphericity     Approx. Chi-Square         75,301.593
                                  df                         1485
                                  Sig.                       < 0.001

To ensure the validity and reliability of the instrument designed to assess the use and ethics of artificial intelligence in the university environment, preliminary statistical tests were conducted to verify data suitability before performing factorial and structural analyses. The results in Table 2 showed a Kaiser-Meyer-Olkin (KMO) index of 0.987, indicating an excellent level of sampling adequacy and a high degree of interrelation among the analyzed items. Similarly, Bartlett’s test of sphericity produced a chi-square value of 75,301.593 with 1,485 degrees of freedom and a significance level below 0.001, confirming sufficient correlations among the variables. These results demonstrated the instrument’s internal consistency and its appropriateness for the application of multivariate techniques aimed at validating the proposed factorial structure.
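These adequacy checks can also be reproduced outside SPSS. The following is a minimal sketch using the Python factor_analyzer package, assuming the 716 × 55 item responses are loaded into a data frame; the file and column names are hypothetical:

```python
# Sketch: sampling-adequacy tests prior to factor analysis.
# Assumes a CSV with 716 rows and 55 Likert-item columns (P1..P55);
# the file name and column layout are illustrative, not the study's data.
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("ai_ethics_items.csv")

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)

print(f"Bartlett chi-square = {chi_square:,.3f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.3f}")  # the study reports 0.987
```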

In turn, Table 3 presents the internal reliability coefficients of the questionnaire, calculated using Cronbach’s α and McDonald’s Ω indicators, both for individual factors and for the overall scale.

Table 3. Reliability analysis of the AI questionnaire.

Factor                 Cronbach's α   McDonald's Ω   Number of items
Affective learning     0.982          0.982          19
Behavioral learning    0.970          0.970          11
Cognitive learning     0.973          0.972          9
Ethical learning       0.991          0.991          16
Total                  0.992          0.992          55

Regarding Table 3, it is observed that the reliability coefficients for all evaluated factors far exceed the minimum acceptable threshold of 0.70. Affective, behavioral, cognitive, and ethical learning show α and Ω coefficients ranging from 0.970 to 0.991, reflecting very high internal consistency. At the global level, the instrument reached α and Ω values of 0.992, indicating excellent reliability of the questionnaire. These findings confirm that the instrument used is statistically robust and suitable for accurately assessing students’ perceptions and attitudes regarding learning and the ethical use of artificial intelligence in the university setting.
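Both coefficients are straightforward to compute directly from the data. The sketch below, given one subscale's responses (rows = respondents) and its standardized factor loadings, implements the textbook formulas; the variable names are illustrative:

```python
# Sketch: internal-consistency coefficients for a single subscale.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def mcdonald_omega(loadings) -> float:
    """omega = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)),
    assuming standardized loadings and uncorrelated errors."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam**2).sum())
```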

Likewise, a specifically designed instrument was employed to assess responsible practices and ethical behaviors associated with the use of artificial intelligence in the university context. The questionnaire was administered through the Google Forms platform. The initial section of the form included an informed consent statement describing the objectives of the study, the voluntary nature of participation, data confidentiality, and the participants’ right to withdraw at any time without consequences. In accordance with the ethical principles governing research involving human participants, prior informed consent was obtained from all individuals involved in the study. For participants under 18 years of age, informed consent was provided by their legal guardians, in compliance with institutional and regulatory requirements. This consent was recorded electronically within the Google form: only those who accepted the terms (or whose legal guardians accepted them, in the case of minors) were allowed to access the questionnaire, while those who did not provide consent were automatically redirected to the end of the form, concluding their participation.

The items were structured using a five-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree), enabling the quantification of perceptions, attitudes, and ethical behaviors related to the use of artificial intelligence. Data processing and analysis were conducted using the statistical software SPSS (version 28) and AMOS, through which descriptive analyses, internal reliability tests, and validation of the proposed structural model were performed.

Ethical considerations

All individuals who took part in this research gave their informed consent electronically within the Google form in accordance with ethical guidelines for studies involving human participants. They were informed that participation was entirely voluntary and that they retained the right to withdraw at any stage without any consequences. To safeguard confidentiality, all personal information was anonymized. The study received ethical clearance from the Institutional Review Board (IRB) of Milagro State University, through approval “Oficio Nro. UNEMI-VICEINVYPOSG-DP-277-2025-OF,” issued on March 22, 2025.

Results and discussion

This section presents the results derived from the proposed structural equation model, which aimed to analyze the relationship between the dimensions of artificial intelligence learning (affective, behavioral, and cognitive) and learning ethics among university students at UNEMI. Model validation was performed using AMOS software, applying maximum likelihood estimation. The results obtained provide insights into the latent interactions among the studied variables and their influence on the configuration of ethics in educational contexts mediated by AI.
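For readers without access to AMOS, an open-source analogue of this estimation step can be sketched with the semopy package, which accepts lavaan-style model syntax and fits by maximum likelihood. The indicator-to-construct assignments and file name below are illustrative placeholders, not the study's exact 55-item measurement model:

```python
# Sketch: four-latent-variable structural model estimated by maximum likelihood.
# Indicator subsets and the data file are hypothetical stand-ins.
import pandas as pd
from semopy import Model, calc_stats

spec = """
Affective  =~ P1 + P2 + P3 + P4 + P5
Behavioral =~ P20 + P21 + P22 + P23
Cognitive  =~ P31 + P32 + P33 + P34
Ethical    =~ P40 + P41 + P42 + P43
Ethical ~ Affective + Behavioral + Cognitive
"""

data = pd.read_csv("ai_ethics_items.csv")  # hypothetical item-score file
model = Model(spec)
model.fit(data)                  # maximum likelihood is the default objective
print(model.inspect())           # path estimates, standard errors, p-values
print(calc_stats(model).T)       # fit indices such as CFI and RMSEA
```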

Figure 1 presents the structural model of AI learning ethics among university students. The diagram illustrates the relationships among the affective, behavioral, cognitive, and ethical latent variables, highlighting the strength and direction of the standardized paths estimated through structural equation modeling.


Figure 1. Structural model of AI learning on learning ethics in university students.

Note: The model displays the factor loadings of the indicators (P1–P55), errors (e1–e56), and the structural paths between the affective, behavioral, cognitive, and ethical variables.

Figure 1 illustrates the structural model that integrates the affective, behavioral, and cognitive dimensions of artificial intelligence learning and their direct associations with the ethical component. The standardized factor loadings of the indicators (P1–P55) demonstrate high and consistent values, confirming both convergent validity and internal consistency of the proposed theoretical model. The affective construct presents the strongest connection with the ethical dimension (β = 0.68), evidencing that students’ attitudes, emotions, and beliefs toward artificial intelligence substantially influence the internalization of ethical principles within the learning process. This finding aligns with previous studies emphasizing that emotional engagement toward emerging technologies constitutes a decisive factor in fostering critical awareness and digital ethical competencies (Mâță et al., 2020; Moreira-Choez et al., 2024b, 2024c; Pinargote-Macías et al., 2022). Similarly, Farangi et al. (2024) point out that positive emotions toward AI enhance ethical learning readiness and strengthen responsible judgment in digital contexts.

The behavioral dimension, in contrast, shows only a weak and non-significant association with learning ethics (β = –0.05), suggesting that students’ habitual actions, usage patterns, and decision-making processes in AI-supported environments do not, on their own, shape ethical reasoning. This nuances the position of Kudina (2019), who contends that ethical behavior in technology-mediated contexts is not derived solely from technical expertise but from the practical application of social responsibility principles. Correspondingly, Verma and Garg (2024) argue that active participation in AI-mediated educational experiences fosters ethical awareness particularly when such activities are accompanied by pedagogical reflection and moral guidance; the absence of such accompaniment may explain the null effect observed here.

Meanwhile, the cognitive dimension demonstrates a positive association with the ethical construct (β = 0.39), indicating that conceptual and procedural understanding of artificial intelligence contributes to ethical awareness, albeit to a lesser extent than the affective component. This suggests that cognitive mastery alone is insufficient to promote ethical conduct without the reinforcement of affective and behavioral engagement. In line with Waight et al. (2022), this result underscores the need to integrate technical knowledge with axiological and ethical training to enable students to critically assess the societal implications of AI. Therefore, ethical learning in AI-mediated educational settings should be conceived as a multidimensional process, emerging from the dynamic interaction among knowledge, emotion, and behavior, which together shape the moral consciousness necessary for responsible technological engagement.

The following section presents the analysis of the level of artificial intelligence use by university students, considering the four dimensions evaluated in the study: affective, behavioral, cognitive, and ethical. This information is summarized in Figure 2, which illustrates the percentage distribution of AI use according to low, medium, and high levels across each of the dimensions.


Figure 2. Level of AI use among university students.

Note: The figure illustrates the distribution of AI use levels (low, medium, and high) across the affective, behavioral, cognitive, and ethical learning dimensions, showing a predominance of medium-level engagement among university students.

Figure 2 reveals that most university students fall within a medium level of artificial intelligence use, both in the overall score (52.8%) and in the affective (47.8%), behavioral (43.7%), and cognitive (41.4%) dimensions. This finding demonstrates a partial adoption of AI as an academic resource, which may be linked to ongoing familiarization processes and an incipient integration into curricular environments. Similar results were reported by Valdivieso and González (2025), who found that most Latin American university students use AI-based tools occasionally and without clear institutional guidance. In this regard, the predominance of the medium level suggests the need to strengthen systematic training processes that consolidate the pedagogical use of AI from a critical and contextualized perspective.
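The article does not report the cut-off rule behind these low, medium, and high bands. A plausible reconstruction, assuming equal-width bands over the 1-to-5 Likert mean score per dimension (the cut points are an assumption, not the study's published criterion):

```python
# Sketch: banding a 1-5 Likert mean score into low/medium/high.
# The equal-width cut points below are assumed for illustration only.
import pandas as pd

def classify_level(mean_score: float) -> str:
    if mean_score < 2.34:        # 1.00-2.33 -> low
        return "low"
    if mean_score < 3.67:        # 2.34-3.66 -> medium
        return "medium"
    return "high"                # 3.67-5.00 -> high

dimension_means = pd.Series([1.8, 3.1, 4.4], index=["student A", "student B", "student C"])
print(dimension_means.apply(classify_level).to_dict())
# {'student A': 'low', 'student B': 'medium', 'student C': 'high'}
```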

Regarding the low levels, a greater concentration is observed in behavioral learning (34.7%) and cognitive learning (34.1%), which may reflect limitations in technical mastery and the operational application of AI tools. These results are consistent with Xia et al. (2024), who warn that, although students show interest in AI, there is a significant training gap in the functional use of these technologies for complex academic tasks. Similarly, the studies of Fanidawarti Hamzah et al. (2024) indicate that the lack of teacher training and the limited curricular integration of emerging technologies hinder the development of cognitive and procedural competencies among university students, restricting their ability to apply AI beyond superficial or recreational contexts.

In the ethical dimension, the results show a differential pattern. Although 32.4% of students were placed at the low level, the high level registered the highest percentage (34.6%) among all dimensions, suggesting a stronger internalization of principles and values associated with the responsible use of AI. This finding is consistent with Malinverni et al. (2025), who emphasize that the ethical dimension tends to develop more strongly when spaces for reflection on the social implications of using emerging technologies are created. Moreover, Chiu and Chai (2020) state that learning environments with an ethical orientation foster conscious practices, particularly when AI is addressed from an interdisciplinary and humanistic perspective. Therefore, the relative predominance of the high level in this dimension can be interpreted as a positive indicator of the transformative potential of ethical training in digital contexts.

Finally, the affective dimension shows a notable proportion at the high level (27.6%), suggesting that a significant group of students expresses positive attitudes toward AI, accompanied by interest, motivation, and willingness to learn. This result coincides with Bahroun et al. (2023), who found that a favorable emotional perception of AI can be a decisive factor for its integration into educational processes, provided that such an attitude is accompanied by meaningful content and participatory methodologies.

Table 4 presents the results of the convergent validity analysis for the research model on AI use and ethics. The table reports the standardized factor loadings, internal consistency coefficients (Cronbach’s α), composite reliability (CR), and average variance extracted (AVE) for the four latent dimensions—affective, behavioral, cognitive, and ethical learning—demonstrating the reliability and construct validity of the measurement model.

Table 4. Convergent validity of the research model on AI use and ethics.

Standardized factor loading and Cronbach's α are reported per item; composite reliability (CR) and average variance extracted (AVE) are reported per factor.

Affective Learning (CR = 0.982; AVE = 0.747)
  1. Intrinsic motivation [The AI concepts I learn are relevant to my life (e.g., personal, academic, and professional).] (loading = 0.804; α = 0.982)
  2. Intrinsic motivation [Learning AI is interesting.] (loading = 0.817; α = 0.982)
  3. Intrinsic motivation [Learning AI makes my life more meaningful.] (loading = 0.759; α = 0.982)
  4. Intrinsic motivation [I am curious to explore new AI technologies.] (loading = 0.797; α = 0.982)
  5. AI learning self-efficacy [I am confident that I will perform well in AI-related tasks.] (loading = 0.863; α = 0.981)
  6. AI learning self-efficacy [I am confident that I will do well in AI-related projects.] (loading = 0.874; α = 0.981)
  7. AI learning self-efficacy [I believe I can master AI knowledge and skills.] (loading = 0.865; α = 0.981)
  8. AI learning self-efficacy [I believe I can achieve good grades in AI-related assessments.] (loading = 0.865; α = 0.981)
  9. AI learning self-efficacy [I am confident that I can understand AI.] (loading = 0.862; α = 0.981)
  10. Professional interest [Learning AI will help me obtain a good job in the future.] (loading = 0.873; α = 0.981)
  11. Professional interest [Knowing how to use AI will give me a professional advantage for my future career.] (loading = 0.892; α = 0.981)
  12. Professional interest [Understanding AI will benefit my future career.] (loading = 0.903; α = 0.981)
  13. Professional interest [My future career will involve AI.] (loading = 0.857; α = 0.982)
  14. Professional interest [I will use AI-related problem-solving skills in my career.] (loading = 0.874; α = 0.981)
  15. Confidence in using AI [I can make good use of AI-related tools.] (loading = 0.889; α = 0.981)
  16. Confidence in using AI [I am confident that I can successfully complete AI-related tasks.] (loading = 0.906; α = 0.981)
  17. Confidence in using AI [I am confident that I will do well in AI-related assignments.] (loading = 0.904; α = 0.981)
  18. Confidence in using AI [I am confident that I can learn the basics of AI.] (loading = 0.899; α = 0.981)
  19. Confidence in using AI [I am confident that I can choose appropriate AI applications to solve problems.] (loading = 0.902; α = 0.981)

Behavioral Learning (CR = 0.971; AVE = 0.754)
  20. Behavioral intention [I will continue using AI in the future.] (loading = 0.842; α = 0.969)
  21. Behavioral intention [I will stay updated with the latest AI technologies.] (loading = 0.869; α = 0.968)
  22. Behavioral intention [I plan to spend time exploring new AI application features in the future.] (loading = 0.875; α = 0.968)
  23. Behavioral engagement [I actively participate in AI learning activities.] (loading = 0.905; α = 0.967)
  24. Behavioral engagement [I am dedicated to AI learning materials.] (loading = 0.897; α = 0.967)
  25. Behavioral engagement [I learn effectively in AI learning tasks.] (loading = 0.883; α = 0.968)
  26. Behavioral engagement [I often review additional AI materials after class, such as books and journals.] (loading = 0.887; α = 0.967)
  27. Collaboration [I often try to explain AI learning materials to my classmates or friends.] (loading = 0.885; α = 0.967)
  28. Collaboration [I try to work with classmates to complete AI-related tasks and projects.] (loading = 0.905; α = 0.967)
  29. Collaboration [I often spend my free time discussing AI with classmates.] (loading = 0.807; α = 0.969)
  30. Collaboration [I usually ask my peers for help when I face difficulties in AI activities.] (loading = 0.790; α = 0.970)

Cognitive Learning (CR = 0.973; AVE = 0.801)
  31. Knowing and understanding [I know what AI is and can recall its definitions.] (loading = 0.856; α = 0.971)
  32. Knowing and understanding [I know how to use AI applications (e.g., Siri, chatbot).] (loading = 0.861; α = 0.971)
  33. Knowing and understanding [I know some operating principles behind AI (e.g., linear model, decision tree, machine learning).] (loading = 0.889; α = 0.970)
  34. Knowing and understanding [I understand how AI perceives the world (e.g., seeing, hearing) to perform various tasks.] (loading = 0.909; α = 0.969)
  35. Knowing and understanding [I can compare differences among AI concepts (e.g., deep learning, machine learning).] (loading = 0.902; α = 0.969)
  36. Applying, evaluating, and creating [I can use AI applications to solve problems.] (loading = 0.892; α = 0.970)
  37. Applying, evaluating, and creating [I can use a machine learning model to solve problems.] (loading = 0.913; α = 0.969)
  38. Applying, evaluating, and creating [I can create AI-based solutions (e.g., chatbots, robotics) to solve problems.] (loading = 0.915; α = 0.969)
  39. Applying, evaluating, and creating [I can evaluate AI applications and concepts for different situations.] (loading = 0.913; α = 0.969)

Ethical Learning (CR = 0.991; AVE = 0.869)
  40. AI ethics [I believe AI ethics are important to guide moral behavior in the development and use of AI technology.] (loading = 0.875; α = 0.991)
  41. AI ethics [I understand how the misuse of AI could pose substantial risks to humans.] (loading = 0.897; α = 0.990)
  42. AI ethics [I believe AI systems should minimize data bias (e.g., gender, ethnicity).] (loading = 0.868; α = 0.991)
  43. AI ethics [I believe AI systems should operate reliably and safely.] (loading = 0.932; α = 0.990)
  44. AI ethics [I believe AI systems should undergo rigorous testing to ensure proper functioning.] (loading = 0.936; α = 0.990)
  45. AI ethics [I believe AI systems should respect privacy.] (loading = 0.934; α = 0.990)
  46. AI ethics [I believe users are responsible for considering AI design and decision-making processes.] (loading = 0.944; α = 0.990)
  47. AI ethics [I believe AI systems should benefit everyone regardless of physical ability or gender.] (loading = 0.963; α = 0.990)
  48. AI ethics [I believe AI systems should be transparent and understandable.] (loading = 0.959; α = 0.990)
  49. AI ethics [I believe users should be aware of the system’s purpose, functioning, and limitations.] (loading = 0.967; α = 0.990)
  50. AI ethics [I believe people should be accountable for the use of AI systems.] (loading = 0.952; α = 0.990)
  51. AI ethics [I believe AI systems should comply with ethical and legal standards.] (loading = 0.939; α = 0.990)
  52. AI ethics [I believe AI can be used to help disadvantaged people.] (loading = 0.952; α = 0.990)
  53. AI ethics [I believe AI can promote human well-being.] (loading = 0.922; α = 0.990)
  54. AI ethics [I want to use my AI knowledge to serve others.] (loading = 0.926; α = 0.990)
  55. AI ethics [I believe AI use should aim to achieve the common good (e.g., environmental and poverty issues).] (loading = 0.945; α = 0.990)

Table 4 shows that all factors included in the structural model exhibit high levels of reliability and convergent validity. Regarding factor loadings, the items within each dimension exceeded the minimum threshold of 0.70 established by Fokides (2023), indicating a strong association between each item and its corresponding factor. In the case of affective learning, loadings ranged from 0.759 to 0.906, reflecting coherence among intrinsic motivation, self-efficacy, professional interest, and confidence in using AI. These results are supported by a Cronbach’s alpha and composite reliability of 0.982, along with an AVE of 0.747, demonstrating excellent internal consistency.
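For transparency, the formulas behind the CR and AVE columns are CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = mean(λ²), taken over a factor's standardized loadings λ (CR computed this way coincides with McDonald's Ω). A short sketch, checked against the cognitive loadings reported in Table 4:

```python
# Sketch: composite reliability and AVE from standardized factor loadings.
import numpy as np

def composite_reliability(loadings) -> float:
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam**2).sum())

def average_variance_extracted(loadings) -> float:
    lam = np.asarray(loadings)
    return float((lam**2).mean())

# Cognitive-learning loadings from Table 4:
lam = [0.856, 0.861, 0.889, 0.909, 0.902, 0.892, 0.913, 0.915, 0.913]
print(round(composite_reliability(lam), 3))       # 0.973, as reported
print(round(average_variance_extracted(lam), 3))  # ~0.80, matching 0.801 up to rounding
```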

In the behavioral dimension, which includes items related to intention of use, engagement, and collaboration, factor loadings ranged from 0.79 to 0.905. A composite reliability of 0.971 and a Cronbach’s alpha of 0.969 were obtained, confirming the stability of this dimension. These findings are consistent with studies such as those by Yaseen et al. (2025), who state that active engagement with technology and social interaction around AI learning are key indicators of practical skill development in academic contexts.

With respect to cognitive learning, very high loadings were observed, ranging from 0.856 to 0.915, in items related to conceptual knowledge, application, and evaluation of AI technologies. The composite reliability reached 0.973 and the AVE was 0.801, supporting the statistical robustness of this dimension. These results are consistent with those reported by Modran et al. (2024), who argue that meaningful learning in AI requires both theoretical understanding and the ability to apply and transfer this knowledge to real-world contexts.

Finally, ethical learning exhibited the highest factor loadings in the model, with values between 0.868 and 0.967, evidencing a strong alignment among items related to moral principles, algorithmic fairness, transparency, social well-being, and the ethical use of AI. This dimension achieved a composite reliability and Cronbach’s alpha of 0.991, with an AVE of 0.869, indicating excellent convergent validity. This result is consistent with the arguments of Floridi et al. (2018) and Díaz-Rodríguez et al. (2023), who emphasize that ethical development in digital environments must include a critical understanding of AI’s risks and benefits, as well as an orientation toward the common good.

Subsequently, Table 5 presents the results of the discriminant validity analysis among the factors of the proposed theoretical model. Discriminant validity is an essential criterion in structural equation modeling, as it verifies whether the evaluated constructs are empirically distinct from one another. For this purpose, the heterotrait-monotrait ratio (HTMT) was used, a more sensitive and robust criterion compared to traditional metrics such as those proposed by Fornell and Larcker.

Table 5. Heterotrait–monotrait ratio analysis of the dimensions of AI learning and ethics.

Factor          A-Affective   A-Behavioral   A-Cognitive   A-Ethical
A-Affective
A-Behavioral    0.884
A-Cognitive     0.847         0.917
A-Ethical       0.783         0.767          0.811

Table 5 presents the HTMT index values corresponding to the relationships among the affective, behavioral, cognitive, and ethical dimensions of artificial intelligence learning. The analysis shows that most correlations remain within acceptable ranges, confirming adequate differentiation among the factors that compose the theoretical model. However, the association between the behavioral and cognitive dimensions reached a value of 0.917, slightly exceeding the recommended threshold, suggesting the existence of a possible conceptual overlap between these variables. This result can be interpreted as evidence of semantic proximity between observable behaviors and cognitive skills related to the use of artificial intelligence. According to Lans et al. (2014), in educational models integrating interdependent variables, it is common to identify conceptual overlaps in dimensions that share thought and action processes. In this regard, the high correlation between behavioral and cognitive learning could be attributed to the fact that the practical application of AI, such as active tool use or collaborative participation, requires prior understanding of its technical and operational foundations. Similar studies, such as that of Dai et al. (2020), report a strong relationship between cognitive mastery and intention of use among university students, which supports the empirical trend observed in the present study.

On the other hand, the HTMT values between the affective component and the other factors remained within acceptable ranges (0.847 with cognitive, 0.884 with behavioral, and 0.783 with ethical), confirming that emotional dispositions toward AI, such as motivation and self-efficacy, constitute an empirically distinct dimension. Likewise, the ethical factor showed moderate correlations with the remaining dimensions (ranging from 0.767 to 0.811), supporting its conceptual independence, albeit with interconnections. This differentiation is consistent with the findings of Cetindamar et al. (2024), who assert that ethical judgment emerges from interaction with, but not fusion with, other digital learning competencies.
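As commonly defined, the HTMT ratio for two constructs is the mean heterotrait (between-construct) item correlation divided by the geometric mean of the two constructs' average monotrait (within-construct) correlations. A minimal sketch over an item correlation matrix; the item lists are illustrative:

```python
# Sketch: heterotrait-monotrait (HTMT) ratio for a pair of constructs.
import numpy as np
import pandas as pd

def htmt(corr: pd.DataFrame, items_a: list, items_b: list) -> float:
    # Mean correlation between the two constructs' items (heterotrait).
    hetero = corr.loc[items_a, items_b].to_numpy().mean()
    # Mean within-construct correlations (monotrait), upper triangles only.
    tri_a = np.triu_indices(len(items_a), k=1)
    tri_b = np.triu_indices(len(items_b), k=1)
    mono_a = corr.loc[items_a, items_a].to_numpy()[tri_a].mean()
    mono_b = corr.loc[items_b, items_b].to_numpy()[tri_b].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Usage, assuming `items` holds all item responses (column names hypothetical):
# corr = items.corr()
# print(htmt(corr, ["P20", "P21", "P22"], ["P31", "P32", "P33"]))  # study reports 0.917
```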

Subsequently, Figure 3 presents the results of the structural equation model designed to analyze the effect of the affective, behavioral, and cognitive dimensions of AI learning on the development of learning ethics among university students. The model was evaluated through path analysis, considering both standardized and unstandardized factor loadings, as well as statistical significance values (T and P). This approach allows for the identification of which dimensions have greater predictive capacity in the internalization of ethical principles in the use of AI.


Figure 3. Standardized and unstandardized factor loadings.

Note: Goodness-of-fit tests: Affective Learning (T = 9.114; p < 0.001); Behavioral Learning (T = –0.336; p = 0.737); Cognitive Learning (T = 11.150; p < 0.001). ANOVA (F = 572.662; p < 0.001).

Figure 3 illustrates the estimated causal relationships among the latent factors. Affective learning showed a positive and highly significant effect on AI learning ethics, with a standardized loading of β = 0.675 (T = 57.66; p < 0.001). This result confirms that attitudes, beliefs, emotions, and interests related to artificial intelligence learning have a substantial influence on the development of ethical behaviors. This finding is consistent with that reported by An et al. (2022), who state that a positive emotional disposition toward technology promotes reflective, responsible, and socially oriented use. Likewise, Shafiee (2025) argues that affective engagement with AI fosters self-regulated processes that strengthen ethical decision-making in digital environments.

In contrast, behavioral learning did not show a statistically significant relationship with the ethical variable (β = –0.128; T = –0.336; p = 0.737). Although it presented adequate structural reliability, this dimension did not directly predict the levels of ethical behavior reported by students. The absence of this effect could be explained by the lack of systematic academic practices involving the ethical use of AI tools, or by a disconnection between the practical use of technology and ethical reflection on its implications. Similar results were observed by Palau et al. (2021), who found that active participation in technological tasks does not always translate into morally aligned behaviors, especially when pedagogical guidance is absent.

On the other hand, cognitive learning showed a moderate but significant effect on the ethical dimension (β = 0.561; T = 11.150; p < 0.001). This finding suggests that conceptual knowledge, technical understanding, and the ability to critically evaluate the foundations of AI positively influence awareness of its responsible use. According to Kitchin (2019), digital critical thinking is a key competence for building a technology ethics grounded in knowledge and understanding of algorithmic processes. In the same vein, Schiff (2022) highlights that cognitive development in AI enhances awareness of the social, political, and cultural risks associated with its implementation in education.

Subsequently, Table 6 presents the results of the hypothesis testing based on the theoretical model, which explores the effect of the affective, behavioral, and cognitive dimensions of artificial intelligence learning on learning ethics among university students. The relationships among the latent variables affective, behavioral, cognitive, and ethical were analyzed through structural equation modeling, considering standardized regression coefficients (β) and statistical significance levels.

Table 6. Validation of the hypotheses on AI use and ethics.

Hypothesis                                                                               Relationship           β        p-value   Result
H1. The affective learning of AI significantly influences learning ethics.              Affective → Ethical    0.413    ***       Accepted
H2. The behavioral learning of AI has a significant impact on learning ethics.          Behavioral → Ethical   –0.128   0.058     Not accepted
H3. The cognitive learning of AI significantly contributes to learning ethics.          Cognitive → Ethical    0.567    ***       Accepted
H4. AI learning has a significant impact on learning ethics among university students.  Total → Ethical        0.675    ***       Accepted

Table 6 presents the coefficients obtained from the hypothesis testing. Hypothesis 4, which evaluated the overall effect of AI learning on ethics, obtained a coefficient of β = 0.675 with high statistical significance (p < 0.001), confirming that the general use of artificial intelligence is positively associated with learning ethics. This result corroborates studies such as those by Moreira-Choez et al. (2024a) and Örtegren (2022), who argue that formative appropriation of AI combining technical competencies with ethical values can contribute to the development of digitally responsible citizens.

Regarding Hypothesis 2, which postulated a significant relationship between behavioral learning and ethics, the coefficient was negative and non-significant (β = –0.128; p = 0.058), leading to the rejection of the hypothesis. This result suggests that performing AI-related activities without pedagogical guidance or critical reflection does not guarantee ethical behavior. This finding is consistent with the observations of Sinclair et al. (2022), who state that mere active participation in technological tasks does not imply ethical internalization unless articulated with axiological frameworks and explicit formative processes.

For Hypothesis 3, which analyzed the influence of cognitive learning on ethics, the model yielded a coefficient of β = 0.567 with high significance (p < 0.001), thus validating the hypothesis. This finding demonstrates that technical knowledge, understanding of operational principles, and critical evaluation capacity of AI are determining factors in developing ethical criteria. It aligns with Falloon (2020), who notes that ethical digital literacy must be grounded in a deep understanding of how technology functions and its social implications.

Finally, Hypothesis 1, which examined the relationship between affective learning and ethics, also proved significant (β = 0.413; p < 0.001), supporting the hypothesis and reinforcing the idea that attitudes, interests, and emotions toward AI influence students’ ethical formation. This result is consistent with Sinclair et al. (2022), who argue that the affective dimension acts as a catalyst in the ethical adoption of emerging technologies by fostering a more conscious, empathetic, and reflective relationship with the digital environment.

Conclusions

The progressive incorporation of artificial intelligence into higher education has raised important challenges related to the ethical and responsible use of these technologies by university students. In this context, the present study aimed to analyze responsible practices and ethical behaviors associated with the use of AI among students at the State University of Milagro, considering the affective, behavioral, and cognitive dimensions of learning.

The results obtained indicate that the proposed objective was achieved and that the research question was empirically answered. Likewise, three of the four hypotheses formulated were statistically confirmed, showing that both affective and cognitive learning significantly influence the development of an ethical attitude toward the use of artificial intelligence. In contrast, behavioral learning did not show a significant relationship, suggesting that performing AI-related actions alone does not ensure ethical behavior unless accompanied by reflection and critical understanding.

Among the main findings, the affective dimension related to motivation, confidence, and professional interest had the greatest impact on ethical behavior. Similarly, the cognitive dimension, focused on technical knowledge and understanding of AI-related concepts, also showed a significant effect. The overall model revealed a positive relationship between total AI use and learning ethics, reinforcing the importance of educating students not only in the functional use of these technologies but also in their critical and contextualized appropriation.

The study presents certain limitations that should be considered. The sample was non-probabilistic and limited to a single institution, which restricts the generalization of the results to other contexts. Moreover, the cross-sectional design prevents establishing direct causal relationships and observing changes over time in students’ ethical perceptions.

As future lines of research, it is proposed to apply the model in other universities to compare results across different academic environments. It is also recommended to conduct longitudinal studies to observe the evolution of ethical practices related to AI, as well as qualitative research to explore in greater depth the experiences, perceptions, and challenges students face in their interaction with these technologies. Finally, it is suggested to design and implement comprehensive pedagogical strategies that promote the development of ethical, affective, and cognitive competencies for the conscious and responsible use of artificial intelligence in higher education.
