Research Article

You are the Driver and AI is the Mate: Exploring Human-Led Creative and Critical Thinking in AI-Augmented Learning Environments

[version 1; peer review: awaiting peer review]
PUBLISHED 24 Sep 2025

This article is included in the AI and Sustainability collection.

Abstract

Background

This study delves into how university students in Ghana are using generative Artificial Intelligence (AI) tools to boost their creative and critical thinking skills in academic tasks.

Methods

By employing a convergent parallel mixed-methods approach, we gathered quantitative survey data from 490 participants and complemented it with qualitative insights from semi-structured interviews.

Results

The results show that AI tools spark idea generation and enhance evaluative reasoning, while students still feel a strong sense of agency, often fine-tuning or dismissing AI suggestions based on their own understanding. Nonetheless, there are ongoing concerns about ethical limits, dependency on technology, and insufficient training.

Conclusions

The study concludes that AI serves best as a cognitive partner when users are equipped with solid digital literacy and ethical guidance. We also discuss the implications for teaching, theory, and policy, offering recommendations for learner-centred AI integration in education.

Keywords

Generative AI, creative thinking, critical thinking, ethics, digital literacy

Introduction

Advanced AI technologies have begun to change the way people carry out their day-to-day activities (Chakraborty et al., 2023). In the context of education, AI technologies foster change by enabling new teaching methods (Markauskaite et al., 2022) while simultaneously challenging the efficacy of current evaluation systems (Matheis & Jubin, 2025). For instance, ChatGPT and other generative AI (GAI) applications are aiding teachers in crafting instructional materials and assessments, fostering self-instruction among learners, and effectively utilizing learning analytics (Lo, 2023; Sha et al., 2024; Zhu et al., 2023). While these applications hold promise, GAI tools also raise significant issues pertaining to academic integrity, the potential for students to become overly reliant on artificial intelligence, and the socio-technical reproduction of biases through algorithms (Grassini, 2023; Rahman & Watanobe, 2023). To reap the educational advantages of GAI, students are encouraged to adopt a perspective in which AI serves as an adjunct to human thought.

The rapid growth of Artificial Intelligence (AI) technologies has ushered in new eras across disciplines, and education is among the sectors most profoundly affected (Luckin et al., 2022; Selwyn, 2019). The integration of AI into teaching and learning has progressed from simple adaptive technologies to complex generative models like ChatGPT, Claude, and Bard, which can generate text, provide human-like feedback, and engage in conversation (Chakraborty et al., 2024; Lo, 2023). These breakthroughs are transforming educational processes for the better, offering improved levels of customization, engagement, and efficiency. At the same time, understanding the nature of human cognition, especially creative and critical thinking under AI-system enhancement, becomes an immediate concern as these systems assume greater importance (Markauskaite et al., 2022; Zawacki-Richter et al., 2019).

Innovation and creativity are crucial 21st-century skills essential for lifelong learning (Florentina, 2022; Bouckaert, 2023). In the context of education, creativity is the ability to generate valuable ideas or artefacts through divergent thinking, risk-taking, and imagination, which often requires going beyond the status quo (Runco & Jaeger, 2012). AI technologies greatly assist in facilitating creativity, including brainstorming, prompt generation, visual representation of ideas, and meta-cognitive composition (Boden, 2004; Liu et al., 2025; You et al., 2024). There is, however, concern that dependence on AI outputs may diminish one’s ability to construct original ideas, because students would not need to engage in imaginative struggle or endure uncertainty (Grassini, 2023; McDowell, 2024). In Ghanaian higher education, where students are already burdened by an examination-focused educational system lacking creative tools and resources, the possibility of AI-generated ideas replacing original thinking becomes an acute concern.

Likewise, critical thinking, the ability to assess, analyse, and reason methodically, is a crucial cognitive skill for both democratic engagement and academic success (Facione, 1990; Halpern, 1998). By offering feedback, summarising arguments, or mimicking dialectical engagement, AI can support critical thinking (Aktoprak & Hursen, 2022). However, it can also encourage epistemic complacency, in which users accept outputs without question because they appear authoritative and coherent (Bender et al., 2021; Rahman & Watanobe, 2023). To serve as “drivers” in AI-augmented learning, students must maintain epistemic agency, that is, actively challenging, validating, and improving what the AI generates. Understanding how students negotiate the line between AI support and intellectual autonomy is a critical educational concern in places like Ghana, where digital literacy varies greatly.

The current literature emphasises AI’s potential to enhance human abilities while underscoring how strongly pedagogy shapes these exchanges (Luckin, 2017; Holmes et al., 2019). Learners are more likely to be empowered to engage actively with knowledge and thinking when pedagogical approaches present AI as a collaborative partner rather than an oracle (Markauskaite & Goodyear, 2017). According to research by Topali et al. (2025) and Jayasuriya et al. (2025), metacognitive guidance and reflective questioning techniques help students use AI outputs as launching pads rather than destination points. However, there are still few empirical studies that examine how students use AI tools to engage in real-time creative and critical thinking, particularly in sub-Saharan African contexts.

The use of AI tools in Ghanaian universities is growing, especially among students who need assistance with academic research, writing, and idea generation. The impact of these tools on students’ cognitive engagement or decision-making during learning tasks, however, has not been thoroughly studied empirically. Although anecdotal evidence indicates that students occasionally rely too much on AI to create content, little is known about the complex methods they employ to assess, modify, or reject AI outputs in support of their own ideas. This indicates a substantial gap in the body of research on the relationship between critical thinking, creativity, and AI education in African higher education settings.

Therefore, by examining how students at a Ghanaian university use generative AI tools as co-collaborators in learning, this study places itself at the intersection of three crucial domains: critical thinking pedagogy, creative cognition, and AI integration in education. In addition to challenging deterministic narratives of AI dominance, framing humans as the “driver” and AI as the “mate” highlights the necessity of human-led meaning-making in digital learning ecosystems. The study aims to add to the global discussion on moral, empowering, and intellectually stimulating applications of AI in education by examining students’ cognitive behaviours, evaluative techniques, and metacognitive reflections while completing AI-supported tasks. Based on the foregoing, the following objectives were addressed:

  • 1. To examine how students use generative AI tools to support and enhance their creative thinking during learning tasks.

  • 2. To investigate the strategies students employ to critically evaluate, refine, or reject AI-generated content.

  • 3. To assess students’ perceived agency and control in the AI-augmented learning process.

  • 4. To identify the challenges and ethical considerations students encounter when using AI tools in academic contexts.

  • 5. To analyse the influence of AI literacy, prior experience, and digital confidence on students’ creative and critical engagement with AI tools.

Empirical literature

The growing interest in using generative AI tools to boost students’ creative thinking has become a hot topic in recent educational research (Johnson & Salter, 2025). Studies show that AI can act as a “creativity catalyst” in learning settings. Mahama et al. (2023) point out that tools like ChatGPT can spark students’ imaginations, help them come up with new ideas, and assist in breaking through creative blocks. In a study by Lo (2024), university students who utilized ChatGPT for their creative writing tasks reported improvements in fluency and originality, although the quality of their work often depended on how well they crafted their prompts. Similarly, Dai et al. (2023) discovered that students involved in project-based learning with AI support showed increased ideational fluency, but they also required structured guidance to keep their outputs original and academically sound. These insights indicate that while AI can nurture creativity, human intention and instructional support are still crucial (Ruiz Viruel et al., 2025).

Another important area of research is the critical evaluation of AI-generated content, especially as students interact with outputs that may seem coherent and authoritative but can sometimes be inaccurate (Amirjalili et al., 2024). Sharma (2025) found that while some students actively edited or restructured AI-generated content, others accepted it without question. Abouelenein et al. (2025) added depth to this discussion by revealing that students with stronger metacognitive skills were more inclined to question, revise, or cross-check AI responses with other reliable sources. On the flip side, Guarcello and Longo (2024) noted that students with limited AI literacy often took AI-generated information at face value, particularly when it appeared polished or logically convincing.

The issue of learner agency in AI-enhanced learning environments has led to a variety of research outcomes (Jacobson, 2025). Bhoi and Dash (2024) point out that AI can either empower or hinder learners, depending on how it’s woven into the educational fabric. While some systems boost personalization and autonomy, others might promote a more passive approach to learning. Jarrahi et al. (2023) suggest that AI should function as an “intelligent assistant,” enhancing human reasoning rather than replacing it. In a more recent study, Ni Uanachain and Aouad (2025) discovered that students who felt a strong sense of academic agency were more likely to question AI-generated suggestions, critically revise outputs, and assert their own ideas during the learning process. These insights indicate that how much control students feel they have over AI tools can significantly impact their cognitive engagement. However, there’s still a gap in understanding how students in low-resource or exam-focused environments perceive and exercise this agency.

At the same time, researchers have noted a growing tension between the advantages of AI and the ethical dilemmas it raises in education. Holmes and Miao (2023) found that many students, while acknowledging the usefulness of tools like ChatGPT, felt uncertain about ethical boundaries due to a lack of guidance from their institutions. This sentiment was echoed by Swiecki et al. (2022), who highlighted that students were anxious about the transparency and fairness of assessments involving AI-generated content. These concerns are particularly pressing in higher education, where academic integrity policies are still catching up with the realities of AI, and students often have to navigate what constitutes acceptable use on their own. In places like Ghana, where digital ethics education is still in its infancy, there’s a pressing need to delve into students’ real-life experiences with AI-related challenges in academic settings.

It has become clear that students’ past experiences, their understanding of AI, and their confidence in using digital tools play a significant role in how they engage with AI technologies, ultimately affecting their learning outcomes. Hwang and Chen (2023) showed that students who feel confident in their digital skills and are familiar with AI systems tend to use these tools in more creative and critical ways, which indicates a deeper level of cognitive engagement. Similarly, Kim et al. (2025) found that students with strong digital backgrounds are better at tweaking AI prompts and understanding the outputs with a higher level of metacognitive awareness. However, research in Africa points to ongoing challenges. Nkansah and Oldac (2024) and Ayisi et al. (2024) both noted that while university students in Ghana are getting more exposure to digital tools, many still struggle with the essential digital literacy skills required to engage effectively and independently with AI systems. This gap underscores the need to explore how different levels of familiarity with AI and digital readiness affect student interactions with generative AI in Ghanaian universities.

Materials and methods

Design

This study adopted a convergent mixed-methods design, integrating both quantitative and qualitative data to explore how students engage with AI in educational settings through creative and critical thinking. There were two phases of data analysis: quantitative analysis to assess patterns and relationships among measured constructs, and qualitative analysis to develop an in-depth understanding of learners’ subjective experiences.

Participant selection

Through Google Forms, we surveyed 490 university students in Ghana. The forms were distributed on students’ social media platforms, targeting those pursuing undergraduate (diploma and bachelor’s) and postgraduate (MEd, MA, MPhil, MSc, PhD) programmes across Ghana. The study was conducted during the teaching period, when lecturers had assigned students individual academic tasks to complete and submit. Students appear to use ChatGPT in completing such tasks, so exploring their perspectives on AI usage was timely. Qualitative data were collected through semi-structured interviews under five thematic areas: (1) creative thinking with AI, (2) critical evaluation strategies, (3) perceived agency and control, (4) ethical and practical challenges, and (5) digital experience and AI literacy.

Instrumentation

A new scale was developed from the literature to assess students’ creative thinking with AI (five items, α = .79), critical thinking strategies with AI (five items, α = .79), perceived agency and control with AI (five items, α = .76), challenges and ethical concerns with AI (five items, α = .67), and AI literacy, digital confidence, and prior experience (five items, α = .74). All constructs evidenced acceptable reliability (α ≥ .67), with some leniency for emerging constructs relative to Nunnally and Bernstein’s (1994) criterion of α ≥ .70 for exploratory research. The scale was a five-point Likert scale (strongly disagree = 1 to strongly agree = 5) containing 25 items. The items were carefully curated from literature related to the concepts under exploration and reflect the conceptual meanings of the constructs; this process catered for content validity (Haynes et al., 1995). The items were validated face-wise and content-wise through expert input (Boparai et al., 2018; DeVellis & Thorpe, 2021). Furthermore, a semi-structured interview guide was developed on the five thematic areas of the quantitative scale and was used to explore students’ subjective experiences of using AI through creative and critical thinking procedures (Konecki, 2019; Kallio et al., 2016). The complete scale and interview guide can be found in the appendix.
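
For transparency, the sketch below shows one common way Cronbach’s alpha can be computed from raw item responses. It is a minimal Python illustration under stated assumptions: the demo data and column names are hypothetical, not the study’s actual dataset.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical demo: five Likert items (1-5) for one construct, 490 respondents.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.integers(1, 6, size=(490, 5)),
                    columns=[f"item{i}" for i in range(1, 6)])
print(round(cronbach_alpha(demo), 2))
# Random, uncorrelated demo data yields an alpha near 0; the study's real
# item responses produced alphas between .67 and .79.
```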

Data analysis

Quantitatively, descriptive statistics (means, standard deviations, skewness, and kurtosis) were computed to identify the distribution and central tendencies of each construct. To test predictive relations, multiple linear regression analyses were conducted to determine the extent to which AI literacy and digital confidence predict creative and critical thinking outcomes and perceived agency in working in AI-augmented environments. Normality, linearity, multicollinearity, and homoscedasticity assumptions were tested and met prior to performing regression analyses. All quantitative analyses were conducted using SPSS (version 26), with the significance level set at p < .05.
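
The analyses themselves were run in SPSS; purely as an illustrative equivalent, the following Python sketch shows how one of the univariate regression models and a multicollinearity check could be reproduced with statsmodels. The file name and column names are assumptions for the example, not artefacts of the study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical file of summed construct scores, one row per respondent.
df = pd.read_csv("survey_scores.csv")

# Assumed column names for the three predictors and one outcome.
X = sm.add_constant(df[["agency", "ethical_concerns", "ai_literacy"]])
y = df["creative_thinking"]

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients, R-squared, overall F test, p-values

# Multicollinearity check: VIF values near 1 indicate little collinearity.
for i, name in enumerate(X.columns[1:], start=1):
    print(name, variance_inflation_factor(X.values, i))
```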

The qualitative data were grouped under five thematic areas: (1) creative thinking with AI, (2) critical evaluation strategies, (3) perceived agency and control, (4) ethical and practical challenges, and (5) digital experience and AI literacy. Participants’ responses were transcribed verbatim and analysed thematically following Braun and Clarke (2006). The initial coding was inductive, and open codes were assigned to meaningful segments of text. Through iterative comparison and refinement, codes were then sorted into categories. Themes were constructed around recurring patterns of meaning relevant to the research questions. Investigator triangulation and peer debriefing were employed to enhance trustworthiness. Throughout, manual processes were employed to organize, code, and retrieve data.

The integration of the two strands was conducted at the interpretation stage, with the potential for convergence between numerical trends and participants’ narratives (Moseholm & Fetters, 2017). This allowed for a richer understanding of the ways in which students navigate human-driven creativity and criticality in AI-supported learning environments.

Results and Discussion

Gender representation is relatively balanced but leans male (62.2%), reflecting either program-specific enrolment patterns or broader gender disparities in access or interest in digital technologies. Programme distribution shows strong participation from students in Political Science and Corporate Units (44.7%) and Social Sciences and Management (32.9%), both of which may emphasize policy, ethics, and decision-making—relevant themes in AI integration.

Year of study data indicate that most respondents are in Level 200 (77.1%), implying early-stage academic exposure where foundational digital literacy and academic integrity norms are being formed. Notably, nearly 20% of the sample are postgraduate students (Levels 700–900), offering a contrast in academic maturity that can influence AI usage practices.

Crucially, the use of generative AI tools is near-universal, with 95.7% of students reporting usage. This suggests that generative AI is not a fringe phenomenon but a mainstream element of students’ academic workflow. Among those who use AI tools, the majority do so occasionally (52.6%) or sometimes (25.9%), indicating a pattern of supplementary rather than habitual reliance. A smaller group (8.2%) reported always using AI tools, hinting at deeper integration or potential overreliance.

The presence of a 13.3% minority who rarely use AI may reflect barriers such as limited access, ethical uncertainty, or preference for traditional learning approaches. These findings highlight the need for differentiated AI literacy initiatives that account for programme-specific needs, user frequency, and varying levels of digital confidence.

The descriptive and distributional characteristics of the five key constructs are examined in Table 2 to assess their central tendencies, variability, and distribution shape. Results are interpreted with a focus on nuanced observations relevant to data normality and construct characteristics.

Table 1. Participant demographics and generative AI usage (N = 490).

Variable | Category | Frequency | Percent
Age | 18–25 years | 326 | 66.5%
 | 26–30 years | 62 | 12.7%
 | 31–35 years | 40 | 8.2%
 | 36–40 years | 24 | 4.9%
 | 41–45 years | 38 | 7.8%
Gender | Male | 305 | 62.2%
 | Female | 185 | 37.8%
Programme of Study | Humanities and Psychology | 53 | 10.8%
 | Social Sciences and Management | 161 | 32.9%
 | Political Science and Corporate Units | 219 | 44.7%
 | Health and Nursing | 57 | 11.6%
Year of Study | Level 100 | 9 | 1.8%
 | Level 200 | 378 | 77.1%
 | Level 300 | 8 | 1.6%
 | Level 400 | 7 | 1.4%
 | Level 700 (Postgraduate) | 46 | 9.4%
 | Level 800 (Postgraduate) | 37 | 7.6%
 | Level 900 (Postgraduate) | 5 | 1.0%
Used Generative AI Tools | Yes | 469 | 95.7%
 | No | 21 | 4.3%
Frequency of AI Use (Among Users only) | Rarely | 63 | 13.3%
 | Occasionally | 250 | 52.6%
 | Sometimes | 123 | 25.9%
 | Always | 54 | 8.2%

Table 2. Normality test.

Constructs | Mean | Std. Deviation | Skewness (Std. Error) | Kurtosis (Std. Error)
Creative Thinking with AI | 19.35 | 3.00 | -.89 (.11) | 2.18 (.22)
Critical Thinking with AI | 20.42 | 3.04 | -1.11 (.11) | 3.20 (.22)
Perceived Agency and Control with AI | 20.15 | 3.01 | -.69 (.11) | 1.60 (.22)
Challenges and Ethical Concerns with AI | 18.51 | 3.16 | -.61 (.11) | 1.25 (.22)
AI Literacy, Digital Confidence, and Prior Experience | 18.59 | 3.21 | -.31 (.11) | .28 (.22)

Participants reported moderate to high levels across all constructs, with mean scores ranging from M = 18.51 (challenges and ethical concerns with AI) to M = 20.42 (critical thinking with AI), on scales summing across five items per construct. Standard deviations ranged from approximately SD = 3.00 to SD = 3.21, indicating moderate dispersion and suggesting relatively consistent perceptions across respondents.

Notably, critical thinking with AI had the highest mean (M = 20.42, SD = 3.04), suggesting participants more strongly endorse behaviours and attitudes related to reflective evaluation and metacognitive engagement with AI than other dimensions. Creative thinking with AI also yielded a relatively high mean (M = 19.35, SD = 3.00), consistent with the interpretation that AI is viewed as a tool that supports ideation and novel connections, albeit slightly less strongly than its critical function.

Normality assessments were conducted using skewness and kurtosis statistics with standard errors provided. According to Kline (2015), skewness values within ±1 and kurtosis values within ±3 indicate acceptable normality for most psychological and educational data.
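
As a quick arithmetic check, the statistics reported in Table 2 can be compared against these cut-offs directly; the short sketch below does exactly that, with values transcribed from Table 2.

```python
# Skewness and kurtosis statistics transcribed from Table 2, checked
# against Kline's (2015) cut-offs of |skewness| <= 1 and |kurtosis| <= 3.
KLINE_SKEW, KLINE_KURT = 1.0, 3.0

table2 = {
    "Creative Thinking with AI": (-0.89, 2.18),
    "Critical Thinking with AI": (-1.11, 3.20),
    "Perceived Agency and Control with AI": (-0.69, 1.60),
    "Challenges and Ethical Concerns with AI": (-0.61, 1.25),
    "AI Literacy, Digital Confidence, and Prior Experience": (-0.31, 0.28),
}

for construct, (skew, kurt) in table2.items():
    within = abs(skew) <= KLINE_SKEW and abs(kurt) <= KLINE_KURT
    print(f"{construct}: skew = {skew}, kurtosis = {kurt}, within bounds: {within}")
# Only critical thinking with AI falls marginally outside both cut-offs,
# which the analysis treats as tolerable for regression purposes.
```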

Creative thinking with AI showed a skewness of -0.89 and kurtosis of 2.18, indicating a negatively skewed distribution with mild leptokurtic tendencies. The negative skew implies that most students scored high on this construct, with fewer reporting low creative engagement with AI.

Critical thinking with AI was more strongly skewed (Sk = -1.11) and more peaked (K = 3.20), suggesting a notable clustering of responses at the higher end of the scale. This reflects the strong endorsement of critical appraisal behaviors among students—perhaps due to increased awareness of ethical concerns or critical use prompted by AI’s known limitations.

Perceived agency and control with AI had a moderate negative skew (Sk = -0.69) and kurtosis (K = 1.60), indicating that most students generally felt in control while using AI tools, but with some variability in perceived autonomy.

Challenges and ethical concerns with AI showed slightly negative skew (Sk = -0.61) and kurtosis (K = 1.25), suggesting that while students somewhat acknowledged ethical tensions and boundary concerns, such concerns were not extreme or highly variable.

AI literacy, digital confidence, and prior experience yielded the most normal distribution (Sk = -0.31, K = 0.28), indicating a symmetric and mesokurtic shape. This suggests a balanced range of responses and a more even distribution of AI familiarity among students.

Across all constructs, skewness and kurtosis values fall within or only marginally outside these thresholds, suggesting acceptable levels of normality and supporting the use of multiple regression. The stronger skew and more peaked distribution for critical thinking with AI underscore the high importance students place on their evaluative interaction with AI tools. Meanwhile, the relatively neutral distribution for AI literacy highlights diverse levels of prior experience and confidence, possibly reflecting differences in digital access or institutional support.

These findings imply that while most learners feel confident and reflective in using AI, there remains variability in experience and ethical certainty. Such nuance reinforces the need for targeted instruction on AI literacy and clearer ethical guidance in academic contexts.

The descriptive statistics in Table 3 reflect learners’ experiences, perceptions, and competencies related to the use of AI in academic contexts. Mean scores ranged from 3.22 to 4.27 on a 5-point Likert scale, with higher scores indicating greater agreement. Several nuanced patterns emerge across domains of creative thinking, critical engagement, perceived agency, ethical reflection, and digital confidence.

Table 3. Descriptive results of AI use.

Statements | Mean | SD
Creative Thinking with AI
I use AI tools to brainstorm new ideas when working on academic tasks. | 3.98 | .764
AI helps me generate alternative perspectives when I'm stuck. | 4.04 | .725
I feel more imaginative when I integrate AI suggestions into my learning activities. | 3.63 | .865
AI tools help me develop more original or unique academic content. | 3.66 | .905
The use of AI enhances my ability to connect different ideas creatively. | 4.04 | .774
Critical Thinking with AI
I critically evaluate the responses provided by AI before using them. | 4.15 | .831
I often compare AI-generated suggestions with my own ideas before deciding. | 4.17 | .835
I modify AI-generated outputs to reflect my own understanding. | 4.08 | .804
I reject AI content when it appears inaccurate or biased. | 4.02 | .945
I ask myself reflective questions when reviewing AI outputs. | 4.00 | .705
Perceived Agency and Control with AI
I see myself as the main decision-maker when using AI tools. | 3.95 | .949
I use AI as a support, not a substitute, for my thinking. | 4.27 | .711
I feel confident managing my interaction with AI tools during tasks. | 3.86 | .825
I can determine when AI suggestions are not useful or relevant. | 4.03 | .879
I maintain ownership of my learning process even with AI involvement. | 4.04 | .817
Challenges and Ethical Concerns with AI
I am concerned about over-reliance on AI for academic work. | 3.71 | 1.027
I sometimes find it difficult to determine whether using AI is ethical. | 3.50 | .966
I worry about the originality of my work when I use AI-generated content. | 3.69 | 1.002
I have faced challenges understanding where to draw the line between help and academic dishonesty when using AI. | 3.43 | 1.011
I think institutions should provide clearer guidelines on ethical AI use in learning. | 4.18 | .813
AI Literacy, Digital Confidence, and Prior Experience
I understand the strengths and limitations of the AI tools I use. | 3.83 | .844
I feel digitally confident when exploring AI for learning tasks. | 3.76 | .837
My prior experience with digital tools helps me use AI effectively. | 3.84 | .847
I know how to prompt or interact with AI to get better responses. | 3.94 | .840
I have received formal or informal training on using AI tools for learning. | 3.22 | 1.181

Creative thinking with AI

Items such as “AI helps me generate alternative perspectives when I’m stuck” (M = 4.04, SD = 0.725) and “The use of AI enhances my ability to connect different ideas creatively” (M = 4.04, SD = 0.774) recorded high means, suggesting that students strongly value AI’s role in augmenting creativity through idea expansion and connection. However, the item “I feel more imaginative when I integrate AI suggestions into my learning activities” (M = 3.63, SD = 0.865) had a relatively lower mean, indicating that while students find AI helpful, it may not fully stimulate intrinsic imagination or original ideation. This reflects a nuanced boundary between AI-stimulated creativity and self-driven imagination.

Critical thinking strategies with AI

Responses reveal high levels of critical engagement, particularly with items such as “I modify AI-generated outputs to reflect my own understanding” (M = 4.08, SD = 0.804), “I critically evaluate the responses provided by AI before using them” (M = 4.15, SD = 0.831), and “I often compare AI-generated suggestions with my own ideas before deciding” (M = 4.17, SD = 0.835). These responses suggest that learners actively engage in higher-order thinking processes and do not accept AI outputs passively. The consistency of these scores reinforces the idea that AI is used as a cognitive partner rather than a cognitive authority.

Perceived agency and control with AI

The item “I use AI as a support, not a substitute, for my thinking” had the highest mean (M = 4.27, SD = 0.711), reinforcing the sense of retained cognitive ownership. Similarly, “I see myself as the main decision-maker when using AI tools” (M = 3.95, SD = 0.949) suggests strong perceived agency. These findings imply that students are not abdicating responsibility to AI but are leveraging it within a self-directed learning framework, aligning with constructivist principles of learner autonomy.

Challenges and ethical concerns with AI

Nuanced findings emerge within the domain of ethics. While students moderately agree with concerns about AI over-reliance (M = 3.71, SD = 1.027) and originality (M = 3.69, SD = 1.002), the item “I sometimes find it difficult to determine whether using AI is ethical” had a relatively lower mean (M = 3.50, SD = 0.966). Moreover, ambiguity is highest in “I have faced challenges understanding where to draw the line between help and academic dishonesty when using AI” (M = 3.43, SD = 1.011). These scores suggest a complex cognitive and moral tension, highlighting the need for institutional guidance. This interpretation is reinforced by the item “I think institutions should provide clearer guidelines on ethical AI use in learning” (M = 4.18, SD = 0.813), which had one of the highest means, indicating strong student demand for clearer norms and policies.

AI literacy, digital confidence, and prior experience

Responses show strong digital confidence and literacy, with “I feel digitally confident when exploring AI for learning tasks” (M = 3.76, SD = 0.837) and “I know how to prompt or interact with AI to get better responses” (M = 3.94, SD = 0.840). However, a notable outlier is “I have received formal or informal training on using AI tools for learning” (M = 3.22, SD = 1.181), which is the lowest-scoring item. This indicates a gap between usage and institutional support or formal instruction—students may be self-taught, which carries implications for equity and consistency in AI literacy.

Taken together, the findings present a balanced profile of learner engagement with AI, where students are largely confident and competent, use AI reflectively and strategically, and retain agency over learning. However, nuances related to imagination, ethical uncertainty, and training reveal areas that require attention. Educational institutions are thus encouraged to invest in AI literacy programs and develop clear ethical frameworks that support students in navigating this evolving terrain confidently and responsibly.

In Table 4, a multivariate multiple regression analysis was conducted to examine the combined and unique contributions of perceived agency and control with AI, challenges and ethical concerns with AI, and AI literacy, digital confidence, and prior experience on two dependent variables: creative thinking with AI and critical thinking with AI. The multivariate tests were statistically significant for each predictor, indicating that all independent variables contributed significantly to the model when considered jointly, with Pillai’s Trace values ranging from .038 to .192, all p < .001.

Table 4. Multiple linear regression.

Predictor | Dependent variable | B | SE | t | p | Partial η2 | 98.3% CI (LL, UL) | F | p (F) | Multivariate partial η2
Intercept | Creative Thinking with AI | 7.46 | .96 | 7.80 | <.001 | .11 | [5.169, 9.751] | 60.827 | <.001 | .132
Intercept | Critical Thinking with AI | 5.50 | .89 | 6.17 | <.001 | .07 | [3.367, 7.634] | 38.119 | <.001 | .132
Perceived Agency and Control with AI | Creative Thinking with AI | .19 | .04 | 4.30 | <.001 | .04 | [.085, .298] | 18.472 | <.001 | .192
Perceived Agency and Control with AI | Critical Thinking with AI | .45 | .04 | 10.75 | <.001 | .19 | [.346, .544] | 115.458 | <.001 | .192
Challenges and Ethical Concerns with AI | Creative Thinking with AI | .10 | .04 | 2.51 | .012 | .01 | [.005, .194] | 6.320 | .012 | .038
Challenges and Ethical Concerns with AI | Critical Thinking with AI | .16 | .04 | 4.27 | <.001 | .04 | [.069, .245] | 18.260 | <.001 | .038
AI Literacy, Digital Confidence, and Prior Experience | Creative Thinking with AI | .33 | .04 | 8.27 | <.001 | .12 | [.237, .431] | 68.307 | <.001 | .127
AI Literacy, Digital Confidence, and Prior Experience | Critical Thinking with AI | .16 | .04 | 4.35 | <.001 | .04 | [.074, .254] | 18.959 | <.001 | .127
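
The multivariate tests were produced in SPSS; as an illustrative sketch only, the snippet below shows how a comparable multivariate test, yielding Pillai’s trace per predictor, could be run with statsmodels’ MANOVA. The file and variable names are assumptions for the example.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical scored data, one row per respondent; column names are assumed.
df = pd.read_csv("survey_scores.csv")

# The two outcomes are modelled jointly so each predictor's multivariate
# contribution can be tested (Pillai's trace, Wilks' lambda, and so on).
mv = MANOVA.from_formula(
    "creative_thinking + critical_thinking ~ agency + ethical_concerns + ai_literacy",
    data=df,
)
print(mv.mv_test())  # one multivariate test table per predictor
```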

For the univariate analyses, the overall regression model for creative thinking with AI was statistically significant, F (3, 486) = 59.14, p < .001, R2 = .267, indicating that approximately 26.7% of the variance in creative thinking was explained by the predictors. Similarly, the model for critical thinking with AI was significant, F (3, 486) = 99.63, p < .001, R2 = .381, explaining 38.1% of the variance.

Among the predictors, perceived agency and control with AI was a strong predictor of both outcomes. It significantly predicted creative thinking with AI (B = .191, p < .001, partial η2 = .037) and had an even stronger effect on Critical Thinking with AI (B = .445, p < .001, partial η2 = .192), suggesting that students who felt more in control when using AI demonstrated greater critical and creative engagement.

Challenges and ethical concerns with AI also significantly predicted both outcomes, albeit with smaller effect sizes. For creative thinking with AI, the effect was modest (B = .099, p = .012, partial η2 = .013), while for critical thinking, the contribution was more substantial (B = .157, p < .001, partial η2 = .036), indicating that grappling with ethical concerns may encourage deeper evaluative thinking.

Finally, AI literacy, digital confidence, and prior experience showed a strong positive relationship with creative thinking with AI (B = .334, p < .001, partial η2 = .123), and a moderate effect on critical thinking with AI (B = .164, p < .001, partial η2 = .038). These findings emphasize the importance of technical readiness and familiarity with AI tools in fostering student engagement in AI-augmented educational environments.

Taken together, the results indicate that cognitive engagement with AI both creatively and critically is significantly shaped by students’ sense of agency, ethical reflection, and AI literacy. These findings highlight the multifaceted role of psychological and experiential factors in shaping the educational potential of AI.

Qualitative findings on human-AI interaction in learning environments

This section presents findings from interviews with student participants on their use of AI in academic tasks. The data are organized around five broad areas: creative thinking, critical evaluation, agency and control, ethical considerations, and AI literacy. Transcripts were analysed using inductive coding and thematic synthesis. The richness of these perspectives sheds light on how students interact with and participate in AI-extended learning environments.

Theme One: AI as inspiration for creative discovery

Participants consistently used AI as an ideation aid and a catalyst for overcoming creative obstacles. Several described how they utilized AI to brainstorm an essay, organize a speech, or stimulate artistic projects. One mentioned, “I used an image-generation AI to envision a poem. The result spurred an entire new verse.” Others used AI to decide on themes, compose lyrics, or redefine business ideas. Rather than replacing originality, AI helped students approach problems from new angles, offering unconventional ideas that expanded their creative horizons. AI software enabled divergent thinking and helped students overcome creative blocks. This means that teachers should encourage AI as a creative partner while emphasizing student agency, and curriculum frameworks should introduce AI as an assistive tool that supports, but does not substitute for, original thinking.

Theme Two: AI as a draft partner, not the last word

Students did not accept AI outputs at face value. Many participants indicated reading AI outputs critically, comparing them to their own knowledge or scholarly sources. Some participants treated AI responses as drafts and rewrote them heavily. One of them said, “I read the output carefully and keep only what aligns with my intention.” Another said, “I ask AI to elaborate more, which probes its comprehension.” Students exercised discernment and employed several strategies: cross-verifying, rephrasing, and soliciting peer advice. In doing so, students exercised reflective practice while working with AI. These findings affirm the integration of AI literacy and critical thinking in the curriculum, emphasizing the significance of verifying, questioning, and contextualising AI answers rather than copying them verbatim.

Theme Three: Balancing control in human-AI collaboration

The responses demonstrated a nuanced awareness of control. Although most students claimed that they controlled their decisions, others admitted to sometimes surrendering to AI suggestions under pressure. One of them said, “I tend to control most of the time, but when I am tired, I rely on AI’s response.” Another said, “I copy blindly at times, especially if it appears nice.” Students vacillated between active choice-making and passive reliance, contingent upon time, pressure, or the difficulty of the task. These results indicate the role of metacognitive consciousness in the acceptance of AI. Teachers and facilitators should therefore cultivate not only technical competence but also sensitivity to when and why students rely on AI, fostering agency and intentionality in tool use.

Theme Four: Fuzzy lines between help and deception

Several participants reported concerns regarding reliance on AI, plagiarism, and uncertainty about the ethical implications of AI use. One student commented, “I once got flagged for plagiarism when I didn’t edit enough.” Another said, “It’s hard to know when AI crosses into doing the work for me.” There was consensus that employing AI was beneficial if done openly and critically, but problematic if adopted wholesale. Students are morally sensitive but require guidance. Institutions need to develop clear policies and educational guidelines on AI use, covering ethical boundaries, attribution requirements, and how to distinguish between support and substitution in educational work.

Theme Five: Experience determines confidence in AI interaction

Participants had varying levels of AI literacy. Those previously exposed to digital technology, such as programming, multimedia, or web design, were more confident. As one participant put it, “I used to develop websites, so experimenting with AI comes naturally.” Others said that they learned experimentally or via the internet. Those who were more technologically exposed worked with AI more easily and creatively. Students’ ability to collaborate with AI is tightly tied to earlier digital experience. This demands differentiated approaches to AI integration that do not alienate students with lower levels of tech confidence. Training needs to address not only procedural knowledge but also conceptual knowledge of the potential and limitations of AI.

Points of integration (convergence)

In this study, both quantitative and qualitative approaches come together to tell a clear story about how students are developing their relationships with AI in educational settings. This is especially true when it comes to creativity, critical thinking, perceived agency, ethical judgment, and digital literacy.

On the quantitative side, the descriptive statistics indicate a strong consensus among students. They view AI as a helpful partner in creative endeavours. This idea is backed up by qualitative data, where students describe AI as a source of creative inspiration that sparks new ideas and solutions.

When it comes to critical thinking, the quantitative results also show high ratings, particularly regarding how students evaluate and modify AI outputs. This is mirrored in the qualitative findings, where participants talk about actively questioning and refining the suggestions generated by AI. This connection highlights a growing trend: students are approaching AI thoughtfully, seeing it as a temporary tool rather than the ultimate authority.

The theme of perceived agency and control also shows a consistent pattern across both data types. The quantitative results suggest that students feel a strong sense of autonomy in their learning. Qualitative insights support this autonomy but also reveal some subtle variations. These nuances add depth to the quantitative findings, showing that while students generally feel in control, their sense of agency can fluctuate and isn’t the same for everyone. This points to a valuable opportunity for metacognitive training to help students use AI in a more intentional and context-aware way.

Regarding ethical concerns, the mixed methods findings show both agreement and expansion. The quantitative data indicate moderate to high levels of concern about ethical boundaries. These worries are echoed and elaborated upon in the qualitative data, where students express uncertainty about where to draw the line between ethical assistance and dishonest substitution. This highlights the importance of institutions providing clear ethical guidelines and support for responsible AI use.

The aspect of AI literacy and confidence paints a clear picture when we look at both sets of data. On the quantitative side, students expressed a strong sense of confidence when interacting with AI, yet they also highlighted a significant gap in formal training. The qualitative findings back this up, especially in the theme where students who had previous digital experience showed more comfort and creativity in using AI. In contrast, those with less experience seemed to be more hesitant and less willing to take risks. Altogether, these insights indicate that AI literacy is influenced not just by direct training but also by earlier exposure to digital tools. This underscores the importance of providing structured and fair training opportunities that help close the digital divide in educational settings.

Discussion

This study offers some fascinating insights into how university students in Ghana are not just using generative AI tools, but are really engaging with them in a thoughtful way to boost their creative and critical thinking skills. The results—where both numbers and personal experiences come together—show that students are starting to see AI more as a partner in thinking rather than just a tool for quick solutions. The data reveals that students are generating a lot of creative ideas and reflecting critically, especially when they use AI to brainstorm, analyze, or tweak their academic work. This aligns with what other researchers have found, suggesting that AI can act as a “creativity catalyst” (Mahama et al., 2023; Dai et al., 2023), especially when students are encouraged to engage with AI outputs actively instead of just copying them (Jayasuriya et al., 2025).

Interestingly, students’ ability to manage their interactions with AI really stood out. Those who were more digitally savvy felt more confident using AI in a strategic way, which supports what Jarrahi et al. (2023) and Ni Uanachain and Aouad (2025) have said about the importance of students keeping control over their learning. Our qualitative findings added more depth to this idea, revealing that while many students saw themselves as the ones driving their learning, some admitted to relying on AI a bit too much—especially when they were pressed for time or feeling tired. This variation highlights the need for structured training in metacognition, focusing on the intentional and reflective use of AI, which echoes the insights of Markauskaite and Goodyear (2017).

It is crucial to consider the ethical dilemmas that students face. While the numbers suggest a moderate level of concern regarding originality and misuse, personal stories highlight a real confusion about where to draw the line between getting help and crossing into academic dishonesty. These insights echo earlier findings by Holmes and Miao (2023) and Swiecki et al. (2022), who point out that the lack of clear institutional guidance only adds to the ethical uncertainties. In Ghanaian higher education, where policies are still in flux, students often rely on informal norms, which can lead to misjudgements and potential penalties from institutions.

Moreover, AI literacy and digital confidence have emerged as key factors influencing both creative and critical thinking. The regression analysis showed that students who are more experienced and confident in digital spaces tend to engage more deeply with AI-enhanced tasks. This aligns with the research by Hwang and Chen (2023) and Kim et al. (2025), who emphasize that being digitally prepared is essential for effective AI utilization. However, the identified lack of formal training—both in quantitative and qualitative terms—highlights a concerning equity gap. Students from less digitally privileged backgrounds may not benefit equally from AI, which could worsen existing educational inequalities (Ayisi et al., 2024; Nkansah & Oldac, 2024).

Conclusion

This study highlights how generative AI can really boost cognitive engagement among students, especially in areas like creative and critical thinking, when it’s seen as a collaborative partner instead of a replacement. Ghanaian university students showed a genuine willingness to reflect, revise, and take charge of their learning journeys while working with AI. However, the ethical concerns and digital disparities revealed in this study point to the necessity for thoughtful, context-sensitive policies and teaching strategies. Without proper support, the potential of AI in education could end up being inconsistent and ethically challenging.

Recommendations

Institutions need to create clear, context-specific ethical guidelines for using AI in academic environments. These policies should be developed in collaboration with students and educators to mirror real-life practices and challenges, ensuring everyone understands what academic integrity looks like in AI-enhanced settings.

Universities should also invest in comprehensive AI literacy programs that cover both practical skills and theoretical knowledge. These programs should not only teach how to use AI tools but also foster abilities in ethical reasoning, prompt engineering, and reflective judgment. Tailored approaches should be implemented to ensure fairness, especially for students who may have limited experience with digital technologies.

Ethical approval and consent to participate

As this was a non-experimental study, informed consent was sought from the respondents. These respondents were adult students, assumed to be capable of making independent decisions regarding the activities they engage in while in school, so informed consent was sought from them personally and not through any third party. All respondents completed an informed consent document, and participation was entirely voluntary. Respondents were also made aware of their right to withdraw from the study at any time without consequence. The research team adhered to strict ethical standards throughout the data collection, analysis, and reporting processes to ensure the protection of respondents’ rights and well-being. The study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki (General Assembly of the World Medical Association, 2014). All procedures, including survey administration and interview protocols, were designed to ensure respect for individuals, informed consent, voluntary participation, and the right to withdraw at any time without consequence. Participants were assured of anonymity and confidentiality, and data were securely stored and used exclusively for research purposes. Specifically, we did not seek IRB approval for the study because it involved no risky manipulation of human participants and was descriptive rather than cause-and-effect in design.
