Keywords
Artificial Intelligence in Research, Professional Doctorates, Qualitative Inquiry, Research Ethics, Research Integrity
The integration of artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, into academic research is accelerating. While these tools offer considerable utility for professional doctorate students, especially those engaged in practice-based, qualitative inquiry, their use also presents serious epistemological, ethical, and pedagogical challenges. This article critically examines the promise and limitations of AI in the context of professional doctorates, with a specific focus on qualitative research. Drawing on recent scholarship, it highlights how overreliance on AI can bypass crucial aspects of intellectual development, compromise reflexivity, and obscure researcher accountability. The article argues for a principled and transparent use of AI, guided by structured frameworks, to ensure that the human researcher remains central to meaning-making and scholarly authorship. The proposed stance foregrounds epistemic agency and underscores the importance of learning through complexity and discomfort in the doctoral journey.
The rise of artificial intelligence (AI) in higher education research has been swift and far-reaching. In particular, large language models (LLMs) such as ChatGPT have emerged as accessible, general-purpose tools capable of generating fluent academic text, suggesting analytical categories, and assisting in the early stages of qualitative data analysis. These technologies are being explored across disciplines, and their adoption is accelerating, fuelled by institutional demands for efficiency, student curiosity, and a pervasive cultural narrative of technological progress.
For professional doctorate students, typically mid- or late-career professionals undertaking research within their own practice contexts, AI presents a tempting proposition. Balancing complex work responsibilities, tight time constraints, and academic expectations, these students often seek tools that can streamline writing, organise literature, or support qualitative analysis. AI appears to offer precisely this kind of support: fast, accessible, and easy to use. But this apparent solution comes with significant and under-examined risks.
This article argues that while AI can provide practical assistance in professional doctorate research, its uncritical use poses a fundamental threat to the learning journey that these programmes are designed to enable. Unlike traditional PhDs, which are often aimed at preparing academics to contribute to disciplinary knowledge, professional doctorates are concerned with developing professionals as researching professionals: practitioners capable of generating, interpreting, and applying knowledge within complex, real-world settings (Taylor, 2007; Wellington & Sikes, 2006). The aim is not simply to conduct research, but to become a particular kind of scholar-practitioner: reflexive, ethical, and epistemologically aware.
This transformation hinges on the intellectual and moral labour of engaging with data, theory, and practice in context. Qualitative research, widely used in professional doctorate projects, demands this labour. It invites uncertainty, rewards criticality, and foregrounds interpretation over automation. While AI can mimic surface-level analysis, it cannot engage in the nuanced, situated, and reflexive sense-making that lies at the heart of qualitative inquiry. Nor can it assume responsibility for methodological choices or ethical consequences. The purpose of this article, therefore, is not to discourage the use of AI outright, but to caution against using it in ways that bypass the developmental, interpretive, and ethical demands of professional doctorate research. Drawing on current literature and emerging AI-integrated research frameworks, this article explores the intersection between AI assistance and professional learning. It demonstrates how overreliance on AI tools can risk epistemic detachment, erode scholarly authorship, and undermine the very transformation that the professional doctorate is designed to cultivate.
Ultimately, this article advocates for a critically engaged, ethically aware approach to AI, one that acknowledges the affordances of these tools while insisting on the centrality of human interpretation, voice, and judgement. In a research culture increasingly shaped by automation, professional doctorate students must remain not just users of tools, but authors of meaning. This is the path to becoming a researching professional, and it cannot be taken on autopilot.
Professional doctorates differ fundamentally from traditional research doctorates in both purpose and orientation. While PhD programmes often aim to produce disciplinary knowledge that contributes to theoretical advancement, professional doctorates are designed to develop practitioners who can generate, interpret, and apply knowledge within complex, real-world settings (Taylor, 2007; Fulton et al., 2011). This emphasis on practical impact and reflective inquiry situates professional doctorate students at the intersection of academic and workplace knowledge cultures, requiring them to navigate not only research methods but shifting identities and epistemological paradigms.
As Wellington and Sikes (2006) highlight, students undertaking professional doctorates often enter with considerable professional expertise and authority but may feel uncertain in academic spaces. They describe this as a “tight compartment” in which students must reconcile their practitioner identity with their emerging role as researcher. Unlike undergraduate or early postgraduate learners, professional doctorate students are not blank slates; they bring with them habits of practice, ways of knowing, and assumptions formed through years of experience. The challenge, and opportunity, of the professional doctorate lies in facilitating a shift from “knowing-in-action” to “knowing-about-action” and ultimately “knowing through research.” This transition is not only cognitive but ontological, requiring students to reorient themselves toward knowledge as something to be interrogated rather than applied.
Qualitative research is particularly well suited to this process of transformation. At its core, qualitative inquiry is interpretive, iterative, and relational. It invites the researcher to attend not just to what people do or say, but to how meaning is constructed, negotiated, and situated within broader social, organisational, or cultural systems. This level of engagement demands deep reflexivity, the capacity to question one’s own assumptions, values, and position in relation to the research context. For professional doctorate students, this means not only conducting interviews or analysing documents, but reflecting on the meanings that underpin their own practice and how these intersect with the lives and experiences of others. Morgan (2023) underscores the importance of this interpretive dimension, noting that qualitative research involves an ongoing dialogue between the researcher, the data, and the emergent theoretical insights. This dialogue is non-linear and often uncomfortable. It resists easy answers and demands the kind of sustained attention that fosters intellectual humility and depth. Similarly, Wachinger et al. (2024) observe that qualitative analysis is more than a mechanical process of theme identification; it involves judgement, contextual awareness, and ethical discernment. These are not skills that can be downloaded or replaced by AI systems, no matter how fluent or responsive.
Moreover, qualitative research methods teach students how to sit with ambiguity and how to recognise that multiple truths can coexist, that categories are porous, and that certainty is often an illusion. This is particularly important in professional doctorate work, where research questions are grounded in real-world complexity and often involve navigating contested values, institutional constraints, or interpersonal dynamics. AI tools, by contrast, are not designed to navigate such ambiguity. They are optimised to produce confident, coherent responses, even when the underlying issue is uncertain or contested. As a result, there is a risk that students who rely too heavily on AI will miss the opportunity to learn how to think critically through complexity. In this context, the qualitative research process becomes both a methodology and a developmental tool. It supports students not only in producing new insights but in becoming the kind of scholar-practitioner who can make sense of, and act within, the layered realities of professional life. The process of conducting qualitative research (reading richly, listening closely, analysing deeply, writing reflexively) thus becomes central to the formation of the researching professional. It is in this space that students learn not just how to do research, but how to be researchers.
The convenience offered by AI tools like ChatGPT is undeniable. These systems can generate text rapidly (Khlaif et al., 2023), summarise long documents, and even identify patterns in datasets (Burger et al., 2023). For professional doctorate students juggling full-time employment, research timelines, and complex life responsibilities, such capabilities are understandably attractive. However, these apparent benefits come with considerable epistemological and developmental risks, particularly when students begin to substitute AI-assisted outputs for their own intellectual labour.
Professional doctorate programmes aim not merely to produce research outputs, but to foster critical, independent thinkers capable of engaging with the complexities of professional practice. Central to this development is the process of inquiry itself: posing difficult questions, analysing contradictions, and reflecting on one’s assumptions. Overreliance on AI shortcuts can obscure this process. As Chubb et al. (2022) argue, the institutional pressures to “speed up to keep up” risk encouraging superficial engagement with research problems in favour of rapid production. In such an environment, AI becomes a tool for deliverables rather than a partner in thinking. Kulkarni et al. (2024) echo this concern, warning that automation in research may gradually erode scholars’ capacity to theorise. Rather than developing new insights, students may increasingly default to AI-generated interpretations that appear plausible but lack grounding in the specificities of context or data. This can result in a kind of epistemic detachment, where students accept interpretations they have not critically examined and whose assumptions they may not fully understand.
This phenomenon is especially problematic in qualitative inquiry. As previously discussed, qualitative research requires slow thinking, sensitivity to context, and a willingness to dwell in ambiguity. When ChatGPT or similar systems are used to carry out coding, theme generation, or even early interpretation, they may produce outputs that seem legitimate but are in fact unmoored from the situatedness that qualitative inquiry demands. As Wachinger et al. (2024) found in their comparative study, AI-generated analyses often miss less obvious but potentially more meaningful themes, particularly those that challenge dominant discourses or arise from marginalised voices. This points to a broader cognitive risk. AI may promote what van Veggel et al. (2025) describe as “analytic complacency”, a state in which the researcher becomes a passive recipient of insight rather than its active producer. This is antithetical to the kind of learning that the professional doctorate is meant to facilitate. Doctoral-level research is supposed to stretch students intellectually, to expose them to complexity, contradiction, and uncertainty. It is in grappling with difficult data and unresolved tensions that genuine insight emerges.
Moreover, repeated reliance on AI to initiate or structure interpretation may inhibit the development of key research capacities: critical reading, theoretical framing, conceptual abstraction, and methodological judgement. These are slow-forming capabilities, cultivated through iterative practice and reflection. Delegating these tasks to AI too early or too often can short-circuit this developmental process, leaving students with the appearance of progress but lacking the epistemic maturity to defend or elaborate their findings. The consequence is a hollowing out of the researcher’s role. Instead of becoming skilled interpreters of complex data, students may become technicians managing a series of AI-generated outputs, shaping prompts, fine-tuning models, and copying outputs into dissertations with minimal critical engagement. This not only jeopardises research quality but undermines the core pedagogical function of the professional doctorate: to develop professionals who can think, reason, and act with scholarly depth.
In sum, while AI can certainly assist with efficiency and surface-level organisation, the cost of outsourcing thinking is high. It risks replacing learning with automation, depth with convenience, and transformation with simulation. For professional doctorate students, the choice is not merely about tool use; it is about what kind of researcher, and what kind of professional, they are becoming.
At the heart of all research lies an ethical relationship between the researcher and the knowledge they produce. In qualitative inquiry, this relationship is particularly acute: meaning is not discovered, but co-constructed through interpretation, reflection, and engagement with context. Authorship, therefore, is not simply about who types the words; it is about who takes responsibility for interpretation, whose voice guides the analysis, and whose perspective shapes the narrative. As AI becomes more capable of generating text that mimics human reasoning, these fundamental questions become both more urgent and more complex. The issue of authorship and integrity in the context of AI is receiving increasing attention across disciplines. Tang et al. (2024) argue that transparent declaration of AI use is a non-negotiable requirement for maintaining academic credibility. This is not merely a matter of formality. When AI tools are used without disclosure, they obscure the boundaries between human reasoning and machine assistance, misleading examiners, supervisors, and readers about the origins of the work. Such practices erode trust in academic outputs and threaten the core values of honesty and accountability in scholarship.
For professional doctorate students, the stakes are even higher. Their research is often closely linked to their workplace roles and professional identities. As such, questions of authorship are not just academic; they are intimately tied to how students understand and represent their professional knowledge. Using AI to produce interpretations without deep engagement risks divorcing the findings from the practitioner’s lived reality, weakening both the rigour and the relevance of the research. Moreover, qualitative inquiry requires the researcher to be visible in the text, not in a self-indulgent way, but as a situated, reflexive interpreter of meaning. This visibility is essential for establishing trustworthiness and for acknowledging the partial, positioned nature of all interpretation. AI, by contrast, is fundamentally decontextualised. It cannot disclose its assumptions, explain its reasoning, or justify its conclusions. It lacks positionality. As Hosseini et al. (2024) make clear, AI cannot be held ethically accountable for the claims it produces. It cannot respond to the concerns of participants, engage in moral reasoning, or revise its analysis in light of new understanding. This has profound implications for epistemic agency. The use of AI risks shifting the researcher’s role from meaning-maker to manager of outputs: someone who curates, edits, and assembles rather than thinks, reflects, and questions. While this shift may appear efficient, it undermines the developmental goal of the professional doctorate: to cultivate scholarly practitioners who can navigate ambiguity, theorise practice, and communicate their findings with clarity and conviction.
As Mantere and Vaara (2024) point out, authorship is not only about responsibility; it is about voice. Academic writing is a space in which students articulate who they are as scholars. It is where they take a stance, frame their contributions, and position themselves within broader conversations. When students rely too heavily on AI to construct their narratives, they risk losing that voice. The resulting work may be grammatically fluent and structurally coherent, but it lacks the authenticity and reflexivity that distinguish robust qualitative research. Furthermore, the homogenising tendencies of AI raise concerns about originality. AI systems are trained on existing data, meaning they are more likely to reproduce dominant discourses than to challenge them. This is particularly problematic in qualitative research, which often seeks to illuminate marginalised perspectives, expose taken-for-granted assumptions, and generate alternative framings of reality. As Wachinger et al. (2024) observed, AI-generated analyses tend to follow conventional patterns and miss opportunities for theoretical or political insight. If students begin to internalise these patterns as authoritative, they may become less inclined to explore unorthodox, disruptive, or critical lines of inquiry, thereby limiting both the originality and social relevance of their work. The use of AI has profound implications for authorship, integrity, and scholarly identity (Yeo, 2024). For professional doctorate students, who are learning not only how to conduct research but how to become researching professionals, it is vital that they remain at the centre of the knowledge production process. Authorship is not just a technical attribution; it is a moral and epistemological stance. It signals ownership, responsibility, and voice. These are the very qualities that define the professional doctorate, and they cannot be automated.
While the risks of AI use in qualitative research are significant, rejecting these tools outright would be both impractical and intellectually limiting. The goal is not to eliminate AI from the research process, but to integrate it in ways that preserve the epistemological integrity and educational function of professional doctorate work. This requires not only caution, but also structure. Emerging frameworks are now offering pathways to support critical, ethical, and developmentally appropriate uses of AI within qualitative research.
One of the most promising contributions comes from van Veggel et al. (2025), whose Integrated Prompt Framework offers a pragmatic structure for engaging with AI tools like ChatGPT across four key domains: planning, prompting, evaluating, and procedural use. This model emphasises that AI should not drive the research but support the researcher in thinking more expansively, asking better questions, and refining their methodological awareness. At each phase, the researcher is encouraged to interrogate both the process and the output, asking not only “What does the AI produce?” but “How does this align with my epistemological stance, research context, and ethical commitments?” In particular, the evaluating component of this framework is crucial. It asks students to pause and assess AI-generated content against criteria such as trustworthiness, theoretical congruence, and interpretive nuance. This step supports the development of critical reflexivity, helping students to move beyond surface-level use of AI and to engage with it as a prompt for deeper analysis. Rather than accepting AI output at face value, students are encouraged to triangulate it with their own insights, theoretical readings, and the specificities of their research data.
Other frameworks, such as the Guided AI Thematic Analysis (GAITA) proposed by Nguyen-Trung (2024), similarly advocate for keeping the researcher firmly in control. In GAITA, ChatGPT is used as a brainstorming partner in the early stages of coding, helping to surface alternative perspectives or overlooked themes. However, all final interpretations are generated, verified, and articulated by the human researcher. The AI serves as a cognitive aid, not a surrogate for judgement. These models also foreground the concept of AI literacy, a vital capacity that professional doctorate students must now cultivate. As Turobov et al. (2024) argue, responsible use of AI depends not only on methodological discipline but on an understanding of how these systems work: their training data, limitations, biases, and vulnerabilities. Without this awareness, students may unknowingly allow AI to introduce distortions, particularly around issues of power, representation, or cultural framing.
Developing AI literacy also means understanding when not to use AI. For instance, AI is ill-suited for tasks that involve emotional nuance, complex ethical tensions, or culturally specific meaning systems: domains where qualitative research often operates. Students must learn to identify these boundaries and be able to justify their methodological decisions in light of them. This is not simply a technical choice; it is a professional judgement that reflects their emerging identity as a researcher. Importantly, these frameworks are not intended to replace academic supervision or peer review but to scaffold reflective practice. They provide a vocabulary and a set of checkpoints that students and supervisors can use to discuss the ethical and epistemological implications of AI use. This helps re-centre the educational focus of the professional doctorate: to develop thoughtful, critically engaged scholars who can navigate complexity, rather than simply manage outputs.
In this sense, responsible AI use becomes an opportunity rather than a threat. It invites professional doctorate students to develop new skills, confront new dilemmas, and engage with emerging scholarly practices, while holding fast to the principles that underpin meaningful, ethical research. AI can prompt, suggest, and support, but the researcher must always decide, justify, and interpret. That is where the real learning happens, and where professional identity is formed.
As artificial intelligence becomes an increasingly prominent feature of the research landscape, professional doctorate students face both an opportunity and a responsibility. The opportunity lies in the ability to engage with powerful tools that can support aspects of the research process, such as drafting, summarising, coding, or generating analytical prompts. The responsibility, however, is to ensure that these tools are used in ways that preserve the developmental purpose of the professional doctorate and the intellectual integrity of qualitative research. This article has argued that professional doctorates are not merely about producing a thesis or acquiring a credential; they are about becoming a different kind of professional: one who is capable of asking difficult questions, grappling with ambiguity, theorising practice, and constructing knowledge that is both grounded in experience and conceptually rigorous. This transformation cannot be automated. It depends on a process of critical engagement with data, theory, and self. It is in the struggle to interpret, the tension between practice and abstraction, and the willingness to reflect that the student becomes a researching professional.
AI, in this context, is not a threat if used wisely. But its dangers arise when it is treated as a surrogate for thinking, a shortcut through the messy work of analysis, or a replacement for the student’s own voice. The seductive fluency of ChatGPT and similar tools can create a false sense of mastery, leading students to mistake coherence for depth, or convenience for insight. When this happens, the learning journey is hollowed out, and the purpose of the professional doctorate is undermined. What is needed, therefore, is not rejection of AI but a reframing of its role. AI can function as a supportive scaffold, prompting questions, surfacing alternative perspectives, or helping to organise ideas. But it must never replace the human work of interpretation. The frameworks discussed in this article, from van Veggel et al.’s Integrated Prompt Framework to Nguyen-Trung’s Guided AI Thematic Analysis, offer practical guidance for engaging with AI critically and reflectively. These approaches foreground the role of the researcher as the primary agent in knowledge production, and emphasise AI literacy, ethical judgement, and methodological rigour.
Professional doctorate students must also develop the confidence to say no to AI when appropriate. Not every task benefits from automation. In fact, some of the most important aspects of qualitative research (reflexivity, ethical sensitivity, theoretical abstraction) resist delegation. These are the moments that demand human presence, vulnerability, and thoughtfulness. It is in these moments that learning becomes transformation. The broader implications extend beyond individual theses or projects. As AI continues to reshape the norms and practices of academia, there is a risk that efficiency becomes the overriding goal, displacing care, critique, and curiosity. Professional doctorate students, as boundary-crossers between practice and academia, are uniquely positioned to resist this drift. They can model what it means to do research that is rigorous, relevant, and reflexive: research that uses tools wisely but never forgets who is ultimately responsible.
In the end, becoming a researching professional is not about mastering a set of techniques or even completing a doctoral programme. It is about becoming someone who can stand behind their ideas, account for their interpretations, and contribute ethically to their field. No AI can do that. And that is why the researcher (human, situated, and thinking) must remain at the centre of professional doctorate research.
The author is grateful to Prof Hilary Engward and Dr Sally Goldspink for fruitful discussions on this topic, which led to this work. A preprint version of this paper is available on the Open Science Framework: https://doi.org/10.35542/osf.io/8cbvu_v2.