Keywords
trial; participation; patient experience; patient satisfaction; patient-centred trials; cognitive testing
To encourage participation in trials, people need to have a positive experience. However, researchers do not routinely measure participant experience. Our aim is to facilitate routine measurement by developing a measure that captures the participant perspective, in a way that is meaningful, acceptable and useful to trial teams and participants.
We conducted a focus group and 25 interviews with trial professionals and trial participants to explore how participant experiences of trials should be measured, and to identify domains and items to include in the measure. Interviewees were also asked to comment on a list of candidate domains and items informed by a previous review of the literature on participant experience measures. Interviews were analysed thematically. Candidate domains and items were incorporated into a draft measure. Cognitive testing was undertaken in two rounds to ensure the items were comprehensible and grounded in participant experience.
Interviewees and patient and public contributors reported that standardising the measurement of participant experience of trials had the potential to improve trial experience but described issues around the timing of measurement. Cognitive testing highlighted issues with comprehension, recall and response and numerous items were removed or refined. We developed a standard and a short version of the measure for feasibility testing.
We developed a measure covering important domains of participant experience of trials, which could assist trial teams and participants to improve trial design and enhance delivery of a meaningful participant experience.
Increasing participation in trials is a global goal but recruitment and retention remain key challenges,1,2 and there is interest in developing better methods to conduct trials,3,4 including new processes,5 incentives6 and methods to enhance diversity.7 One method of improving trials (which may benefit engagement) is a focus on participant experience.8
In health contexts, measurement of patient experience is increasingly used in quality improvement,9 and a similar approach could be used in trials.10 Most trials already have extensive platforms to measure participant experience of the intervention (e.g. quality of life and quality of care), but few routinely measure experience of trial participation. If measurement of participant experience of the trial were routine, it would allow variation in experience to be measured across sites, over time, and between groups within a trial. Feedback from participant experience data could support interventions to enhance future participant experience and potentially increase engagement with research.
An example in a health context is the routine assessment of patient experience of primary care.11,12 Assessed routinely at scale, experience can be compared across individual practices, providing policy-makers with extensive data on experience over time and giving practices data on strengths and weaknesses in their services that could be modified. There could be value in the trials community adopting a similar approach.
Our scoping review of self-reported measures of participant experience concluded that there is no existing standardised measure.10 The review found a number of limitations in the available studies, including the lack of a formal definition of participant experience, and the use of measures without detailed data about their development.
Since 2015, the UK National Institute for Health and Care Research (NIHR) Clinical Research Network13 has embedded a measure (the ‘Participant Research Experience Survey’, PRES) to assess participant experience of studies (including trials) within each local network. The survey uses a standard set of questions, but is delivered within each local network across the multiple studies currently recruiting there. This provides a useful snapshot of experience within a local network but does not provide a systematic, comprehensive assessment of individual trials, and is not designed to provide the detail necessary to compare them. Building routine measurement and feedback of participant experience into all trials would arguably complement this approach.
Our aim was to develop a standardised measure of participant experience in trials through the application of qualitative research, cognitive testing and patient and public involvement and engagement (PPIE).
The study had two sequential phases, in line with measure development guidance,14 and the study flow is detailed in Figure 1. Phase 1 involved qualitative interviews to determine broad domains of participant experience and specific item content. Phase 2 involved cognitive testing to explore comprehension of the items. A PPIE advisory group (PAG) supported both phases. The 6 members of the PAG provided input into all stages of the project, as outlined in a strategy; the GRIPP-SF checklist15 details implementation of the strategy (Table 1). Three PAG members had previously taken part in a trial and four had experience of PPIE work. AD and CP were joint PAG co-ordinators. To guide development, we modified an existing definition of patient experience16 to make it applicable to trials, defining participant experience as: (1) the sum of all interactions in the trial, (2) shaped by an organisation’s culture, (3) which influences participant perceptions, (4) across the continuum of research.
In summary, we identified candidate domains of participant experience and candidate items from our previous work,10 and explored these through a focus group, semi-structured interviews and PAG meetings. This generated (a) insights about measuring participant experience in trials and (b) a preliminary list of candidate domains of participant experience, with candidate items mapped to those domains. These underwent two rounds of cognitive testing.
Firstly, a focus group with clinical trials unit staff captured their perspectives on measuring participant experience of trials and their views on domains and items. Secondly, semi-structured interviews with trial professionals and participants explored the same issues. For pragmatic reasons, we limited the sample size to a maximum of 25 interviewees. Separate topic guides for professionals and participants incorporated questions and prompts covering trial experience and what a questionnaire, with core questions that could be asked of anyone participating in a trial, might look like. To facilitate discussion, we presented candidate domains and items from a predefined list.10 Topic guides were piloted with members of the PAG, who advised that the questions were appropriate.
The study used purposive sampling. We recruited professionals who had worked on any type of trial within the previous 12 months, including clinical trials unit directors, principal investigators, trial funders and Clinical Research Network (CRN) staff. We also recruited current or past (within the last 5 years) adult trial participants, seeking diversity in trial participation and trial type, as well as in age, gender, ethnicity and education. Recruitment sources were a local consent-to-contact database,17 the Greater Manchester Clinical Research Network and research team contacts, approached via an email from the research team and an advert. Before conducting the research, the researcher, the Director of the Trials Unit and the relevant CRN lead identified professionals who could participate in the focus group by circulating the study information. Trial professional recruitment sources were aware of the researcher’s interest in improving participant experience in trials. There was no existing relationship with previous trial participants; people responded to the advert directly to the research team, with no prior knowledge of the researcher’s interest.
One focus group (n=6) was conducted with professionals by author CP in April 2018. Individual interviews with professionals were conducted by CP between April and July 2018, either in-person at the Trials Unit at the University of Manchester, UK, or via telephone (n=10). Trial participant interviews were conducted in-person (n=10) or by telephone (n=4) by the lead author, NS, between May and July 2018. Each participant took part in one interview or the focus group, lasting approximately one hour; only the participant(s) and researcher were present. Both CP and NS are experienced female health services researchers with PhDs, employed as Research Associates at the time of the study. In-person interviews took place in a convenient and private location, with travel expenses reimbursed and a £10 shopping voucher given as thanks. Of note, no participants refused to participate at interview.
Table 2 shows the focus group and interview participant characteristics for the study. Of the 11 trial professional participants interviewed, 36% were CRN staff, 27% were clinical trials unit directors, 18% were principal investigators and 18% were trial funders. The 6 focus group participants comprised two Trial Managers, a Clinical Trials Director, a Statistician, a Quality Assurance Trialist and a Clinical Research Fellow. Of the 14 trial participants interviewed, 64% were male, ages ranged from 30 to 78 years, 50% reported a postgraduate qualification and 64% were of White English ethnicity.
Interviews were audio-recorded and transcribed verbatim, with field notes made, then uploaded to NVivo (version 11) for coding and analysis by NS and CP. Research team members read early transcripts, suggesting codes and offering avenues to explore with participants. PAG members each read one professional and one participant transcript to add their insights. Transcripts were not returned to participants for comment or correction. Transcripts from trial participant interviews were analysed thematically by NS.18 Themes were generated, coded and categorised according to the developing framework. A document listing the codes from each transcript, with excerpts of data relevant to each theme, was developed to manage the developing framework. Data from trial professionals were coded separately by CP and then compared with the participant data. Together with our previous work,10 this informed the drafting of the measure taken forward to cognitive testing (consisting of 9 candidate domains and 52 items).
Cognitive testing is a method used to assess the performance of a measure by collecting information about respondents’ thought processes as they answer questions, using ‘think aloud’ techniques to inform adaptation of the measure.19
We invited trial participants from phase 1 and new participants, using the same recruitment methods (via email and advert) and purposive sampling. There is no consensus around sample size for cognitive testing, as this depends on the type of cognitive process examined and the number of ‘rounds’ necessary to adjust the measure.20 Here, a round was considered complete when clear problems had been identified warranting adjustments to the items and the measure.
The study was conducted within the Centre for Primary Care and Health Services Research at The University of Manchester, UK. Cognitive interviews took place in August 2018 in a private room for approximately one hour with only the participant and researcher, NS, present. Interviews were audio-recorded, with note taking, but not transcribed. As before, participants received travel expenses and a shopping voucher.
NS received training on cognitive testing ahead of the study. The topic guide included a full explanation of the ‘think aloud’ task and open-ended questions and probes (spontaneous and pre-prepared) to assess different cognitive processes: comprehension, recall, judgement and response.21 We opted for an ‘immediate retrospective think aloud’ approach, whereby we asked participants to read each item while thinking aloud and then to respond to the item. After reviewing the topic guide, the research team and PAG judged that the questions and topic areas were appropriate.
Cognitive testing took place over two rounds (n=13 and n=6), with the results of round 1 informing round 2; each respondent participated once. No participants refused to participate at interview. Of the 19 participants, 63% were male, ages ranged from 40 to 86 years, 58% reported a postgraduate qualification and 74% had previous trial experience (see Table 3).
For data analysis, we followed the three-stage process of a modified framework analysis,20 involving chronicling, condensing and using the data to improve the questionnaire; NS prepared the analysis. PAG members each read cognitive testing guidance prepared by NS and added their insights at the meeting held to refine the items. The research team and PAG made decisions on changes, including changes to items, the items contained within each domain, the domain names (to reflect the revised content), the response scale and the measure layout. We followed the published reporting framework for cognitive testing.22
The study received Proportionate University Research Ethics Committee approval (Ref: 2018-2739-5920). Participants who expressed an interest in the study via the advert were emailed a full Participant Information Sheet outlining study involvement and had the opportunity to consider the information and ask questions ahead of consenting to be interviewed. Those who agreed to participate gave written informed consent ahead of the interview, covering how data collected would remain confidential; the use of the audio recording; the possibility that data collected would be archived and used as anonymous data in subsequent research (also known as secondary data analysis); and the use of anonymous quotes in academic books, reports or journals.
We report themes and illustrative quotes from interviews and focus group participants, and insights from the PAG (each quote is tagged to indicate method of data collection and the participant’s role).
(a) Advantages and disadvantages of measuring participant experience
All participants thought that the standardised measurement of participant experience had the potential to improve trial experience.
If you’re looking at the results of that patient experience questionnaire informing future behaviour and future trials, it would be invaluable, wouldn’t it? We get involved in the design, the application … (focus group, clinical trials unit staff).
Whether this benefit was realised as the trial was running or in future trials was discussed. Some interviewees described the potential to use the experience data, if it highlighted the benefits of participation, to market future trials to potential participants and to promote trials generally.
It would be good if they do think that being in the trial has helped in anyway, possibly other than response, so added value, it would be great to know that so that we can gather a body of evidence to try to put that information out there for patients more generally. (PR8, interview, clinical research network staff).
The main concern raised by trial professional interviewees centred on burden to participants, particularly in situations that were stressful.
They’ve been through emotions of the medical treatment and the emotional trauma of having an early baby … You’ve got people like me who go up to them and say, ‘can we put the baby in a trial?’ … ‘Did they get it right?’ (PR4, interview, chief investigator of a running trial).
The majority of trial interviewees focused on feasibility: when it would be easiest for the participant to complete the measure, and the usefulness of the experience data captured.
If it’s a long trial, then some of these [items] would be perhaps partway through, but you’d have to vary the questions…. Close to the end of the trial, so you haven’t forgotten what happened (TP1, interview, previous trial participant, podiatry trial).
The interplay between the intervention, perception of benefit, and impact of receiving trial results was also a concern expressed by interviewees.
If they weren’t on that drug and saw the results of the trial that said, this drug worked and they didn’t get it, they may have had a good experience as far as being in research but seeing the fact that actually the drug that they didn’t get worked might influence how they put over their experience (PR8, interview, clinical research network staff).
(b) Domains to be included in the measure
The research team identified candidate domains for inclusion, relating specifically to the different phases of the trial (which the PAG described as ‘the participant journey’).
(b1) Early stage information provision and trial processes
Professionals were concerned that, if the measure was administered at the end of the trial, recall would make it problematic to assess information and processes occurring early in the trial. Nevertheless, the majority of interviewees saw early stage processes as important to assess.
What was the consent process there like? Did they receive enough information? Was the information understandable? So all the sort of things that would support that pre-participation decision-making … Were they allowed enough time to consider participation? (PR7, interview, funder of trials).
Trial participants spoke of having too much information with complicated text, and wanted specific information to manage their expectations of participation.
Information given was easy to understand but perhaps a little over explained… I would help a trial regardless - but the cost needs to be thought about as it is someone’s time, especially if they have to come out of work to attend appointments. (TP14, interview, current trial participant, inflammatory bowel disease trial).
(b2) Perception of conduct of trial processes
How a trial was conducted and the practical issues involved (e.g. waiting times and travel) were seen as important by all interviewees:
… the procedures themselves, it’s saying what you’re doing, it’s all the aspects of the trial itself, how much burden was added onto the parents as a result of it, was it an emotional burden or do they have to do something burden, both of them have obviously got different meaning. (PR4, interview, chief investigator of trials).
(b3) Sharing trial results with participants and perception of trial processes
There was discussion around whether measures should be administered before or after the sharing of trial results, and frustration described by participants when results were not shared by the trial.
… the guy [trial staff] was really good at explaining everything, really, really good. Because putting all these things on your head is quite a big thing and extremely reassuring, which meant a lot. The only thing he did is he promised he would send me the results of my brain scan, because I was interested and he never did. I was disappointed and I never got the results of the study, which I was really interested to get. (TP12, interview, previous trial participant, mental health trial).
(b4) Engagement with trial team
This domain relates to the communication between the team and the participant across the trial journey and the extent to which participants ‘feel a part’ of the trial and have confidence and trust in staff. Having a relationship with the trial team was viewed as vital to participants.
And during the study, I think this is more about retention and what kind of negative or positive experiences would help people to remain in trials. So again, I think experiences around how they were engaged with the team would be quite useful (PR3, interview, clinical trials unit director).
(b5) Perceived benefits of participation for participants
This theme relates specifically to the perceived benefits of participation, other than satisfaction, to increase understanding of why participation in trials is important.
It would be good if they do think that being in the trial has helped in anyway … It would be great to know that so that we can gather a body of evidence to try to put that information out there (PR8, interview, clinical research network staff).
(b6) Perceived satisfaction with participation
Interviewees discussed the utility of a question on overall satisfaction. Some interviewees and PAG members felt that it would be preferable to ask whether a participant would take part in the same trial again compared to asking about taking part in research more generally.
I would agree with asking about taking part in another research study. About the experience of taking part… you know, had a good experience of taking part, would you participate in a future study (PR9, interview, clinical research network staff).
Some trial professional interviewees referenced the assessment of patient experience of clinical care, which has become a mainstay of quality improvement in the NHS.
We do ask patients in the NHS about their satisfaction with the care received. We do that anyway so why is it any different? (PR8, interview, clinical research network staff).
(c) Logistics of administering an experience measure
Research participants and PAG members raised a number of logistical considerations around delivering experience measures. These are summarised as a checklist (Table 4), with a brief summary presented here. The majority of interviewees felt that measuring experience in participants who are no longer in the trial (i.e. lost to follow-up or withdrawn) would provide important data on patient experience. The consensus was that the questionnaire needed to be easy to read and short (between 1 and 4 A4 sides, taking between 3 and 10 minutes to complete). A variety of modes were suggested (self-complete questionnaire on paper, or electronically via a website or app). Different time-points were suggested by interviewees and can be broadly categorised as ‘during the trial’ or ‘end of trial’. There was no consensus as to which option was best, but flexibility was seen as key.
The cognitive testing highlighted issues around comprehension, recall and response to the candidate items. In summary, across the 2 rounds of cognitive testing, we removed 30 items from the measure. We refined 46 items using respondents’ preferred terms to provide clarity, inserted examples grounded in participants’ experiences into 9 of these items, and changed 2 items from statements to questions. The response options were enhanced to ensure the measure captured a range of trial experiences: binary responses were inserted in 9 items; graded responses were retained in 14 items; a ‘not applicable’ option was added to 6 response scales and a ‘no opinion’ option to 4; and free text was added to 4 items. Figure 1 summarises the development process of the measure, showing the domain and item refinement process throughout phases 1 and 2. Three of the 9 candidate domains were amalgamated and 3 were re-labelled.
By the end of the cognitive testing phase, the research team reached consensus that the final 6 core domains and 25 items (including one final ‘full text’ response) captured a ‘meaningful’ trial participant experience. A short measure was subsequently developed based on the items viewed as preferable by respondents, with 1-9 items tapping each of the 6 domains, totalling 17 items. We have uploaded versions of the measure to the OSF repository.26 The unformatted versions show the item content, which can be used with existing trial materials and matched to local formatting and style requirements; the formatted measure is a suggested format for presentation.
The study aimed to develop a useful measure of participant experience of trials. We describe a detailed development study including the views of both trial professionals and trial participants, with detailed cognitive testing to assess understanding and acceptability.
There are limitations to the study. First, neither the phase 1 nor the phase 2 sample included participants who had not completed a trial they had been recruited to, whether because they had withdrawn or because they were lost to follow-up. Our sample might therefore be biased towards trial participants who had a more positive experience, although there were instances in the data where trial participants reported otherwise.
Most of our participants were white (88%), indicating the need for further work to explore under-represented groups; this supports wider initiatives underway to support trialists to design and conduct inclusive trials.7 We did not formally assess the readability of the items, instead relying on educational level as a proxy indicator of health literacy, and we were also less successful in recruiting a diverse group of participants. The PAG provided an important opportunity to integrate patient insights into development, but all members were known to the team through existing networks and had interests in the research topic.
The data were collected over 4 years ago, with publication delayed by the pandemic and other issues. Although we expect that many of the issues remain pertinent, changes in the delivery of trials, such as moves to more remote methods,23 would not have been captured.
The measurement of trial participant experience may be important for improving the delivery of trials and for exploring variation in experience across sites, over time and between groups within a trial. However, this will require feedback of the data and resulting change. Effective feedback processes (such as guidance on how to interpret the results of the measure) need to be co-produced with stakeholders to ensure they are acceptable and useful.24 It will also be important to assess whether potential disadvantages of participation are realised.
Our measure has been piloted with three ongoing trials in the UK, with results reported in a forthcoming paper. Future work will also need to examine the factors that influence participant experience, and how much of the variation in experience is due to context (for example, condition experienced), trial type, participant characteristics, or aspects of the trial (for example, the intervention and trial procedures). It will also be important to explore the extent to which experience is relevant in all trials. For example, some trials have little active participation either because of the trial design (such as, cluster trials without individual consent) or because of the duration and extent of follow-up. We do not foresee any challenges translating the measure to a digital platform given there are substantial precedents for capturing patient experience and study experience data digitally.25 We report a feasibility study of the use of the measure elsewhere.
Both trial professionals and trial participants consider the standardised assessment of participant experience of trials important. In this paper, we have outlined the core domains that should be assessed to measure participant experience in trials and provided measures for further assessment.
The agreements with trial teams did not provide for data sharing beyond the study team. If third parties wish to access the data, the corresponding author (PB) can contact the trial teams on behalf of those third parties, to negotiate an additional agreement for data sharing beyond the original scope of the study.
OSF: PACT. https://doi.org/10.17605/OSF.IO/BCXH2.26
This project contains the following extended data:
- Patient experience of trials - short measure formatted.docx
- Patient experience of trials - short measure unformatted.docx
- Patient experience of trials - standard measure unformatted.docx
Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).
The authors would like to thank everyone who took part in this study. We would like to specially thank the UK NIHR Clinical Research Network staff, as well as those at the Local Greater Manchester Clinical Research Network, and Eastern Clinical Research Network for help with recruitment. We would also like to thank the Research for the Future team for help with recruitment, and those who were involved in supporting the recruitment of professionals to the qualitative study.