Research Article

Developing a measure of participant experience of trials: qualitative study and cognitive testing

[version 1; peer review: 2 approved with reservations]
PUBLISHED 19 Jan 2024

Abstract

Background

To encourage participation in trials, people need to have a positive experience. However, researchers do not routinely measure participant experience. Our aim is to facilitate routine measurement by developing a measure that captures the participant perspective, in a way that is meaningful, acceptable and useful to trial teams and participants.

Methods

We conducted a focus group and 25 interviews with trial professionals and trial participants to explore how participant experiences of trials should be measured, and to identify domains and items to include in the measure. Interviewees were also asked to comment on a list of candidate domains and items informed by a previous review of the literature on participant experience measures. Interviews were analysed thematically. Candidate domains and items were incorporated into a draft measure. Cognitive testing was undertaken in two rounds to ensure the items were comprehensible and grounded in participant experience.

Results

Interviewees and patient and public contributors reported that standardising the measurement of participant experience of trials had the potential to improve trial experience but described issues around the timing of measurement. Cognitive testing highlighted issues with comprehension, recall and response, and numerous items were removed or refined. We developed a standard and a short version of the measure for feasibility testing.

Conclusions

We developed a measure covering important domains of participant experience of trials, which could assist trial teams and participants to improve trial design and enhance delivery of a meaningful participant experience.

Keywords

trial; participation; patient experience; patient satisfaction; patient-centred trials; cognitive testing

Introduction

Increasing participation in trials is a global goal but recruitment and retention remain key challenges,1,2 and there is interest in developing better methods to conduct trials,3,4 including new processes,5 incentives6 and methods to enhance diversity.7 One method of improving trials (which may benefit engagement) is a focus on participant experience.8

Why measure participant experience?

In health contexts, measurement of patient experience is increasingly used in quality improvement,9 and a similar approach could be used in trials.10 Most trials already have extensive platforms to measure participant experience of the intervention (e.g. quality of life and quality of care), but few routinely measure experience of trial participation. If the measurement of participant experience of the trial were routine, it would allow measurement of variation in experience across sites and over time, and between groups within a trial. Feedback from participant experience data could support interventions to enhance future participant experience and potentially increase engagement with research.

An example in a health context is the routine assessment of patient experience of primary care.11,12 Done routinely and at scale, such assessment allows experience to be compared across individual practices, providing policy-makers with extensive data on experience over time and providing practices with data on strengths and weaknesses in their services which could be modified. There could be value in the trials community adopting a similar approach.

Existing measures of participant experience

Our scoping review of self-reported measures of participant experience concluded that there is no existing standardised measure.10 The review found a number of limitations in the available studies, including the lack of a formal definition of participant experience, and the use of measures without detailed data about their development.

Since 2015, the UK National Institute for Health and Care Research (NIHR) Clinical Research Network13 has embedded a measure (the ‘Participant Research Experience Survey’, PRES) to assess participant experience of studies (including trials) within each local network. The survey uses a standard set of questions but is delivered within local networks and across multiple studies currently recruiting within that network. This provides a useful snapshot of experience within that local network but does not provide a systematic, comprehensive assessment of individual trials, and is not designed to provide the detail necessary to compare individual trials. Building routine measurement and feedback of participant experience into all trials would arguably complement this approach.

Our aim was to develop a standardised measure of participant experience in trials through the application of qualitative research, cognitive testing and patient and public involvement and engagement (PPIE).

Methods

Study design

The study had two sequential phases, in line with measure development guidance,14 and the study flow is detailed in Figure 1. Phase 1 involved qualitative interviews to determine broad domains of participant experience and specific item content. Phase 2 involved cognitive testing to explore comprehension of the items. A PPIE advisory group (PAG) supported both phases. The 6 members of the PAG provided input into all stages of the project, as outlined in a strategy. The GRIPP-SF checklist15 details implementation of the strategy (Table 1). Three PAG members had previously taken part in a trial and four had experience of PPIE work. AD and CP were joint PAG co-ordinators. To guide development, we modified an existing definition of patient experience16 to make it applicable to trials, and defined participant experience as: (1) the sum of all interactions in the trial (2) shaped by an organisation’s culture (3) which influences participant perceptions (4) across the continuum of research.

Figure 1. Study structure.

Table 1. GRIPP-SF reporting checklist.

Section and topic | Item | Reported on page No
1: Aim | Report the aim of PPIE in the study | 3
2: Methods | Provide a clear description of the methods used for PPIE in the study | 3, 5, 8
3: Study results | Outcomes: Report the results of PPIE in the study, including both positive and negative outcomes | 8-11
4: Discussion and conclusions | Outcomes: Comment on the extent to which PPIE influenced the study overall. Describe positive and negative effects | 12
5: Reflections/critical perspective | Comment critically on the study, reflecting on the things that went well and those that did not, so others can learn from this experience | 12

In summary, we identified candidate domains of patient experience and candidate items from our previous work,10 and explored these through a focus group, semi-structured interviews, and PAG meetings. This generated (a) insights about measuring participant experience in trials and (b) a preliminary list of candidate domains of participant experience, with candidate items mapped to those domains. These underwent two rounds of cognitive testing.

Phase 1: Measure development

Firstly, a focus group with clinical trials unit staff captured their perspectives on measuring participant experience of trials and their views on domains and items. Secondly, semi-structured interviews with trial professionals and participants explored the same issues. For pragmatic reasons, we limited the sample size to a maximum of 25 interviewees. Separate topic guides for professionals and participants were used, incorporating questions and prompts covering trial experience, what the questionnaire might look like, and which core questions could be asked of anyone participating in a trial; to facilitate discussion, we presented candidate domains and items from a predefined list.10 Topic guides were piloted with members of the PAG, who advised that the questions were appropriate. We have uploaded versions of the measure to the OSF repository.26 The unformatted versions show the item content, which can be used with existing trial materials and matched to local formatting and style requirements; the formatted measure is a suggested format for presentation.

Participants

The study used purposive sampling. We recruited professionals who had worked on any type of trial within the previous 12 months, including clinical trials unit directors, principal investigators, trial funders and Clinical Research Network (CRN) staff. We also recruited current or past (within the last 5 years) adult trial participants. We sought diversity in trial participation and trial type, as well as age, gender, ethnicity and education. Recruitment sources were a local consent-to-contact database,17 the Greater Manchester Clinical Research Network and research team contacts, approached using an email from the research team and an advert. Before conducting the research, the researcher, the Director of the Trials Unit and the relevant CRN lead identified professionals who could participate in the focus group by circulating the study information. Trial professional recruitment sources were aware of the researcher’s interest in improving participant experience in trials. There was no existing relationship with previous trial participants; they responded to the advert directly to the research team with no prior knowledge of the researcher’s interest.

Data collection

One focus group (n=6) was conducted with professionals by author CP in April 2018. Individual interviews with professionals (n=10) were conducted by CP between April and July 2018, either in person at the Trial Unit in the UK or by telephone from the University of Manchester. Trial participant interviews were conducted in person (n=10) or by telephone (n=4) by the lead author, NS, between May and July 2018. Each participant took part in one interview or focus group lasting approximately one hour; only the participant(s) and researcher were present. Both CP and NS are experienced female health services researchers with PhDs, employed as Research Associates at the time of the study. In-person interviews took place in a convenient and private location, with travel expenses reimbursed and a £10 shopping voucher given as thanks. Of note, no participants refused to participate at interview.

Table 2 shows the focus group and interview participant characteristics for the study. Of the 11 trial professional participants interviewed, 36% were CRN staff, 27% were clinical trials unit directors, 18% were principal investigators, and 18% were trial funders. Of the 6 focus group participants, two were Trial Managers and the others were a Clinical Trials Unit Director, a Statistician, a Quality Assurance Trialist, and a Clinical Research Fellow. Of the 14 trial participants interviewed, 64% were male, ages ranged from 30 to 78 years, 50% reported a postgraduate qualification, and 64% were of White English ethnicity.

Table 2. Phase 1 focus group and interview participant characteristics.

ID | Role | Patient trials experience | Sex | Age | Ethnicity | Educational qualification | Recruitment source
FGCTU | Clinical trials unit director | NA | - | - | - | - | -
FGCTU | Trial Manager | NA | - | - | - | - | -
FGCTU | Statistician | NA | - | - | - | - | -
FGCTU | Quality Assurance | NA | - | - | - | - | -
FGCTU | CRF | NA | - | - | - | - | -
FGCTU | Trial Manager | NA | - | - | - | - | -
PR1 | Clinical trials unit Director | NA | - | - | - | - | -
PR2 | Clinical trials unit Director | NA | - | - | - | - | -
PR3 | Clinical trials unit Director | NA | - | - | - | - | -
PR4 | Chief Investigator | NA | - | - | - | - | -
PR5 | Chief Investigator | NA | - | - | - | - | -
PR6 | Funder | NA | - | - | - | - | -
PR7 | Funder | NA | - | - | - | - | -
PR8 | CRN | NA | - | - | - | - | -
PR9 | CRN | NA | - | - | - | - | -
PR10 | CRN | NA | - | - | - | - | -
PR11 | CRN | NA | - | - | - | - | -
TP1 | Retired | Podiatry | Female | 65 | White English | None | PPI
TP2 | Part time | Cosmetic (multiple) | Female | 65 | White English | GCSE | PPI
TP3 | Retired | Renal | Male | 70 | White English | Other qualifications | RFTF
TP4 | Retired | Alzheimer’s (multiple) | Male | 78 | White English (South African origin) | Degree or equivalent | LCRN
TP5 | Part time | Dementia | Female | 61 | White English | Degree or equivalent | ECRN
TP6 | Retired | Lung cancer, cardiac (multiple) | Male | 73 | White English | GCSE or equivalent | ECRN
TP7 | Retired | Cardiac | Male | 82 | White English | Degree or equivalent | ECRN
TP8 | Retired | Cardiac | Male | 77 | White British | GCE A level or equivalent | GMCRN
TP9 | Part time | Eyes, skin, mental health (multiple) | Male | 36 | Black African | Degree or equivalent | CI
TP10 | Part time | Mental health, autism, spectacles, excessive perspiration (multiple) | Male | 38 | Black Caribbean | Other (BTEC) | CI
TP11 | Part time | Mental health, eyes (multiple) | Male | 30 | Black Caribbean | Degree or equivalent | CI
TP12 | Part time | Mental health | Female | 52 | White Irish | Higher education below degree level | PPI
TP13 | Unemployed | Mental health | Female | 46 | White English | Degree or equivalent | PPI
TP14 | Full time | Inflammatory bowel disease | Male | 40 | Other White: half Jewish, half white English | Degree or equivalent | GMCRN

Analysis

Interviews were audio-recorded and transcribed verbatim, field notes were made, and transcripts were uploaded to NVivo (version 11) for coding and analysis by NS and CP. Research team members read early transcripts, suggesting codes and offering avenues to explore with participants. PAG members each read one professional and one participant transcript to add their insights. Of note, transcripts were not returned to participants for comment, feedback or correction. Transcripts from trial participant interviews were analysed thematically by NS.18 Themes were generated, coded and categorised according to the developing framework. A document listing the codes from each transcript, with excerpts of data relevant to each theme, was developed to manage the developing framework. Data from trial professionals were coded separately from participant data by CP, and then compared. This and our previous work10 informed the drafting of the measure to be tested by cognitive testing (consisting of 9 candidate domains and 52 items).
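
As a minimal sketch only, the snippet below shows one way such a ‘codes per transcript with relevant excerpts’ document could be assembled programmatically; the transcript IDs, codes and excerpt text are hypothetical placeholders, and this is not the authors’ tooling (coding and analysis were done in NVivo).

```python
from collections import defaultdict

# Hypothetical coded excerpts: (transcript_id, code, excerpt).
# All identifiers, codes and excerpt text are illustrative placeholders, not study data.
coded_excerpts = [
    ("T01", "information provision", "placeholder excerpt about the consent information"),
    ("T01", "sharing results", "placeholder excerpt about receiving trial results"),
    ("T02", "trial team contact", "placeholder excerpt about contact with the trial team"),
]

# Hypothetical mapping of codes to developing themes (the framework).
framework = {
    "information provision": "Early stage information provision and trial processes",
    "sharing results": "Sharing trial results with participants",
    "trial team contact": "Engagement with trial team",
}

def build_framework_matrix(excerpts, framework):
    """Group excerpts by theme and then by transcript, mirroring a document that
    lists the codes from each transcript with the excerpts relevant to each theme."""
    matrix = defaultdict(lambda: defaultdict(list))
    for transcript_id, code, excerpt in excerpts:
        theme = framework.get(code, "Uncategorised")
        matrix[theme][transcript_id].append((code, excerpt))
    return matrix

if __name__ == "__main__":
    for theme, transcripts in build_framework_matrix(coded_excerpts, framework).items():
        print(theme)
        for transcript_id, items in transcripts.items():
            for code, excerpt in items:
                print(f"  {transcript_id} [{code}]: {excerpt}")
```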

Phase 2: Cognitive testing

Cognitive testing is a method used to assess the performance of a measure by collecting information about respondents’ thought processes as they answer questions, using ‘think aloud’ techniques to inform adaptation of the measure.19

Participants

We invited trial participants from phase 1 and new participants using the same recruitment methods (via email and advert) and purposive sampling. There is no consensus around sample size for cognitive testing, as this depends on the type of cognitive process examined and the number of ‘rounds’ necessary to adjust the measure.20 Here, a round was considered complete when clear problems had been identified with items, warranting adjustments to the items and the measure.

Data collection

The study was conducted within the Centre for Primary Care and Health Services Research at The University of Manchester, UK. Cognitive interviews took place in August 2018 in a private room for approximately one hour with only the participant and researcher, NS, present. Interviews were audio-recorded, with note taking, but not transcribed. As before, participants received travel expenses and a shopping voucher.

NS received training on cognitive testing ahead of the study. The topic guide included a full explanation of the ‘think aloud’ task and open-ended questions and probes (spontaneous and pre-prepared) to assess different cognitive processes: comprehension; recall; judgement; and response.21 We opted for an ‘immediate retrospective think aloud’ approach, whereby we asked participants to read each item while thinking aloud and to respond to each item. After reviewing the topic guide, the research team and PAG judged that the questions and topic areas were appropriate.

Cognitive testing took place over two rounds (n = 13 and n = 6), where the results of round 1 informed round 2, with each respondent participating once. No participants refused to participate at interview. Of the 19 participants, 63% were male, ages ranged from 40 to 86 years, 58% reported a postgraduate qualification, and 74% had previous trial experience (see Table 3).

Table 3. Phase 2: Cognitive testing respondent characteristics.

ID | Gender | Age | Ethnic group | Highest educational qualification | Employment status | Previous trial participation | Recruitment avenue
CT1 | Female | 58 | White British | Below degree | Full time | N | RFTF
CT2 | Male | 80 | White British | Degree | Self-employed | Y | RFTF
CT3 | Male | 49 | White British | Below degree | Full time | N | RFTF
CT4 | Male | 86 | White British | None | Retired | N | RFTF
CT5 | Male | 57 | White British | Other (CSE) | Full time | Y | RFTF
CT6 | Female | 69 | White British | Other (ACCA) | Retired | Y | RFTF
CT7 | Male | 64 | White British | Higher degree | Retired | Y | RFTF
CT8 | Male | 63 | White British | Below degree | Retired | N | RFTF
CT9 | Female | 65 | White British | GCSE | Part time | Y* | PPI
CT10 | Female | 55 | White British | Higher degree | Student | Y | RFTF
CT11 | Male | 70 | White British | Other (ACCA) | Retired | Y | RFTF
CT12 | Male | 57 | Asian/British | Other (CSE) | Unemployed | N | RFTF
CT13 | Female | x | White British | Degree | Retired | Y | RFTF
CT14 | Female | 61 | White British | Degree | Retired | Y* | ECRN
CT15 | Male | 70 | White English | Other qualifications | Retired | Y* | RFTF
CT16 | Male | 73 | White English | GCSE or equivalent | Retired | Y* | ECRN
CT17 | Male | 82 | White English | Degree or equivalent | Full time | Y* | ECRN
CT18 | Female | 46 | White English | Degree or equivalent | Unemployed | Y* | PPI
CT19 | Male | 40 | Mixed | Degree or equivalent | Full time | Y* | GMCRN

Analysis

For data analysis, we followed the three-stage process of a modified framework analysis,20 involving chronicling, condensing, and using the data to improve the questionnaire; NS prepared the analysis. PAG members each read cognitive testing guidance prepared by NS and added their insights to the analysis at the meeting held to refine the items. The research team and PAG made decisions on changes, including changes to items, the items contained within each domain, the domain names (to reflect the revised content), the response scales and the measure layout. We followed the published reporting framework for cognitive testing.22

Ethical considerations

The study received Proportionate University Research Ethics Committee approval (Ref: 2018-2739-5920). Participants who expressed an interest in participating in the study via the advert were emailed a full Participant Information Sheet outlining study involvement and had the opportunity to consider the information and ask questions ahead of consenting to be interviewed. Those who agreed to participate gave written informed consent ahead of the interview, covering how the data collected would remain confidential; the use of the audio recording; the possibility that data collected would be archived and used as anonymous data in subsequent research (also known as secondary data analysis); and the use of anonymous quotes in academic books, reports or journals.

Results

Phase 1: Measure development

We report themes and illustrative quotes from interviews and focus group participants, and insights from the PAG (each quote is tagged to indicate method of data collection and the participant’s role).

(a) Advantages and disadvantages of measuring participant experience

All participants mentioned that they thought the standardised measurement of participant experience had potential to improve trial experience.

If you’re looking at the results of that patient experience questionnaire informing future behaviour and future trials, it would be invaluable, wouldn’t it? We get involved in the design, the application … (focus group, clinical trials unit staff).

Whether this benefit was realised as the trial was running or in future trials was discussed. Some interviewees described the potential to use the experience data, if it highlighted the benefits of participation, to market future trials to potential participants and to promote trials generally.

It would be good if they do think that being in the trial has helped in anyway, possibly other than response, so added value, it would be great to know that so that we can gather a body of evidence to try to put that information out there for patients more generally. (PR8, interview, clinical research network staff).

The main concern raised by trial professional interviewees centred on burden to participants, particularly in stressful situations.

They’ve been through emotions of the medical treatment and the emotional trauma of having an early baby … You’ve got people like me who go up to them and say, ‘can we put the baby in a trial?’ … ‘Did they get it right?’ (PR4, interview, chief investigator of a running trial).

The majority of trial interviewees focused on feasibility, in terms of when completion would be easiest for the participant and how useful the experience data captured would be.

If it’s a long trial, then some of these [items] would be perhaps partway through, but you’d have to vary the questions…. Close to the end of the trial, so you haven’t forgotten what happened (TP1, interview, previous trial participant, podiatry trial).

The interplay between the intervention, perception of benefit, and impact of receiving trial results was also a concern expressed by interviewees.

If they weren’t on that drug and saw the results of the trial that said, this drug worked and they didn’t get it, they may have had a good experience as far as being in research but seeing the fact that actually the drug that they didn’t get worked might influence how they put over their experience (PR8, interview, clinical research network staff).

(b) Domains to be included in the measure

Candidate domains were identified by the research team for inclusion, relating specifically to the different phases in the trial (which the PAG described as ‘the participant journey’).

(b1) Early stage information provision and trial processes

Professionals were concerned that, if the measure was administered at the end of the trial, recall would make it problematic to assess information and processes occurring early in the trial. The majority of interviewees saw early-stage processes as important to assess.

What was the consent process there like? Did they receive enough information? Was the information understandable? So all the sort of things that would support that pre-participation decision-making … Were they allowed enough time to consider participation? (PR7, interview, funder of trials).

Trial participants spoke of receiving too much information, with complicated text, and wanted specific information to manage their expectations of participation.

Information given was easy to understand but perhaps a little over explained… I would help a trial regardless - but the cost needs to be thought about as it is someone’s time, especially if they have to come out of work to attend appointments. (TP14, interview, current trial participant, inflammatory bowel disease trial).

(b2) Perception of conduct of trial processes

How a trial was conducted and the practical issues involved (e.g., waiting times and travel) were seen as important by all interviewees:

… the procedures themselves, it’s saying what you’re doing, it’s all the aspects of the trial itself, how much burden was added onto the parents as a result of it, was it an emotional burden or do they have to do something burden, both of them have obviously got different meaning. (PR4, interview, chief investigator of trials).

(b3) Sharing trial results with participants and perception of trial processes

There was discussion around whether measures should be administered before or after the sharing of trial results, and frustration described by participants when results were not shared by the trial.

… the guy [trial staff] was really good at explaining everything, really, really good. Because putting all these things on your head is quite a big thing and extremely reassuring, which meant a lot. The only thing he did is he promised he would send me the results of my brain scan, because I was interested and he never did. I was disappointed and I never got the results of the study, which I was really interested to get. (TP12, interview, previous trial participant, mental health trial).

(b4) Engagement with trial team

This domain relates to the communication between the team and the participant across the trial journey and the extent to which participants ‘feel a part’ of the trial and have confidence and trust in staff. Having a relationship with the trial team was viewed as vital by participants.

And during the study, I think this is more about retention and what kind of negative or positive experiences would help people to remain in trials. So again, I think experiences around how they were engaged with the team would be quite useful (PR3, interview, clinical trials unit director).

(b5) Perceived benefits of participation for participants

This theme relates specifically to the perceived benefits of participation other than satisfaction, to increase understanding of why participation in trials is important.

It would be good if they do think that being in the trial has helped in anyway … It would be great to know that so that we can gather a body of evidence to try to put that information out there (PR8, interview, clinical research network staff).

(b6) Perceived satisfaction with participation

Interviewees discussed the utility of a question on overall satisfaction. Some interviewees and PAG members felt that it would be preferable to ask whether a participant would take part in the same trial again rather than asking about taking part in research more generally.

I would agree with asking about taking part in another research study. About the experience of taking part… you know, had a good experience of taking part, would you participate in a future study (PR9, interview, clinical research network staff).

Some trial professional interviewees referenced the assessment of patient experience of clinical care, which has become a mainstay of quality improvement in the NHS.

We do ask patients in the NHS about their satisfaction with the care received. We do that anyway so why is it any different? (PR8, interview, clinical research network staff).

(c) Logistics of administering an experience measure

Research participants and PAG members raised a number of logistical considerations around delivering experience measures. These are summarised as a checklist (Table 4) and outlined briefly here; an illustrative sketch of how the checklist decisions could be recorded follows the table. The majority of interviewees felt that measuring experience in participants who are no longer in the trial (i.e. lost to follow-up or withdrawn) would provide important data on patient experience. The consensus was that the questionnaire needed to be easy to read and short (between 1 and 4 A4 sides, taking between 3 and 10 minutes to complete). A variety of modes were suggested (self-complete questionnaire: paper, or electronic via a website or app). Different time-points were suggested by interviewees and can be broadly categorised as ‘during the trial’ or ‘end of trial’. There was no consensus as to which option was best, but flexibility was seen as key.

Table 4. A checklist for using a participant experience measure.

Checklist | Issue | Questions | Considerations
Logistical considerations | Timing of administration | What is the most appropriate time-point: during trial or end of trial? | If ‘during trial’, at what time point(s)? If ‘end of trial’, define the end, e.g. last assessment, last assessment of final participant, trial close, results publication etc.
Logistical considerations | Timing of administration | Will the measure be administered at the same time as other trial procedures/assessments? | If so, where in the sequence of assessments? If not, when (and is this likely to affect response rates)? Is the measure likely to influence or be influenced by other assessments and, if so, what are the implications for timing?
Logistical considerations | Mode of administration | What mode is appropriate? | What is appropriate for the participant population? Is there existing evidence around best mode for this population? Is there an opportunity for PPI approaches to inform mode?
Logistical considerations | Mode of administration | What mode is feasible? | How many participants will be sampled and is mode feasible for sample size? i.e. is face-to-face feasible with 10,000 participants and no planned contact with the research team?
Logistical considerations | Guidance | What is the process for administering and analysing the results? | Are relevant instructions available to members of the site team that require them? Are relevant instructions available to members of the clinical trials unit that require them?
Logistical considerations | Guidance | Who on the research team needs access to guidance? | Who will come into contact with the measure or the data generated from it?
Logistical considerations | Costings | Are there any costs to delivering the measure? | What are the likely costs? Are these covered?
Logistical considerations | Mitigating distress | Is completion of the measure likely to cause distress? | If so, is a distress protocol in place? Can distress be mitigated through considering overall burden, timing of administration etc.?
Logistical considerations | Data management | How will data be managed? | Mode dependent
Logistical considerations | Data management | Is data being transferred and, if so, is a data transfer agreement in place? | If data is being transferred from site to clinical trials unit, for example, what is the process and is this process any different to other assessments?
Logistical considerations | Data management | Who will be responsible for analysing data? | Research team, clinical trials unit, clinical research network, other
Logistical considerations | Data management | Who will results be communicated to and at what time-point? | -
Logistical considerations | Contacting participants that withdraw or are lost to follow-up | Will participants in either of these categories be asked to complete the measure? | If so, at what time-point and what mode options will be available? Has ethical approval been granted? Will participants be reimbursed for their time?
Questionnaire design considerations | Tailored questions or responses | Which questions require information tailored to the trial? | What will be the process for tailoring questions/response options where applicable? Is there an opportunity for PPI approaches to inform tailoring?
Questionnaire design considerations | Open-ended questions | Is there resource to analyse the open-ended questions in the measure? | If not, will these questions/response options be removed?
Questionnaire design considerations | Demographics | Will data from the measure be linked to demographic data? | If so, how? If not, will demographic data be collected as part of the measure?
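
Purely as an illustration of how a trial team might record its answers to the Table 4 questions, the sketch below encodes the checklist decisions as a small per-trial configuration; the class, field names and example values are hypothetical and are not part of the measure or of any study documentation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExperienceMeasurePlan:
    """Hypothetical record of the Table 4 checklist decisions for one trial."""
    timing: str                       # e.g. "end of trial (final participant assessment)"
    modes: List[str]                  # e.g. ["paper", "web"]
    guidance_holders: List[str]       # who needs the administration/analysis guidance
    costs_covered: bool
    distress_protocol: Optional[str]  # None if completion is unlikely to cause distress
    data_transfer_agreement: bool
    analysis_owner: str               # e.g. "clinical trials unit"
    include_withdrawn_participants: bool
    tailored_items: List[str] = field(default_factory=list)
    link_to_demographics: bool = False

# Example usage with entirely illustrative values.
plan = ExperienceMeasurePlan(
    timing="end of trial (final participant assessment)",
    modes=["paper", "web"],
    guidance_holders=["site team", "clinical trials unit"],
    costs_covered=True,
    distress_protocol=None,
    data_transfer_agreement=True,
    analysis_owner="clinical trials unit",
    include_withdrawn_participants=True,
)
print(plan)
```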

Phase 2: Cognitive testing

The cognitive testing highlighted issues around comprehension, recall and response to the candidate items. In summary, from 2 rounds of cognitive testing, we removed 30 items from the measure. We refined 46 items using respondents’ preferred terms to provide clarity and inserted examples grounded in participants’ experiences in 9 of these items, and 2 items were changed from statements to questions. The response options were enhanced to ensure the measure captured a range of trial experiences: binary responses were inserted in 9 items; graded responses were retained in 14 items; a ‘not applicable’ option was added to 6 response scales and a ‘no opinion’ option to 4 scales; and free text was added to 4 items. Figure 1 summarises the development process of the measure, showing the domain and item refinement process throughout phase 1 and phase 2. Three of the 9 candidate domains were amalgamated and 3 were re-labelled.

By the end of the cognitive testing phase, consensus was reached among the research team that the final 6 core domains and 25 items (including one final free-text response) captured a ‘meaningful’ trial participant experience. The short measure was subsequently developed based on those items viewed as preferable by respondents, with between 1 and 9 items tapping each of the 6 domains, totalling 17 items. We have uploaded versions of the measure to the OSF repository.
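
The item content itself is held in the OSF repository and is not reproduced here. Purely to illustrate how the response-option types described above (binary, graded, ‘not applicable’, ‘no opinion’ and free text) might be represented on a digital platform (see Implications), the sketch below uses invented placeholder items; none of the item wording is taken from the measure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Item:
    """One questionnaire item; the text and options here are invented placeholders."""
    domain: str
    text: str
    response_type: str                     # "binary", "graded", or "free_text"
    options: Optional[List[str]] = None
    allow_not_applicable: bool = False
    allow_no_opinion: bool = False

# Illustrative items only; the real item content is in the OSF repository.
items = [
    Item("Engagement with trial team",
         "Placeholder: did staff keep you informed?",
         "binary", options=["Yes", "No"]),
    Item("Perception of trial processes",
         "Placeholder: how easy were the trial visits to attend?",
         "graded", options=["Very easy", "Easy", "Difficult", "Very difficult"],
         allow_not_applicable=True),
    Item("Overall experience",
         "Placeholder: anything else about your experience?",
         "free_text"),
]

def render_options(item: Item) -> List[str]:
    """Expand an item's response options, appending N/A or 'No opinion' where allowed."""
    opts = list(item.options or [])
    if item.allow_not_applicable:
        opts.append("Not applicable")
    if item.allow_no_opinion:
        opts.append("No opinion")
    return opts

for item in items:
    print(item.domain, "-", item.text, render_options(item))
```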

Discussion

The study aimed to develop a useful measure of participant experience of trials. We describe a detailed development study incorporating the views of both trial professionals and trial participants, with cognitive testing to assess understanding and acceptability.

Strengths and limitations

There are limitations to the study. First, neither the phase 1 nor the phase 2 sample included participants who had not completed a trial they had been recruited to, whether because they had withdrawn or were lost to follow-up. Our sample might therefore be biased towards trial participants who had a more positive experience, although there were instances in the data where trial participants reported otherwise.

Most of our participants were white (88%), indicating the need for further work to explore under-represented groups. This supports wider initiatives underway to support trialists to design and conduct inclusive trials.7 We did not formally assess the readability of the items, instead relying on educational level as a proxy indicator of health literacy, and we were also less successful in recruiting a diverse group of participants. The PAG provided an important opportunity to integrate patient insights into development, but all members were known to the team through existing networks and had interests in the research topic.

The data were collected over 4 years ago, with publication delayed by the pandemic and other issues. Although we expect that many of the issues remain pertinent, changes in the delivery of trials, such as moves to more remote methods,23 would not have been captured.

Implications

The measurement of trial participant experience may be important for improving delivery of trials, exploring variation in experience across different sites and over time, and between groups within a trial. However, this will require feedback of the data and resulting change. Effective feedback processes (such as guidance on how to interpret the results of the measure) need to be co-produced with stakeholders to ensure they are acceptable and useful.24 It will also be important to assess whether potential disadvantages of participation are realised.

Our measure has been piloted with three ongoing trials in the UK, with results reported in a forthcoming paper. Future work will also need to examine the factors that influence participant experience, and how much of the variation in experience is due to context (for example, condition experienced), trial type, participant characteristics, or aspects of the trial (for example, the intervention and trial procedures). It will also be important to explore the extent to which experience is relevant in all trials. For example, some trials have little active participation either because of the trial design (such as, cluster trials without individual consent) or because of the duration and extent of follow-up. We do not foresee any challenges translating the measure to a digital platform given there are substantial precedents for capturing patient experience and study experience data digitally.25 We report a feasibility study of the use of the measure elsewhere.

Conclusions

Both trial professionals and trial participants consider the standardised assessment of participant experience of trials important. In this paper, we have outlined the core domains that should be assessed to measure participant experience in trials and provided measures for further assessment.

Open Peer Review

Reviewer Report 19 Jul 2024
Elizabeth Pellicano, University College London Research Department of Clinical Educational and Health Psychology, London, England, UK
Approved with Reservations
"This study sought to develop a measure to capture participants’ experiences of taking part in research trials. I very much appreciated the premise of this study. This could be a really useful tool for researchers. The degree and nature of ..."

Reviewer Report 10 Jun 2024
Timothy Pickles, Centre for Trials Research, Cardiff University, Cardiff, Wales, UK
Approved with Reservations
"Methods - Study design: What does standardized mean? Not defined. What are trial professionals? Not defined in this section. No mention of COSMIN guidelines for evidence of content validity. The following should be referenced as a minimum ..."