Process evaluation within pragmatic randomised controlled trials: what is it, why is it done, and can we find it?—a systematic review

Trials ◽  
2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Caroline French ◽  
Hilary Pinnock ◽  
Gordon Forbes ◽  
Imogen Skene ◽  
Stephanie J. C. Taylor

Abstract

Background: Process evaluations are increasingly conducted within pragmatic randomised controlled trials (RCTs) of health services interventions and provide vital information to enhance understanding of RCT findings. However, issues pertaining to process evaluation in this specific context have been little discussed. We aimed to describe the frequency, characteristics, labelling, value, practical conduct issues, and accessibility of published process evaluations within pragmatic RCTs in health services research.

Methods: We used a 2-phase systematic search process to (1) identify an index sample of journal articles reporting primary outcome results of pragmatic RCTs published in 2015 and then (2) identify all associated publications. We used an operational definition of process evaluation based on the Medical Research Council’s process evaluation framework to identify both process evaluations reported separately and process data reported in the trial results papers. We extracted and analysed quantitative and qualitative data to answer the review objectives.

Results: From an index sample of 31 pragmatic RCTs, we identified 17 separate process evaluation studies. These had varied characteristics, and only three were labelled ‘process evaluation’. Each of the 31 trial results papers also reported process data, with a median of five different process evaluation components per trial. Reported barriers and facilitators related to real-world collection of process data, recruitment of participants to process evaluations, and health services research regulations. We synthesised a wide range of reported benefits of process evaluations to interventions, trials, and wider knowledge. Visibility was often poor, with 13/17 process evaluations not mentioned in the trial results paper and 12/16 process evaluation journal articles not appearing in the trial registry.
Conclusions: In our sample of reviewed pragmatic RCTs, the meaning of the label ‘process evaluation’ appears uncertain, and the scope and significance of the term warrant further research and clarification. Although there were many ways in which the process evaluations added value, they often had poor visibility. Our findings suggest approaches that could enhance the planning and utility of process evaluations in the context of pragmatic RCTs.

Trial registration: Not applicable for PROSPERO registration.

BMJ ◽  
1995 ◽  
Vol 310 (6972) ◽  
pp. 125-126 ◽  
Author(s):  
S. Shepperd ◽  
C. Jenkinson ◽  
P. Morgan

Medical Care ◽  
2001 ◽  
Vol 39 (6) ◽  
pp. 627-634 ◽  
Author(s):  
Morris Weinberger ◽  
Eugene Z. Oddone ◽  
William G. Henderson ◽  
David M. Smith ◽  
James Huey ◽  
...  

BMJ ◽  
2006 ◽  
Vol 332 (7538) ◽  
pp. 413-416 ◽  
Author(s):  
Ann Oakley ◽  
Vicki Strange ◽  
Chris Bonell ◽  
Elizabeth Allen ◽  
Judith Stephenson

BMJ Open ◽  
2019 ◽  
Vol 9 (8) ◽  
pp. e025127 ◽  
Author(s):  
Hueiming Liu ◽  
Alim Mohammed ◽  
Janani Shanthosh ◽  
Madeline News ◽  
Tracey-Lea Laba ◽  
...  

Objective: Process evaluations (PEs) alongside randomised controlled trials of complex interventions are valuable because they address questions of for whom, how and why interventions had an impact. We synthesised the methods used in PEs of primary care interventions, and their main findings on implementation barriers and facilitators.

Design: Systematic review using the UK Medical Research Council guidance for PE as a guide.

Data sources: Academic databases (MEDLINE, SCOPUS, PsycINFO, Cumulative Index to Nursing and Allied Health Literature, EMBASE and Global Health) were searched from 1998 until June 2018.

Eligibility criteria: We included PEs alongside randomised controlled trials of primary care interventions which aimed to improve outcomes for patients with non-communicable diseases.

Data extraction and synthesis: Two independent reviewers screened and conducted the data extraction and synthesis, with a third reviewer checking a sample for quality assurance.

Results: 69 studies were included. There was an overall lack of consistency in how PEs were conducted and reported. The main weakness was that only 30 studies were underpinned by a clear intervention theory, often facilitated by the use of existing theoretical frameworks. The main strengths were robust sampling strategies and the triangulation of qualitative and quantitative data to understand an intervention’s mechanisms. Findings were synthesised into three key themes: (1) a fundamental mismatch between what the intervention was designed to achieve and local needs; (2) the required roles and responsibilities of key actors were often not clearly understood; and (3) the health system context—factors such as governance, financing structures and workforce—could, if unanticipated, adversely impact implementation.

Conclusion: Greater consistency is needed in the reporting and methods of PEs, in particular greater use of theoretical frameworks to inform intervention theory. More emphasis on formative research in designing interventions is needed to align the intervention with the needs of local stakeholders, and to minimise unanticipated consequences due to context-specific barriers.

PROSPERO registration number: CRD42016035572.

