Complex interventions and their implications for systematic reviews: A pragmatic approach

2015 · Vol 52 (7) · pp. 1211-1216
Author(s): Mark Petticrew, Laurie Anderson, Randy Elder, Jeremy Grimshaw, David Hopkins, ...

2013 · Vol 66 (11) · pp. 1209-1214
Author(s): Mark Petticrew, Laurie Anderson, Randy Elder, Jeremy Grimshaw, David Hopkins, ...

2019 · Vol 4 (Suppl 1) · pp. e000882
Author(s): Kate Flemming, Andrew Booth, Ruth Garside, Özge Tunçalp, Jane Noyes

This paper is one of a series, commissioned by the WHO, exploring the implications of complexity for systematic reviews and guideline development. It specifically explores the role of qualitative evidence synthesis, the broad term for the group of methods used to undertake systematic reviews of qualitative research evidence. As an approach, qualitative evidence synthesis is increasingly recognised as having a key role to play in addressing questions relating to intervention or system complexity and in guideline development processes. This is due to the unique role qualitative research can play in establishing the relative importance of outcomes; the acceptability, fidelity and reach of interventions; their feasibility in different settings; and their potential consequences for equity across populations. This paper outlines the purpose of qualitative evidence synthesis, details how qualitative evidence syntheses can help establish understanding and explanation of the complexity that can occur in relation to both interventions and systems, and shows how qualitative evidence syntheses can contribute to evidence-to-decision frameworks. It provides guidance on the choice of qualitative evidence synthesis methods in the context of guideline development for complex interventions, giving ‘real life’ examples of where this has occurred, and offers information to support decision-making around the choice of qualitative evidence synthesis methods in that context. Approaches for reporting qualitative evidence syntheses are discussed alongside mechanisms for assessing confidence in the findings of a review.


2016 · Vol 75 · pp. 78-92
Author(s): Jane Noyes, Maggie Hendry, Andrew Booth, Jackie Chandler, Simon Lewin, ...

2019 · Vol 16 (8) · pp. 667-676
Author(s): Colin B. Shore, Gill Hubbard, Trish Gorely, Robert Polson, Angus Hunter, ...

Background: Exercise referral schemes (ERS) are prescribed programs to tackle physical inactivity and associated noncommunicable disease. Inconsistencies in reporting, recording, and delivering ERS make it challenging to identify what works, why, and for whom. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guided this narrative review of reviews. Electronic databases were searched for systematic reviews of ERS. Inclusion criteria were applied, and review quality was assessed using A Measurement Tool to Assess Systematic Reviews (AMSTAR). Data on uptake, attendance, and adherence were extracted. Results: Eleven reviews met the inclusion criteria. AMSTAR quality was medium. Uptake ranged between 35% and 81%. Groups more likely to take up ERS included (1) females and (2) older adults. Attendance ranged from 12% to 49%. Men were more likely to attend ERS. The effect of medical diagnosis on uptake and attendance was inconsistent. The exercises prescribed were unreported; therefore, adherence to exercise prescriptions was also unreported. Reporting on the influence of theoretically informed approaches on uptake, attendance, and adherence was generally lacking; however, self-determination, peer support, and supervision were reported as influencing attendance. Conclusions: There was insufficient reporting across studies about uptake, attendance, and adherence. Complex interventions such as ERS require consistent definitions, recording, and reporting of these key facets, but this is not evident from the existing literature.


2017 · Vol 13 (2) · pp. 138-156
Author(s): Alex Pollock, Eivind Berge

High-quality, up-to-date systematic reviews are essential to help healthcare practitioners and researchers keep up to date with a large and rapidly growing body of evidence. Systematic reviews answer pre-defined research questions using explicit, reproducible methods to identify, critically appraise and combine the results of primary research studies. Key stages in the production of systematic reviews include clarification of aims and methods in a protocol, finding relevant research, collecting data, assessing study quality, synthesizing evidence, and interpreting findings. Systematic reviews may address different types of questions, such as questions about the effectiveness of interventions, diagnostic test accuracy, prognosis, the prevalence or incidence of disease, the accuracy of measurement instruments, or qualitative data. For all reviews, it is important to define criteria such as the population, intervention, comparison and outcomes, and to identify potential risks of bias. Reviews of the effects of rehabilitation interventions, or reviews of data from observational studies, diagnostic test accuracy studies, or qualitative research, may be more methodologically challenging than reviews of the effectiveness of drugs for the prevention or treatment of stroke. Challenges in reviews of stroke rehabilitation can include poor definition of complex interventions, use of outcome measures that have not been validated, and poor generalizability of results. There may also be challenges with bias, because the effects depend on the persons delivering the intervention and because masking of participants and investigators may not be possible. A wide range of resources can support the planning and completion of systematic reviews, and these should be considered when planning a systematic review relating to stroke.

