Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks

2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Marina Krnic Martinic ◽  
Dawid Pieper ◽  
Angelina Glatt ◽  
Livia Puljak

Abstract Background A standard or consensus definition of a systematic review does not exist. Therefore, if secondary studies that analyse systematic reviews provide no definition of a systematic review, or the definition is too broad, inappropriate studies might be included in such evidence syntheses. The aim of this study was to analyse definitions of a systematic review (SR) in the health care literature and the elements those definitions use, and to propose a starting point for an explicit and unambiguous SR definition. Methods We included overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks. We extracted the definitions of SRs, as well as the inclusion and exclusion criteria that could indicate which definition of an SR the authors used. We extracted individual elements of SR definitions, categorised and quantified them. Results Among the 535 analysed sources of information, 188 (35%) provided a definition of an SR. The most commonly used reference points for the definitions of SRs were Cochrane and the PRISMA statement. We found 188 different elements of SR definitions and divided them into 14 categories. The highest number of SR definition elements was found in categories related to searching (N = 51), analysis/synthesis (N = 23), overall methods (N = 22), quality/bias/appraisal/validity (N = 22) and aim/question (N = 13). The same five categories were also the most commonly used combination of categories in the SR definitions. Conclusion Currently used definitions of SRs are vague and ambiguous, often using terms such as clear, explicit and systematic without further elaboration. In this manuscript we propose a more specific definition of a systematic review, with the ultimate aim of motivating the research community to establish a clear and unambiguous definition of this type of research.
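The categorise-and-quantify step described in the Methods lends itself to a simple tally. Below is a minimal Python sketch assuming a hypothetical extraction table in which each definition element found in a source has already been mapped to one of the 14 categories; the rows are invented for illustration, and only the category labels echo the abstract.

```python
from collections import Counter

# Hypothetical extraction table: (category, definition element) pairs.
# The rows are invented; only the category labels come from the abstract.
extracted_elements = [
    ("searching", "comprehensive literature search"),
    ("searching", "explicit, reproducible search strategy"),
    ("analysis/synthesis", "synthesis of the results of included studies"),
    ("overall methods", "pre-specified, systematic methods"),
    ("quality/bias/appraisal/validity", "critical appraisal of included studies"),
    ("aim/question", "clearly formulated research question"),
]

# Tally how often each category contributes an element to a definition,
# mirroring the categorise-and-quantify step reported in the Methods.
category_counts = Counter(category for category, _ in extracted_elements)
for category, count in category_counts.most_common():
    print(f"{category}: {count} element(s)")
```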

2021 ◽  
pp. bmjebm-2021-111710
Author(s):  
Rebecca Abbott ◽  
Alison Bethel ◽  
Morwenna Rogers ◽  
Rebecca Whear ◽  
Noreen Orr ◽  
...  

Objective: The academic and scientific community has reacted at pace to gather evidence to help and inform about COVID-19. Concerns have been raised about the quality of this evidence. The aim of this review was to map the nature, scope and quality of evidence syntheses on COVID-19 and to explore the relationship between review quality and the extent of researcher, policy and media interest. Design and setting: A meta-research: systematic review of reviews. Information sources: PubMed, Epistemonikos COVID-19 evidence, the Cochrane Library of Systematic Reviews, the Cochrane COVID-19 Study Register, EMBASE, CINAHL, Web of Science Core Collection and the WHO COVID-19 database, searched between 10 June 2020 and 15 June 2020. Eligibility criteria: Any peer-reviewed article reported as a systematic review, rapid review, overview, meta-analysis or qualitative evidence synthesis in the title or abstract addressing a research question relating to COVID-19. Articles described as meta-analyses but not undertaken as part of a systematic or rapid review were excluded. Study selection and data extraction: Abstract and full-text screening were undertaken by two independent reviewers. Descriptive information on review type, purpose, population, size, citation and attention metrics was extracted, along with whether the review met the definition of a systematic review according to six key methodological criteria. For those meeting all criteria, additional data on methods and publication metrics were extracted. Risk of bias: For articles meeting all six criteria required to meet the definition of a systematic review, AMSTAR-2 (A MeaSurement Tool to Assess systematic Reviews, version 2.0) was used to assess the quality of the reported methods. Results: 2334 articles were screened, resulting in 280 reviews being included: 232 systematic reviews, 46 rapid reviews and 2 overviews. Less than half reported undertaking critical appraisal and a third had no reproducible search strategy. There was considerable overlap in topics, with discordant findings. Eighty-eight of the 280 reviews met all six systematic review criteria. Of these, just 3 were rated as of moderate or high quality on AMSTAR-2, with the majority having critical flaws: only a third reported registering a protocol, and less than one in five searched named COVID-19 databases. Review conduct and publication were rapid, with 52 of the 88 systematic reviews reported as being conducted within 3 weeks, and a half published within 3 weeks of submission. Researcher and media interest, as measured by altmetrics and citations, was high and was not correlated with quality. Discussion: This meta-research of early published COVID-19 evidence syntheses found low-quality reviews being published at pace, often with short publication turnarounds. Despite being of low quality and many lacking robust methods, the reviews received substantial attention across both academic and public platforms, and the attention was not related to the quality of review methods. Interpretation: Flaws in systematic review methods limit the validity of a review and the generalisability of its findings. Yet, by being reported as ‘systematic reviews’, many readers may well regard them as high-quality evidence, irrespective of the actual methods undertaken. The challenge, especially in times such as this pandemic, is to provide indications of trustworthiness in evidence that is available in ‘real time’. PROSPERO registration number: CRD42020188822.
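As a rough illustration of the screening step that decided which articles went forward to AMSTAR-2 appraisal, the sketch below applies an all-or-nothing check against six criteria. The abstract does not enumerate the authors' six methodological criteria, so the criterion names here are placeholders, not the actual items used.

```python
# Six key methodological criteria were used to decide whether an article met the
# definition of a systematic review; the abstract does not list them, so the
# criterion names below are illustrative placeholders only.
CRITERIA = (
    "stated_research_question",
    "reported_eligibility_criteria",
    "reproducible_search_strategy",
    "documented_study_selection",
    "critical_appraisal_of_included_studies",
    "synthesis_of_findings",
)

def meets_sr_definition(review: dict) -> bool:
    """All-or-nothing check: only reviews meeting every criterion proceed to AMSTAR-2."""
    return all(review.get(criterion, False) for criterion in CRITERIA)

example = {criterion: True for criterion in CRITERIA}
example["critical_appraisal_of_included_studies"] = False  # a common gap per the findings
print(meets_sr_definition(example))  # False -> would not be appraised with AMSTAR-2
```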


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 221 ◽  
Author(s):  
Assem M. Khamis ◽  
Lara A. Kahale ◽  
Hector Pardo-Hernandez ◽  
Holger J. Schünemann ◽  
Elie A. Akl

Background: The living systematic review (LSR) is an emerging approach for improved evidence synthesis that uses continual updating to include relevant new evidence as soon as it is published. The objectives of this study are to: 1) assess the methods of conduct and reporting of living systematic reviews using a living study approach; and 2) describe the life cycle of living systematic reviews, i.e., describe the changes over time to their methods and findings. Methods: For objective 1, we will begin by conducting a cross-sectional survey and then update its findings every 6 months by including newly published LSRs. For objective 2, we will conduct a prospective longitudinal follow-up of the cohort of included LSRs. To identify LSRs, we will continually search the following electronic databases: Medline, EMBASE and the Cochrane Library. We will also contact groups conducting LSRs to identify eligible studies that we might have missed. We will follow standard systematic review methodology for study selection and data abstraction. For each LSR update, we will abstract information on the following: 1) general characteristics, 2) systematic review methodology, 3) living approach methodology, 4) results, and 5) editorial and publication processes. We will update the findings of both the surveys and the longitudinal follow-up of included LSRs every 6 months. In addition, we will identify articles addressing LSR methods to be included in an ‘LSR methods repository’. Conclusion: The proposed living methodological survey will allow us to monitor how the methods of conduct and reporting, as well as the findings, of LSRs change over time. Ultimately this should help to ensure the quality and transparency of LSRs.
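The 6-monthly living-update cadence described in the protocol can be pictured as a simple scheduling check. The sketch below is only an illustration of that cadence under stated assumptions: the 182-day interval stands in for "every 6 months", and the steps named in the print statement are placeholders, not the authors' tooling.

```python
from datetime import date, timedelta

# Sketch of the 6-monthly living-update cadence; the interval and the steps
# named below are assumptions made for illustration.
UPDATE_INTERVAL = timedelta(days=182)

def update_due(last_update: date, today: date) -> bool:
    """Return True when a scheduled update of the living survey is due."""
    return today - last_update >= UPDATE_INTERVAL

last_update = date(2019, 1, 15)
if update_due(last_update, date(2019, 7, 20)):
    print("Re-run the Medline, EMBASE and Cochrane Library searches and screen new LSRs")
```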


2021 ◽  
Author(s):  
Neal R Haddaway ◽  
Matthew J Page ◽  
Christopher C Pritchard ◽  
Luke A McGuinness

Background: Reporting standards, such as PRISMA, aim to ensure that the methods and results of systematic reviews are described in sufficient detail to allow full transparency. Flow diagrams in evidence syntheses allow the reader to rapidly understand the core procedures used in a review and examine the attrition of irrelevant records throughout the review process. Recent research suggests that the use of flow diagrams in systematic reviews is poor and of low quality, and has called for standardised templates to facilitate better reporting in flow diagrams. The increasing options for interactivity provided by the Internet give us an opportunity to support easy-to-use evidence synthesis tools, and here we report on the development of tools for the production of PRISMA 2020-compliant systematic review flow diagrams. Methods and Findings: We developed a free-to-use, Open Source R package and web-based Shiny app to allow users to design PRISMA flow diagrams for their own systematic reviews. Our tools allow users to produce standardised visualisations that transparently document the methods and results of a systematic review process in a variety of formats. In addition, we provide the opportunity to produce interactive, web-based flow diagrams (exported as HTML files) that allow readers to click on boxes of the diagram and navigate to further details on methods, results or data files. We provide an interactive example here: https://driscoll.ntu.ac.uk/prisma/. Conclusions: We have developed a user-friendly suite of tools for producing PRISMA 2020-compliant flow diagrams for users with coding experience and, importantly, for users without prior experience in coding, by making use of Shiny. These free-to-use tools will make it easier to produce clear and PRISMA 2020-compliant systematic review flow diagrams. Significantly, users can also produce interactive flow diagrams for the first time, allowing readers of their reviews to smoothly and swiftly explore and navigate to further details of the methods and results of a review. We believe these tools will increase use of PRISMA flow diagrams, improve the compliance and quality of flow diagrams, and facilitate strong science communication of the methods and results of systematic reviews by making use of interactivity. We encourage the systematic review community to make use of these tools, and to provide feedback to streamline and improve their usability and efficiency.
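The authors' tools are an R package and a Shiny web app, so the following Python sketch is not their implementation; it only illustrates, with invented numbers, the record-attrition bookkeeping that a PRISMA 2020 flow diagram reports (identification, deduplication, screening, full-text assessment, inclusion).

```python
from dataclasses import dataclass

# Invented counts; illustrates the attrition arithmetic a flow diagram encodes.
@dataclass
class PrismaCounts:
    identified: int                 # records retrieved from databases and registers
    duplicates_removed: int         # removed before screening
    excluded_at_screening: int      # title/abstract exclusions
    excluded_at_full_text: int      # full-text exclusions (with reasons, in practice)

    @property
    def screened(self) -> int:
        return self.identified - self.duplicates_removed

    @property
    def assessed_full_text(self) -> int:
        return self.screened - self.excluded_at_screening

    @property
    def included(self) -> int:
        return self.assessed_full_text - self.excluded_at_full_text

counts = PrismaCounts(identified=1200, duplicates_removed=200,
                      excluded_at_screening=850, excluded_at_full_text=120)
print(counts.screened, counts.assessed_full_text, counts.included)  # 1000 150 30
```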


2019 ◽  
Author(s):  
Julia Bidonde ◽  
Jose Francisco Meneses-Echavez ◽  
Angela Jean Busch ◽  
Catherine Boden

Abstract Background: Transparency is a tenet of systematic reviews. Searching for clinical trial registry records and published protocols has become a mandatory standard when conducting a systematic review of interventions. However, there is no comprehensive guidance for review authors on how to report the use of registry records and published protocols in their systematic review. The objective of this study was to generate initial guidance to assist authors of systematic reviews of interventions in reporting registry records and published protocols. Methods: We used a compilation of the procedures recommended by expert organizations (e.g., the Cochrane Collaboration) related to reporting the use of registry records and published protocols in the conduct of systematic reviews. The compilation was developed by one of the authors of this study and served as a starting point in developing the algorithm. We extracted current practice data related to registry records and published protocols from a stratified random sample of Cochrane systematic reviews of interventions published between 2015 and 2016 (n=169). We identified examples that adhered to or extended the current guidance. Based on the elements above, we created the algorithm to bridge gaps and improve current reporting practices. Results: Trial protocols should be used to account for all evidence in a subject area, evaluate reporting bias (i.e. selective reporting and publication bias), and determine the nature and number of ongoing or unpublished studies for planning review updates. Review authors’ terminology (e.g., ongoing, terminated) and consequent reporting in the review should reflect the phase of the trial found. Protocols should be clearly and consistently reported throughout the review (e.g. abstract, methods, results), as is done with published articles. Conclusions: Our study expands on available guidance to describe in greater detail the reporting of registry records and published protocols for review authors. We believe this is a timely investigation that will increase transparency in the reporting of trial records in systematic reviews of interventions and bring clarity to currently inconsistent terminology. We invite researchers to provide feedback on our work for its improvement and dissemination. Trial Registration: not applicable
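One way to picture the recommendation that review terminology should reflect the phase of the trial found is a simple mapping from registry status to reporting wording. The sketch below is purely illustrative: the status labels and the mapping are assumptions, not the authors' algorithm or any registry's controlled vocabulary.

```python
# Illustrative mapping from a registry record's recruitment status to the term
# a review might report; the labels and mapping are assumptions, not the
# authors' guidance verbatim.
STATUS_TO_REPORTING_TERM = {
    "recruiting": "ongoing study",
    "active, not recruiting": "ongoing study",
    "completed": "completed study (check for published or posted results)",
    "terminated": "terminated study",
    "withdrawn": "withdrawn study",
}

def reporting_term(registry_status: str) -> str:
    return STATUS_TO_REPORTING_TERM.get(registry_status.strip().lower(), "status unclear")

print(reporting_term("Recruiting"))   # -> ongoing study
print(reporting_term("Terminated"))   # -> terminated study
```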


2007 ◽  
Vol 25 (18_suppl) ◽  
pp. 4018-4018
Author(s):  
M. E. Buyse ◽  
K. J. Punt ◽  
C. H. Köhne ◽  
P. Hohenberger ◽  
R. Labianca ◽  
...  

4018 Background: Disease-free survival (DFS) is the primary endpoint of most trials testing adjuvant treatments. However, many other endpoints are used. There is much confusion about these endpoints, since different definitions were used among trials or no definitions were provided at all. Moreover, there is no consensus on either the definition of each endpoint or on which of these endpoints is most relevant. This creates difficulties when comparing the results of various trials. Methods: Adjuvant trials in colon cancer were used as a model. A systematic review was performed of published adjuvant studies in colon cancer from 1997–2006, and the definitions of endpoints other than overall survival (OS) were recorded. A panel of medical oncologists, surgical oncologists, and a statistician, all with expertise in randomised trials in colorectal cancer, aimed to reach consensus on the definition of the various endpoints as well as to select the most relevant among them. Results: A total of 52 studies were identified. In addition to overall survival, 8 other endpoints were used, and both the definition of these endpoints and their starting point differed considerably among the studies. No definition was provided for the endpoint in 19 (37%) studies and for the starting point in 30 (58%) studies. The panel reached consensus on the definition of each endpoint (table), and agreed that DFS, defined as the time from randomisation to any event irrespective of cause, was the most relevant endpoint for adjuvant studies. The date of randomisation was considered to be the most appropriate starting point. Conclusions: The proposed guideline will help in the design of future adjuvant studies in colon cancer, and will achieve the uniformity required to facilitate cross-study comparisons. It may serve as a model for adjuvant studies in other solid tumors. [Table: see text] No significant financial relationships to disclose.
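To make the consensus definition concrete, here is a minimal worked example under the assumption that a patient record carries a randomisation date, an optional first-event date (any cause), and a last follow-up date; the dates are invented, and this is an illustration of the definition rather than code from the trial analyses.

```python
from datetime import date
from typing import Optional, Tuple

# DFS per the consensus definition: time from randomisation to the first event
# of any cause, censored at last follow-up when no event has occurred.
def dfs_days(randomisation: date, first_event: Optional[date],
             last_followup: date) -> Tuple[int, bool]:
    """Return (time in days, event observed) for one patient."""
    if first_event is not None:
        return (first_event - randomisation).days, True
    return (last_followup - randomisation).days, False

print(dfs_days(date(2005, 3, 1), date(2006, 9, 15), date(2007, 1, 1)))  # (563, True)
print(dfs_days(date(2005, 3, 1), None, date(2007, 1, 1)))               # (671, False)
```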


2013 ◽  
Vol 26 (3) ◽  
pp. 373-381 ◽  
Author(s):  
Theodore D. Cosco ◽  
A. Matthew Prina ◽  
Jaime Perales ◽  
Blossom C. M. Stephan ◽  
Carol Brayne

ABSTRACT Background: Half a century after the inception of the term “successful aging” (SA), a consensus definition has not emerged. The current study aims to provide a comprehensive snapshot of operational definitions of SA. Methods: A systematic review across MEDLINE, PsycINFO, CINAHL, EMBASE, and ISI Web of Knowledge of quantitative operational definitions of SA was conducted. Results: Of the 105 operational definitions, across 84 included studies using unique models, 92.4% (97) included physiological constructs (e.g. physical functioning), 49.5% (52) engagement constructs (e.g. involvement in voluntary work), 48.6% (51) well-being constructs (e.g. life satisfaction), 25.7% (27) personal resources (e.g. resilience), and 5.7% (6) extrinsic factors (e.g. finances). Thirty-four definitions consisted of a single construct, 28 of two constructs, 27 of three constructs, 13 of four constructs, and two of five constructs. The operational definitions utilized in the included studies identify between <1% and >90% of study participants as successfully aging. Conclusions: The heterogeneity of these results strongly suggests the multidimensionality of SA and the difficulty in categorizing usual versus successful aging. Although the majority of operationalizations reveal a biomedical focus, studies increasingly use psychosocial and lay components. Lack of consistency in the definition of SA is a fundamental weakness of SA research.


2021 ◽  
Vol 4 ◽  
pp. 1-21
Author(s):  
Vanessa Picker ◽  
Eleanor Carter ◽  
Mara Airoldi ◽  
James Ronicle ◽  
Rachel Wooldridge ◽  
...  

Background: Across a range of policy areas and geographies, governments and philanthropists are increasingly looking to adopt a social outcomes contracting (SOC) approach. Under this model, an agreement is made that a provider of services must achieve specific, measurable social and/or environmental outcomes, and payments are only made when these outcomes have been achieved. Despite this growing interest, there is currently a paucity of evidence on the tangible improvement in outcomes associated with the implementation of these approaches. Although promising, evidence suggests that there are risks (especially around managing perverse incentives).[1] The growing interest in SOC has been accompanied by research on specific programmes, policy domains or geographies, but there has not been a systematic attempt to synthesise this emerging evidence. To address this gap, this systematic review aims to surface the best evidence on when and where effects have been associated with SOC. Methods: This mixed-methods systematic review protocol has been prepared using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) guidelines (Additional File 2) (Shamseer et al., 2010). The review aims to consult policymakers throughout the evidence synthesis process by adopting a user-involved research process. This will include the establishment and involvement of a Policy Advisory Group (PAG). The PAG will consist of a large, diverse, international group of policymakers who are or have been actively involved in funding and shaping social outcomes contracts (Additional File 3). The following electronic databases will be searched: ABI/INFORM Global, Applied Social Sciences Index & Abstracts (ASSIA), Scopus, International Bibliography of the Social Sciences (IBSS), PAIS Index, PolicyFile Index, ProQuest Dissertations and Theses, ProQuest Social Science, Social Services Abstracts, Web of Science, Worldwide Political Science Abstracts and PsycINFO. We will also conduct a comprehensive search of grey literature sources. Studies will be imported into Covidence and screened (after de-duplication) independently by two reviewers, using explicit inclusion/exclusion criteria. We will conduct risk of bias and quality assessment using recommended tools, and we will extract data using a pre-piloted, standardised data extraction form. If meta-synthesis cannot be conducted for the effectiveness component, we will carry out a descriptive narrative synthesis of the quantitative evidence, categorised by type of intervention, type of outcome/s, population characteristics and/or policy sector. The qualitative studies will be synthesised using thematic content analysis (Thomas and Harden 2008). If possible, we will also analyse the available economic data to understand the costs and benefits associated with SOC. Finally, we will conduct a cross-study synthesis, which will involve bringing together the findings from the effectiveness review, economic review and qualitative review. We recognise that the proposed conventional effectiveness review method may lead to inconclusive or partial findings given the complexity of the intervention, the likely degree of heterogeneity and the under-developed evidence base. We see a traditional systematic review as an important foundation to describe the evidence landscape. We will use this formal review as a starting point and then explore more contextually rooted review work in future.
Discussion: We will use the systematic review findings to produce accessible and reliable empirical insights on whether, when, and where (and if possible, how) SOC approaches deliver improved impact when compared to more conventional funding arrangements. The outputs will support policymakers to make informed decisions in relation to commissioning and funding approaches. Systematic review registration: This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 20 November 2020 and was last updated on 21 January 2021 (registration number PROSPERO CRD42020215207). [1] A perverse incentive in an outcomes-based contract is an incentive that has unintended and undesirable results. For instance, a poorly designed welfare-to-work scheme could create incentives for service providers to prioritise clients who are easier to help and to ‘park’ those who are harder to assist (NAO 2015).
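The de-duplication step that precedes dual independent screening can be pictured as keying each record on an identifier. The sketch below is only an illustration under that assumption (matching on DOI, or a normalised title when no DOI is available); it is not how Covidence or the authors' workflow actually detects duplicates, and the example records are invented.

```python
# Illustrative de-duplication before dual independent screening; the keying
# strategy and records below are assumptions made for the example.
def normalise(title: str) -> str:
    return " ".join(title.lower().split())

def deduplicate(records):
    seen, unique = set(), []
    for record in records:
        key = record.get("doi") or normalise(record["title"])
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

records = [
    {"title": "Social outcomes contracting: a review", "doi": "10.1000/example1"},
    {"title": "Social Outcomes Contracting:  A Review", "doi": "10.1000/example1"},
    {"title": "Payment by results in welfare-to-work"},
]
print(len(deduplicate(records)))  # 2 unique records remain for screening
```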


2019 ◽  
Vol 3 ◽  
pp. 157
Author(s):  
Fala Cramond ◽  
Alison O'Mara-Eves ◽  
Lee Doran-Constant ◽  
Andrew SC Rice ◽  
Malcolm Macleod ◽  
...  

Background: The extraction of data from the reports of primary studies, on which the results of systematic reviews depend, needs to be carried out accurately. To aid reliability, it is recommended that two researchers carry out data extraction independently. The extraction of statistical data from graphs in PDF files is particularly challenging, as the process is usually completely manual, and reviewers sometimes need to resort to holding a ruler against the page to read off values: an inherently time-consuming and error-prone process. Methods: To mitigate some of these problems, we integrated and customised two existing JavaScript libraries to create a new web-based graphical data extraction tool to assist reviewers in extracting data from graphs. This tool aims to facilitate more accurate and timely data extraction through a user interface that can be used to extract data through mouse clicks. We carried out a non-inferiority evaluation to examine its performance in comparison to standard practice. Results: We found that the customised graphical data extraction tool is not inferior to users’ previously preferred approaches. Our study was not designed to show superiority, but suggests that there may be a saving in time of around 6 minutes per graph, accompanied by a substantial increase in accuracy. Conclusions: Our study suggests that the incorporation of this type of tool in online systematic review software would be beneficial in facilitating the production of accurate and timely evidence synthesis to improve decision-making.
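Click-based extraction of this kind generally rests on a simple calibration: two clicked reference points with known axis values define a linear map from pixel coordinates to data coordinates. The Python sketch below illustrates that principle under the assumption of linear axes; it is not the authors' JavaScript tool.

```python
# Calibration underlying click-based graph extraction: two reference clicks with
# known axis values define a linear pixel-to-data mapping (linear axes assumed).
def make_axis_map(pixel_a: float, value_a: float, pixel_b: float, value_b: float):
    """Return a function converting a pixel coordinate to a data value."""
    scale = (value_b - value_a) / (pixel_b - pixel_a)
    return lambda pixel: value_a + (pixel - pixel_a) * scale

# Calibrate the y axis: the user clicks pixel row 400 at y = 0 and row 100 at y = 50.
to_y = make_axis_map(400, 0.0, 100, 50.0)
print(to_y(250))  # 25.0 -> the data value of a point clicked halfway up the axis
```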


2022 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuelun Zhang ◽  
Siyu Liang ◽  
Yunying Feng ◽  
Qing Wang ◽  
Feng Sun ◽  
...  

Abstract Background: Systematic review is an indispensable tool for optimal evidence collection and evaluation in evidence-based medicine. However, the explosive increase in primary literature makes it difficult to accomplish critical appraisal and regular updates. Artificial intelligence (AI) algorithms have been applied to automate the literature screening procedure in medical systematic reviews. These studies used different algorithms and reported results that varied considerably. It is therefore imperative to systematically review and analyse the automatic methods developed for literature screening and their effectiveness as reported in current studies. Methods: An electronic search will be conducted using the PubMed, Embase, ACM Digital Library, and IEEE Xplore Digital Library databases, as well as literature found through a supplementary search in Google Scholar, for automatic methods for literature screening in systematic reviews. Two reviewers will independently conduct the primary screening of the articles and data extraction; disagreements will be resolved by discussion with a methodologist. Data will be extracted from eligible studies, including the basic characteristics of the study, information on the training and validation sets, and the function and performance of the AI algorithms, and summarised in a table. The risk of bias and applicability of the eligible studies will be assessed by the two reviewers independently based on the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Quantitative analyses, if appropriate, will also be performed. Discussion: Automating the systematic review process is of great help in reducing the workload in evidence-based practice. Results from this systematic review will provide an essential summary of the current development of AI algorithms for automatic literature screening in medical evidence synthesis and help to inspire further studies in this field. Systematic review registration: PROSPERO CRD42020170815 (28 April 2020).
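One family of approaches such a review is likely to encounter is supervised text classification of titles and abstracts. The sketch below, using scikit-learn on toy data, shows that pattern in miniature; it is not any included study's model, and the records and labels are invented.

```python
# Toy illustration of automatic screening: a supervised classifier that ranks
# titles/abstracts by predicted probability of inclusion. Invented data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "randomised controlled trial of drug A versus placebo in adults",
    "systematic review of exercise interventions for low back pain",
    "case report of a rare dermatological condition",
    "protocol for a cohort study of dietary exposure",
]
include = [1, 1, 0, 0]  # labels provided by human screeners in a training set

screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
screener.fit(abstracts, include)

# New records can then be ranked by predicted probability of inclusion,
# so reviewers screen the most promising ones first.
new_record = ["pragmatic randomised trial of drug B in older adults"]
print(screener.predict_proba(new_record)[:, 1])
```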


BMJ Open ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. e049777
Author(s):  
Lies ter Beek ◽  
Mathieu S Bolhuis ◽  
Harriët Jager-Wittenaar ◽  
René X D Brijan ◽  
Marieke G G Sturkenboom ◽  
...  

Objectives: Malnutrition is associated with a twofold higher risk of dying in patients with tuberculosis (TB) and is considered an important, potentially reversible risk factor for failure of TB treatment. The construct of malnutrition has three domains: intake or uptake of nutrition; body composition; and physical and cognitive function. The objectives of this systematic review are to identify malnutrition assessment methods, and to quantify how malnutrition assessment methods capture the international consensus definition of malnutrition, in patients with TB. Design: Different assessment methods were identified. We determined the extent to which they captured the three domains of malnutrition, that is, intake or uptake of nutrition, body composition, and physical and cognitive function. Results: Seventeen malnutrition assessment methods were identified in 69 included studies. In 53/69 (77%) of the studies, body mass index was used as the only malnutrition assessment method. Three out of 69 studies (4%) used a method that captured all three domains of malnutrition. Conclusions: Our study focused on published articles. Implementation of new criteria takes time, which may take longer than the period covered by this review. Most patients with TB are assessed for only one aspect of the conceptual definition of malnutrition. The use of international consensus criteria is recommended to establish uniform diagnostics and treatment of malnutrition. PROSPERO registration number: CRD42019122832.
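To make the review's central point concrete, the sketch below computes body mass index and checks which of the three consensus domains a set of assessment methods covers. The method-to-domain assignments and the 18.5 kg/m² cut-off mentioned in the comment are common conventions used here as illustrative assumptions, not data extracted from the included studies.

```python
# Body mass index alone captures only the "body composition" domain of the
# consensus malnutrition definition; domain assignments below are illustrative.
DOMAINS = {"intake or uptake of nutrition", "body composition",
           "physical and cognitive function"}

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

METHOD_DOMAINS = {
    "body mass index": {"body composition"},
    "handgrip strength": {"physical and cognitive function"},
    "dietary intake interview": {"intake or uptake of nutrition"},
}

def covers_all_domains(methods) -> bool:
    covered = set().union(*(METHOD_DOMAINS[m] for m in methods))
    return covered == DOMAINS

print(round(bmi(52.0, 1.70), 1))                 # 18.0, below the common 18.5 cut-off
print(covers_all_domains(["body mass index"]))   # False: only one of three domains
print(covers_all_domains(list(METHOD_DOMAINS)))  # True: all three domains captured
```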

