Examining Evidence in U.S. Payer Coverage Policies for Multi-Gene Panels and Sequencing Tests

2017 ◽  
Vol 33 (4) ◽  
pp. 534-540 ◽  
Author(s):  
James D. Chambers ◽  
Cayla J. Saret ◽  
Jordan E. Anderson ◽  
Patricia A. Deverka ◽  
Michael P. Douglas ◽  
...  

Objectives: The aim of this study was to examine the evidence payers cited in their coverage policies for multi-gene panels and sequencing tests (panels), and to compare these findings with the evidence payers cited in their coverage policies for other types of medical interventions. Methods: We used the University of California at San Francisco TRANSPERS Payer Coverage Registry to identify coverage policies for panels issued by five of the largest US private payers. We reviewed each policy and categorized the evidence cited within as: clinical studies, systematic reviews, technology assessments, cost-effectiveness analyses (CEAs), budget impact studies, and clinical guidelines. We compared the evidence cited in these coverage policies for panels with the evidence cited in policies for other intervention types (pharmaceuticals, medical devices, diagnostic tests and imaging, and surgical interventions) as reported in a previous study. Results: Fifty-five coverage policies for panels were included. On average, payers cited clinical guidelines in 84 percent of their coverage policies (range, 73–100 percent), clinical studies in 69 percent (50–87 percent), technology assessments in 47 percent (33–86 percent), systematic reviews or meta-analyses in 31 percent (7–71 percent), and CEAs in 5 percent (0–7 percent). No payers cited budget impact studies in their policies. Payers cited clinical studies, systematic reviews, technology assessments, and CEAs less often in their coverage policies for panels than in their policies for other intervention types. Payers cited clinical guidelines in a comparable proportion of policies for panels and other technology types. Conclusions: Payers in our sample cited clinical studies and other evidence types less often in their coverage policies for panels than in their coverage policies for other types of medical interventions.
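The per-payer tallying that the Results section reports (a mean citation rate per evidence category, with a range across payers) can be sketched in a few lines. The policy records below are hypothetical, for illustration only; the study's actual data covered 55 policies from five payers and is not reproduced here.

```python
from collections import defaultdict

# Hypothetical policy records: each lists the payer and the evidence
# types its coverage policy cites (not the study's actual data).
policies = [
    {"payer": "A", "cited": {"clinical_guidelines", "clinical_studies"}},
    {"payer": "A", "cited": {"clinical_guidelines", "technology_assessments"}},
    {"payer": "B", "cited": {"clinical_guidelines", "clinical_studies", "ceas"}},
    {"payer": "B", "cited": {"clinical_studies"}},
]
categories = ["clinical_guidelines", "clinical_studies",
              "technology_assessments", "systematic_reviews", "ceas"]

# Group policies by payer, then report each category as a mean citation
# rate across payers plus its range, mirroring "84 percent (range, 73-100)".
by_payer = defaultdict(list)
for p in policies:
    by_payer[p["payer"]].append(p["cited"])

summary = {}
for cat in categories:
    rates = [100 * sum(cat in cited for cited in pols) / len(pols)
             for pols in by_payer.values()]
    summary[cat] = (sum(rates) / len(rates), min(rates), max(rates))

for cat, (mean, lo, hi) in summary.items():
    print(f"{cat}: mean {mean:.0f}% (range {lo:.0f}-{hi:.0f}%)")
```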

10.2196/22422 ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. e22422
Author(s):  
Tomohide Yamada ◽  
Daisuke Yoneoka ◽  
Yuta Hiraike ◽  
Kimihiro Hino ◽  
Hiroyoshi Toyoshiba ◽  
...  

Background Performing systematic reviews is a time-consuming and resource-intensive process. Objective We investigated whether a machine learning system could perform systematic reviews more efficiently. Methods All systematic reviews and meta-analyses of interventional randomized controlled trials cited in recent clinical guidelines from the American Diabetes Association, American College of Cardiology, American Heart Association (2 guidelines), and American Stroke Association were assessed. After reproducing the primary screening data set according to the published search strategy of each, we extracted correct articles (those actually reviewed) and incorrect articles (those not reviewed) from the data set. These 2 sets of articles were used to train a neural network–based artificial intelligence engine (Concept Encoder, Fronteo Inc). The primary endpoint was work saved over sampling at 95% recall (WSS@95%). Results Among 145 candidate reviews of randomized controlled trials, 8 reviews fulfilled the inclusion criteria. For these 8 reviews, the machine learning system significantly reduced the literature screening workload by at least 6-fold versus that of manual screening based on WSS@95%. When machine learning was initiated using 2 correct articles that were randomly selected by a researcher, a 10-fold reduction in workload was achieved versus that of manual screening based on the WSS@95% value, with high sensitivity for eligible studies. The area under the receiver operating characteristic curve increased dramatically every time the algorithm learned a correct article. Conclusions Concept Encoder achieved a 10-fold reduction of the screening workload for systematic review after learning from 2 randomly selected studies on the target topic. However, few meta-analyses of randomized controlled trials were included. Concept Encoder could facilitate the acquisition of evidence for clinical guidelines.
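The study's primary endpoint, work saved over sampling at 95% recall (WSS@95%), has a standard definition: the fraction of records a screener avoids reading, relative to random sampling, when the model-ranked list is read until 95% of the relevant articles have been found. A minimal sketch (the function name and any scores fed to it are illustrative, not Concept Encoder's output):

```python
import math

def wss_at_recall(scores, labels, recall=0.95):
    """Work saved over sampling at a given recall level (WSS@95% when
    recall=0.95): rank records by model score, count how many must be
    screened to capture the required share of relevant articles (label 1),
    and report the screening saved relative to reading records at random."""
    n = len(labels)
    needed = math.ceil(recall * sum(labels))          # relevant articles to find
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    screened, found = n, 0
    for i, (_, label) in enumerate(ranked, start=1):
        found += label
        if found >= needed:
            screened = i                              # records read so far
            break
    return (n - screened) / n - (1 - recall)
```

For example, if a model ranks both relevant articles in a 10-record set at the top, WSS@95% is (10 − 2)/10 − 0.05 = 0.75.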


Author(s):  
Jacob Stegenga

An astonishing volume and diversity of evidence is available for many hypotheses in medicine. Some of this evidence—usually from randomized trials—is amalgamated by meta-analysis. Despite the ongoing debate regarding whether or not randomized trials are the gold standard of evidence, the most reliable source of evidence in medical science is usually thought to come from systematic reviews and meta-analyses. This chapter argues that meta-analyses are malleable. Different meta-analyses of the same evidence can reach contradictory conclusions. Meta-analysis fails to provide objective grounds for assessing the effectiveness and harms of medical interventions because numerous decisions must be made when performing a meta-analysis, which allow wide latitude for subjective idiosyncrasies to influence its outcome.


1999 ◽  
Vol 15 (4) ◽  
pp. 671-678 ◽  
Author(s):  
Mark Petticrew ◽  
Fujian Song ◽  
Paul Wilson ◽  
Kath Wright

Objectives: The Database of Abstracts of Reviews of Effectiveness (DARE) (http://www.york.ac.uk/inst/crd/) at the NHS Centre for Reviews and Dissemination provides a unique international resource of structured summaries of quality-assessed reviews of health care interventions. These reviews have been identified from searches of electronic databases and by hand-searching journals. This paper describes and summarizes the DARE database, including the topic areas covered and the review methods used. Methods: The first 480 structured abstracts on the DARE database were summarized. Data were extracted from each database field and coded for analysis. Results: Most of the systematic reviews investigated the effectiveness of treatments: 54% investigated the effectiveness of medical therapies, and 10% assessed surgical interventions. Around two-thirds used meta-analytic methods to combine primary studies. The quality of the reviews was variable, with just over half (52%, n = 251) having systematically assessed the validity of the included primary studies. Narrative reviews were more likely than meta-analyses to reach negative conclusions (42% vs. 25%, p = .0001). The 21 reviews that reported drug company funding were more likely to reach positive conclusions (81% vs. 66%, p = .15). Conclusion: The DARE database is a valuable source of quality-assessed systematic reviews, and is free and easily accessible. It provides a valuable online resource to help filter out poorer quality reviews when assessing the effectiveness of health technologies.
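The comparisons of conclusion rates between review types are the kind of two-proportion contrast a Pearson chi-squared test handles. A self-contained sketch; the counts used in the example are hypothetical, since the abstract reports only percentages:

```python
from math import sqrt
from statistics import NormalDist

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test for a 2x2 table [[a, b], [c, d]] with
    1 degree of freedom and no continuity correction; returns the test
    statistic and two-sided p-value (using chi-squared_1 = Z**2)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p_value = 2 * (1 - NormalDist().cdf(sqrt(stat)))
    return stat, p_value

# Hypothetical counts: 30/100 narrative reviews with negative conclusions
# versus 10/100 meta-analyses.
stat, p = chi2_2x2(30, 70, 10, 90)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```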


2021 ◽  
pp. 1-8
Author(s):  
Simon R. Knight

Background: Systematic reviews and meta-analyses are generally regarded as sitting atop the hierarchy of clinical evidence. The unbiased summary of current evidence that a systematic review provides, along with the increased statistical power from larger numbers of patients, is invaluable in guiding clinical decision-making and the development of practice guidelines. Surgical specialties have historically lagged behind other areas of medicine in the application of evidence-based medicine, perhaps due to the unique challenges faced in the conduct of surgical clinical trials. These challenges extend to the conduct of systematic reviews, due to issues with the quality and heterogeneity of the underlying literature. Summary: Recent years have seen an improvement in the quality of randomized controlled trials in surgical topics and an explosion in the publication of systematic reviews. This review explores recent trends in systematic reviews in surgery and discusses some of the aspects of conducting and interpreting reviews that are unique to surgical topics, including blinding, surgical heterogeneity and learning curves, patient and clinician preference, and industry involvement. Key Messages: Clinical trials, and therefore systematic reviews, of surgical interventions pose unique challenges that are important to consider when conducting them or applying their findings to clinical practice. Despite these challenges, systematic reviews still represent the best level of evidence for the development of surgical practice guidelines.


2021 ◽  
Author(s):  
Christian Gunge Riberholt ◽  
Markus Harboe Olsen ◽  
Joachim Birch Milan ◽  
Christian Gluud

Abstract Background: Adequately conducted systematic reviews with meta-analyses are considered the highest level of evidence and thus directly inform many clinical guidelines. However, the risks of type I and type II errors in meta-analyses are substantial. Trial Sequential Analysis is a method for controlling these risks, and erroneous use of the method might lead to research waste or misleading conclusions. Methods: The current protocol describes a systematic review aimed at identifying common and major mistakes and errors in the use of Trial Sequential Analysis by evaluating published systematic reviews and meta-analyses that include this method. We plan to include all studies using Trial Sequential Analysis published from 2018 to 2021, an estimated 400 to 600 publications. We will search the Medical Literature Analysis and Retrieval System Online (MEDLINE) and the Cochrane Database of Systematic Reviews (CDSR), including studies with all types of participants, interventions, and outcomes. The search will begin in July 2021. Two independent reviewers will screen titles and abstracts, include relevant full-text articles, extract data from the studies into a predefined checklist, and evaluate the methodological quality of each study using AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews 2). Discussion: This protocol follows the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P). The identified mistakes and errors will form the basis of a revised guideline for the use of Trial Sequential Analysis. Appropriately controlling for type I and II errors might reduce research waste and improve the quality and precision of the evidence that clinical guidelines are based upon.
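One building block of Trial Sequential Analysis is the required information size: the meta-analytic analogue of a single trial's sample-size calculation, optionally inflated for between-trial diversity. A hedged sketch for a continuous outcome follows (function and parameter names are illustrative; the monitoring boundaries that make up the other half of TSA are not shown):

```python
from statistics import NormalDist

def required_information_size(delta, sigma, alpha=0.05, beta=0.10,
                              diversity=0.0):
    """Required information size (total participants across both arms) for
    a meta-analysis of a continuous outcome with minimal clinically
    relevant difference `delta` and standard deviation `sigma`, inflated
    by between-trial diversity (D-squared, in [0, 1))."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided type I error
    z_beta = NormalDist().inv_cdf(1 - beta)         # power = 1 - beta
    ris = 4 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return ris / (1 - diversity)
```

For a difference of 0.5 standard deviations (delta=0.5, sigma=1) at 5% two-sided alpha and 90% power, the unadjusted size is about 168 participants; a diversity of 50% doubles it.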


BMJ Open ◽  
2017 ◽  
Vol 7 (12) ◽  
pp. e018494 ◽  
Author(s):  
Yihan He ◽  
Yihong Liu ◽  
Brian H May ◽  
Anthony Lin Zhang ◽  
Haibo Zhang ◽  
...  

Introduction: The National Comprehensive Cancer Network guidelines for adult cancer pain indicate that acupuncture and related therapies may be valuable additions to pharmacological interventions for pain management. Of the systematic reviews related to this topic, some concluded that acupuncture was promising for alleviating cancer pain, while others argued that the evidence was insufficient to support its effectiveness. Methods and analysis: This review will consist of three components: (1) synthesis of findings from existing systematic reviews; (2) updated meta-analyses of randomised clinical trials; and (3) analyses of results of other types of clinical studies. We will search six English and four Chinese biomedical databases, dissertations and grey literature to identify systematic reviews and primary clinical studies. Two reviewers will screen results of the literature searches independently to identify included reviews and studies. Data from included articles will be abstracted for assessment, analysis and summary. Two assessors will appraise the quality of systematic reviews using the Assessment of Multiple Systematic Reviews; assess the randomised controlled trials using the Cochrane Collaboration’s risk of bias tool; and assess other types of studies according to the Newcastle-Ottawa Scale. We will use ‘summary of evidence’ tables to present evidence from existing systematic reviews and meta-analyses. Using the primary clinical studies, we will conduct a meta-analysis for each outcome, grouping studies based on the type of acupuncture, the comparator and the specific type of pain. Sensitivity analyses are planned according to clinical factors, acupuncture method, methodological characteristics and presence of statistical heterogeneity, as applicable. For the non-randomised studies, we will tabulate the characteristics, outcome measures and reported results of each study. Consistencies and inconsistencies in evidence will be investigated and discussed. Finally, we will use the Grading of Recommendations Assessment, Development and Evaluation approach to evaluate the quality of the overall evidence. Ethics and dissemination: There are no ethical considerations associated with this review. The findings will be disseminated in peer-reviewed journals or conference presentations. PROSPERO registration number: CRD42017064113.
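Updated meta-analyses of randomised trials like those planned above typically use a random-effects model when trials are grouped by acupuncture type, comparator and pain type. A minimal DerSimonian-Laird sketch (illustrative only; the protocol does not specify its analysis software):

```python
import math

def dersimonian_laird(effects, standard_errors):
    """Random-effects pooled estimate using the DerSimonian-Laird
    tau-squared estimator. `effects` are per-study effect sizes (e.g.
    mean differences in a pain score), `standard_errors` their standard
    errors; returns (pooled estimate, its SE, tau-squared)."""
    w = [1 / se ** 2 for se in standard_errors]                   # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                 # between-study variance
    w_re = [1 / (se ** 2 + tau2) for se in standard_errors]       # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re)), tau2
```

When the trials are perfectly homogeneous, tau-squared is truncated to zero and the result coincides with the fixed-effect estimate.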


2016 ◽  
Vol 25 (3) ◽  
pp. 214-216 ◽  
Author(s):  
G. Guaiana ◽  
C. Barbui

In July 2015, the Canadian Agency for Drugs and Technologies in Health (CADTH) released a Rapid Response report summary, with a critical appraisal, on discontinuation strategies for patients with long-term benzodiazepine (BDZ) use. The CADTH document is a review of the literature. It includes studies whose intervention is BDZ discontinuation, as well as clinical guidelines, systematic reviews and meta-analyses. What emerges from the CADTH report is that the best strategy remains gradual tapering of BDZ, with little evidence for the use of adjunctive medications. The results show that simple interventions such as discontinuation letters from clinicians, self-help information and support in general, added to gradual tapering, may be associated with a two- to three-fold higher chance of successful withdrawal compared with treatment as usual. We suggest possible implications for day-to-day clinical practice.


BMJ Open ◽  
2017 ◽  
Vol 7 (8) ◽  
pp. e017411 ◽  
Author(s):  
Morihiro Katsura ◽  
Akira Kuriyama ◽  
Masafumi Tada ◽  
Kazumichi Yamamoto ◽  
Toshi A Furukawa

Introduction: We are witnessing an explosive increase in redundant and overlapping publications of systematic reviews and meta-analyses (SRs/MAs) on the same topic in the current medical literature, which often present conflicting results and interpretations. They represent wasted effort on the part of investigators and peer reviewers and may confuse and possibly mislead clinicians and policymakers. Here, we present a protocol for a meta-epidemiological investigation to describe how often there are overlapping SRs/MAs on the same topic, to assess the quality of these multiple publications, and to investigate the causes of discrepant results between multiple SRs/MAs in the field of major surgery. Methods and analysis: We will use MEDLINE/PubMed to identify all SRs/MAs of randomised controlled trials (RCTs) published in 2015 regarding major surgical interventions. After identifying the ‘benchmark’ SRs/MAs published in 2015, a process of screening in MEDLINE will be carried out to identify the previous SRs/MAs of RCTs on the same topic that were published within 5 years of the ‘benchmark’ SRs/MAs. We will tabulate the number of previous SRs/MAs of RCTs on the same topic, and then describe their variations in numbers of RCTs included, sample sizes, effect size estimates and other characteristics. We will also assess differences in the quality of each SR/MA using the A Measurement Tool to Assess Systematic Reviews (AMSTAR) score. Finally, we will investigate the potential reasons for the discrepant results between multiple SRs/MAs. Ethics and dissemination: No formal ethical approval and informed consent are required because this study will not collect primary individual data. The intended audiences of the findings include clinicians, healthcare researchers and policymakers. We will publish our findings as a scientific report in a peer-reviewed journal. Trial registration number: PROSPERO CRD42017059077, March 2017.
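The degree to which same-topic reviews overlap can be quantified; one published measure is Pieper et al.'s corrected covered area (CCA), which is not necessarily what this protocol will use but is easy to sketch:

```python
def corrected_covered_area(included_studies):
    """Corrected covered area (CCA) across two or more same-topic reviews:
    (N - r) / (r*c - r), where N counts all inclusions (with repeats),
    r the distinct primary studies, and c the number of reviews. 0 means
    no primary study appears in more than one review; 1 means every
    review includes exactly the same studies."""
    reviews = [set(s) for s in included_studies]
    total = sum(len(r) for r in reviews)        # N: inclusions, with repeats
    unique = len(set().union(*reviews))         # r: distinct primary studies
    c = len(reviews)                            # c must be at least 2
    return (total - unique) / (unique * c - unique)
```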



2020 ◽  
Vol 228 (1) ◽  
pp. 1-2
Author(s):  
Michael Bošnjak ◽  
Nadine Wedderhoff

Abstract. This editorial gives a brief introduction to the six articles included in the fourth “Hotspots in Psychology” issue of the Zeitschrift für Psychologie. The format is devoted to systematic reviews and meta-analyses in research-active fields that have generated a considerable number of primary studies. The common denominator is the research-synthesis nature of the included articles, not a specific psychological topic or theme that all articles have to address. Moreover, methodological advances in research synthesis methods relevant to any subfield of psychology are addressed. Comprehensive supplemental material for the articles can be found in PsychArchives (https://www.psycharchives.org).

