P212 Review of Systematic Reviews Related to Clinical Guidelines Implementation

2013 ◽  
Vol 22 (Suppl 1) ◽  
pp. A69.3-A70
Author(s):  
D Geba ◽  
W Chan ◽  
M Moreno ◽  
T Pearson
10.2196/22422 ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. e22422
Author(s):  
Tomohide Yamada ◽  
Daisuke Yoneoka ◽  
Yuta Hiraike ◽  
Kimihiro Hino ◽  
Hiroyoshi Toyoshiba ◽  
...  

Background: Performing systematic reviews is a time-consuming and resource-intensive process.

Objective: We investigated whether a machine learning system could perform systematic reviews more efficiently.

Methods: All systematic reviews and meta-analyses of interventional randomized controlled trials cited in recent clinical guidelines from the American Diabetes Association, the American College of Cardiology, the American Heart Association (2 guidelines), and the American Stroke Association were assessed. After reproducing the primary screening data set according to the published search strategy of each review, we extracted correct articles (those actually reviewed) and incorrect articles (those not reviewed) from the data set. These 2 sets of articles were used to train a neural network–based artificial intelligence engine (Concept Encoder, Fronteo Inc). The primary endpoint was work saved over sampling at 95% recall (WSS@95%).

Results: Among 145 candidate reviews of randomized controlled trials, 8 fulfilled the inclusion criteria. For these 8 reviews, the machine learning system significantly reduced the literature screening workload, by at least 6-fold versus manual screening based on WSS@95%. When machine learning was initiated with 2 correct articles randomly selected by a researcher, a 10-fold reduction in workload was achieved versus manual screening based on the WSS@95% value, with high sensitivity for eligible studies. The area under the receiver operating characteristic curve increased dramatically every time the algorithm learned a correct article.

Conclusions: Concept Encoder achieved a 10-fold reduction in the screening workload for systematic review after learning from 2 randomly selected studies on the target topic, although few meta-analyses of randomized controlled trials were included. Concept Encoder could facilitate the acquisition of evidence for clinical guidelines.
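The endpoint used above, work saved over sampling at a given recall level, is conventionally defined as the fraction of records the screener can skip (true negatives plus false negatives) minus the miss rate tolerated at that recall. A minimal sketch of the computation, with a hypothetical function name and toy screening numbers:

```python
def wss(tn: int, fn: int, n: int, recall: float = 0.95) -> float:
    """Work saved over sampling: the fraction of records that need not be
    manually screened (true negatives + false negatives out of n total),
    minus the (1 - recall) miss rate that random sampling at the same
    recall level would already tolerate."""
    return (tn + fn) / n - (1.0 - recall)

# Toy example: of 1,000 candidate records, the model's ranking lets the
# reviewers stop after reading 300, leaving 700 unread
# (680 true negatives + 20 false negatives) while keeping 95% recall.
print(round(wss(tn=680, fn=20, n=1000), 2))  # → 0.65, i.e. 65% of the work saved
```

On this scale, the 6-fold and 10-fold workload reductions reported above correspond to large positive WSS@95% values.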


2017 ◽  
Vol 33 (4) ◽  
pp. 534-540 ◽  
Author(s):  
James D. Chambers ◽  
Cayla J. Saret ◽  
Jordan E. Anderson ◽  
Patricia A. Deverka ◽  
Michael P. Douglas ◽  
...  

Objectives: The aim of this study was to examine the evidence payers cited in their coverage policies for multi-gene panels and sequencing tests (panels), and to compare these findings with the evidence payers cited in their coverage policies for other types of medical interventions.

Methods: We used the University of California at San Francisco TRANSPERS Payer Coverage Registry to identify coverage policies for panels issued by five of the largest US private payers. We reviewed each policy and categorized the evidence cited within as: clinical studies, systematic reviews, technology assessments, cost-effectiveness analyses (CEAs), budget impact studies, and clinical guidelines. We compared the evidence cited in these coverage policies for panels with the evidence cited in policies for other intervention types (pharmaceuticals, medical devices, diagnostic tests and imaging, and surgical interventions) as reported in a previous study.

Results: Fifty-five coverage policies for panels were included. On average, payers cited clinical guidelines in 84 percent of their coverage policies (range, 73–100 percent), clinical studies in 69 percent (50–87 percent), technology assessments in 47 percent (33–86 percent), systematic reviews or meta-analyses in 31 percent (7–71 percent), and CEAs in 5 percent (0–7 percent). No payers cited budget impact studies in their policies. Payers less often cited clinical studies, systematic reviews, technology assessments, and CEAs in their coverage policies for panels than in their policies for other intervention types. Payers cited clinical guidelines in a comparable proportion of policies for panels and other technology types.

Conclusions: Payers in our sample less often cited clinical studies and other evidence types in their coverage policies for panels than they did in their coverage policies for other types of medical interventions.
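The categorize-then-tally method described above (per-payer percentage of policies citing each evidence type, averaged across payers) can be sketched as follows. The data structures and numbers are purely illustrative, not the TRANSPERS registry schema:

```python
def pct_citing(policies, evidence_type):
    """Share (%) of one payer's policies that cite a given evidence type.
    Each policy is represented as the set of evidence categories it cites."""
    hits = sum(1 for cited in policies if evidence_type in cited)
    return 100.0 * hits / len(policies)

# Hypothetical mini-registry: two payers, two policies each.
payers = {
    "Payer A": [{"guidelines", "clinical"}, {"guidelines"}],
    "Payer B": [{"clinical"}, {"guidelines", "clinical"}],
}
per_payer = [pct_citing(p, "guidelines") for p in payers.values()]
print(sum(per_payer) / len(per_payer))  # → 75.0 (average across payers)
```

Averaging per-payer percentages (rather than pooling all policies) weights each payer equally, which matches how the ranges above are reported payer by payer.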


2016 ◽  
Vol 10 (2) ◽  
pp. 1-2
Author(s):  
JP Neilson

In 2013, a workshop was held in Kathmandu that explored systematic reviews – what they are, how they are developed, how they are used in evidence-based clinical guidelines, and how they can inform the clinical research agenda. The workshop was funded by the Gates Foundation through FIGO and organised by the Nepal Society of Obstetricians and Gynaecologists.


2018 ◽  
Vol 6 (1) ◽  
pp. 12 ◽  
Author(s):  
Natasa Pilipovic-Broceta ◽  
Nadja Vasiljevic ◽  
Jelena Marinkovic ◽  
Nevena Todorovic ◽  
Janko Jankovic ◽  
...  

Rationale, aims and objectives: To translate, culturally adapt and preliminarily validate the original English version of the PACIC questionnaire in the Serbian language, in the Republic of Srpska, Bosnia and Herzegovina, and to assess the relationship between PACIC scores and clinical guidelines implementation in family medicine.

Methods: A cross-sectional study was implemented in 2 primary healthcare centers. The translated PACIC questionnaire was administered to 206 consecutive patients with hypertension, diabetes mellitus and/or chronic obstructive pulmonary disease. The validity and reliability of the Serbian version of the PACIC were tested with face validity, construct validity and internal consistency. The PACIC score and its 5 subscales were tested using the Kruskal-Wallis or Mann-Whitney test. The relationship between the PACIC score and guidelines implementation was analyzed by multiple linear regression.

Results: The overall PACIC score indicates that implementation of the Chronic Care Model (CCM) occurred “most of the time”. Of the 5 subscales, average scores were highest on “Delivery system/decision support”; CCM in “Follow up/Coordination” occurred “sometimes”. Cronbach’s alpha coefficient showed a high level of internal consistency for the PACIC questionnaire. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy was 0.917 and Bartlett’s test of sphericity was significant (p≤0.001). Four factors were identified, explaining 69% of total variance.

Conclusions: There was a significant relationship between the PACIC score and the implementation of the chronic disease clinical guidelines. The PACIC questionnaire is advanced as a reliable and internally consistent instrument for use in increasing the person-centeredness of care for chronic illness.

