A methodological comparison of risk scores versus decision trees for predicting drug-resistant infections: A case study using extended-spectrum beta-lactamase (ESBL) bacteremia

2019 ◽  
Vol 40 (4) ◽  
pp. 400-407 ◽  
Author(s):  
Katherine E. Goodman ◽  
Justin Lessler ◽  
Anthony D. Harris ◽  
Aaron M. Milstone ◽  
Pranita D. Tamma

Abstract Background: Timely identification of multidrug-resistant gram-negative infections remains an epidemiological challenge. Statistical models for predicting drug resistance can offer utility where rapid diagnostics are unavailable or resource-impractical. Logistic regression–derived risk scores are common in the healthcare epidemiology literature. Machine learning–derived decision trees are an alternative approach for developing decision support tools. Our group previously reported on a decision tree for predicting ESBL bloodstream infections. Our objective in the current study was to develop a risk score from the same ESBL dataset to compare these 2 methods and to offer general guiding principles for using each approach. Methods: Using a dataset of 1,288 patients with Escherichia coli or Klebsiella spp bacteremia, we generated a risk score to predict the likelihood that a bacteremic patient was infected with an ESBL producer. We evaluated discrimination (original and cross-validated models) using receiver operating characteristic curves and C statistics. We compared risk score and decision tree performance, and we reviewed their practical and methodological attributes. Results: In total, 194 patients (15%) had bacteremia caused by an ESBL-producing organism. The clinical risk score included 14 variables, compared with the 5 decision-tree variables. The positive and negative predictive values of the risk score and decision tree were similar (>90%), but the C statistic of the risk score (0.87) was 10% higher. Conclusions: A decision tree and risk score performed similarly for predicting ESBL infection. The decision tree was more user-friendly, with fewer variables for the end user, whereas the risk score offered higher discrimination and greater flexibility for adjusting sensitivity and specificity.
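
As a methodological illustration of the comparison described above, the following is a minimal sketch (not the authors' code) of evaluating a logistic regression–derived risk score against a decision tree with cross-validated C statistics (ROC AUC). The predictor matrix `X` and binary ESBL indicator `y` are synthetic placeholders, not the study dataset.

```python
# Hedged sketch: comparing a logistic-regression risk score with a decision tree
# on the same binary outcome, using out-of-fold predicted probabilities and the
# C statistic (ROC AUC). X and y below are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

def compare_models(X, y, n_splits=5, seed=0):
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    models = {
        "risk score (logistic regression)": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(max_depth=4, random_state=seed),
    }
    results = {}
    for name, model in models.items():
        # Out-of-fold probabilities give a cross-validated C statistic.
        p = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
        results[name] = roc_auc_score(y, p)
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1288, 14))  # 14 candidate predictors, as in the risk score
    y = (X[:, :3].sum(axis=1) + rng.normal(size=1288) > 1.5).astype(int)
    for name, auc in compare_models(X, y).items():
        print(f"{name}: C statistic = {auc:.2f}")
```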

2021 ◽  
pp. 1-14
Author(s):  
Magdalena I. Tolea ◽  
Jaeyeong Heo ◽  
Stephanie Chrisphonte ◽  
James E. Galvin

Background: Although the Cardiovascular Risk Factors, Aging, and Dementia (CAIDE) score is an efficacious dementia-risk score system, it was derived using midlife risk factors in a population with low educational attainment that does not reflect today's US population, and it requires laboratory biomarkers, which are not always available. Objective: Develop and validate a modified CAIDE (mCAIDE) system and test its ability to predict the presence, severity, and etiology of cognitive impairment in older adults. Methods: The population consisted of 449 participants in dementia research (N = 230; community sample; 67.9±10.0 years old, 29.6% male, 13.7±4.1 years education) or receiving dementia clinical services (N = 219; clinical sample; 74.3±9.8 years old, 50.2% male, 15.5±2.6 years education). The mCAIDE, which includes self-reported and performance-based rather than blood-derived measures, was developed in the community sample and tested in the independent clinical sample. Validity against the Framingham, Hachinski, and CAIDE risk scores was assessed. Results: Higher mCAIDE quartiles were associated with lower performance on global and domain-specific cognitive tests. Each one-point increase in mCAIDE increased the odds of mild cognitive impairment (MCI) by up to 65%, those of AD by 69%, and those of non-AD dementia by >85%, with the highest scores in cases with vascular etiologies. Being in the highest mCAIDE risk group improved the ability to discriminate dementia from MCI and controls and MCI from controls, with a cut-off of ≥7 points offering the highest sensitivity, specificity, and positive and negative predictive values. Conclusion: mCAIDE is a robust indicator of cognitive impairment in community-dwelling seniors that can discriminate well across levels of severity, including MCI versus controls. The mCAIDE may be a valuable tool for case ascertainment in research studies, for flagging primary care patients for cognitive testing, and for identifying those in need of lifestyle interventions for symptomatic control.
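
The abstract reports that a cut-off of ≥7 points gave the best sensitivity, specificity, and predictive values. Below is a minimal sketch of how such a cut-off might be evaluated; `scores` and `impaired` are illustrative arrays (hypothetical data), not the study sample.

```python
# Hedged sketch: evaluating a risk-score cut-off (here >= 7, as reported for
# mCAIDE) against a binary cognitive-impairment label on illustrative data.
import numpy as np

def cutoff_metrics(scores, impaired, cutoff=7):
    scores = np.asarray(scores)
    impaired = np.asarray(impaired).astype(bool)
    flagged = scores >= cutoff
    tp = np.sum(flagged & impaired)
    fp = np.sum(flagged & ~impaired)
    fn = np.sum(~flagged & impaired)
    tn = np.sum(~flagged & ~impaired)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

print(cutoff_metrics([3, 8, 9, 5, 7, 2, 10, 6], [0, 1, 1, 0, 1, 0, 1, 0]))
```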


2020 ◽  
Author(s):  
Carissa Duru ◽  
Grace M Olanipekun ◽  
Vivian Odili ◽  
Nicholas J Kocmich ◽  
Amy Rezac ◽  
...  

Abstract Background Bacteremia is a leading cause of death in developing countries, but etiologic evaluation is infrequent and empiric antibiotics are not evidence-based. Very little is known about the types of extended-spectrum β-lactamases (ESBL) in pediatric bacteremia patients in Nigeria. We evaluated the patterns of ESBL resistance in children enrolled in surveillance for community-acquired bacteremic syndromes across health facilities in Central and Northwestern Nigeria. Methods Blood cultures from suspected cases of sepsis in children younger than 5 years were processed using the automated Bactec® incubator system from September 2008 to December 2016. Enterobacteriaceae were identified to the species level using the Analytical Profile Index (API20E®) identification strip, and antibiotic susceptibility profiles were determined by the disc diffusion method. Multidrug-resistant strains were then screened and confirmed for extended-spectrum beta-lactamase (ESBL) production by the combination disc method, as recommended by the Clinical and Laboratory Standards Institute (CLSI). Real-time PCR was used to characterize the genes responsible for ESBL production. Results Of 21,000 children screened from September 2008 to December 2016, 2,625 (12.5%) were culture-positive. A total of 413 Enterobacteriaceae isolates available for analysis were screened for ESBL. ESBL production was detected in 160/413 (38.7%), comprising Klebsiella pneumoniae 105/160 (65.6%), Enterobacter cloacae 21/160 (13.1%), Escherichia coli 22/160 (13.8%), Serratia species 4/160 (2.5%), Pantoea species 7/160 (4.4%), and Citrobacter species 1/160 (0.6%). Among the 160 ESBL-producing isolates, high resistance rates were observed for ceftriaxone (92.3%), aztreonam (96.8%), cefpodoxime (96.25%), cefotaxime (98.75%), and sulphamethoxazole-trimethoprim (90%), while 87.5%, 90.63%, and 91.87% of isolates were susceptible to imipenem, amikacin, and meropenem, respectively. The most frequently detected resistance genes were blaTEM, 83.75% (134/160), and blaCTX-M, 83.12% (133/160), followed by blaSHV, 66.25% (106/160). Co-existence of blaCTX-M, blaTEM, and blaSHV was seen in 94/160 (58.8%), blaCTX-M and blaTEM in 118/160 (73.8%), blaTEM and blaSHV in 97/160 (60.6%), and blaCTX-M and blaSHV in 100/160 (62.5%) of isolates tested. Conclusion Our results indicate a high prevalence of ESBL production, and of resistance to commonly used antibiotics, among Enterobacteriaceae isolated from bloodstream infections in children in this study. Careful choice of antibiotic treatment options, and further studies to evaluate the transmission dynamics of resistance genes, could help reduce ESBL-mediated resistance in these settings.


Author(s):  
Oguz Akbilgic ◽  
Ramin Homayouni ◽  
Kevin Heinrich ◽  
Max Raymond Langham, Jr ◽  
Robert Lowell Davis

Text fields in electronic medical records (EMR) contain information on important factors that influence health outcomes; however, they are underutilized in clinical decision making because of their unstructured nature. We analyzed 6,497 inpatient surgical cases with 719,308 free-text notes from the Le Bonheur Children’s Hospital EMR. We used a text mining approach on preoperative notes to derive a text-based risk score predictive of death within 30 days of surgery. We then evaluated the additional performance gained by including the text-based risk score as a predictor of death alongside other clinical risk factors based on structured data. The C-statistic of a logistic regression model with 5-fold cross-validation significantly improved from 0.76 to 0.92 when text-based risk scores were included in addition to structured data. We conclude that preoperative free-text notes in the EMR contain significant information that can predict adverse surgery outcomes.


EP Europace ◽  
2021 ◽  
Vol 23 (Supplement_3) ◽  
Author(s):  
IA Zaigraev ◽  
IS Yavelov ◽  
OM Drapkina ◽  
EV Bazaeva

Abstract Funding Acknowledgements Type of funding sources: None. Background. Left atrial thrombus (LAT) is the main source of cardiac embolism in patients with non-valvular atrial fibrillation (NAF). Several risk scores – mostly modifications of CHADS2 and CHA2DS2-VASc – have been proposed to predict LAT in patients with NAF. However, their relative predictive value requires further evaluation. Purpose. To compare the ability of different risk scores to predict LAT before catheter ablation or cardioversion in patients with NAF. Methods. In a retrospective single-center study, medical records of 1994 patients with NAF who underwent transesophageal echocardiography before catheter ablation or cardioversion were analyzed. LAT was identified in 33 (1.6%) of them. For the control group, 167 patients without LAT were randomly selected from this database. Logistic regression analysis and the C-statistic were used to evaluate and compare the predictive value of the CHADS2, R2CHADS2, CHA2DS2-VASc, R-CHA2DS2-VASc, R2CHA2DS2-VASc, CHA2DS2-VASc-RAF, mCHA2DS2-VASc, and CHA2DS2-VASc-AFR scores. Results. The mean age of the studied patients was 60.3 ± 10.9 years, and 110 (55%) were male. The mean CHA2DS2-VASc score was 2.54 ± 1.79. Results of the univariate analysis and C-statistics for the above-mentioned risk scores are presented in the table below. Each of them was associated with LAT. Compared with the CHA2DS2-VASc score, the C-statistic was significantly higher for the CHA2DS2-VASc-RAF and CHA2DS2-VASc-AFR scores (p = 0.03 and p = 0.001, respectively). In multivariate analysis, only the CHA2DS2-VASc-RAF score was associated with LAT (OR 1.37; 95% CI 1.21-1.55, p < 0.0001). The OR for LAT in patients with CHA2DS2-VASc-RAF >3 was 12.8 (95% CI 3.75-43.9; p < 0.0001), with sensitivity, specificity, and positive and negative predictive values of 90.6%, 57.1%, 33.3%, and 58.9%, respectively. Conclusion. In a group of patients with NAF and a relatively low incidence of LAT, all studied scores were associated with LAT, and the CHA2DS2-VASc-RAF score appeared the most informative.
Predictors of LAT in patients with NAF
Risk stratification model | OR (95% CI) | p-value | C-statistic (95% CI)
CHADS2 | 2.12 (1.55-2.91) | <0.0001 | 0.77 (0.68-0.85)
R2CHADS2 | 2.00 (1.53-2.62) | <0.0001 | 0.78 (0.69-0.87)
CHA2DS2-VASc | 1.65 (1.36-2.05) | <0.0001 | 0.74 (0.65-0.84)
R-CHA2DS2-VASc | 1.64 (1.34-2.03) | <0.0001 | 0.76 (0.66-0.85)
R2CHA2DS2-VASc | 1.59 (1.32-1.92) | <0.0001 | 0.76 (0.66-0.85)
CHA2DS2-VASc-RAF | 1.35 (1.27-1.52) | <0.0001 | 0.84 (0.76-0.91)
mCHA2DS2-VASc | 1.83 (1.42-2.35) | <0.0001 | 0.75 (0.65-0.85)
CHA2DS2-VASc-AFR | 1.75 (1.41-2.17) | <0.0001 | 0.80 (0.71-0.88)
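
For readers unfamiliar with the base score underlying the variants compared above, the following sketch tallies the standard CHA2DS2-VASc score from patient flags. The extended variants in the abstract (e.g., CHA2DS2-VASc-RAF, CHA2DS2-VASc-AFR) add further items such as renal function and atrial fibrillation characteristics, which are not shown here.

```python
# Hedged sketch: the standard CHA2DS2-VASc tally (not the extended variants).
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    score = 0
    score += 1 if chf else 0                              # C: congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age
    score += 1 if diabetes else 0                         # D: diabetes mellitus
    score += 2 if stroke_or_tia else 0                    # S2: prior stroke / TIA / thromboembolism
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category (female)
    return score

print(cha2ds2_vasc(age=72, female=True, chf=False, hypertension=True,
                   diabetes=False, stroke_or_tia=False, vascular_disease=True))  # -> 4
```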


Informatics ◽  
2019 ◽  
Vol 6 (1) ◽  
pp. 4 ◽  
Author(s):  
Oguz Akbilgic ◽  
Ramin Homayouni ◽  
Kevin Heinrich ◽  
Max Langham ◽  
Robert Davis

Text fields in electronic medical records (EMR) contain information on important factors that influence health outcomes; however, they are underutilized in clinical decision making due to their unstructured nature. We analyzed 6497 inpatient surgical cases with 719,308 free text notes from Le Bonheur Children’s Hospital EMR. We used a text mining approach on preoperative notes to obtain a text-based risk score to predict death within 30 days of surgery. In addition, we evaluated the performance of a hybrid model that included the text-based risk score along with structured data pertaining to clinical risk factors. The C-statistic of a logistic regression model with five-fold cross-validation significantly improved from 0.76 to 0.92 when text-based risk scores were included in addition to structured data. We conclude that preoperative free text notes in EMR include significant information that can predict adverse surgery outcomes.
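
Below is a minimal sketch of a hybrid text-plus-structured-data model in the spirit of this abstract. The text-mining step here is a simple TF-IDF plus logistic regression risk score, which is an assumption for illustration; the original work may have used a different text-mining approach. `notes` (list of preoperative note strings), `X_struct` (numeric clinical features), and `died_30d` (0/1 outcome) are assumed inputs, not the study data.

```python
# Hedged sketch: out-of-fold text-based risk score combined with structured
# features in a logistic regression, compared by cross-validated ROC AUC.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

def hybrid_auc(notes, X_struct, died_30d, seed=0):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)

    # Out-of-fold text-based risk score, so it is never fit on its own test fold.
    text_model = make_pipeline(TfidfVectorizer(min_df=2),
                               LogisticRegression(max_iter=1000))
    text_score = cross_val_predict(text_model, notes, died_30d,
                                   cv=cv, method="predict_proba")[:, 1]

    # Structured data alone vs structured data + text-based risk score.
    X_plus = np.column_stack([X_struct, text_score])
    aucs = {}
    for name, X in [("structured only", X_struct), ("structured + text score", X_plus)]:
        p = cross_val_predict(LogisticRegression(max_iter=1000), X, died_30d,
                              cv=cv, method="predict_proba")[:, 1]
        aucs[name] = roc_auc_score(died_30d, p)
    return aucs
```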


2018 ◽  
Vol 25 (8) ◽  
pp. 924-930 ◽  
Author(s):  
Xiruo Ding ◽  
Ziad F Gellad ◽  
Chad Mather ◽  
Pamela Barth ◽  
Eric G Poon ◽  
...  

Abstract Objective As available data increase, so does the opportunity to develop risk scores on more refined patient populations. In this paper we assessed the ability to derive a risk score for a patient no-show to a clinic visit. Methods Using data from 2,264,235 outpatient appointments, we assessed the performance of models built across 14 different specialties and 55 clinics. We used regularized logistic regression models to fit and assess models built at the health-system, specialty, and clinic levels. We evaluated fits based on their discrimination and calibration. Results Overall, the results suggest that a relatively robust risk score for patient no-shows could be derived, with an average C-statistic of 0.83 across clinic-level models and strong calibration. Moreover, the clinic-specific models, even with smaller training set sizes, often performed better than the more general models. Examination of the individual models showed that risk factors had different degrees of predictability across the different specialties. Implementation of optimal modeling strategies would lead to capturing an additional 4819 no-shows per year. Conclusion Overall, this work highlights both the opportunity for and the importance of leveraging the available electronic health record data to develop more refined risk models.
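
The following sketch illustrates the clinic-level modeling strategy described above: fitting separate regularized (L2) logistic regressions per clinic and comparing their cross-validated discrimination. The column names (`clinic`, `no_show`), feature list, and minimum clinic size are illustrative assumptions, not the study's specification.

```python
# Hedged sketch: per-clinic regularized logistic regression with
# cross-validated ROC AUC as the discrimination measure.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def per_clinic_aucs(df, feature_cols, label_col="no_show", group_col="clinic"):
    aucs = {}
    for clinic, grp in df.groupby(group_col):
        if grp[label_col].nunique() < 2 or len(grp) < 200:
            continue  # skip clinics too small to fit a stable model
        model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        p = cross_val_predict(model, grp[feature_cols], grp[label_col],
                              cv=5, method="predict_proba")[:, 1]
        aucs[clinic] = roc_auc_score(grp[label_col], p)
    return aucs
```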


BMC Medicine ◽  
2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Banafsheh Arshi ◽  
Jan C. van den Berge ◽  
Bart van Dijk ◽  
Jaap W. Deckers ◽  
M. Arfan Ikram ◽  
...  

Abstract Background Despite the growing burden of heart failure (HF), there have been no recommendations for the use of any of the primary prevention models in the existing guidelines. HF was also not included as an outcome in the American College of Cardiology/American Heart Association (ACC/AHA) risk score. Methods Among 2743 men and 3646 women aged ≥55 years, free of HF, from the population-based Rotterdam Study cohort, 4 Cox models were fitted using the predictors of the ACC/AHA, ARIC, and Health-ABC risk scores. Performance of the models for 10-year HF prediction was evaluated. Afterwards, performance and net reclassification improvement (NRI) for adding NT-proBNP to the ACC/AHA model were assessed. Results During a median follow-up of 13 years, 429 men and 489 women developed HF. The ARIC model had the highest performance [c-statistic (95% confidence interval [CI]): 0.80 (0.78; 0.83) and 0.80 (0.78; 0.83) in men and women, respectively]. The c-statistic for the ACC/AHA model was 0.76 (0.74; 0.78) in men and 0.77 (0.75; 0.80) in women. Adding NT-proBNP to the ACC/AHA model increased the c-statistic to 0.80 (0.78 to 0.83) in men and 0.81 (0.79 to 0.84) in women. Sensitivity and specificity of the ACC/AHA model did not drastically change after the addition of NT-proBNP. NRI (95% CI) was −23.8% (−19.2%; −28.4%) in men and −27.6% (−30.7%; −24.5%) in women for events, and 57.9% (54.8%; 61.0%) in men and 52.8% (50.3%; 55.5%) in women for non-events. Conclusions The acceptable performance of the model based on the risk factors included in the ACC/AHA model supports the use of this model for prediction of HF risk in the primary prevention setting. Addition of NT-proBNP modestly improved model performance but did not lead to a relevant improvement in clinical risk reclassification.
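
The NRI results above are reported separately for events and non-events. As a reference, the following sketch computes the categorical NRI from old and new risk-category assignments; `old_cat`, `new_cat`, and `event` are illustrative arrays (risk categories from models without and with NT-proBNP, and the 0/1 outcome), not the Rotterdam Study data.

```python
# Hedged sketch: categorical net reclassification improvement (NRI) for adding
# a marker to a risk model. Events should move to higher risk categories,
# non-events to lower ones.
import numpy as np

def categorical_nri(old_cat, new_cat, event):
    old_cat, new_cat = np.asarray(old_cat), np.asarray(new_cat)
    event = np.asarray(event).astype(bool)
    up, down = new_cat > old_cat, new_cat < old_cat

    nri_events = up[event].mean() - down[event].mean()
    nri_nonevents = down[~event].mean() - up[~event].mean()
    return {"NRI_events": nri_events,
            "NRI_nonevents": nri_nonevents,
            "NRI_total": nri_events + nri_nonevents}
```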


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
JoAnne Simpson ◽  
Kieran Docherty ◽  
Mark Petrie ◽  
Pardeep S Jhund ◽  
John J McMurray

Introduction: Predicting outcomes in patients with heart failure (HF) is challenging. We compared the performance of the recently developed PREDICT-HF score with the established MAGGIC risk score in the Dapagliflozin And Prevention of Adverse-outcomes in Heart Failure trial (DAPA-HF). Methods: The MAGGIC risk score is the most commonly used risk score in heart failure; it was derived using data from 39,372 patients enrolled in 30 clinical trials and cohort studies and includes 13 clinical variables. PREDICT-HF (www.predict-hf.com) incorporates demographic, comorbidity, and laboratory variables, including natriuretic peptides, and was developed using data from PARADIGM-HF. The performance of both models was tested in DAPA-HF, i.e., in symptomatic patients with HFrEF receiving contemporary standard care. The outcomes examined included all-cause death and cardiovascular (CV) death at 1 and 2 years. Model discrimination and calibration were assessed by Harrell's C statistic. For participants with missing values in the PREDICT-HF model, median values from the derivation cohort were imputed. Results: The mean age of the 4744 patients in DAPA-HF was 66 years and 77% were male; 68% were in NYHA functional class II and 24% in class III. During a median follow-up of 18 months, 605 patients died, and 500 experienced CV death. Using PREDICT-HF, the C statistic at 1 and 2 years was 0.71 and 0.70 for all-cause death, and 0.73 and 0.72 for CV death, respectively. The MAGGIC risk score was available for 4740 patients in DAPA-HF. The C statistic for all-cause death at 1 and 2 years was 0.63 and 0.63, respectively. Model discrimination for PREDICT-HF and MAGGIC is shown in Table 1. Conclusions: PREDICT-HF performed well, and significantly better than the MAGGIC risk score, in predicting mortality in a contemporary cohort of patients receiving evidence-based treatment.
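
Both scores above are compared by Harrell's C statistic for time-to-event outcomes. The following is a minimal pairwise implementation for illustration only (an O(n^2) loop; dedicated survival libraries compute this more efficiently), assuming a risk score where higher values mean higher predicted risk, observed follow-up times, and event indicators.

```python
# Hedged sketch: pairwise Harrell's C statistic for a risk score against
# censored time-to-event data (higher score = higher predicted risk).
import numpy as np

def harrells_c(risk, time, event):
    risk, time = np.asarray(risk, float), np.asarray(time, float)
    event = np.asarray(event).astype(bool)
    concordant = tied = comparable = 0
    n = len(risk)
    for i in range(n):
        if not event[i]:
            continue  # a pair is comparable only if the earlier subject had an event
        for j in range(n):
            if time[j] > time[i] or (time[j] == time[i] and not event[j] and j != i):
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

print(harrells_c(risk=[0.9, 0.7, 0.3, 0.2], time=[1, 2, 3, 4], event=[1, 1, 0, 1]))  # -> 1.0
```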


2020 ◽  
Vol 7 (8) ◽  
Author(s):  
Laura N Cwengros ◽  
Ryan P Mynatt ◽  
Tristan T Timbrook ◽  
Robert Mitchell ◽  
Hossein Salimnia ◽  
...  

Abstract Background Bloodstream infections (BSIs) due to ceftriaxone (CRO)-resistant Enterobacteriaceae are associated with delays in time to appropriate therapy and worse outcomes compared with infections due to susceptible isolates. However, treating all at-risk patients with empiric carbapenem therapy risks overexposure. Strategies are needed to appropriately balance these competing interests. The purpose of this study was to compare 4 methods for achieving this balance. Methods This was a retrospective hypothetical observational study of patients at the Detroit Medical Center with monomicrobial BSIs due to E. coli, K. oxytoca, K. pneumoniae, or P. mirabilis. This study compared the effectiveness of 4 methods to predict CRO resistance at the time of organism isolation. Three methods were based on applying published extended-spectrum beta-lactamase (ESBL) scoring tools. The fourth method was based on the presence or absence of the CTX-M marker from Verigene. Results Four hundred fifty-one Enterobacteriaceae BSIs were included, 73 (16%) of which were CRO-resistant. Verigene accurately predicted ceftriaxone susceptibility for 97% of isolates, compared with 70%–81% using the scoring tools (P < .001). Verigene was associated with fewer cases of treatment with CRO when the isolate was CRO-resistant (15% vs 63%–71% with scoring tools) and fewer cases of overtreatment with a carbapenem for CRO-susceptible strains (0.3% vs 10%–12%). Conclusions Verigene significantly outperformed published ESBL scoring tools for identifying CRO-resistant Enterobacteriaceae BSI. Institutions should validate scoring tools before implementation. Stewardship programs should consider adoption of rapid diagnostic tests to optimize early therapy.
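
The comparison above pits threshold-based ESBL scoring tools against a molecular marker (CTX-M) for predicting ceftriaxone resistance. The sketch below shows one way such strategies might be tallied against observed susceptibility; the arrays, score threshold, and variable names are illustrative assumptions, not the study data or the published scoring tools.

```python
# Hedged sketch: comparing a score-threshold rule with a marker-based rule for
# predicting ceftriaxone (CRO) resistance, counting under- and overtreatment.
import numpy as np

def strategy_accuracy(predicted_resistant, cro_resistant):
    predicted_resistant = np.asarray(predicted_resistant, bool)
    cro_resistant = np.asarray(cro_resistant, bool)
    correct = predicted_resistant == cro_resistant
    return {
        "overall accuracy": correct.mean(),
        "undertreated (called S, actually R)": int(np.sum(~predicted_resistant & cro_resistant)),
        "overtreated (called R, actually S)": int(np.sum(predicted_resistant & ~cro_resistant)),
    }

esbl_score = np.array([1, 4, 0, 6, 2, 5])          # hypothetical scoring-tool points
ctx_m_detected = np.array([0, 1, 0, 1, 0, 0], bool)  # hypothetical marker results
cro_resistant = np.array([0, 1, 0, 1, 0, 1], bool)   # hypothetical phenotypes

print("score >= 3:", strategy_accuracy(esbl_score >= 3, cro_resistant))
print("CTX-M marker:", strategy_accuracy(ctx_m_detected, cro_resistant))
```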


2016 ◽  
Vol 54 (7) ◽  
pp. 1789-1796 ◽  
Author(s):  
Tamar Walker ◽  
Sandrea Dumadag ◽  
Christine Jiyoun Lee ◽  
Seung Heon Lee ◽  
Jeffrey M. Bender ◽  
...  

Gram-negative bacteremia is highly fatal, and hospitalizations due to sepsis have been increasing worldwide. Molecular tests that supplement Gram stain results from positive blood cultures provide specific organism information to potentially guide therapy, but more clinical data on their real-world impact are still needed. We retrospectively reviewed cases of Gram-negative bacteremia in hospitalized patients over a 6-month period before (n = 98) and over a 6-month period after (n = 97) the implementation of a microarray-based early identification and resistance marker detection system (Verigene BC-GN; Nanosphere) while antimicrobial stewardship practices remained constant. Patient demographics, time to organism identification, time to effective antimicrobial therapy, and other key clinical parameters were compared. The two groups did not differ statistically with regard to comorbid conditions, sources of bacteremia, or numbers of intensive care unit (ICU) admissions, active use of immunosuppressive therapy, neutropenia, or bacteremia due to multidrug-resistant organisms. The BC-GN panel yielded an identification in 87% of Gram-negative cultures and was accurate in 95/97 (98%) of the cases compared to results using conventional culture. Organism identifications were achieved more quickly post-microarray implementation (mean, 10.9 h versus 37.9 h; P < 0.001). Length of ICU stay, 30-day mortality, and mortality associated with multidrug-resistant organisms were significantly lower in the postintervention group (P < 0.05). More rapid implementation of effective therapy was statistically significant for postintervention cases of extended-spectrum beta-lactamase-producing organisms (P = 0.049) but not overall (P = 0.12). The Verigene BC-GN assay is a valuable addition for the early identification of Gram-negative organisms that cause bloodstream infections and can significantly impact patient care, particularly when resistance markers are detected.

