External validation of predictive scores for mortality following Clostridium difficile infection

2017 ◽  
Vol 4 (suppl_1) ◽  
pp. S401-S401
Author(s):  
Catherine Beauregard-Paultre ◽  
Claire Nour Abou Chakra ◽  
Allison McGeer ◽  
Annie-Claude Labbé ◽  
Andrew E Simor ◽  
...  

Abstract Background The burden of Clostridium difficile infection (CDI) has increased in the last decade, with more adverse outcomes and related mortality. Although many predictive scores have been developed, few have been validated, and their performance was suboptimal. We conducted an external validation study of predictive scores or models for mortality in CDI. Methods Published predictive tools were identified through a systematic review. We included those reporting at least an internal validation approach. A multicenter prospective cohort of 1380 adults with confirmed CDI enrolled in two Canadian provinces was used for external validation. Most cases were elderly (median age 71), had a healthcare facility-associated CDI (90%), and 52% were infected by NAP1/BI/027 strains. All-cause 30-day death occurred in 12% of patients. The performance of each scoring system was analyzed using individual primary outcomes. Results We identified two scores whose performances (95% CI) are shown in the table. Both had low sensitivity and PPV, moderate specificity and NPV, and similar AUC/ROC (0.66 vs. 0.77 in the derivation cohort, and 0.69 vs. 0.75, respectively). One predictive model for 30-day all-cause mortality (Archbald-Pannone 2015, including Charlson score, WBC, BUN, diagnosis in ICU, and delirium) was associated with only a 5% increase in odds of death (crude OR = 1.05 (1.03–1.06)) with an AUC of 0.74 (0.7–0.8). Conclusion The predictive models of CDI mortality evaluated in our study have limitations in their methods and showed moderate performance in a validation cohort consisting of a majority of CDI caused by NAP1 strains. An accurate predictive tool is needed to guide clinicians in the management of CDI to prevent adverse outcomes. Disclosures J. Powis, Merck: Grant Investigator, Research grant; GSK: Grant Investigator, Research grant; Roche: Grant Investigator, Research grant; Synthetic Biologicals: Investigator, Research grant
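For context, the sensitivity, specificity, PPV, and NPV cited for each score all derive from a 2×2 table of predicted vs. observed deaths. A minimal Python sketch with illustrative counts (invented for a hypothetical 1380-patient cohort with 12% mortality, not the study's actual data):

```python
def score_metrics(tp, fp, fn, tn):
    """Standard 2x2 performance metrics for a binary predictive score."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of deaths flagged by the score
        "specificity": tn / (tn + fp),  # proportion of survivors not flagged
        "ppv": tp / (tp + fp),          # probability of death given a positive score
        "npv": tn / (tn + fn),          # probability of survival given a negative score
    }

# Illustrative counts only, roughly mimicking a 12% mortality cohort of 1380
m = score_metrics(tp=50, fp=120, fn=115, tn=1095)
print({k: round(v, 2) for k, v in m.items()})
```

With these made-up counts the pattern mirrors the abstract's finding: low sensitivity and PPV, with higher specificity and NPV.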

2017 ◽  
Vol 4 (suppl_1) ◽  
pp. S402-S402 ◽  
Author(s):  
Catherine Beauregard-Paultre ◽  
Claire Nour Abou Chakra ◽  
Allison McGeer ◽  
Annie-Claude Labbé ◽  
Andrew E Simor ◽  
...  

Abstract Background Clostridium difficile infection (CDI) is the most common cause of nosocomial diarrhea. About one in five patients with CDI (median 18%) develops a complication (cCDI), including mortality. Many predictive scores have been published to identify patients at risk of cCDI, but none is currently recommended for clinical use and few have been validated. We conducted an external validation study of predictive tools for cCDI. Methods Predictive tools were identified through a systematic review. We included those reporting at least an internal validation process. We performed the external validation on a multicenter prospective cohort of 1380 Canadian adults with confirmed CDI. Most cases were elderly (median age 71), had a healthcare facility-associated CDI (90%), and cCDI occurred in 8%. NAP1 strain was found in 52%. The performance of each scoring system was analyzed using individual outcomes. Modifications in predictors were made to match available data in the validation cohort. Results We assessed three predictive scores and one predictive model. The performances (95% CI) at the higher score thresholds are shown in the table. All scores had low sensitivity and PPV, and moderate specificity and NPV. The model of Shivashankar 2013 (age, WBC > 15, narcotic use, antacid use, creatinine ratio > 1.5) predicted a 25% probability of cCDI. All showed similar AUCs (0.63–0.67). Conclusion The predictive tools included in our study showed moderate performance in a validation cohort with a low rate of cCDI and a high proportion of NAP1 strains. Further research is needed to develop an accurate predictive tool to guide clinicians in the management of CDI. Disclosures J. Powis, Merck: Grant Investigator, Research grant; GSK: Grant Investigator, Research grant; Roche: Grant Investigator, Research grant; Synthetic Biologicals: Investigator, Research grant
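A predictive model like Shivashankar 2013 outputs a probability rather than a score cut-off: the covariates enter a linear predictor that is mapped through the inverse logit. A sketch with invented coefficients (the published model's values are not reproduced here; names and numbers are for illustration only):

```python
import math

def predicted_risk(intercept, coefs, covariates):
    """Logistic-model probability: P = 1 / (1 + exp(-z)), z = b0 + sum(b_i * x_i)."""
    z = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for (age in decades, WBC > 15, narcotic use,
# antacid use, creatinine ratio > 1.5) -- illustration only, not the paper's fit
risk = predicted_risk(-6.0, [0.4, 0.8, 0.5, 0.3, 0.9], [7.1, 1, 1, 0, 1])
```

A probability output like this is then dichotomized at a chosen threshold before the sensitivity/specificity trade-off reported above can be computed.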


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S811-S811
Author(s):  
Anitha Menon ◽  
Donald A Perry ◽  
Jonathan Motyka ◽  
Shayna Weiner ◽  
Alexandra Standke ◽  
...  

Abstract Background The optimal diagnostic strategy for Clostridium difficile infection (CDI) is not known, and no test has been shown to clearly differentiate colonization from symptomatic infection. We hypothesized that detection and/or quantification of stool toxins would associate with severe disease and adverse outcomes. Methods We conducted a retrospective cohort study among subjects with CDI diagnosed in 2016 at the University of Michigan. The clinical microbiology laboratory tested for glutamate dehydrogenase antigen and toxins A/B by enzyme immunoassay (EIA). Discordant results reflexed to PCR for the tcdB gene. Stool toxin levels were quantified via a modified cell cytotoxicity assay (CCA). C. difficile was isolated by anaerobic culture and ribotyped. Severe CDI was defined by the IDSA criteria: white blood cell count >15,000 cells/µL or a 1.5-fold increase in serum creatinine above baseline. The primary outcomes were all-cause 30-day mortality and a composite of colectomy, ICU admission, and/or death attributable to CDI within 30 days. Analysis included standard bivariable tests and adjusted models via logistic regression. Results From 565 adult patients, we obtained 646 samples; 199 (30.8%) contained toxins by EIA. Toxin positivity was associated with IDSA severity (Table 1), but not with our primary outcomes on unadjusted analysis. After adjustment for putative confounders, we still did not observe an association between toxin positivity and our primary outcomes. Stool toxin levels >6.4 ng/mL by CCA were associated with IDSA severity (Table 1), but not with the primary outcomes. Compared with the period from 2010 to 2013, the circulating ribotypes of C. difficile at our institution changed in 2016. Notably, ribotype 106 newly emerged, accounting for 10.6% of strains, and ribotype 027 fell to 9.3% (Table 2). The incidence of ribotype 014-027 has remained stable at 18.9%, but this strain was associated with both IDSA severity and 30-day mortality (OR = 3.32; P = 0.001).
Conclusion Toxin detection by EIA/CCA was associated with IDSA severity, but this study was unable to confirm an association with subsequent adverse outcomes. The molecular epidemiology of C. difficile has shifted, and this may have implications for the optimal diagnostic strategy for CDI. Disclosures All authors: No reported disclosures.
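An unadjusted odds ratio such as the 3.32 reported for ribotype and 30-day mortality is the kind of estimate that comes from a 2×2 table with a Wald confidence interval. A generic sketch (the example counts are invented, not the study's):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 20/100 deaths among exposed, 30/430 among unexposed
print(odds_ratio_ci(20, 80, 30, 400))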


2014 ◽  
Vol 1 (2) ◽  
Author(s):  
Fernanda C. Lessa ◽  
Yi Mu ◽  
Lisa G. Winston ◽  
Ghinwa K. Dumyati ◽  
Monica M. Farley ◽  
...  

Abstract Background.  Clostridium difficile infection (CDI) is no longer restricted to hospital settings, and population-based incidence measures are needed. Understanding the determinants of CDI incidence will allow for more meaningful comparisons of rates and accurate national estimates. Methods.  Data from active population- and laboratory-based CDI surveillance in 7 US states were used to identify CDI cases (ie, residents with a positive C difficile stool specimen and no positive test in the prior 8 weeks). Cases were classified as community-associated (CA) if stool was collected in an outpatient setting or ≤3 days after admission, with no overnight healthcare facility stay in the prior 12 weeks; otherwise, cases were classified as healthcare-associated (HA). Two regression models, one for CA-CDI and another for HA-CDI, were built to evaluate predictors of high CDI incidence. Site-specific incidence was adjusted based on the regression models. Results.  Of 10 062 cases identified, 32% were CA. Crude incidence varied by geographic area; CA-CDI ranged from 28.2 to 79.1/100 000 and HA-CDI ranged from 45.7 to 155.9/100 000. Independent predictors of higher CA-CDI incidence were older age, white race, female gender, and nucleic acid amplification test (NAAT) use. For HA-CDI, older age and a greater number of inpatient-days were predictors. After adjusting for relevant predictors, the range of incidence narrowed greatly; CA-CDI rates ranged from 30.7 to 41.3/100 000 and HA-CDI rates ranged from 58.5 to 94.8/100 000. Conclusions.  Differences in CDI incidence across geographic areas can be partially explained by differences in NAAT use, age, race, sex, and inpatient-days. Variation in antimicrobial use may contribute to the remaining differences in incidence.
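The crude rates quoted per 100 000 are simply case counts over population; the study's adjustment used the fitted regression models, but the mechanics of a crude rate and a simple indirectly standardized rate can be sketched as follows (a simplification for illustration, not the authors' exact procedure):

```python
def incidence_per_100k(cases, population):
    """Crude incidence rate per 100,000 population."""
    return 100_000 * cases / population

def indirectly_standardized_rate(observed, expected, reference_rate):
    """Standardized incidence ratio (observed/expected) scaled by a reference rate.
    A simplification of the model-based adjustment described in the abstract."""
    return (observed / expected) * reference_rate

print(incidence_per_100k(282, 1_000_000))  # 28.2, the low end of the crude CA-CDI range
```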


2021 ◽  
Vol 8 (Supplement_1) ◽  
pp. S479-S479
Author(s):  
Punit Shah ◽  
Jessica Kay ◽  
Adanma Akogun ◽  
Silvia Wise ◽  
Sarfraz Aly ◽  
...  

Abstract Background Exposure to antimicrobials is a known risk factor for Clostridium difficile infection (CDI). Antimicrobials cause collateral damage by disrupting the natural intestinal microbiota, allowing C. difficile to thrive and produce toxins. Probiotics could modulate the onset and course of CDI; however, the data on probiotics for the prevention of CDI are conflicting. Methods We conducted an IRB-approved retrospective cohort study at a 340-bed community hospital. All hospitalized patients from August 1, 2017 through July 31, 2020 were evaluated for enrollment. Patients were included if they received at least one dose of intravenous (IV) antibiotic and had a length of stay of at least 3 days. Patients were excluded if they were younger than 18 years or had a positive C. difficile polymerase chain reaction test before antibiotics were started. The primary outcome was the incidence of healthcare facility-onset Clostridium difficile infection (HO-CDI). Descriptive statistics were used to analyze demographic data, and the primary outcome of HO-CDI was analyzed using Fisher’s exact test and multiple logistic regression. Results A total of 20,257 patients received IV antibiotics during the study time frame. Of these, 2,659 patients received probiotics. The primary outcome of HO-CDI occurred in 46 patients in the IV antibiotics alone cohort (0.26%) and 5 patients in the probiotics plus IV antibiotics cohort (0.19%). The difference in HO-CDI between these two groups was not statistically significant (p = 0.677). A multiple logistic regression was performed to assess the impact of proton pump inhibitor use, age, ICU admission, Charlson Comorbidity Index, probiotic use, and CDI in the past 12 months on the primary outcome. C. difficile infection in the prior 12 months [OR 3.37, 95% CI 1.04–10.97] and ICU admission [OR 1.81, 95% CI 1.02–3.19] were associated with higher odds of CDI.
The addition of probiotics to patients on IV antibiotics did not exhibit a protective effect [OR 0.72, 95% CI 0.28-1.81]. Conclusion The addition of probiotics to standard of care was not beneficial in the prevention of HO-CDI. We endorse robust antibiotic stewardship practices as part of the standard of care bundle that institutions should employ to decrease the incidence of HO-CDI. Disclosures All Authors: No reported disclosures
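As a sanity check, the crude (unadjusted) odds ratio can be reproduced from the counts in the abstract; note that the 0.72 the authors report comes from the adjusted logistic model, so agreement with the crude value is coincidental rather than guaranteed:

```python
# Counts from the abstract: 5 HO-CDI events among 2,659 probiotics + IV antibiotic
# patients, vs. 46 among the 17,598 (20,257 - 2,659) on IV antibiotics alone
a, b = 5, 2659 - 5        # probiotics cohort: events, non-events
c, d = 46, 17598 - 46     # IV-antibiotics-alone cohort: events, non-events
crude_or = (a * d) / (b * c)
print(round(crude_or, 2))  # 0.72
```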


2019 ◽  
Vol 14 (4) ◽  
pp. 506-514 ◽  
Author(s):  
Pavan Kumar Bhatraju ◽  
Leila R. Zelnick ◽  
Ronit Katz ◽  
Carmen Mikacenic ◽  
Susanna Kosamo ◽  
...  

Background and objectives Critically ill patients with worsening AKI are at high risk for poor outcomes. Predicting which patients will experience progression of AKI remains elusive. We sought to develop and validate a risk model for predicting severe AKI within 72 hours after intensive care unit admission. Design, setting, participants, & measurements We applied least absolute shrinkage and selection operator regression methodology to two prospectively enrolled, critically ill cohorts of patients who met criteria for the systemic inflammatory response syndrome, enrolled within 24–48 hours after hospital admission. The risk models were derived and internally validated in 1075 patients and externally validated in 262 patients. Demographics and laboratory and plasma biomarkers of inflammation or endothelial dysfunction were used in the prediction models. Severe AKI was defined as Kidney Disease Improving Global Outcomes (KDIGO) stage 2 or 3. Results Severe AKI developed in 62 (8%) patients in the derivation, 26 (8%) patients in the internal validation, and 15 (6%) patients in the external validation cohorts. In the derivation cohort, a three-variable model (age, cirrhosis, and soluble TNF receptor-1 concentrations [ACT]) had a c-statistic of 0.95 (95% confidence interval [95% CI], 0.91 to 0.97). The ACT model performed well in the internal (c-statistic, 0.90; 95% CI, 0.82 to 0.96) and external (c-statistic, 0.93; 95% CI, 0.89 to 0.97) validation cohorts. The ACT model had moderate positive predictive values (0.50–0.95) and high negative predictive values (0.94–0.95) for severe AKI in all three cohorts. Conclusions ACT is a simple, robust model that could be applied to improve risk prognostication and better target clinical trial enrollment in critically ill patients with AKI.
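The c-statistic reported for the ACT model is equivalent to the AUC: the probability that a randomly chosen patient who developed severe AKI has a higher predicted risk than one who did not. A pure-Python sketch of that concordance calculation:

```python
def c_statistic(case_scores, control_scores):
    """Concordance probability P(case score > control score); ties count one half."""
    wins = ties = 0
    for case in case_scores:
        for control in control_scores:
            if case > control:
                wins += 1
            elif case == control:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))

# Perfect separation gives 1.0; chance-level discrimination gives 0.5
print(c_statistic([0.9, 0.8], [0.2, 0.1]))  # 1.0
```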


Author(s):  
Alexandros Rovas ◽  
Efe Paracikoglu ◽  
Mark Michael ◽  
André Gries ◽  
Janina Dziegielewski ◽  
...  

Abstract Background While there are clear national resuscitation room admission guidelines for major trauma patients, there are no comparable alarm criteria for critically ill nontrauma (CINT) patients in the emergency department (ED). The aim of this study was to define and validate specific trigger factor cut-offs for identification of CINT patients in need of a structured resuscitation management protocol. Methods All CINT patients at a German university hospital ED for whom structured resuscitation management would have been deemed desirable were prospectively enrolled over a 6-week period (derivation cohort, n = 108). The performance of different thresholds and/or combinations of trigger factors immediately available during triage were compared with the National Early Warning Score (NEWS) and Quick Sequential Organ Failure Assessment (qSOFA) score. Identified combinations were then tested in a retrospective sample of consecutive nontrauma patients presenting at the ED during a 4-week period (n = 996), and two large external datasets of CINT patients treated in two German university hospital EDs (validation cohorts 1 [n = 357] and 2 [n = 187]). Results The any-of-the-following trigger factor iteration with the best performance in the derivation cohort included: systolic blood pressure < 90 mmHg, oxygen saturation < 90%, and Glasgow Coma Scale score < 15 points. This set of triggers identified > 80% of patients in the derivation cohort and performed better than NEWS and qSOFA scores in the internal validation cohort (sensitivity = 98.5%, specificity = 98.6%). When applied to the external validation cohorts, need for advanced resuscitation measures and hospital mortality (6.7 vs. 28.6%, p < 0.0001 and 2.7 vs. 20.0%, p < 0.012) were significantly lower in trigger factor-negative patients. Conclusion Our simple, any-of-the-following decision rule can serve as an objective trigger for initiating resuscitation room management of CINT patients in the ED.
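The winning trigger set is a plain disjunction of three bedside values, which is what makes it usable at triage. A sketch of the rule (variable names are ours; the cut-offs are from the abstract):

```python
def resuscitation_trigger(sbp_mmhg, spo2_pct, gcs):
    """Positive if ANY derivation-cohort criterion is met:
    systolic BP < 90 mmHg, oxygen saturation < 90%, or GCS < 15."""
    return sbp_mmhg < 90 or spo2_pct < 90 or gcs < 15

print(resuscitation_trigger(sbp_mmhg=85, spo2_pct=96, gcs=15))   # True: hypotension alone
print(resuscitation_trigger(sbp_mmhg=120, spo2_pct=98, gcs=15))  # False: no criterion met
```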

