External Validation and Comparison of Clostridioides difficile Severity Scoring Systems

2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S831-S832
Author(s):  
Donald A Perry ◽  
Daniel Shirley ◽  
Dejan Micic ◽  
Rosemary K B Putler ◽  
Pratish Patel ◽  
...  

Abstract Background Annually in the US alone, Clostridioides difficile infection (CDI) afflicts nearly 500,000 patients, causing 29,000 deaths. Since early and aggressive interventions could save lives but are not optimally deployed in all patients, numerous studies have published predictive models for adverse outcomes. These models are usually developed at a single institution and largely are not externally validated. The aim of this study was to validate previously published risk scores for predicting severe CDI in a multicenter cohort of patients with CDI. Methods We conducted a retrospective study on four separate inpatient cohorts with CDI from three distinct sites: the Universities of Michigan (2010–2012 and 2016), Chicago (2012), and Wisconsin (2012). The primary composite outcome was admission to an intensive care unit, colectomy, and/or death attributed to CDI within 30 days of a positive test. Structured queries and manual chart review abstracted data from the medical record at each site. Published CDI severity scores were assessed and compared with each other and with the IDSA guideline definition of severe CDI. Sensitivity, specificity, area under the receiver operator characteristic curve (AuROC), precision-recall curves, and net reclassification index (NRI) were calculated to compare models. Results We included 3,775 patients from the four cohorts (Table 1) and evaluated eight severity scores (Table 2). The IDSA (baseline comparator) model showed poor performance across cohorts (Table 3). Even the binary classification model most predictive of the primary composite outcome, Jardin, performed poorly, with minimal to no NRI improvement compared with IDSA. The continuous score models, Toro and ATLAS, performed better, but the AuROC varied by site by up to 17% (Table 3). The Gujja model varied the most: from most predictive in the University of Michigan 2010–2012 cohort to having no predictive value in the 2016 cohort (Table 3). Conclusion No published CDI severity score showed stable, acceptable predictive ability across multiple cohorts/institutions. To maximize performance and clinical utility, future efforts should focus on a multicenter-derived and validated scoring system and/or incorporate novel biomarkers. Disclosures All authors: No reported disclosures.
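For readers wanting to reproduce the comparison metrics, the following is a minimal Python sketch (not the authors' code; labels and predictions are toy values) of how sensitivity, specificity, and the net reclassification index (NRI) can be computed for a candidate binary severity score against a baseline classifier such as the IDSA definition. For binary classifiers, the NRI reduces to the change in sensitivity plus the change in specificity.

```python
# Minimal sketch: sensitivity, specificity, and binary-classifier NRI.
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary 0/1 arrays."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def binary_nri(y_true, baseline_pred, new_pred):
    """NRI for two binary classifiers: change in sensitivity
    plus change in specificity relative to the baseline."""
    sens_b, spec_b = sens_spec(y_true, baseline_pred)
    sens_n, spec_n = sens_spec(y_true, new_pred)
    return (sens_n - sens_b) + (spec_n - spec_b)

# Toy example with made-up outcomes and predictions:
y = [1, 0, 1, 1, 0, 0, 1, 0]
idsa = [1, 1, 0, 1, 0, 1, 0, 0]   # baseline comparator
score = [1, 0, 1, 1, 0, 1, 1, 0]  # candidate severity score
print(f"NRI vs. IDSA: {binary_nri(y, idsa, score):+.2f}")
```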

Author(s):  
D Alexander Perry ◽  
Daniel Shirley ◽  
Dejan Micic ◽  
C Pratish Patel ◽  
Rosemary Putler ◽  
...  

Abstract Background Many models have been developed to predict severe outcomes from Clostridioides difficile infection (CDI). These models are usually developed at a single institution and largely are not externally validated. The aim of this study was to validate previously published risk scores in a multicenter cohort of patients with CDI. Methods We conducted a retrospective study on four separate inpatient cohorts with CDI from three distinct sites: the Universities of Michigan (2010–2012 and 2016), Chicago (2012), and Wisconsin (2012). The primary composite outcome was admission to an intensive care unit, colectomy, and/or death attributed to CDI within 30 days of positive testing. Both within each cohort and combined across all cohorts, published CDI severity scores were assessed and compared with each other and with the IDSA guideline definitions of severe and fulminant CDI. Results A total of 3,646 patients were included for analysis. Including the two IDSA guideline definitions, fourteen scores were assessed. Performance of the scores varied within each cohort and in the combined set (mean area under the receiver operator characteristic curve (AUC) 0.61, range 0.53–0.66). Only half of the scores performed at or better than the IDSA severe and fulminant definitions (AUCs 0.64 and 0.63, respectively). Most of the scoring systems had more false than true positives in the combined set (mean: 81.5%, range: 0–91.5%). Conclusions No published CDI severity score showed stable, good predictive ability for adverse outcomes across multiple cohorts/institutions or in a combined multicenter cohort.


2015 ◽  
Vol 42 (1) ◽  
pp. 57-64 ◽  
Author(s):  
Tetsu Ohnuma ◽  
Shigehiko Uchino ◽  
Noriyoshi Toki ◽  
Kenta Takeda ◽  
Yoshitomo Namba ◽  
...  

Background/Aims: Acute kidney injury (AKI) is associated with high mortality. Multiple AKI severity scores have been derived to predict patient outcome. We externally validated new AKI severity scores using the Japanese Society for Physicians and Trainees in Intensive Care (JSEPTIC) database. Methods: New AKI severity scores published in the 21st century (Mehta, Stuivenberg Hospital Acute Renal Failure (SHARF) II, Program to Improve Care in Acute Renal Disease (PICARD), Vellore, and Demirjian), Liano, the Simplified Acute Physiology Score (SAPS) II, and lactate were compared using the JSEPTIC database, which retrospectively collected data on 343 patients with AKI who required continuous renal replacement therapy (CRRT) in 14 intensive care units. Accuracy of the severity scores was assessed by the area under the receiver-operator characteristic curve (AUROC, discrimination) and the Hosmer-Lemeshow test (H-L test, calibration). Results: The median age was 69 years and 65.8% of patients were male. The median SAPS II score was 53 and hospital mortality was 58.6%. The AUROC values revealed low discrimination ability of the new AKI severity scores (Mehta 0.65, SHARF II 0.64, PICARD 0.64, Vellore 0.64, Demirjian 0.69), similar to Liano (0.67), SAPS II (0.67), and lactate (0.64). The H-L test also demonstrated that all assessed scores except Liano had significantly poor calibration. Conclusions: Using a multicenter database of AKI patients requiring CRRT, this study externally validated new AKI severity scores. While the Demirjian and Liano scores showed better performance, further research will be required to confirm these findings.
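As a companion to the discrimination/calibration framing above, here is a minimal sketch (an assumed implementation, not the study's code) of the Hosmer-Lemeshow test: patients are grouped into deciles of predicted risk, and observed versus expected events are compared with a chi-squared statistic on g − 2 degrees of freedom. A small p-value indicates poor calibration, as reported for all scores except Liano.

```python
# Minimal sketch of the Hosmer-Lemeshow calibration test.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, g=10):
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    order = np.argsort(y_prob)
    groups = np.array_split(order, g)   # deciles of predicted risk
    stat = 0.0
    for idx in groups:
        obs = y_true[idx].sum()         # observed events in the group
        exp = y_prob[idx].sum()         # expected events in the group
        n = len(idx)
        if 0 < exp < n:                 # guard against degenerate groups
            stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return stat, chi2.sf(stat, g - 2)   # statistic and p-value

# Toy usage: outcomes simulated from the probabilities themselves,
# so the test should show no evidence of miscalibration (large p-value).
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.9, 343)
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```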


2021 ◽  
Vol 14 ◽  
pp. 175628482097738
Author(s):  
Tessel M. van Rossen ◽  
Laura J. van Dijk ◽  
Martijn W. Heymans ◽  
Olaf M. Dekkers ◽  
Christina M. J. E. Vandenbroucke-Grauls ◽  
...  

Background: One in four patients with primary Clostridioides difficile infection (CDI) develops recurrent CDI (rCDI). With every recurrence, the chance of a subsequent CDI episode increases. Early identification of patients at risk for rCDI might help clinicians guide treatment. The aim of this study was to externally validate published clinical prediction tools for rCDI. Methods: The validation cohort consisted of 129 patients diagnosed with CDI between 2018 and 2020. rCDI risk scores were calculated for each individual patient in the validation cohort using the scoring tools described in the derivation studies. Per score value, we compared the average predicted risk of rCDI with the observed number of rCDI cases. Discrimination was assessed by calculating the area under the receiver operating characteristic curve (AUC). Results: Two prediction tools were selected for validation (Cobo 2018 and Larrainzar-Coghen 2016). The two derivation studies used different definitions of rCDI. Using Cobo's definition, rCDI occurred in 34 patients (26%) of the validation cohort; using the definition of Larrainzar-Coghen, we observed 19 recurrences (15%). The performance of both prediction tools was poor when applied to our validation cohort: the estimated AUC was 0.43 (95% confidence interval (CI): 0.32–0.54) for Cobo's tool and 0.42 (95% CI: 0.28–0.56) for Larrainzar-Coghen's tool. Conclusion: The performance of both prediction tools was disappointing in the external validation cohort. Currently identified clinical risk factors may not be sufficient for accurate prediction of rCDI.
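The AUC confidence intervals quoted above can be estimated in several ways; one common approach is the bootstrap. A minimal sketch, assuming scikit-learn is available and using toy data in place of the validation cohort:

```python
# Minimal sketch: AUC with a bootstrap 95% confidence interval.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, scores, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    point = roc_auc_score(y_true, scores)
    aucs = []
    n = len(y_true)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)          # resample patients with replacement
        if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], scores[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return point, lo, hi

# Toy usage: 129 patients, as in the validation cohort, with a
# weakly informative made-up risk score.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 129)
s = rng.normal(size=129) + 0.3 * y
print(auc_with_ci(y, s))
```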


Cancers ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2866
Author(s):  
Fernando Navarro ◽  
Hendrik Dapper ◽  
Rebecca Asadpour ◽  
Carolin Knebel ◽  
Matthew B. Spraker ◽  
...  

Background: In patients with soft-tissue sarcomas (STS), tumor grading is a decisive factor in selecting the best treatment. Tumor grading is obtained by pathological work-up after focal biopsies. Deep learning (DL)-based image analysis may offer an alternative way to characterize STS tissue. In this work, we sought to non-invasively differentiate tumor grading into low-grade (G1) and high-grade (G2/G3) STS using DL techniques based on MR imaging. Methods: Contrast-enhanced T1-weighted fat-saturated (T1FSGd) and fat-saturated T2-weighted (T2FS) MRI sequences were collected from two independent retrospective cohorts (training: 148 patients; testing: 158 patients). Tumor grading was determined in pre-therapeutic biopsies following the French Federation of Cancer Centers Sarcoma Group system. DL models were developed using transfer learning based on the DenseNet 161 architecture. Results: The T1FSGd- and T2FS-based DL models achieved area under the receiver operator characteristic curve (AUC) values of 0.75 and 0.76 on the test cohort, respectively. T1FSGd achieved the best F1-score of all models (0.90). The T2FS-based DL model was able to significantly risk-stratify for overall survival. Attention maps revealed relevant features within the tumor volume and in border regions. Conclusions: MRI-based DL models are capable of predicting tumor grading with good reproducibility in external validation.
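Transfer learning with DenseNet-161 typically means reusing ImageNet-pretrained convolutional features and replacing the final classifier. A minimal PyTorch/torchvision sketch follows (an illustration, not the authors' pipeline; treating MRI data as 3-channel 2D inputs is an assumption here, and the pretrained weights are downloaded on first use):

```python
# Minimal sketch: DenseNet-161 transfer learning for binary grading
# (G1 vs. G2/G3). Requires torchvision >= 0.13 for the weights API.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 2)  # new head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)       # stand-in for preprocessed MRI slices
y = torch.tensor([0, 1, 1, 0])        # 0 = low grade, 1 = high grade
logits = model(x)                     # forward pass through frozen features
loss = criterion(logits, y)
loss.backward()                       # gradients flow only to the new head
optimizer.step()
```

Freezing the backbone and training only the classifier is the simplest form of transfer learning; fine-tuning deeper layers is a common variation when the training cohort is large enough.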


2020 ◽  
Vol 41 (S1) ◽  
pp. s77-s78
Author(s):  
Jonathan Motyka ◽  
Aline Penkevich ◽  
Vincent Young ◽  
Krishna Rao

Background: Clostridioides difficile infection (CDI) frequently recurs after initial treatment. Predicting recurrent CDI (rCDI) early in the disease course can assist clinicians in their decision making and improve outcomes. However, predictions based on clinical criteria alone are not accurate and/or have not validated in other cohorts. Here, we tested the hypothesis that circulating and stool-derived inflammatory mediators predict rCDI. Methods: Consecutive subjects with available specimens at diagnosis were included if they tested positive for toxigenic C. difficile (+enzyme immunoassay [EIA] for glutamate dehydrogenase and toxins A/B, with reflex to PCR for the tcdB gene for discordants). Stool was thawed on ice, diluted 1:1 in PBS with protease inhibitor, centrifuged, and used immediately. A 17-plex panel of inflammatory mediators was run on a Luminex 200 machine using a custom antibody-linked bead array. Prior to analysis, all measurements were normalized and log-transformed. Stool toxin activity levels were quantified using a custom cell-culture assay. Recurrence was defined as a second episode of CDI within 100 days. Ordination characterized variation in the panel between outcomes and was tested with permutational multivariate ANOVA. Machine learning via elastic net regression with 100 iterations of 5-fold cross-validation selected the optimal model, and the area under the receiver operator characteristic curve (AuROC) was computed. Sensitivity analyses excluding those who died and/or lived >100 km away were performed. Results: We included 186 subjects, with 95 women (51.1%) and an average age of 55.9 years (±20). More patients were diagnosed by PCR than by toxin EIA (170 vs 55, respectively). Death, rCDI, and no rCDI occurred in 32 (17.2%), 36 (19.4%), and 118 (63.4%) subjects, respectively. Ordination revealed that the serum panel was associated with rCDI (P = .007) but the stool panel was not. Serum procalcitonin, IL-8, IL-6, CCL5, and EGF were associated with recurrence. The machine-learning models using the serum panel predicted rCDI with AuROCs between 0.74 and 0.8 (Fig. 1). No stool inflammatory mediators independently predicted rCDI. However, stool IL-8 interacted with toxin activity to predict rCDI (Fig. 2). These results did not change significantly upon sensitivity analysis. Conclusions: A panel of serum inflammatory mediators predicted rCDI with up to 80% accuracy, but the stool panel alone was less successful. Incorporating toxin activity levels alongside inflammatory mediator measurements is a novel, promising approach to studying stool-derived biomarkers of rCDI. This approach revealed that stool IL-8 is a potential biomarker for rCDI. These results need to be confirmed both with a larger dataset and after adjustment for clinical covariates. Funding: None. Disclosures: Vincent Young is a consultant for Bio-K+ International, Pantheryx, and Vedanta Biosciences.
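A minimal scikit-learn sketch (toy data, not the study's code) of the model-selection procedure described above: elastic net logistic regression scored with repeated 5-fold cross-validation, with 100 repeats to match the abstract:

```python
# Minimal sketch: elastic net logistic regression with repeated
# 5-fold cross-validation on a toy stand-in for the mediator panel.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(186, 17))   # 17-plex mediator panel (toy values)
y = rng.integers(0, 2, 186)      # rCDI outcome (toy labels)

model = make_pipeline(
    StandardScaler(),            # mediators were normalized/log-transformed
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=100, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"mean cross-validated AuROC: {aucs.mean():.2f}")
```

In practice the l1_ratio and C hyperparameters would themselves be selected within the cross-validation loop; they are fixed here to keep the sketch short.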


2018 ◽  
Vol 118 (5) ◽  
pp. 750-759 ◽  
Author(s):  
J A Usher-Smith ◽  
A Harshfield ◽  
C L Saunders ◽  
S J Sharp ◽  
J Emery ◽  
...  

Abstract Background: This study aimed to compare and externally validate risk scores developed to predict incident colorectal cancer (CRC) that include variables routinely available or easily obtainable via self-completed questionnaire. Methods: External validation of fourteen risk models from a previous systematic review in 373,112 men and women within the UK Biobank cohort with 5-year follow-up, no prior history of CRC, and data on CRC incidence through linkage to national cancer registries. Results: There were 1,719 (0.46%) cases of incident CRC. The performance of the risk models varied substantially. In men, the QCancer10 model and the models by Tao, Driver, and Ma all had an area under the receiver operating characteristic curve (AUC) between 0.67 and 0.70. Discrimination was lower in women: the QCancer10, Wells, Tao, Guesmi, and Ma models were the best performing, with AUCs between 0.63 and 0.66. Assessment of calibration was possible for six models in men and women. All would require country-specific recalibration if estimates of absolute risk were to be given to individuals. Conclusions: Several risk models based on easily obtainable data have relatively good discrimination in a UK population. Modelling studies are now required to estimate the potential health benefits and cost-effectiveness of implementing stratified risk-based CRC screening.
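Country-specific recalibration of the kind the authors mention is often done by refitting an intercept and slope on the original model's linear predictor against local outcomes. A minimal sketch under that assumption (not the authors' method; synthetic data):

```python
# Minimal sketch: logistic recalibration of an existing risk model so
# its absolute risks match local (e.g., UK) incidence.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(linear_predictor, y_observed):
    """Refit intercept and slope on the model's linear predictor;
    return recalibrated absolute risk estimates."""
    lp = np.asarray(linear_predictor).reshape(-1, 1)
    recal = LogisticRegression().fit(lp, y_observed)
    return recal.predict_proba(lp)[:, 1]

# Toy usage: the local population has lower incidence than the
# derivation population, so raw risks would be too high.
rng = np.random.default_rng(0)
lp = rng.normal(size=1000)                        # original linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-(lp - 2))))  # shifted local outcomes
p_recal = recalibrate(lp, y)                      # risks matched to local data
```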


2021 ◽  
Author(s):  
W. Alton Russell ◽  
David Schienker ◽  
Brian Custer

Abstract Background Despite a fingerstick hemoglobin requirement and a 56-day minimum donation interval, repeat blood donation continues to cause and exacerbate iron deficiency. Study design and methods Using data from the REDS-II Donor Iron Status Evaluation study, we developed multiclass prediction models to estimate the competing risks of hemoglobin deferral and of collecting blood from a donor with sufficient hemoglobin but low or absent underlying iron stores. We compared models developed with and without two biomarkers not routinely measured in most blood centers: ferritin and soluble transferrin receptor. We generated and analyzed 'individual risk trajectories': estimates of how each donor's risk develops as a function of the time interval until their next donation attempt. Results With standard biomarkers, the top model had a multiclass area under the receiver operator characteristic curve (AUC) of 77.6% (95% CI 77.3%–77.8%). With the extra biomarkers, multiclass AUC increased to 82.8% (95% CI 82.5%–83.1%). In the extra-biomarkers model, ferritin was the single most important variable, followed by the donation interval. We identified three risk archetypes: 'fast recoverers' (<10% risk of any adverse outcome on post-donation day 56), 'slow recoverers' (>60% adverse outcome risk on day 56, declining to <35% by day 250), and 'chronic high-risk' (>85% risk of adverse outcome on day 250). Discussion A longer donation interval reduced the estimated risk of iron-related adverse events for most donors, but risk remained high for some. Tailoring safeguards to individual risk estimates could reduce blood collections from donors with low or absent iron stores.
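A multiclass AUC like the one reported can be computed by one-vs-rest averaging over the outcome classes. A minimal scikit-learn sketch with toy data (the three-class coding of donor outcomes below is an assumption based on the abstract, not the study's actual label scheme):

```python
# Minimal sketch: one-vs-rest multiclass AUC over three donor outcomes
# (0 = uneventful donation, 1 = donation with low/absent iron stores,
#  2 = hemoglobin deferral) -- toy labels and probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 2, 1, 0, 2, 2, 1])
y_prob = np.array([                      # predicted class probabilities
    [0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6],
    [0.3, 0.5, 0.2], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6],
    [0.1, 0.2, 0.7], [0.4, 0.4, 0.2],
])
print(roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))
```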


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jiandong Zhou ◽  
Sharen Lee ◽  
Xiansong Wang ◽  
Yi Li ◽  
William Ka Kei Wu ◽  
...  

Abstract Recent studies have reported numerous predictors of adverse outcomes in COVID-19 disease. However, few simple clinical risk scores are available for prompt risk stratification. The objective was to develop a simple risk score for predicting severe COVID-19 disease using territory-wide data based on simple clinical and laboratory variables. Consecutive patients admitted to Hong Kong's public hospitals between 1 January and 22 August 2020 and diagnosed with COVID-19, as confirmed by RT-PCR, were included. The primary outcome was a composite of intensive care unit admission, need for intubation, or death, with follow-up until 8 September 2020. An external independent cohort from Wuhan was used for model validation. COVID-19 testing was performed in 237,493 patients, and 4,442 patients (median age 44.8 years, 95% confidence interval (CI): 28.9–60.8; 50% male) tested positive. Of these, 209 patients (4.8%) met the primary outcome. A risk score including the following components was derived from Cox regression: gender, age, diabetes mellitus, hypertension, atrial fibrillation, heart failure, ischemic heart disease, peripheral vascular disease, stroke, dementia, liver diseases, gastrointestinal bleeding, cancer; increases in neutrophil count, potassium, urea, creatinine, aspartate transaminase, alanine transaminase, bilirubin, D-dimer, high-sensitivity troponin-I, lactate dehydrogenase, activated partial thromboplastin time, prothrombin time, and C-reactive protein; and decreases in lymphocyte count, platelets, hematocrit, albumin, sodium, low-density lipoprotein, high-density lipoprotein, cholesterol, glucose, and base excess. The model based on test results taken on the day of admission demonstrated excellent predictive value. Incorporation of test results at successive time points did not further improve risk prediction. The derived scoring system was evaluated with out-of-sample five-fold cross-validation (AUC: 0.86, 95% CI: 0.82–0.91) and external validation (N = 202, AUC: 0.89, 95% CI: 0.85–0.93). A simple clinical score accurately predicted severe COVID-19 disease, even without including symptoms, blood pressure, oxygen status on presentation, or chest radiograph results.
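A minimal sketch (using the lifelines library and synthetic data, not the study's territory-wide dataset) of deriving a risk score from Cox regression as described above: fit the model on time-to-event data, then use the coefficient-weighted combination of covariates as the per-patient score:

```python
# Minimal sketch: Cox regression risk score on synthetic data.
# Covariate names here are illustrative stand-ins for the clinical
# and laboratory variables listed in the abstract.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(45, 15, 500),
    "male": rng.integers(0, 2, 500),
    "crp": rng.exponential(10, 500),   # toy lab value
    "time": rng.exponential(30, 500),  # days to event or censoring
    "event": rng.integers(0, 2, 500),  # composite ICU/intubation/death
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
# Relative risk score per patient, proportional to exp(linear predictor):
risk_score = cph.predict_partial_hazard(df)
```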


2021 ◽  
Author(s):  
Brandon J. Webb ◽  
Nicholas M. Levin ◽  
Nancy Grisel ◽  
Samuel M. Brown ◽  
Ithan D. Peltan ◽  
...  

Abstract Background Accurate methods of identifying patients with COVID-19 who are at high risk of poor outcomes have become especially important with the advent of limited-availability therapies such as monoclonal antibodies. Here we describe the development and validation of a simple but accurate scoring tool to classify risk of hospitalization and mortality. Methods All consecutive patients testing positive for SARS-CoV-2 from March 25 to October 1, 2020 within the Intermountain Healthcare system were included. The cohort was randomly divided into 70% derivation and 30% validation cohorts. A multivariable logistic regression model was fitted for 14-day hospitalization. The optimal model was then adapted to a simple, probabilistic score and applied to the validation cohort, where it was evaluated for prediction of hospitalization and 28-day mortality. Results 22,816 patients were included; mean age was 40 years, 50.1% were female, and 44% identified as non-white race or Hispanic/Latinx ethnicity. 6.2% required hospitalization and 0.4% died. Criteria in the simple model included: age (0.5 points per decade); high-risk comorbidities (2 points each): diabetes mellitus, severe immunocompromised status, and obesity (body mass index ≥ 30); non-white race/Hispanic or Latinx ethnicity (2 points); and 1 point each for male sex, dyspnea, hypertension, coronary artery disease, cardiac arrhythmia, congestive heart failure, chronic kidney disease, chronic pulmonary disease, chronic liver disease, cerebrovascular disease, and chronic neurologic disease. In the derivation cohort (n = 16,030) the area under the receiver-operator characteristic curve (AUROC) was 0.82 (95% CI 0.81–0.84) for hospitalization and 0.91 (0.83–0.94) for 28-day mortality; in the validation cohort (n = 6,786) the AUROC was 0.80 (CI 0.78–0.82) for hospitalization and 0.80 (CI 0.69–0.90) for mortality. Conclusion A prediction score based on widely available patient attributes accurately risk-stratifies patients with COVID-19 at the time of testing. Applications include patient selection for therapies targeted at preventing disease progression in non-hospitalized patients, including monoclonal antibodies. External validation in independent healthcare environments is needed.
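Adapting a logistic model to a simple point score is usually done by scaling each coefficient to a reference unit and rounding. A minimal sketch under that assumption (toy coefficients; not Intermountain's actual derivation):

```python
# Minimal sketch: converting logistic regression coefficients
# (log odds ratios) to half-point-resolution score weights.
import numpy as np

def to_points(coefs, names, base_coef):
    """Scale coefficients so the reference predictor equals 1 point,
    rounding to the nearest half point (as in '0.5 points per decade')."""
    return {n: round(c / base_coef * 2) / 2 for n, c in zip(names, coefs)}

# Toy coefficients from a hypothetical 14-day hospitalization model:
names = ["age_per_decade", "diabetes", "male_sex", "hypertension"]
coefs = np.array([0.18, 0.71, 0.35, 0.33])
print(to_points(coefs, names, base_coef=0.35))
# -> {'age_per_decade': 0.5, 'diabetes': 2.0, 'male_sex': 1.0,
#     'hypertension': 1.0}
```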


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Duc Trong Quach ◽  
Uyen Pham-Phuong Vo ◽  
Nguyet Thi-My Nguyen ◽  
Ly Thi-Kim Le ◽  
Minh-Cong Hong Vo ◽  
...  

Aims. This study aimed to (1) validate the performance of the Oakland and Glasgow-Blatchford (GBS) scores and (2) compare these scores with the SALGIB score in predicting adverse outcomes of acute lower gastrointestinal bleeding (ALGIB) in a Vietnamese population. Methods. A multicenter cohort study was conducted on ALGIB patients admitted to seven hospitals across Vietnam. The adverse outcomes of ALGIB consisted of blood transfusion; endoscopic, radiologic, or surgical intervention; severe bleeding; and in-hospital death. The Oakland and GBS scores were calculated, and their performance was compared with that of SALGIB, a locally developed prediction score for adverse outcomes of ALGIB in Vietnamese patients, based on data at admission. The accuracy of these scores was measured using the area under the receiver operating characteristic curve (AUC) and compared by the chi-squared test. Results. There were 414 patients with a median age of 60 (48–71) years. The rates of blood transfusion, hemostatic intervention, severe bleeding, and in-hospital death were 26.8%, 15.2%, 16.4%, and 1.4%, respectively. The SALGIB score had comparable performance with the Oakland score (AUC: 0.81 and 0.81, respectively; p = 0.631) and outperformed the GBS score (AUC: 0.81 and 0.76, respectively; p = 0.002) for predicting the presence of any adverse outcome of ALGIB. All three scores had acceptable and comparable performance for in-hospital death but poor performance for hemostatic intervention. The Oakland score had the best performance for predicting severe bleeding. Conclusions. The Oakland and SALGIB scores had excellent, comparable performance and outperformed the GBS score for predicting adverse outcomes of ALGIB in Vietnamese patients.
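The abstract compares AUCs with a chi-squared test; a paired bootstrap is a common alternative when the scores are computed on the same patients, and is shown here instead as a minimal sketch with toy data:

```python
# Minimal sketch: paired-bootstrap test of the difference between two
# scores' AUCs on the same patients (an alternative to the chi-squared
# comparison used in the study).
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_auc_diff(y, score_a, score_b, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y, a, b = map(np.asarray, (y, score_a, score_b))
    diffs = []
    n = len(y)
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)          # resample the same patients
        if len(np.unique(y[idx])) < 2:       # both classes needed for AUC
            continue
        diffs.append(roc_auc_score(y[idx], a[idx])
                     - roc_auc_score(y[idx], b[idx]))
    diffs = np.asarray(diffs)
    # two-sided p-value: fraction of bootstrap differences crossing zero
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return diffs.mean(), p

# Toy usage: 414 patients, two made-up scores of different quality.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 414)
oakland = rng.normal(size=414) + 0.8 * y
gbs = rng.normal(size=414) + 0.4 * y
print(paired_auc_diff(y, oakland, gbs))
```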

