Added value of clinical prediction rules for bacteremia in hemodialysis patients: An external validation study

PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0247624
Author(s):  
Sho Sasaki ◽  
Yoshihiko Raita ◽  
Minoru Murakami ◽  
Shungo Yamamoto ◽  
Kentaro Tochitani ◽  
...  

Introduction Having developed a clinical prediction rule (CPR) for bacteremia among hemodialysis (HD) outpatients (BAC-HD score), we performed an external validation. Materials & methods Data were collected on maintenance HD patients at two Japanese tertiary-care hospitals from January 2013 to December 2015. We enrolled 429 consecutive patients (aged ≥ 18 y) on maintenance HD who had had two sets of blood cultures drawn on admission to assess for bacteremia. We validated the predictive ability of the CPR using two validation cohorts. Index tests were the BAC-HD score and a CPR developed by Shapiro et al. The outcome was bacteremia, based on the results of the admission blood cultures. To assess added value, we also measured the change in the area under the receiver operating characteristic curve (AUC) using logistic regression, and the Net Reclassification Improvement (NRI), when each CPR was added to a basic model. Results In validation cohort 1 (360 subjects), compared to a Model 1 (basic model) AUC of 0.69 (95% confidence interval [95% CI]: 0.59–0.80), the AUCs of Model 2 (basic model + BAC-HD score) and Model 3 (basic model + Shapiro’s score) increased to 0.80 (95% CI: 0.71–0.88) and 0.73 (95% CI: 0.63–0.83), respectively. In validation cohort 2 (96 subjects), compared to a Model 1 AUC of 0.81 (95% CI: 0.68–0.94), the AUCs of Model 2 and Model 3 increased to 0.83 (95% CI: 0.72–0.95) and 0.85 (95% CI: 0.76–0.94), respectively. NRIs on addition of the BAC-HD score and Shapiro’s score were 0.30 and 0.06 in validation cohort 1, and 0.27 and 0.13, respectively, in validation cohort 2. Conclusion Either the BAC-HD score or Shapiro’s score may improve the ability to diagnose bacteremia in HD patients. Reclassification was better with the BAC-HD score.
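The added-value analysis described above (change in AUC plus NRI when a CPR is added to a basic model) can be sketched as follows. This is a minimal illustration on simulated data; the variable names are stand-ins, not the study's actual predictors or scores.

```python
# Sketch: added value of a CPR over a basic model, via delta-AUC and
# continuous (category-free) NRI. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 360
age = rng.normal(65, 10, n)            # basic-model covariate (simulated)
cpr = rng.normal(0.0, 1.0, n)          # CPR score, e.g. a BAC-HD stand-in (simulated)
# The simulated outcome depends on both, so the CPR carries added information
logit = -3.0 + 0.03 * age + 1.2 * cpr
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_basic = age.reshape(-1, 1)
X_aug = np.column_stack([age, cpr])
p_basic = LogisticRegression(max_iter=1000).fit(X_basic, y).predict_proba(X_basic)[:, 1]
p_aug = LogisticRegression(max_iter=1000).fit(X_aug, y).predict_proba(X_aug)[:, 1]

auc_basic = roc_auc_score(y, p_basic)
auc_aug = roc_auc_score(y, p_aug)

# Continuous NRI: net share of events whose predicted risk rises, plus net
# share of non-events whose predicted risk falls, when the CPR is added
up = p_aug > p_basic
nri = (np.mean(up[y == 1]) - np.mean(~up[y == 1])) \
    + (np.mean(~up[y == 0]) - np.mean(up[y == 0]))
print(f"AUC {auc_basic:.2f} -> {auc_aug:.2f}, continuous NRI = {nri:.2f}")
```

Note that the published study used the categorical NRI; the continuous variant shown here avoids having to choose risk categories.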

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245281
Author(s):  
Bianca Magro ◽  
Valentina Zuccaro ◽  
Luca Novelli ◽  
Lorenzo Zileri ◽  
Ciro Celsa ◽  
...  

Background Validated tools for predicting individual in-hospital mortality of COVID-19 are lacking. We aimed to develop and validate a simple clinical prediction rule for early identification of in-hospital mortality of patients with COVID-19. Methods and findings We enrolled 2191 consecutive hospitalized patients with COVID-19 from three Italian dedicated units (derivation cohort: 1810 consecutive patients from the Bergamo and Pavia units; validation cohort: 381 consecutive patients from the Rome unit). The outcome was in-hospital mortality. A Fine and Gray competing-risks multivariate model (with discharge as a competing event) was used to develop a prediction rule for in-hospital mortality. Discrimination and calibration were assessed by the area under the receiver operating characteristic curve (AUC) and by the Brier score in both the derivation and validation cohorts. Seven variables were independent risk factors for in-hospital mortality: age (Hazard Ratio [HR] 1.08, 95% Confidence Interval [CI] 1.07–1.09), male sex (HR 1.62, 95%CI 1.30–2.00), duration of symptoms before hospital admission <10 days (HR 1.72, 95%CI 1.39–2.12), diabetes (HR 1.21, 95%CI 1.02–1.45), coronary heart disease (HR 1.40, 95%CI 1.09–1.80), chronic liver disease (HR 1.78, 95%CI 1.16–2.72), and lactate dehydrogenase levels at admission (HR 1.0003, 95%CI 1.0002–1.0005). The AUC was 0.822 (95%CI 0.722–0.922) in the derivation cohort and 0.820 (95%CI 0.724–0.920) in the validation cohort, with good calibration. The prediction rule is freely available as a web-app (COVID-CALC: https://sites.google.com/community.unipa.it/covid-19riskpredictions/c19-rp). Conclusions A validated simple clinical prediction rule can promptly and accurately assess the risk of in-hospital mortality, improving triage and the management of patients with COVID-19.
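The discrimination and calibration checks reported above (AUC for discrimination, Brier score for overall accuracy on a separate validation cohort) can be sketched as below. For simplicity this uses plain logistic regression rather than the Fine and Gray competing-risks model, and all cohorts and covariates are simulated stand-ins.

```python
# Sketch: derivation/validation evaluation with AUC and Brier score.
# Simulated data; logistic regression stands in for the competing-risks model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(1)

def simulate(n):
    """Simulated stand-in cohort with three covariates (age, sex, LDH)."""
    age = rng.normal(70, 12, n)
    male = rng.binomial(1, 0.6, n)
    ldh = rng.normal(300, 80, n)
    logit = -8.0 + 0.08 * age + 0.5 * male + 0.003 * ldh
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return np.column_stack([age, male, ldh]), y

X_dev, y_dev = simulate(1810)   # derivation-cohort stand-in
X_val, y_val = simulate(381)    # validation-cohort stand-in

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
p_val = model.predict_proba(X_val)[:, 1]

auc = roc_auc_score(y_val, p_val)       # discrimination
brier = brier_score_loss(y_val, p_val)  # calibration / overall accuracy
print(f"validation AUC = {auc:.3f}, Brier score = {brier:.3f}")
```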


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tomoharu Suzuki ◽  
David Itokazu ◽  
Yasuharu Tokuda

Abstract The Ottawa subarachnoid hemorrhage (OSAH) rule is a validated clinical prediction rule for ruling out subarachnoid hemorrhage (SAH). Another SAH rule (the Ottawa-like rule) was developed in Japan but has not been well validated. We aimed to validate both rules by examining their sensitivity for ruling out SAH in Japanese patients diagnosed with SAH. We conducted a retrospective cohort study by reviewing the medical records of consecutive adult patients hospitalized with SAH at a tertiary-care teaching hospital in Japan who visited our emergency department between July 2009 and June 2019. Sensitivity and its 95% confidence interval (CI) were estimated for each rule for the diagnosis of SAH. Of a total of 280 patients with SAH, 56 (20.0%) met the inclusion criteria for the OSAH rule and were analyzed; the sensitivity of the OSAH rule was 56/56 (100%; 95% CI 93.6–100%). Meanwhile, 126 (45%) patients met the inclusion criteria of the Ottawa-like rule, which showed a sensitivity of 125/126 (99.2%; 95% CI 95.7–100%). The OSAH rule showed 100% sensitivity among our Japanese patients diagnosed with SAH. Implementation of the Ottawa-like rule should be cautious because the false-negative rate may be as high as 4%.
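The quoted confidence intervals follow from exact (Clopper-Pearson) binomial limits on the observed sensitivities; a short sketch with `statsmodels` reproduces both bounds.

```python
# Exact (Clopper-Pearson) 95% CIs for the sensitivities quoted above.
from statsmodels.stats.proportion import proportion_confint

# OSAH rule: 56/56 SAH cases correctly flagged
lo_osah, hi_osah = proportion_confint(56, 56, alpha=0.05, method="beta")
# Ottawa-like rule: 125/126 SAH cases correctly flagged
lo_like, hi_like = proportion_confint(125, 126, alpha=0.05, method="beta")

print(f"OSAH:        100.0%, 95% CI {lo_osah:.1%}-{hi_osah:.0%}")
print(f"Ottawa-like: {125/126:.1%}, 95% CI {lo_like:.1%}-{hi_like:.0%}")
```

The lower bound for 56/56 is (0.025)^(1/56) ≈ 93.6%, matching the abstract.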


2021 ◽  
Vol 28 (1) ◽  
pp. e100267
Author(s):  
Keerthi Harish ◽  
Ben Zhang ◽  
Peter Stella ◽  
Kevin Hauck ◽  
Marwa M Moussa ◽  
...  

Objectives Predictive studies play important roles in the development of models informing care for patients with COVID-19. Our concern is that studies producing ill-performing models may lead to inappropriate clinical decision-making. Thus, our objective is to summarise and characterise the performance of prognostic models for COVID-19 on external data. Methods We performed a validation of parsimonious prognostic models for patients with COVID-19 from a literature search for published and preprint articles. Ten models meeting inclusion criteria were either (a) externally validated with our data against the model variables and weights or (b) rebuilt using original features if no weights were provided. Nine studies had internally or externally validated models on cohorts of between 18 and 320 inpatients with COVID-19. One model used cross-validation. Our external validation cohort consisted of 4444 patients with COVID-19 hospitalised between 1 March and 27 May 2020. Results Most models failed validation when applied to our institution’s data. Included studies reported an average validation area under the receiver operating characteristic curve (AUROC) of 0.828. Models applied with reported features averaged an AUROC of 0.66 when validated on our data. Models rebuilt with the same features averaged an AUROC of 0.755 when validated on our data. In both cases, models did not validate against their studies’ reported AUROC values. Discussion Published and preprint prognostic models for patients infected with COVID-19 performed substantially worse when applied to external data. Further inquiry is required to elucidate the mechanisms underlying these performance deviations. Conclusions Clinicians should employ caution when applying models for clinical prediction without careful validation on local data.
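Validating "against the model variables and weights" means applying a published model's coefficients directly to local data and scoring discrimination, rather than refitting. A minimal sketch follows; the weights and covariates are hypothetical, not taken from any of the reviewed models, and the local outcomes are simulated from a slightly different process to mimic the dataset shift that degrades external performance.

```python
# Sketch: external validation of a published logistic model's fixed weights.
# Weights are hypothetical; data are simulated with deliberate dataset shift.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 4444
age = rng.normal(65, 15, n)     # age in years (simulated)
crp = rng.normal(80, 40, n)     # C-reactive protein, mg/L (simulated)

# Hypothetical published intercept and weights (illustration only)
published = {"intercept": -6.0, "age": 0.05, "crp": 0.01}
score = published["intercept"] + published["age"] * age + published["crp"] * crp
p_published = 1 / (1 + np.exp(-score))

# Local outcomes come from a shifted data-generating process
true_logit = -5.5 + 0.04 * age + 0.008 * crp + rng.normal(0, 1.5, n)
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

auroc = roc_auc_score(y, p_published)
print(f"external AUROC = {auroc:.3f}")
```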


PEDIATRICS ◽  
2018 ◽  
Vol 141 (5) ◽  
pp. e20173674 ◽  
Author(s):  
Helena Pfeiffer ◽  
Anne Smith ◽  
Alison Mary Kemp ◽  
Laura Elizabeth Cowley ◽  
John A. Cheek ◽  
...  

2019 ◽  
Author(s):  
Matthias Gijsen ◽  
Chao-yuan Huang ◽  
Marine Flechet ◽  
Ruth Van Daele ◽  
Peter Declercq ◽  
...  

Abstract Background Augmented renal clearance (ARC) might lead to subtherapeutic plasma levels of drugs with predominant renal clearance. Early identification of ARC remains challenging for the intensive care unit (ICU) physician. We developed and validated the ARC predictor, a clinical prediction model for ARC on the next day during ICU stay, and made it available via an online calculator. Its predictive performance was compared with that of two existing models for ARC. Methods A large multicenter database including medical, surgical and cardiac ICU patients (n = 33258 ICU days) from three Belgian tertiary care academic hospitals was used for the development of the prediction model. Development was based on clinical information available during ICU stay. We assessed performance by measuring discrimination, calibration and net benefit. The final model was externally validated (n = 10259 ICU days) in a single-center population. Results ARC was found on 19.6% of all ICU days in the development cohort. Six clinical variables were retained in the ARC predictor: day from ICU admission, age, sex, serum creatinine, trauma and cardiac surgery. External validation confirmed good performance with an area under the curve of 0.88 (95% CI 0.87–0.88), and a sensitivity and specificity of 84.1 (95% CI 82.5–85.7) and 76.3 (95% CI 75.4–77.2) at the default threshold probability of 0.2, respectively. Conclusion ARC on the next day can be predicted with good performance during ICU stay, using routinely collected clinical information that is readily available at bedside. The ARC predictor is available at www.arcpredictor.com.
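Reporting sensitivity and specificity "at the default threshold probability of 0.2" amounts to dichotomizing the model's predicted probabilities at that cut-off; a minimal sketch on simulated predictions illustrates the computation.

```python
# Sketch: sensitivity and specificity of a risk model at a fixed
# probability threshold. Predictions and outcomes are simulated.
import numpy as np

rng = np.random.default_rng(5)
n = 10000
p = rng.beta(1.5, 6, n)        # simulated predicted next-day ARC probabilities
y = rng.binomial(1, p)         # simulated outcomes consistent with those risks

threshold = 0.2                # the default threshold used above
pred = p >= threshold
sens = np.sum(pred & (y == 1)) / np.sum(y == 1)
spec = np.sum(~pred & (y == 0)) / np.sum(y == 0)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```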


2020 ◽  
Author(s):  
Chundong Zhang ◽  
Zubing Mei ◽  
Junpeng Pei ◽  
Masanobu Abe ◽  
Xiantao Zeng ◽  
...  

Abstract Background The American Joint Committee on Cancer (AJCC) 8th tumor/node/metastasis (TNM) classification for colorectal cancer (CRC) has limited ability to predict prognosis. Methods We included 45,379 eligible stage I-III CRC patients from the Surveillance, Epidemiology, and End Results Program. Patients were randomly assigned individually to a training (N = 31,772) or an internal validation cohort (N = 13,607). External validation was performed in 10,902 additional patients. Patients were divided according to T and N stage permutations. Survival analyses were conducted by a Cox proportional hazards model and Kaplan-Meier analysis, with T1N0 as the reference. The area under the receiver operating characteristic curve (AUC) and the Akaike information criterion (AIC) were applied for prognostic discrimination and model-fitting, respectively. Clinical benefits were further assessed by decision curve analyses. Results We created a modified TNM (mTNM) classification: stages I (T1-2N0-1a), IIA (T1N1b, T2N1b, T3N0), IIB (T1-2N2a-2b, T3N1a-1b, T4aN0), IIC (T3N2a, T4aN1a-2a, T4bN0), IIIA (T3N2b, T4bN1a), IIIB (T4aN2b, T4bN1b), and IIIC (T4bN2a-2b). In the internal validation cohort, compared to the AJCC 8th TNM classification, the mTNM classification showed superior prognostic discrimination (AUC = 0.675 vs. 0.667, respectively; two-sided P < 0.001) and better model-fitting (AIC = 70,937 vs. 71,238, respectively). Similar findings were obtained in the external validation cohort. Decision curve analyses revealed that the mTNM had superior net benefits over the AJCC 8th TNM classification in the internal and external validation cohorts. Conclusions The mTNM classification provides better prognostic discrimination than the AJCC 8th TNM classification, with good applicability in various populations and settings, to help better stratify stage I-III CRC patients into prognostic groups.
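The decision curve analysis mentioned above compares the net benefit of acting on a model's risk predictions against treat-all and treat-none strategies across threshold probabilities. A minimal sketch on simulated risks, using the standard net-benefit formula NB = TP/n − FP/n × pt/(1 − pt):

```python
# Sketch: decision-curve net benefit versus treat-all/treat-none.
# Risks and outcomes are simulated, not taken from the study.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
risk = rng.beta(2, 6, n)       # simulated predicted risks from a staging model
y = rng.binomial(1, risk)      # simulated outcomes consistent with those risks

def net_benefit(y, risk, pt):
    """Net benefit at threshold pt: TP/n - FP/n * pt / (1 - pt)."""
    treat = risk >= pt
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / len(y) - fp / len(y) * pt / (1 - pt)

for pt in (0.1, 0.2, 0.3):
    nb_model = net_benefit(y, risk, pt)
    nb_all = net_benefit(y, np.ones(n), pt)    # treat-all comparator
    print(f"pt={pt:.1f}: model NB={nb_model:.3f}, treat-all NB={nb_all:.3f}")
```

Treat-none always has net benefit 0, so a useful model should sit above both comparators over clinically relevant thresholds.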


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S544-S544
Author(s):  
Joel Iverson Howard ◽  
Joni Aoki ◽  
Jeffrey Ferraro ◽  
Ben Haaland ◽  
Andrew Pavia ◽  
...  

Abstract Background Infectious diarrheal illness is a significant contributor to healthcare costs in the US pediatric population. New multi-pathogen PCR-based panels have shown increased sensitivity over previous methods; however, they are costly and their clinical utility may be limited in many cases. Clinical Prediction Rules (CPRs) may help optimize the appropriate use of these tests. Furthermore, Natural Language Processing (NLP) is an emerging tool to extract clinical history for decision support. Here, we examine NLP for the validation of a CPR for pediatric diarrhea. Methods Using data from a prospective clinical trial at 5 US pediatric hospitals, 961 diarrheal cases were assessed for etiology and relevant clinical variables. Of 65 variables collected in that study, 42 were excluded from our models based on a scarcity of documentation in the reviewed clinical charts. The remaining 23 variables were ranked by random forest (RF) variable importance and utilized in both an RF and a stepwise logistic regression (LR) model for viral-only etiology. We investigated whether NLP could accurately extract data from clinical notes comparable to study questionnaires. We used the eHOST abstraction software to abstract 6 clinical variables from patient charts that were useful in our CPR. These data will be used to train an NLP algorithm to extract the same variables from additional charts, and will be combined with data from 2 other variables coded in the EMR to externally validate our model. Results Both RF and LR models achieved cross-validated areas under the receiver operating characteristic curve of 0.74 using the top 5 variables (season, age, bloody diarrhea, vomiting/nausea, and fever), which did not improve significantly with the addition of more variables. Of 270 charts abstracted for NLP training, there were 41 annotated occurrences of bloody diarrhea, 339 occurrences of vomiting, and 145 occurrences of fever.
Inter-annotator agreement over 9 training sets ranged between 0.63 and 0.83. Conclusion We have constructed a parsimonious CPR involving only 5 inputs for the prediction of a viral-only etiology for pediatric diarrheal illness using prospectively collected data. With the training of an NLP algorithm for automated chart abstraction we will validate the CPR. NLP could allow a CPR to run without manual data entry to improve care. Disclosures All authors: No reported disclosures.
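The CPR-building step described above (rank candidate variables by random-forest importance, then check cross-validated AUC with a small subset) can be sketched as follows. The five variables mirror the abstract's top predictors, but the data are simulated and the effect sizes are invented.

```python
# Sketch: variable ranking by random-forest importance plus cross-validated
# AUC for a parsimonious logistic CPR. All data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 961
X = np.column_stack([
    rng.integers(0, 4, n),      # season (0-3)
    rng.uniform(0, 18, n),      # age in years
    rng.binomial(1, 0.1, n),    # bloody diarrhea
    rng.binomial(1, 0.5, n),    # vomiting/nausea
    rng.binomial(1, 0.4, n),    # fever
])
# Invented effects: winter and vomiting favor viral etiology,
# older age and bloody diarrhea argue against it
logit = -0.5 + 0.3 * (X[:, 0] == 0) - 0.05 * X[:, 1] - 2.0 * X[:, 2] + 0.8 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = viral-only etiology

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]   # most important first
auc_cv = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=5, scoring="roc_auc").mean()
print("importance ranking (feature indices):", ranking)
print(f"cross-validated AUC = {auc_cv:.2f}")
```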


2020 ◽  
Author(s):  
David N. Fisman ◽  
Amy L. Greer ◽  
Ashleigh R. Tuite

Abstract Background SARS-CoV-2 is currently causing a high-mortality global pandemic. However, the clinical spectrum of disease caused by this virus is broad, ranging from asymptomatic infection to cytokine storm with organ failure and death. Risk stratification of individuals with COVID-19 would be desirable for clinical management and for prioritizing trial enrollment. We sought to develop a prediction rule for mortality due to COVID-19 in individuals with diagnosed infection in Ontario, Canada. Methods Data from Ontario’s provincial iPHIS system were extracted for the period from January 23 to May 15, 2020. Both logistic regression-based prediction rules and a rule derived using a Cox proportional hazards model were developed in one half of the study population and validated in the remaining patients. Sensitivity analyses were performed with varying approaches to missing data. Results 21,922 COVID-19 cases were reported. Individuals assigned to the derivation and validation sets were broadly similar. Age and comorbidities (notably diabetes, renal disease and immune compromise) were strong predictors of mortality. Four point-based prediction rules were derived (base case, smoking excluded as a predictor, long-term care excluded as a predictor, and Cox model based). All rules displayed excellent discrimination (AUC for all rules > 0.92) and calibration (both by graphical inspection and P > 0.50 by Hosmer-Lemeshow test) in the derivation set. All rules performed well in the validation set and were robust to random replacement of missing variables, and to the assumption that missing variables indicated absence of the comorbidity or characteristic in question. Conclusions We were able to use a public health case-management data system to derive and internally validate four accurate, well-calibrated and robust clinical prediction rules for COVID-19 mortality in Ontario, Canada.
While these rules need external validation, they may be a useful tool for clinical management, risk stratification, and clinical trials.
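Point-based prediction rules like those described above are typically built by scaling each regression coefficient by the smallest coefficient and rounding to an integer, so bedside users can sum whole points. A minimal sketch with hypothetical coefficients (not the Ontario model's):

```python
# Sketch: converting logistic-regression coefficients into integer points.
# Coefficients below are invented for illustration only.
coefs = {                       # hypothetical log-odds contributions
    "age_per_decade": 0.65,
    "diabetes": 0.35,
    "renal_disease": 0.70,
    "immune_compromise": 0.55,
}
base = min(coefs.values())      # smallest coefficient anchors 1 point
points = {name: round(beta / base) for name, beta in coefs.items()}
print(points)                   # a patient's risk score is the sum of points
```

Rounding trades a little discrimination for usability, which is why such rules are re-checked for AUC and calibration after conversion.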

