Development and Validation of a Sepsis Mortality Risk Score for Sepsis-3 Patients in Intensive Care Unit

2021 ◽  
Vol 7 ◽  
Author(s):  
Kai Zhang ◽  
Shufang Zhang ◽  
Wei Cui ◽  
Yucai Hong ◽  
Gensheng Zhang ◽  
...  

Background: Many severity scores are widely used for clinical outcome prediction in critically ill patients in the intensive care unit (ICU). However, none of these scores was developed specifically for patients identified by the sepsis-3 criteria. This study aimed to develop and validate a risk stratification score for mortality prediction in sepsis-3 patients. Methods: In this retrospective cohort study, we employed the Medical Information Mart for Intensive Care III (MIMIC-III) database for model development and the eICU database for external validation. We identified septic patients by the sepsis-3 criteria on day 1 of ICU entry. The Least Absolute Shrinkage and Selection Operator (LASSO) technique was used to select predictive variables. We then developed a sepsis mortality prediction model and an associated risk stratification score, and compared their discrimination and calibration with those of traditional severity scores. Results: For model development, we enrolled a total of 5,443 patients fulfilling the sepsis-3 criteria; 30-day mortality was 16.7%. Among the 5,658 septic patients in the validation set, there were 1,135 deaths (mortality 20.1%). The score showed good discrimination in both the development and validation sets (area under the curve: 0.789 and 0.765, respectively). In the validation set, the calibration slope was 0.862 and the Brier score was 0.140. In the development dataset, the score stratified patients into low (3.2%), moderate (12.4%), high (30.7%), and very high (68.1%) mortality-risk groups; the corresponding mortality rates in the validation dataset were 2.8%, 10.5%, 21.1%, and 51.2%. As shown by decision curve analysis, the score always had a positive net benefit. Conclusion: The score, termed the Sepsis Mortality Risk Score (SMRS), showed moderate discrimination and calibration and allows stratification of patients by mortality risk; however, it still requires further modification and external validation.
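The LASSO variable-selection step described above can be sketched as follows. This is a minimal illustration on synthetic data (the MIMIC-III variable set is not reproduced here), using scikit-learn's L1-penalized logistic regression, which is the LASSO for a binary outcome:

```python
# Sketch of LASSO-based predictor selection, as described in the abstract.
# Data are synthetic placeholders, not the MIMIC-III variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 6))
# The outcome depends only on the first two columns; the L1 penalty
# should shrink the coefficients of the pure-noise columns toward zero.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# L1-penalized logistic regression; C controls the penalty strength.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)

# Predictors with nonzero coefficients are the ones "selected" by LASSO.
selected = np.flatnonzero(lasso.coef_[0] != 0)
```

The surviving nonzero coefficients define the candidate predictors that would then enter the final scoring model.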

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1582
Author(s):  
Tawsifur Rahman ◽  
Fajer A. Al-Ishaq ◽  
Fatima S. Al-Mohannadi ◽  
Reem S. Mubarak ◽  
Maryam H. Al-Hitmi ◽  
...  

Healthcare researchers have been working on mortality prediction for COVID-19 patients with differing levels of severity. A rapid and reliable clinical evaluation of disease intensity will assist in the allocation and prioritization of mortality-mitigation resources. The novelty of the work proposed in this paper is an early prediction model of high mortality risk for both COVID-19 and non-COVID-19 patients, which provides state-of-the-art performance in an external validation cohort from a different population. Retrospective research was performed on two separate hospital datasets from two different countries for model development and validation. In the first dataset, COVID-19 and non-COVID-19 patients were admitted to the emergency department in Boston (24 March 2020 to 30 April 2020); in the second, 375 COVID-19 patients were admitted to Tongji Hospital in China (10 January 2020 to 18 February 2020). The key parameters for predicting mortality risk in COVID-19 and non-COVID-19 patients were identified, and a nomogram-based scoring technique was developed using the five top-ranked parameters. Age, lymphocyte count, D-dimer, CRP, and creatinine (ALDCC), all acquired at hospital admission, were identified by the logistic regression model as the primary predictors of hospital death. For the development cohort and the internal and external validation cohorts, the areas under the curve (AUCs) were 0.987, 0.999, and 0.992, respectively. All patients were categorized into three groups using the ALDCC score and death probability: low (probability < 5%), moderate (5% < probability < 50%), and high (probability > 50%) risk groups. The prognostic model, nomogram, and ALDCC score can assist in the early identification of both COVID-19 and non-COVID-19 patients at high mortality risk, helping physicians to improve patient management.
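The three-tier grouping by predicted death probability can be expressed as a small helper. The thresholds (5% and 50%) come from the abstract; the function name is ours:

```python
# Sketch of the ALDCC-style risk grouping described above, applied to a
# model-predicted death probability. Thresholds are from the abstract.
def risk_group(probability: float) -> str:
    """Map a predicted death probability to a risk tier."""
    if probability < 0.05:
        return "Low"
    elif probability < 0.50:
        return "Moderate"
    return "High"

# Example: three patients with increasing predicted mortality.
groups = [risk_group(p) for p in (0.01, 0.20, 0.75)]
```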


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Yan Luo ◽  
Zhiyu Wang ◽  
Cong Wang

Abstract Background: Prognostication is an essential tool for risk adjustment and decision making in intensive care units (ICUs). To improve patient outcomes, we sought to develop a model more effective than the Acute Physiology and Chronic Health Evaluation (APACHE) II score for measuring the severity of illness of ICU patients. The aim of the present study was to provide a mortality prediction model for ICU patients and to assess its performance relative to prediction based on the APACHE II scoring system. Methods: We used the Medical Information Mart for Intensive Care version III (MIMIC-III) database to build our model. After comparing APACHE II with six typical machine learning (ML) methods, the best-performing model was selected for external validation on another independent dataset. Performance measures were calculated using cross-validation to avoid biased assessments. The primary outcome was hospital mortality. Finally, we used the TreeSHAP algorithm to explain the variable relationships in the extreme gradient boosting (XGBoost) model. Results: We selected 14 variables from 24,777 cases to form our basic dataset. When the variables were the same as those contained in APACHE II, the accuracy of XGBoost (0.858) was higher than that of APACHE II (0.742) and the other algorithms, and XGBoost exhibited better calibration than the other methods, with an area under the ROC curve (AUC) of 0.76. We then expanded the variable set by adding five new variables to improve performance. The accuracy, precision, recall, F1, and AUC of the XGBoost model increased and remained higher than those of the other models (0.866, 0.853, 0.870, 0.845, and 0.81, respectively). On the external validation dataset, the AUC was 0.79 and calibration was good.
Conclusions: Compared with the conventional APACHE II severity score, our XGBoost model offers improved performance for predicting hospital mortality in ICU patients. Furthermore, TreeSHAP can enhance understanding of the model by providing detailed insights into the impact of different features on disease risk. In sum, our model could help clinicians determine prognosis and improve patient outcomes.
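The kind of comparison reported above, a boosted-tree model against a linear baseline on the same variables, can be sketched on synthetic data. scikit-learn's GradientBoostingClassifier stands in for XGBoost here (the study's own code and MIMIC-III data are not reproduced), and the outcome is deliberately nonlinear so the trees have something to gain:

```python
# Sketch: gradient boosting vs. logistic regression on a nonlinear outcome.
# GradientBoostingClassifier is a stand-in for XGBoost; data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 5))
# Interaction and squared terms: invisible to a purely linear model.
logits = X[:, 0] * X[:, 1] + X[:, 2] ** 2 - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression().fit(X_tr, y_tr)

auc_gbm = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
```

On held-out data the boosted trees recover the nonlinear structure while the linear model cannot, mirroring the accuracy gap the study reports.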


2021 ◽  
Vol 9 ◽  
Author(s):  
Fu-Sheng Chou ◽  
Laxmi V. Ghimire

Background: Pediatric myocarditis is a rare disease. The etiologies are multiple. Mortality associated with the disease is 5–8%. Prognostic factors were identified with the use of national hospitalization databases. Applying these identified risk factors for mortality prediction has not been reported.Methods: We used the Kids' Inpatient Database for this project. We manually curated fourteen variables as predictors of mortality based on the current knowledge of the disease, and compared performance of mortality prediction between linear regression models and a machine learning (ML) model. For ML, the random forest algorithm was chosen because of the categorical nature of the variables. Based on variable importance scores, a reduced model was also developed for comparison.Results: We identified 4,144 patients from the database for randomization into the primary (for model development) and testing (for external validation) datasets. We found that the conventional logistic regression model had low sensitivity (~50%) despite high specificity (&gt;95%) or overall accuracy. On the other hand, the ML model struck a good balance between sensitivity (89.9%) and specificity (85.8%). The reduced ML model with top five variables (mechanical ventilation, cardiac arrest, ECMO, acute kidney injury, ventricular fibrillation) were sufficient to approximate the prediction performance of the full model.Conclusions: The ML algorithm performs superiorly when compared to the linear regression model for mortality prediction in pediatric myocarditis in this retrospective dataset. Prospective studies are warranted to further validate the applicability of our model in clinical settings.


2019 ◽  
Vol 98 (10) ◽  
pp. 1088-1095 ◽  
Author(s):  
J. Krois ◽  
C. Graetz ◽  
B. Holtfreter ◽  
P. Brinkmann ◽  
T. Kocher ◽  
...  

Prediction models learn patterns from available data (training) and are then validated on new data (testing). Prediction modeling is increasingly common in dental research. We aimed to evaluate how different model development and validation steps affect the predictive performance of tooth loss prediction models of patients with periodontitis. Two independent cohorts (627 patients, 11,651 teeth) were followed over a mean ± SD 18.2 ± 5.6 y (Kiel cohort) and 6.6 ± 2.9 y (Greifswald cohort). Tooth loss and 10 patient- and tooth-level predictors were recorded. The impact of different model development and validation steps was evaluated: 1) model complexity (logistic regression, recursive partitioning, random forest, extreme gradient boosting), 2) sample size (full data set or 10%, 25%, or 75% of cases dropped at random), 3) prediction periods (maximum 10, 15, or 20 y or uncensored), and 4) validation schemes (internal or external by centers/time). Tooth loss was generally a rare event (880 teeth were lost). All models showed limited sensitivity but high specificity. Patients’ age and tooth loss at baseline as well as probing pocket depths showed high variable importance. More complex models (random forest, extreme gradient boosting) had no consistent advantages over simpler ones (logistic regression, recursive partitioning). Internal validation (in sample) overestimated the predictive power (area under the curve up to 0.90), while external validation (out of sample) found lower areas under the curve (range 0.62 to 0.82). Reducing the sample size decreased the predictive power, particularly for more complex models. Censoring the prediction period had only limited impact. When the model was trained in one period and tested in another, model outcomes were similar to the base case, indicating temporal validation as a valid option. No model showed higher accuracy than the no-information rate. 
In conclusion, none of the developed models would be useful in a clinical setting, despite high accuracy. During modeling, rigorous development and external validation should be applied and reported accordingly.
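The internal-versus-external validation gap noted above can be demonstrated on synthetic data: scoring a flexible model on its own training sample inflates the AUC relative to held-out data. A random forest is used here purely as an example of a complex learner; the data are not the Kiel or Greifswald cohorts:

```python
# Sketch: in-sample (internal) validation overestimates predictive power
# relative to out-of-sample (external) validation. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(600, 10))
logits = X[:, 0] - X[:, 1]  # modest true signal, many noise features
y = (rng.random(600) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

auc_internal = roc_auc_score(y_tr, rf.predict_proba(X_tr)[:, 1])  # in sample
auc_external = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])  # held out
```

The fully grown forest nearly memorizes its training sample, so the in-sample AUC approaches 1.0 while the held-out AUC reflects the true, more modest signal.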


2007 ◽  
Vol 37 (3) ◽  
pp. 580-597 ◽  
Author(s):  
Adrian J. Das ◽  
John J. Battles ◽  
Nathan L. Stephenson ◽  
Phillip J. van Mantgem

We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk.


Author(s):  
A. Yu. Zemchenkov ◽  
R. P. Gerasimchuk ◽  
A. B. Sabodash ◽  
K. A. Vishnevskii ◽  
G. A. Zemchenkov ◽  
...  

Aim: The optimal time for initiating chronic dialysis remains unknown. A scale for mortality risk assessment could help in decision-making concerning dialysis start timing. Methods: We randomly divided 1,856 patients who started dialysis in 2009–2016 into development and validation groups (1:1) to create and validate a scoring system, «START», predicting mortality risk at dialysis initiation, in order to find unmodifiable and modifiable factors that could inform the decision to start dialysis. In a series of univariate regression models in the development set, we evaluated the mortality risk associated with the available parameters: age, eGFR, serum phosphate, total calcium, hemoglobin, Charlson comorbidity index, diabetes status, and urgency of start (which turned out to be significant), as well as gender, serum sodium, potassium, and blood pressure (without impact on survival). Hazard ratios were converted to score points. Results: The START score was highly predictive of death: the C-statistic was 0.82 (95% CI 0.79–0.85) for the development dataset and 0.79 (95% CI 0.74–0.84) for the validation dataset (both p < 0.001). Applying a cutoff between 7 and 8 points in the development dataset, the risk score was highly sensitive (81.1%) and specific (67.9%); for the validation dataset, sensitivity was 78.9% and specificity 67.9%. We confirmed that survival prediction in the validation set was similar to the development set in the low, medium, and high START score groups. The difference in survival between the three levels of START score in the validation set remained similar to that in the development set: Wilcoxon = 8.78 (p = 0.02) vs. 15.31 (p < 0.001) comparing low–medium levels, and 25.18 (p < 0.001) vs. 39.21 (p < 0.001) comparing medium–high levels. Conclusion: The developed START score system, which includes modifiable factors, showed good mortality prediction and could be used in decision-making about dialysis initiation.
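Converting hazard ratios to integer score points, as done for START, is commonly implemented by scaling log hazard ratios so the smallest effect maps to one point. The abstract does not give the exact START mapping, so the ratios and variable names below are illustrative:

```python
# One common convention for turning hazard ratios into score points:
# points proportional to log(HR), normalized so the weakest risk factor
# scores 1 point. Hazard ratios below are illustrative, not START's.
import math

hazard_ratios = {"age_per_decade": 1.4, "diabetes": 1.7, "urgent_start": 2.0}

base = math.log(min(hazard_ratios.values()))  # weakest effect = 1 point
points = {name: round(math.log(hr) / base) for name, hr in hazard_ratios.items()}
```

A patient's total score is then the sum of points for the risk factors present, and a cutoff (7–8 points for START) separates the risk tiers.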


2019 ◽  
Vol 4 (6) ◽  
pp. e001801
Author(s):  
Sarah Hanieh ◽  
Sabine Braat ◽  
Julie A Simpson ◽  
Tran Thi Thu Ha ◽  
Thach D Tran ◽  
...  

Introduction: Globally, an estimated 151 million children under 5 years of age still suffer from the adverse effects of stunting. We sought to develop and externally validate an early-life predictive model that could be applied in infancy to accurately predict the risk of stunting in preschool children. Methods: We conducted two separate prospective cohort studies in Vietnam that intensively monitored children from early pregnancy until 3 years of age. They included 1,168 and 475 live-born infants for model development and validation, respectively. Logistic regression on child stunting at 3 years of age was performed for model development, and the predicted probabilities for stunting were used to evaluate the performance of this model in the validation dataset. Results: Stunting prevalence was 16.9% (172 of 1,015) in the development dataset and 16.4% (70 of 426) in the validation dataset. Key predictors included in the final model were paternal and maternal height, maternal weekly weight gain during pregnancy, infant sex, gestational age at birth, and infant weight and length at 6 months of age. The area under the receiver operating characteristic curve in the validation dataset was 0.85 (95% confidence interval, 0.80–0.90). Conclusion: This tool, applied to infants at 6 months of age, provided valid prediction of the risk of stunting at 3 years of age using a readily available set of parental and infant measures. Further research is required to examine the impact of preventive measures introduced at 6 months of age on those identified as being at risk of growth faltering at 3 years of age.


2015 ◽  
Vol 42 (1) ◽  
pp. 57-64 ◽  
Author(s):  
Tetsu Ohnuma ◽  
Shigehiko Uchino ◽  
Noriyoshi Toki ◽  
Kenta Takeda ◽  
Yoshitomo Namba ◽  
...  

Background/Aims: Acute kidney injury (AKI) is associated with high mortality, and multiple AKI severity scores have been derived to predict patient outcome. We externally validated new AKI severity scores using the Japanese Society for Physicians and Trainees in Intensive Care (JSEPTIC) database. Methods: New AKI severity scores published in the 21st century (Mehta, Stuivenberg Hospital Acute Renal Failure (SHARF) II, Program to Improve Care in Acute Renal Disease (PICARD), Vellore, and Demirjian), along with the Liano score, the Simplified Acute Physiology Score (SAPS) II, and lactate, were compared using the JSEPTIC database, which retrospectively collected 343 patients with AKI who required continuous renal replacement therapy (CRRT) in 14 intensive care units. Accuracy of the severity scores was assessed by the area under the receiver operating characteristic curve (AUROC; discrimination) and the Hosmer-Lemeshow test (H-L test; calibration). Results: The median age was 69 years, and 65.8% of patients were male. The median SAPS II score was 53, and hospital mortality was 58.6%. The AUROCs revealed low discrimination ability for the new AKI severity scores (Mehta 0.65, SHARF II 0.64, PICARD 0.64, Vellore 0.64, Demirjian 0.69), similar to Liano (0.67), SAPS II (0.67), and lactate (0.64). The H-L test also demonstrated that all assessed scores except Liano had significantly poor calibration. Conclusions: Using a multicenter database of AKI patients requiring CRRT, this study externally validated new AKI severity scores. While Demirjian's and Liano's scores showed better performance, further research is required to confirm these findings.
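The two evaluation tools used in this study, AUROC for discrimination and the Hosmer-Lemeshow test for calibration, can be sketched on synthetic predictions. This is not the JSEPTIC analysis; the decile-based H-L statistic below is one standard formulation:

```python
# Sketch of AUROC (discrimination) and a decile-based Hosmer-Lemeshow
# statistic (calibration) on synthetic, well-calibrated predictions.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
p = rng.uniform(0.05, 0.95, size=n)   # predicted mortality probabilities
y = (rng.random(n) < p).astype(int)   # outcomes drawn from those probabilities

auroc = roc_auc_score(y, p)

# Hosmer-Lemeshow: chi-square over deciles of predicted risk.
edges = np.quantile(p, np.linspace(0, 1, 11))
hl_stat = 0.0
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (p >= lo) & (p < hi) if hi < edges[-1] else (p >= lo)
    obs, exp, m = y[mask].sum(), p[mask].sum(), mask.sum()
    hl_stat += (obs - exp) ** 2 / (exp * (1 - exp / m))
# g - 2 degrees of freedom is the usual choice for development data (g = 10).
hl_pvalue = chi2.sf(hl_stat, df=8)
```

A large H-L statistic (small p-value) flags poor calibration, which is how the study concluded that most of the new scores calibrated badly despite similar AUROCs.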


2021 ◽  
Vol 10 (8) ◽  
pp. 1615
Author(s):  
Jaime Feliu ◽  
Alvaro Pinto ◽  
Laura Basterretxea ◽  
Borja López-San Vicente ◽  
Irene Paredero ◽  
...  

Background: Estimation of life expectancy in older patients is relevant for selecting the best treatment strategy. We aimed to develop and validate a score to predict early mortality in older patients with cancer. Patients and Methods: A total of 749 patients aged over 70 years starting new chemotherapy regimens were prospectively included. A pre-chemotherapy assessment covering sociodemographic, tumor/treatment, and geriatric assessment variables was performed. The association between these factors and early death was examined using multivariable logistic regression, and score points were assigned to each risk factor. External validation was performed on an independent cohort. Results: In the training cohort, the independent predictors of 6-month mortality were metastatic stage (OR 4.8, 95% CI [2.4–9.6]), ECOG-PS 2 (OR 2.3, 95% CI [1.1–5.2]), ADL ≤ 5 (OR 1.7, 95% CI [1.1–3.5]), serum albumin ≤ 3.5 g/dL (OR 3.4, 95% CI [1.7–6.6]), BMI < 23 kg/m2 (OR 2.5, 95% CI [1.3–4.9]), and hemoglobin < 11 g/dL (OR 2.4, 95% CI [1.2–4.7]). With these results, we built a prognostic score. The area under the ROC curve was 0.78 (95% CI, 0.73–0.84), and in the validation set it was 0.73 (95% CI, 0.67–0.79). Conclusions: This simple and highly accurate tool can help physicians make treatment decisions for elderly patients with cancer who are planned to initiate chemotherapy.


2021 ◽  
Author(s):  
Luis Serviá ◽  
Juan Antonio Llompart-Pou ◽  
Mario Chico-Fernández ◽  
Neus Montserrat ◽  
Mariona Badia ◽  
...  

Abstract Background: Severity scores are commonly used for outcome adjustment and benchmarking of trauma care. No models developed specifically in critically ill trauma patients are available. Our objective was to develop a new score for early mortality prediction in trauma ICU patients. Methods: This was a retrospective study using the Spanish Trauma ICU registry (RETRAUCI), 2015–2019. Patients were divided into derivation (2015–2017) and validation (2018–2019) sets. Candidate variables associated with mortality were those available in RETRAUCI that could be collected in the first 24 hours after ICU admission. Using logistic regression, a simple score (RETRASCORE) was created, with points assigned to each selected variable. Model performance was assessed with global measures, discrimination, and calibration. Results: The analysis included 9,465 patients (derivation set: 5,976; validation set: 3,489). Thirty-day mortality was 12.2%. The predicted probability of 30-day mortality was determined by the following equation: 1 / (1 + exp(−y)), where y = 0.598 (age 50–65) + 1.239 (age 66–75) + 2.198 (age > 75) + 0.349 (PRECOAG) + 0.336 (pre-hospital intubation) + 0.662 (high-risk mechanism) + 0.950 (unilateral mydriasis) + 3.217 (bilateral mydriasis) + 0.841 (Glasgow ≤ 8) + 0.495 (MAIS-Head) − 0.271 (MAIS-Thorax) + 1.148 (hemodynamic failure) + 0.708 (respiratory failure) + 0.567 (coagulopathy) + 0.580 (mechanical ventilation) + 0.452 (massive haemorrhage) − 5.432. The AUROC was 0.913 (0.903–0.923) in the derivation set and 0.929 (0.918–0.940) in the validation set. Conclusions: The newly developed RETRASCORE is an early, easy-to-calculate, and specific score to predict in-hospital mortality in trauma ICU patients. Although it has achieved adequate internal validation, it must still be externally validated.
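The published equation can be coded directly. The coefficients and intercept are taken verbatim from the abstract; the snake_case indicator names are our paraphrases of the listed risk factors, each set to 1 when the factor is present:

```python
# The RETRASCORE 30-day mortality equation quoted above, coded directly.
# Coefficients are verbatim from the abstract; key names are paraphrased.
import math

COEFFS = {
    "age_50_65": 0.598, "age_66_75": 1.239, "age_over_75": 2.198,
    "precoag": 0.349, "prehospital_intubation": 0.336,
    "high_risk_mechanism": 0.662, "unilateral_mydriasis": 0.950,
    "bilateral_mydriasis": 3.217, "glasgow_le_8": 0.841,
    "mais_head": 0.495, "mais_thorax": -0.271,
    "hemodynamic_failure": 1.148, "respiratory_failure": 0.708,
    "coagulopathy": 0.567, "mechanical_ventilation": 0.580,
    "massive_haemorrhage": 0.452,
}
INTERCEPT = -5.432

def retrascore_probability(indicators: dict) -> float:
    """Predicted 30-day mortality from binary (0/1) risk-factor indicators."""
    y = INTERCEPT + sum(COEFFS[name] * value for name, value in indicators.items())
    return 1 / (1 + math.exp(-y))

# A patient with no risk factors present has a very low predicted mortality.
p_baseline = retrascore_probability({})
```

Each additional risk factor shifts y upward (except MAIS-Thorax, whose coefficient is negative), so predicted mortality rises monotonically with the others.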

