Comparison of in-hospital mortality risk prediction models from COVID-19

PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0244629
Author(s):  
Ali A. El-Solh ◽  
Yolanda Lawson ◽  
Michael Carter ◽  
Daniel A. El-Solh ◽  
Kari A. Mergenhagen

Objective Our objective was to compare the predictive accuracy of four recently established outcome models for patients hospitalized with coronavirus disease 2019 (COVID-19) published between January 1st and May 1st, 2020. Methods We used data obtained from the Veterans Affairs Corporate Data Warehouse (CDW) between January 1st, 2020, and May 1st, 2020 as an external validation cohort. The outcome measure was hospital mortality. Areas under the receiver operating characteristic curves (AUC) were used to evaluate the discrimination of the four predictive models. The Hosmer–Lemeshow (HL) goodness-of-fit test and calibration curves assessed the applicability of the models to individual cases. Results During the study period, 1634 unique patients were identified. The mean age of the study cohort was 68.8±13.4 years. Hypertension, hyperlipidemia, and heart disease were the most common comorbidities. The crude hospital mortality was 29% (95% confidence interval [CI] 0.27–0.31). Evaluation of the predictive models showed an AUC range from 0.63 (95% CI 0.60–0.66) to 0.72 (95% CI 0.69–0.74), indicating poor to fair discrimination across all models. There were no significant differences among the AUC values of the four prognostic systems. All models calibrated poorly, either overestimating or underestimating hospital mortality. Conclusions All four prognostic models examined in this study carry a high risk of bias. The performance of these scores should be interpreted with caution in hospitalized patients with COVID-19.
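The two performance measures used above (and throughout the studies below) can be made concrete with a minimal sketch. This is illustrative code on toy data, not the study's cohort or implementation: a rank-based AUC for discrimination and the Hosmer–Lemeshow chi-square statistic for calibration, computed from predicted risks and observed outcomes.

```python
def auc(y_true, y_score):
    """Rank-based AUC: probability a random positive case outranks a random negative one."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def hosmer_lemeshow_chi2(y_true, y_score, groups=10):
    """HL chi-square over risk-ordered groups; larger values indicate worse calibration."""
    order = sorted(range(len(y_true)), key=lambda i: y_score[i])
    n, chi2 = len(order), 0.0
    for g in range(groups):
        idx = order[g * n // groups:(g + 1) * n // groups]
        if not idx:
            continue
        observed = sum(y_true[i] for i in idx)   # observed events in the group
        expected = sum(y_score[i] for i in idx)  # sum of predicted risks in the group
        if 0 < expected < len(idx):
            chi2 += (observed - expected) ** 2 / (expected * (1 - expected / len(idx)))
    return chi2
```

A model can discriminate well (high AUC) yet still "calibrate poorly" in the sense used above: the HL statistic grows whenever predicted and observed event counts diverge within risk groups, even if the ranking of patients is correct.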

2018 ◽  
Vol 17 (8) ◽  
pp. 675-689 ◽  
Author(s):  
Satish M Mahajan ◽  
Paul Heidenreich ◽  
Bruce Abbott ◽  
Ana Newton ◽  
Deborah Ward

Aims: Readmission rates for patients with heart failure have remained consistently high over the past two decades. As more electronic data, computing power, and newer statistical techniques become available, data-driven care could be achieved by creating predictive models for adverse outcomes such as readmissions. We therefore aimed to review models for predicting risk of readmission in patients admitted for heart failure. We also aimed to analyze and, where possible, group the predictors used across the models. Methods: Major electronic databases were searched to identify studies that examined the correlation between readmission for heart failure and risk factors using multivariate models. We rigorously followed the review process using PRISMA methodology and other established criteria for quality assessment of the studies. Results: We reviewed 334 papers in detail and found 25 multivariate predictive models built using data from either health systems or trials. The majority of models were built using multiple logistic regression, followed by Cox proportional hazards regression. Some newer studies ventured into non-parametric and machine learning methods. Overall predictive accuracy, measured by C-statistics, ranged from 0.59 to 0.84. We examined significant predictors across the studies, grouping them into clinical, administrative, and psychosocial categories. Conclusions: Complex disease management and correspondingly increasing costs for heart failure are driving innovations in building risk prediction models for readmission. Large volumes of diverse electronic data and new statistical methods have improved the predictive power of the models over the past two decades. More work is needed on calibration, external validation, and deployment of such models for clinical use.
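Most of the reviewed models were multiple logistic regressions. As a toy sketch (illustrative predictors and data, not any reviewed study's model), fitting such a model reduces to estimating an intercept and one coefficient per predictor, then converting the linear score to a probability via the logistic function:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit a multiple logistic regression by per-sample gradient ascent on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the intercept; w[1:] match the predictors
    for _ in range(steps):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p  # gradient of the log-likelihood w.r.t. the linear score
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    """Predicted readmission probability for one patient's predictor vector."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
```

In practice such models are fit by maximum likelihood (e.g. iteratively reweighted least squares) rather than gradient descent, but the fitted object is the same: a coefficient vector whose predicted probabilities are then scored with a C-statistic.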


2020 ◽  
pp. 112070002094795 ◽  
Author(s):  
Rocío Menéndez-Colino ◽  
Alicia Gutiérrez Misis ◽  
Teresa Alarcon ◽  
Jesús Díez-Sebastián ◽  
Macarena Díaz de Bustamante ◽  
...  

Purpose: The aim of this study was to develop a new comprehensive preoperative risk score for predicting mortality during the first year after hip fracture (HF) and to compare it with 3 other risk prediction models. Methods: All patients admitted consecutively with a fragility HF during 1 year in a co-managed orthogeriatric unit at a university hospital were assessed and followed for 1 year. Factors independently associated with 1-year mortality were used to create the HULP-HF (Hospital Universitario La Paz – Hip Fracture) score. The predictive validity, discrimination and calibration of the HULP-HF score, the American Society of Anesthesiologists (ASA) scale, the abbreviated Charlson comorbidity index (a-CCI) and the Nottingham Hip Fracture Score (NHFS) were compared. Discriminative performance was assessed using the area under the curve (AUC) and calibration by the Hosmer-Lemeshow goodness-of-fit test. Results: 509 patients were included. 1-year mortality was 23.2%. The 8 independent mortality risk factors included in the HULP-HF score were age >85 years, baseline functional and cognitive impairment, low body mass index, heart disease, low hand-grip strength, anaemia on admission, and secondary hyperparathyroidism associated with vitamin D deficiency. The AUC was 0.79 for the HULP-HF score, 0.66 for the NHFS, 0.61 for the a-CCI and 0.59 for the ASA scale. The HULP-HF score, the NHFS and the a-CCI all presented good levels of calibration (p > 0.05). Conclusions: The HULP-HF score has a predictive capacity for 1-year mortality in HF patients slightly superior to that of other existing scores.


2014 ◽  
Vol 133 (3) ◽  
pp. 199-205 ◽  
Author(s):  
Ary Serpa Neto ◽  
Murillo Santucci Cesar de Assunção ◽  
Andréia Pardini ◽  
Eliézer Silva

CONTEXT AND OBJECTIVE: Prognostic models reflect the population characteristics of the countries from which they originate. Predictive models should be customized to fit the general population where they will be used. The aim here was to perform external validation of two predictive models and compare their performance in a mixed population of critically ill patients in Brazil. DESIGN AND SETTING: Retrospective study in a Brazilian general intensive care unit (ICU). METHODS: This was a retrospective review of all patients admitted to a 41-bed mixed ICU from August 2011 to September 2012. Calibration (assessed using the Hosmer-Lemeshow goodness-of-fit test) and discrimination (assessed using the area under the curve) of APACHE II and SAPS III were compared. The standardized mortality ratio (SMR) was calculated by dividing the number of observed deaths by the number of expected deaths. RESULTS: A total of 3,333 ICU patients were enrolled. The Hosmer-Lemeshow goodness-of-fit test showed good calibration for both models in relation to hospital mortality. For in-hospital mortality there was a worse fit for APACHE II in clinical patients. Discrimination was better for SAPS III for both in-ICU and in-hospital mortality (P = 0.042). The SMRs for the whole population were 0.27 (confidence interval [CI]: 0.23-0.33) for APACHE II and 0.28 (CI: 0.22-0.36) for SAPS III. CONCLUSIONS: In this group of critically ill patients, SAPS III was the better prognostic score, with higher discrimination and calibration.
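The SMR itself is the simple division described above; the paper does not state how its confidence intervals were obtained, so the interval below is an assumption for illustration, using Byar's approximation to the exact Poisson limits on the observed death count (a common choice for SMR intervals):

```python
def smr_with_ci(observed, expected, z=1.96):
    """SMR = observed / expected deaths, with an approximate 95% CI via Byar's method."""
    smr = observed / expected
    # Byar's approximation to the exact Poisson limits on the observed count
    lower = observed * (1 - 1 / (9 * observed) - z / (3 * observed ** 0.5)) ** 3 / expected
    upper = (observed + 1) * (1 - 1 / (9 * (observed + 1)) + z / (3 * (observed + 1) ** 0.5)) ** 3 / expected
    return smr, lower, upper
```

An SMR below 1 with an upper confidence limit also below 1, as reported for both scores here, indicates fewer observed deaths than the model predicts, i.e. the model overestimates mortality in this population.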


2020 ◽  
Author(s):  
Jenna Marie Reps ◽  
Ross Williams ◽  
Seng Chan You ◽  
Thomas Falconer ◽  
Evan Minty ◽  
...  

Abstract Objective: To demonstrate how the Observational Healthcare Data Science and Informatics (OHDSI) collaborative network and standardization can be utilized to scale up external validation of patient-level prediction models by enabling validation across a large number of heterogeneous observational healthcare datasets. Materials & Methods: Five previously published prognostic models (ATRIA, CHADS2, CHADS2VASC, Q-Stroke and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI frameworks. A network study was run that enabled the five models to be externally validated across nine observational healthcare datasets spanning three countries and five independent sites. Results: The five existing models were integrated into the OHDSI framework for patient-level prediction, and they obtained mean c-statistics ranging between 0.57 and 0.63 across the 6 databases with sufficient data to predict stroke within 1 year of initial atrial fibrillation diagnosis in females with atrial fibrillation. This was comparable with existing validation studies. Once the models were replicated, the validation network study was run across the nine datasets within 60 days. An R package for the study was published at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation. Discussion: This study demonstrates the ability to scale up external validation of patient-level prediction models using a collaboration of researchers and a data standardization that enables models to be readily shared across data sites. External validation is necessary to understand the transportability and reproducibility of a prediction model, but without collaborative approaches it can take three or more years for a model to be validated by even one independent researcher.
Conclusion: In this paper we show that it is possible to both scale up and speed up external validation by showing how validation can be done across multiple databases in less than 2 months. We recommend that researchers developing new prediction models use the OHDSI network to externally validate their models.
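The network-validation pattern the study describes can be sketched in a few lines. This is an illustrative toy, not the OHDSI implementation (which uses standardized OMOP data and an R package): a frozen scoring function is applied locally at each site, and only the per-database c-statistic is shared back. The `frozen_model` here is a hypothetical stand-in point score loosely in the spirit of CHADS2, not the published score.

```python
def c_statistic(y_true, y_score):
    """Probability that a random case with the outcome outranks a random case without it."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def frozen_model(patient):
    """Hypothetical fixed point score; no refitting is allowed during external validation."""
    return patient["age_ge_75"] + patient["diabetes"] + 2 * patient["prior_stroke"]

def validate_across_sites(datasets):
    """Apply the same frozen model at every site; only summary statistics leave the site."""
    results = {}
    for name, (patients, outcomes) in datasets.items():
        scores = [frozen_model(p) for p in patients]
        results[name] = c_statistic(outcomes, scores)
    return results
```

The key design point is that the model is replicated once and then distributed unchanged, which is what lets validation across many databases finish in weeks rather than the years a one-site-at-a-time approach takes.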


2020 ◽  
Author(s):  
Samaneh Asgari ◽  
Fatemeh Moosaie ◽  
Davood Khalili ◽  
Fereidoun Azizi ◽  
Farzad Hadaegh

Abstract Background: A high burden of chronic cardio-metabolic disease (CCD), including type 2 diabetes mellitus (T2DM), chronic kidney disease (CKD), and cardiovascular disease (CVD), has been reported in the Middle East and North Africa region. We aimed to externally validate a Europoid risk assessment tool designed by Alssema et al., based on non-laboratory measures, for the prediction of CCD in the Iranian population. Methods: The predictors included age, body mass index, waist circumference, use of antihypertensive medication, current smoking, and family history of cardiovascular disease and/or diabetes. For external validation of the model in the Tehran Lipid and Glucose Study (TLGS), the area under the curve (AUC) and the Hosmer-Lemeshow (HL) goodness-of-fit test were used to assess discrimination and calibration, respectively. Results: Among 1310 men and 1960 women aged 28-85 years, 29.5% and 47.4% experienced CCD during the 6- and 9-year follow-up, respectively. The model showed acceptable discrimination, with an AUC of 0.72 (95% CI: 0.69-0.75) for men and 0.73 (95% CI: 0.71-0.76) for women. The calibration of the model was good for both genders (min HL P=0.5). Considering separate outcomes, the AUC was highest for CKD (0.76 (95% CI: 0.72-0.79)) and lowest for T2DM (0.65 (95% CI: 0.61-0.69)) in men. As for women, the AUC was highest for CVD (0.82 (95% CI: 0.78-0.86)) and lowest for T2DM (0.69 (95% CI: 0.66-0.73)). The 9-year follow-up demonstrated almost similar performance compared to the 6-year follow-up. Conclusion: This model showed acceptable discrimination and good calibration for risk prediction of CCD over short- and long-term follow-up in the Iranian population.


2021 ◽  
Author(s):  
Beibei Zhu ◽  
Yan Han ◽  
Fen Deng ◽  
Kun Huang ◽  
Shuangqin Yan ◽  
...  

Objectives: Compared with other thyroid markers, fewer studies have explored the associations of triiodothyronine (T3) and the T3/free thyroxine (fT4) ratio with glucose abnormality during pregnancy. Thus, we aimed to: (1) examine the associations of T3 and T3/fT4 with glucose metabolism indicators; and (2) evaluate, in the first trimester, the performance of the two markers as predictors of gestational diabetes mellitus (GDM) risk. Methods: Longitudinal data from 2723 individuals, consisting of three repeated measurements of T3 and fT4, from the Ma'anshan Birth Cohort study (MABC), China, were analyzed using a time-specific generalized estimating equation (GEE). The area under the receiver operating characteristic curve (AUC) and the Hosmer-Lemeshow goodness-of-fit test were used to assess the discrimination and calibration of the prediction models. Results: T3 and T3/fT4 presented stable associations with the levels of fasting glucose and glucose at 1 h/2 h across pregnancy. T3 and T3/fT4 in both the first and second trimesters were positively associated with the risk of GDM, with the larger magnitude of association observed in the second trimester (odds ratio (OR) = 2.50, 95% CI = 1.95, 3.21 for T3; OR = 1.09, 95% CI = 1.07, 1.12 for T3/fT4). T3 (AUC = 0.726, 95% CI = 0.698, 0.754) and T3/fT4 (AUC = 0.724, 95% CI = 0.696, 0.753) in the first trimester improved the performance of the prediction model, although overall performance remained modest. Conclusion: Significant and stable associations of T3 and T3/fT4 with glucose metabolism indicators were documented. Both T3 and T3/fT4 improve the performance of the GDM prediction model.




Gut ◽  
2018 ◽  
Vol 68 (4) ◽  
pp. 672-683 ◽  
Author(s):  
Todd Smith ◽  
David C Muller ◽  
Karel G M Moons ◽  
Amanda J Cross ◽  
Mattias Johansson ◽  
...  

Objective: To systematically identify and validate published colorectal cancer risk prediction models that do not require invasive testing in two large population-based prospective cohorts. Design: Models were identified through an update of a published systematic review and validated in the European Prospective Investigation into Cancer and Nutrition (EPIC) and the UK Biobank. The performance of the models in predicting the occurrence of colorectal cancer within 5 or 10 years after study enrolment was assessed by discrimination (C-statistic) and calibration (plots of observed vs predicted probability). Results: The systematic review and its update identified 16 models from 8 publications (8 colorectal, 5 colon and 3 rectal). The number of participants included in each model validation ranged from 41 587 to 396 515, and the number of cases ranged from 115 to 1781. Eligible and ineligible participants across the models were largely comparable. Calibration of the models, where assessable, was very good and further improved by recalibration. The C-statistics of the models were largely similar between validation cohorts, with the highest values being 0.70 (95% CI 0.68 to 0.72) in the UK Biobank and 0.71 (95% CI 0.67 to 0.74) in EPIC. Conclusion: Several of these non-invasive models exhibited good calibration and discrimination within both external validation populations and are therefore potentially suitable candidates for facilitating risk stratification in population-based colorectal screening programmes. Future work should both evaluate this potential, through modelling and impact studies, and ascertain whether further enhancement in their performance can be obtained.
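The calibration plots described above compare observed event rates with mean predicted probabilities within risk groups. A minimal sketch of that tabulation (illustrative toy data, not the EPIC or UK Biobank cohorts) is:

```python
def calibration_table(y_true, y_score, groups=5):
    """Group subjects by predicted risk; return (mean predicted, observed rate) per group."""
    order = sorted(range(len(y_true)), key=lambda i: y_score[i])
    n, table = len(order), []
    for g in range(groups):
        idx = order[g * n // groups:(g + 1) * n // groups]
        if idx:
            mean_pred = sum(y_score[i] for i in idx) / len(idx)
            obs_rate = sum(y_true[i] for i in idx) / len(idx)
            table.append((mean_pred, obs_rate))
    return table
```

Plotting observed rate against mean predicted risk per group gives the calibration curve; points on the diagonal indicate good calibration, and "recalibration" as used above typically means rescaling the predicted probabilities so the points move back onto that diagonal.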


2020 ◽  
Author(s):  
Chava L Ramspek ◽  
Kitty J Jager ◽  
Friedo W Dekker ◽  
Carmine Zoccali ◽  
Merel van Diepen

Abstract Prognostic models that aim to improve the prediction of clinical events, individualized treatment and decision-making are increasingly being developed and published. However, relatively few models are externally validated and validation by independent researchers is rare. External validation is necessary to determine a prediction model’s reproducibility and generalizability to new and different patients. Various methodological considerations are important when assessing or designing an external validation study. In this article, an overview is provided of these considerations, starting with what external validation is, what types of external validation can be distinguished and why such studies are a crucial step towards the clinical implementation of accurate prediction models. Statistical analyses and interpretation of external validation results are reviewed in an intuitive manner and considerations for selecting an appropriate existing prediction model and external validation population are discussed. This study enables clinicians and researchers to gain a deeper understanding of how to interpret model validation results and how to translate these results to their own patient population.

