To TQIP or Not to TQIP? That Is the Question

2014 ◽  
Vol 80 (4) ◽  
pp. 386-390
Author(s):  
Jiselle Bock Heaney ◽  
Chrissy Guidry ◽  
Eric Simms ◽  
Jennifer Turney ◽  
Peter Meade ◽  
...  

The Trauma Quality Improvement Program (TQIP) reports a feasible mortality prediction model. We hypothesize that our institutional characteristics differ from TQIP aggregate data, questioning its applicability. We conducted a 2-year (2008 to 2009) retrospective analysis of all trauma activations at a Level 1 trauma center. Data were analyzed using TQIP methodology (three groups: blunt single system, blunt multisystem, and penetrating) to develop a mortality prediction model using multiple logistic regression. These data were compared with TQIP data. Four hundred fifty-seven patients met TQIP inclusion criteria. Penetrating and blunt trauma differed significantly at our institution versus TQIP aggregates (61.9 vs 7.8%; 38.0 vs 92.2%, P < 0.01). There were more firearm mechanisms of injury and fewer falls compared with TQIP aggregates (28.9 vs 4.2%; 8.5 vs 34.8%, P < 0.01). All other mechanisms were not significantly different. Variables significant in the TQIP model but not found to be predictors of mortality at our institution included Glasgow Coma Score motor 2 to 5, systolic blood pressure greater than 90 mmHg, age, initial pulse rate in the emergency department, mechanism of injury, head Abbreviated Injury Score, and abdominal Abbreviated Injury Score. External benchmarking of trauma center performance using mortality prediction models is important in quality improvement for trauma patient care. Our results suggest that TQIP methodology from the pilot study may not be applicable to all institutions.
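
As a rough sketch of the kind of analysis described above, the following Python snippet fits a multiple logistic regression for in-hospital mortality from a handful of admission variables. The file name and column names (age, sbp, gcs_motor, and so on) are hypothetical placeholders, not the actual TQIP variables or data.

```python
# Minimal sketch of a multiple logistic regression mortality model,
# in the spirit of the TQIP-style analysis described above.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("trauma_registry.csv")          # hypothetical institutional registry extract

predictors = ["age", "sbp", "pulse", "gcs_motor", "head_ais", "abdomen_ais"]
X = sm.add_constant(df[predictors])              # add intercept term
y = df["died"]                                   # 1 = in-hospital death, 0 = survived

model = sm.Logit(y, X).fit(disp=False)           # multiple logistic regression
print(model.summary())                           # coefficients, p-values, confidence intervals
print("Predicted mortality risk (first 5):", model.predict(X)[:5])
```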

Author(s):  
Tara Lagu ◽  
Mihaela Stefan ◽  
Quinn Pack ◽  
Auras Atreya ◽  
Mohammad A Kashef ◽  
...  

Background: Mortality prediction models, developed with the goal of improving risk stratification in hospitalized heart failure (HF) patients, show good performance characteristics in the datasets in which they were developed but have not been validated in external populations. Methods: We used a novel multi-hospital dataset [HealthFacts (Cerner Corp)] derived from the electronic health record (years 2010-2012). We examined the performance of four published HF inpatient mortality prediction models developed using data from the Acute Decompensated Heart Failure National Registry (ADHERE), the Enhanced Feedback for Effective Cardiac Treatment (EFFECT) study, and the Get With the Guidelines-Heart Failure (GWTG-HF) registry (the latter contributing two models, by Peterson and by Eapen). We compared these to an administrative HF mortality prediction model (Premier model) that includes selected patient demographics, comorbidities, prior heart failure admissions, and therapies administered (e.g., inotropes, mechanical ventilation) in the first 2 hospital days, and to a model that uses clinical data but is not heart failure-specific: the Laboratory-Based Acute Physiology Score (LAPS2). We included patients aged ≥18 years admitted with HF to one of 62 hospitals in the database. We applied all 6 models to the data and calculated the c-statistics. Results: We identified 13,163 patients ≥18 years old with a diagnosis of heart failure. Median age was 74 years; approximately half were women; 65% of patients were white and 27% were black. In-hospital mortality was 4.3%. Bland-Altman plots revealed that, at higher predicted mortality, the Premier model outperformed the clinical models. Discrimination of the models varied: ADHERE model (0.68); EFFECT (0.70); GWTG-HF, Peterson (0.69); GWTG-HF, Eapen (0.70); LAPS2 (0.74); Premier (0.81). Conclusions: Clinically derived inpatient heart failure mortality models exhibited similar performance, with c-statistics hovering around 0.70. A generic clinical mortality prediction model (LAPS2) had slightly better performance, as did a detailed administrative model. Any of these models may be useful for severity adjustment in comparative effectiveness studies of heart failure patients. When clinical data are not available, the administrative model performs similarly to clinical models.
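
The comparison above boils down to scoring each model's predicted probabilities against the observed outcome with the c-statistic (equivalent to the AUROC for a binary outcome). A minimal sketch follows; the data file and the columns holding each model's predictions are hypothetical placeholders.

```python
# Sketch of comparing competing mortality models on one cohort via the c-statistic.
# Column names and the file are hypothetical, not the actual study data.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("hf_cohort_predictions.csv")    # one row per admission
outcome = df["in_hospital_death"]                # 1 = died in hospital

model_columns = {
    "ADHERE": "p_adhere",
    "EFFECT": "p_effect",
    "GWTG-HF (Peterson)": "p_gwtg_peterson",
    "GWTG-HF (Eapen)": "p_gwtg_eapen",
    "LAPS2": "p_laps2",
    "Premier (administrative)": "p_premier",
}

for name, col in model_columns.items():
    c_stat = roc_auc_score(outcome, df[col])     # c-statistic = AUROC for a binary outcome
    print(f"{name}: c-statistic = {c_stat:.2f}")
```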


2018 ◽  
Author(s):  
Maliazurina Saad

Abstract To date, developing a reliable mortality prediction model remains challenging. Although clinical predictors such as age, gender, and laboratory results are of considerable predictive value, accuracy often ranges only between 60% and 80%. In this study, we proposed prediction models built on clinical covariates with adjustment for additional, radiographically derived variables. The proposed method exhibited a high degree of prediction accuracy, between 83% and 92%, as well as an overall improvement of 6-20% in all other metrics, such as ROC area, false positive rate, recall, and root mean square error. We provide a proof of concept that there is added value in incorporating the additional variables when predicting 24-month mortality in pulmonary carcinoma patients with cavitary lesions. It is hoped that the findings will be clinically useful to the medical community.
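
A minimal sketch of the kind of comparison described, assuming a simple logistic classifier: one model trained on clinical covariates alone, another on the same covariates plus image-derived variables, both scored on accuracy, ROC AUC, recall, and RMSE. All feature names and the data file are hypothetical; the study's actual modelling pipeline is not specified here.

```python
# Baseline (clinical only) vs. augmented (clinical + image-derived) feature sets.
# Everything below is a hypothetical illustration, not the study's pipeline.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, recall_score, mean_squared_error

df = pd.read_csv("cavitary_lesion_cohort.csv")
clinical = ["age", "sex", "wbc", "albumin"]                # hypothetical clinical covariates
radiographic = ["cavity_wall_thickness", "lesion_volume"]  # hypothetical image-derived variables
y = df["death_24_months"]

def evaluate(features):
    X_tr, X_te, y_tr, y_te = train_test_split(df[features], y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    pred = clf.predict(X_te)
    return {
        "accuracy": accuracy_score(y_te, pred),
        "roc_auc": roc_auc_score(y_te, prob),
        "recall": recall_score(y_te, pred),
        "rmse": np.sqrt(mean_squared_error(y_te, prob)),
    }

print("clinical only:     ", evaluate(clinical))
print("clinical + imaging:", evaluate(clinical + radiographic))
```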


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yohei Hirano ◽  
Yutaka Kondo ◽  
Toru Hifumi ◽  
Shoji Yokobori ◽  
Jun Kanda ◽  
...  

Abstract In this study, we aimed to develop and validate a machine learning-based mortality prediction model for hospitalized heat-related illness patients. A total of 2393 hospitalized patients were extracted from a multicentered heat-related illness registry in Japan and divided into a training set for development (n = 1516; data from 2014 and 2017–2019) and a test set for validation (n = 877; data from 2020). Twenty-four variables, including patient characteristics, vital signs, and laboratory test data at hospital arrival, were used as predictor features for machine learning. The outcome was death during the hospital stay. In validation, the developed machine learning models (logistic regression, support vector machine, random forest, XGBoost) demonstrated favorable performance for outcome prediction, with significantly higher values of the area under the precision-recall curve (AUPR) of 0.415 [95% confidence interval (CI) 0.336–0.494], 0.395 [CI 0.318–0.472], 0.426 [CI 0.346–0.506], and 0.528 [CI 0.442–0.614], respectively, compared with the conventional Acute Physiology and Chronic Health Evaluation (APACHE)-II score of 0.287 [CI 0.222–0.351] as a reference standard. The area under the receiver operating characteristic curve (AUROC) values were also high (over 0.92) in all models, although there were no statistical differences compared with APACHE-II. This is the first demonstration of the potential of machine learning-based mortality prediction models for heat-related illnesses.
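
A minimal sketch of the temporal split and AUPR comparison described above, assuming tabular registry data; the file, column names, and the APACHE-II risk column are hypothetical placeholders, and the hyperparameters are not taken from the study.

```python
# Temporal train/test split by registry year, then AUPR for each candidate model
# against the APACHE-II score as the reference standard. Hypothetical data layout.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import average_precision_score
from xgboost import XGBClassifier  # assumes the xgboost package is installed

df = pd.read_csv("heat_illness_registry.csv")
features = [c for c in df.columns if c not in ("year", "died", "apache2_risk")]

train = df[df["year"] != 2020]                   # 2014, 2017-2019 for development
test = df[df["year"] == 2020]                    # 2020 held out for validation

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=300),
}

for name, model in models.items():
    model.fit(train[features], train["died"])
    prob = model.predict_proba(test[features])[:, 1]
    aupr = average_precision_score(test["died"], prob)   # area under precision-recall curve
    print(f"{name}: AUPR = {aupr:.3f}")

# Reference standard: AUPR of the conventional APACHE-II score itself
print("APACHE-II:", average_precision_score(test["died"], test["apache2_risk"]))
```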


2021 ◽  
Author(s):  
Jaeyoung Yang ◽  
Hong-Gook Lim ◽  
Wonhyeong Park ◽  
Dongseok Kim ◽  
Jin Sun Yoon ◽  
...  

Abstract Background Prediction of mortality in intensive care units is very important, and various mortality prediction models have been developed for this purpose. However, they do not accurately reflect the changing condition of the patient in real time. The aim of this study was to develop and evaluate a machine learning model that predicts short-term mortality in the intensive care unit using four easy-to-collect vital signs. Methods Two independent retrospective observational cohorts were included in this study. The primary training cohort included the data of 1968 patients admitted to the intensive care unit at the Veterans Health Service Medical Center, Seoul, South Korea, from January 2018 to March 2019. The external validation cohort comprised the records of 409 patients admitted to the medical intensive care unit at Seoul National University Hospital, Seoul, South Korea, from January 2019 to December 2019. Datasets of four vital signs (heart rate, systolic blood pressure, diastolic blood pressure, and peripheral capillary oxygen saturation [SpO2]) measured every hour for 10 h were used to develop the machine learning model. The performances of mortality prediction models generated using five machine learning algorithms (Random Forest [RF], XGBoost, perceptron, convolutional neural network, and Long Short-Term Memory) were calculated and compared using area under the receiver operating characteristic curve (AUROC) values and an external validation dataset. Results The machine learning model generated using the RF algorithm showed the best performance: its AUROC was 0.922, much better than the 0.8408 of the Acute Physiology and Chronic Health Evaluation II. To investigate the importance of the variables that influence the performance of the machine learning model, additional models were generated for each observation time or vital sign using the RF algorithm. The model developed using SpO2 alone showed the best performance (AUROC, 0.89). Conclusions The mortality prediction model developed in this study, using data from only four types of commonly recorded vital signs, is simpler than any existing mortality prediction model. This simple yet powerful new mortality prediction model could be useful for early detection of probable mortality and appropriate medical intervention, especially in rapidly deteriorating patients.
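
The feature layout described, four vital signs sampled hourly for 10 hours, flattens naturally into 40 features per patient for a random forest. The sketch below illustrates that layout and the external-validation AUROC; the column naming convention and file names are hypothetical placeholders, not the study cohorts.

```python
# Random-forest short-term mortality model from hourly vital signs,
# flattened to 4 signs x 10 hours = 40 features per patient. Hypothetical data layout.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

signs = ["hr", "sbp", "dbp", "spo2"]
features = [f"{s}_t{t}" for s in signs for t in range(10)]   # e.g. "hr_t0" ... "spo2_t9"

train = pd.read_csv("primary_cohort.csv")        # development cohort
valid = pd.read_csv("external_cohort.csv")       # external validation cohort

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(train[features], train["died"])

prob = rf.predict_proba(valid[features])[:, 1]
print("external AUROC:", roc_auc_score(valid["died"], prob))

# Per-vital-sign models, to see which signal carries the most information
for s in signs:
    cols = [f"{s}_t{t}" for t in range(10)]
    m = RandomForestClassifier(n_estimators=500, random_state=0).fit(train[cols], train["died"])
    print(s, roc_auc_score(valid["died"], m.predict_proba(valid[cols])[:, 1]))
```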


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alcade Rudakemwa ◽  
Amyl Lucille Cassidy ◽  
Théogène Twagirumugabe

Abstract Background Reasons for admission to intensive care units (ICUs) for obstetric patients vary from one setting to another. ICU outcomes and prediction models are not well explored in Rwanda owing to the lack of appropriate scores. This study aimed to assess reasons for admission and the accuracy of mortality prediction models for obstetric patients admitted to the ICUs of two public tertiary hospitals in Rwanda. Methods We prospectively collected data from all obstetric patients admitted to the ICUs of the two public tertiary hospitals in Rwanda from March 2017 to February 2018 to identify reasons for admission, demographic and clinical characteristics, and outcome including death and its predictability by both the Modified Early Obstetric Warning Score (MEOWS) and the quick Sequential Organ Failure Assessment (qSOFA). We analysed the accuracy of mortality prediction by MEOWS or qSOFA using logistic regression adjusting for factors associated with mortality. The area under the receiver operating characteristic (AUROC) curve was used to show the predictive capacity of each tool. Results Obstetric patients (n = 94) represented 12.8% of all 747 ICU admissions, which is 1.8% of all 4999 women admitted for pregnancy or labor. Sepsis (n = 30; 31.9%) and obstetric haemorrhage (n = 24; 25.5%) were the two commonest reasons for ICU admission. Overall ICU mortality for obstetric patients was 54.3% (n = 51), with an average length of stay of 6.6 ± 7.525 days. The MEOWS score was an independent predictor of mortality (adjusted odds ratio [aOR] 1.25; 95% CI 1.07–1.46), as was the qSOFA score (aOR 2.81; 95% CI 1.25–6.30), with adjusted AUROCs of 0.773 (95% CI 0.67–0.88) and 0.764 (95% CI 0.65–0.87), respectively, indicating fair accuracy of both MEOWS and qSOFA scores for ICU mortality prediction in these settings. Conclusions Sepsis and obstetric haemorrhage were the commonest reasons for obstetric admissions to ICU in Rwanda. MEOWS and qSOFA scores could accurately predict ICU mortality of obstetric patients in resource-limited settings, but larger studies are needed before a recommendation for their use in routine practice in similar settings.
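
A minimal sketch of the evaluation approach described: a logistic regression of ICU death on each score, adjusted for other covariates, with the fitted model's AUROC as the measure of discrimination. The data file, the adjustment covariates, and the column names are hypothetical placeholders.

```python
# Adjusted logistic regression of ICU death on MEOWS / qSOFA, reporting the
# score's adjusted odds ratio and the model's AUROC. Hypothetical data layout.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("obstetric_icu_cohort.csv")
y = df["icu_death"]

for score in ["meows", "qsofa"]:
    X = sm.add_constant(df[[score, "age", "referral_delay"]])   # hypothetical adjustment covariates
    fit = sm.Logit(y, X).fit(disp=False)
    odds_ratio = float(np.exp(fit.params[score]))                # adjusted OR per 1-point increase
    auroc = roc_auc_score(y, fit.predict(X))                     # discrimination of the adjusted model
    print(f"{score}: adjusted OR = {odds_ratio:.2f}, adjusted AUROC = {auroc:.3f}")
```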


2012 ◽  
Vol 40 (7) ◽  
pp. 2268-2269 ◽  
Author(s):  
Tara Lagu ◽  
Thomas L. Higgins ◽  
Brian H. Nathanson ◽  
Peter K. Lindenauer
