Maximization of the usage of coronary CTA derived plaque information using a machine learning based algorithm to improve risk stratification; insights from the CONFIRM registry

2018 ◽  
Vol 12 (3) ◽  
pp. 204-209 ◽  
Author(s):  
Alexander R. van Rosendael ◽  
Gabriel Maliakal ◽  
Kranthi K. Kolli ◽  
Ashley Beecy ◽  
Subhi J. Al’Aref ◽  
...  
Open Heart ◽  
2021 ◽  
Vol 8 (2) ◽  
pp. e001802
Author(s):  
Ashish Sarraju ◽  
Andrew Ward ◽  
Sukyung Chung ◽  
Jiang Li ◽  
David Scheinker ◽  
...  

Objectives: Identifying high-risk patients is crucial for effective cardiovascular disease (CVD) prevention. It is not known whether electronic health record (EHR)-based machine-learning (ML) models can improve CVD risk stratification compared with a secondary prevention risk score developed from randomised clinical trials (Thrombolysis in Myocardial Infarction Risk Score for Secondary Prevention, TRS 2°P).
Methods: We identified patients with CVD in a large health system, including atherosclerotic CVD (ASCVD), split into 80% training and 20% test sets. A rich set of EHR patient features was extracted. ML models were trained to estimate 5-year CVD event risk (random forests (RF), gradient-boosted machines (GBM), extreme gradient-boosted models (XGBoost), and logistic regression with an L2 penalty and an L1 penalty (Lasso)). ML models and TRS 2°P were evaluated by the area under the receiver operating characteristic curve (AUC).
Results: The cohort included 32 192 patients (median age 74 years; 46% female, 63% non-Hispanic white and 12% Asian patients; 23 475 patients with ASCVD). There were 4010 events over 5 years of follow-up. ML models demonstrated good overall performance; XGBoost demonstrated AUC 0.70 (95% CI 0.68 to 0.71) in the full CVD cohort and AUC 0.71 (95% CI 0.69 to 0.73) in patients with ASCVD, with comparable performance by GBM, RF and Lasso. TRS 2°P performed poorly in all CVD (AUC 0.51, 95% CI 0.50 to 0.53) and ASCVD (AUC 0.50, 95% CI 0.48 to 0.52) patients. ML identified non-traditional predictive variables, including education level and primary care visits.
Conclusions: In a multiethnic real-world population, EHR-based ML approaches significantly improved CVD risk stratification for secondary prevention.
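The evaluation pipeline this abstract describes (80/20 split, penalised logistic regression and boosted trees, AUC comparison) can be sketched as follows. All data, feature names, and effect sizes here are synthetic assumptions, not the study's EHR cohort, and sklearn's `GradientBoostingClassifier` stands in for GBM/XGBoost.

```python
# Hedged sketch: 80/20 split, L1-penalised logistic regression ("Lasso")
# and gradient-boosted trees, compared by test-set AUC on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical standardised EHR features (e.g. age, SBP, LDL,
# primary-care visits, education level) -- purely illustrative
X = rng.normal(size=(n, 5))
# Synthetic assumption: 5-year event risk driven by the first two features
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.6 * X[:, 1] - 1.5)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

lasso = LogisticRegression(penalty="l1", solver="liblinear").fit(X_tr, y_tr)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# AUC is computed on held-out predicted probabilities, as in the study
auc_lasso = roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1])
auc_gbm = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
print(f"Lasso AUC={auc_lasso:.2f}  GBM AUC={auc_gbm:.2f}")
```

With informative synthetic features both models should discriminate well above chance, mirroring the study's comparison of ML models against the TRS 2°P baseline.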


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Verena Schöning ◽  
Evangelia Liakoni ◽  
Christine Baumgartner ◽  
Aristomenis K. Exadaktylos ◽  
Wolf E. Hautz ◽  
...  

Abstract
Background: Clinical risk scores and machine learning models based on routine laboratory values could assist in automated early identification of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) patients at risk for severe clinical outcomes. They can guide patient triage, inform allocation of health care resources, and contribute to the improvement of clinical outcomes.
Methods: In- and outpatients who tested positive for SARS-CoV-2 at the Insel Hospital Group Bern, Switzerland, between February 1st and August 31st (‘first wave’, n = 198) and September 1st through November 16th 2020 (‘second wave’, n = 459) were used as training and prospective validation cohorts, respectively. A clinical risk stratification score and machine learning (ML) models were developed using demographic data, medical history, and laboratory values taken up to 3 days before, or 1 day after, positive testing to predict severe outcomes of hospitalization (a composite endpoint of admission to intensive care or death from any cause). Test accuracy was assessed using the area under the receiver operating characteristic curve (AUROC).
Results: Sex, C-reactive protein, sodium, hemoglobin, glomerular filtration rate, glucose, and leucocytes around the time of first positive testing (−3 to +1 days) were the most predictive parameters. The AUROC of the risk stratification score on training data (AUROC = 0.94, positive predictive value (PPV) = 0.97, negative predictive value (NPV) = 0.80) was comparable to that in the prospective validation cohort (AUROC = 0.85, PPV = 0.91, NPV = 0.81). The most successful ML algorithm with respect to AUROC was support vector machines (median = 0.96, interquartile range = 0.85–0.99, PPV = 0.90, NPV = 0.58).
Conclusion: With a small set of easily obtainable parameters, both the clinical risk stratification score and the ML models were predictive of severe outcomes at our tertiary hospital center, and performed well in prospective validation.
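The three metrics this abstract reports (AUROC, PPV, NPV) can be computed as below. The tiny dataset and the 0.5 cut-off are made-up illustrations, not values from the study.

```python
# Minimal sketch of AUROC, PPV, and NPV for a thresholded risk score
# on a small illustrative dataset (all numbers invented).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])                 # severe outcome yes/no
risk_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])

# AUROC is threshold-free: rank-based discrimination of the raw score
auroc = roc_auc_score(y_true, risk_score)

# PPV and NPV require dichotomising at an (assumed) cut-off
y_pred = (risk_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ppv = tp / (tp + fp)  # fraction of predicted-severe patients who were severe
npv = tn / (tn + fn)  # fraction of predicted-mild patients who stayed mild
print(f"AUROC={auroc:.4f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```

Note that PPV and NPV, unlike AUROC, depend on both the chosen cut-off and the outcome prevalence, which is one reason the study reports all three.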


2021 ◽  
Vol 27 ◽  
pp. 107602962199118
Author(s):  
Logan Ryan ◽  
Samson Mataraso ◽  
Anna Siefkas ◽  
Emily Pellegrini ◽  
Gina Barnes ◽  
...  

Deep venous thrombosis (DVT) is associated with significant morbidity, mortality, and increased healthcare costs. Standard scoring systems for DVT risk stratification often provide insufficient stratification of hospitalized patients and are unable to accurately predict which inpatients are most likely to present with DVT. There is a continued need for tools that can predict DVT in hospitalized patients. We performed a retrospective study on a database collected from a large academic hospital, comprising 99,237 general ward or ICU patients, 2,378 of whom experienced a DVT during their hospital stay. Gradient-boosted machine learning algorithms were developed to predict a patient’s risk of developing DVT at 12- and 24-hour windows prior to onset. The primary outcome of interest was diagnosis of in-hospital DVT. The machine learning predictors obtained AUROCs of 0.83 and 0.85 for DVT risk prediction on hospitalized patients at 12- and 24-hour windows, respectively. At both 12 and 24 hours before DVT onset, the most important features for prediction of DVT were cancer history, VTE history, and international normalized ratio (INR). Improved risk stratification may prevent unnecessary invasive testing in patients for whom DVT cannot be ruled out using existing methods. It may also allow for more targeted use of prophylactic anticoagulants, as well as earlier diagnosis and treatment, preventing the development of pulmonary emboli and other sequelae of DVT.
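The feature-importance ranking this abstract reports can be reproduced in outline with a gradient-boosted classifier. The data and the feature names below are synthetic stand-ins for the study's variables, and the planted effect sizes are assumptions.

```python
# Hedged sketch: ranking predictors by gradient-boosting feature
# importance on synthetic data (feature names are illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 1500
feature_names = ["cancer_history", "vte_history", "inr", "age", "heart_rate"]
X = rng.normal(size=(n, len(feature_names)))
# Synthetic assumption: DVT risk driven mainly by the first three columns
logit = 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Impurity-based importances are normalised to sum to 1 by sklearn
ranking = sorted(zip(feature_names, model.feature_importances_),
                 key=lambda t: -t[1])
for name, importance in ranking:
    print(f"{name:>14s}: {importance:.3f}")
```

In practice such impurity-based importances are a quick first look; permutation importance on held-out data is a common, less biased alternative.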


2021 ◽  
pp. 219256822110193
Author(s):  
Kevin Y. Wang ◽  
Ijezie Ikwuezunma ◽  
Varun Puvanesarajah ◽  
Jacob Babu ◽  
Adam Margalit ◽  
...  

Study Design: Retrospective review. Objective: To use predictive modeling and machine learning to identify patients at risk for venous thromboembolism (VTE) following posterior lumbar fusion (PLF) for degenerative spinal pathology. Methods: Patients undergoing single-level PLF in the inpatient setting were identified in the National Surgical Quality Improvement Program database. Our outcome measure of VTE included all patients who experienced a pulmonary embolism and/or deep venous thrombosis within 30 days of surgery. Two different methodologies were used to identify VTE risk: 1) a novel predictive model derived from multivariable logistic regression of significant risk factors, and 2) a tree-based extreme gradient boosting (XGBoost) algorithm using preoperative variables. The methods were compared against legacy risk-stratification measures, ASA and the Charlson Comorbidity Index (CCI), using the area-under-the-curve (AUC) statistic. Results: 13,500 patients who underwent single-level PLF met the study criteria. Of these, 0.95% had a VTE within 30 days of surgery. The 5 clinical variables found to be significant in the multivariable predictive model were: age > 65, obesity grade II or above, coronary artery disease, functional status, and prolonged operative time. The predictive model exhibited an AUC of 0.716, which was significantly higher than the AUCs of ASA and CCI (all, P < 0.001), and comparable to that of the XGBoost algorithm (P > 0.05). Conclusion: Predictive analytics and machine learning can be leveraged to aid in identification of patients at risk of VTE following PLF. Surgeons and perioperative teams may find these tools useful to augment clinical decision making and risk stratification.
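The core comparison here, a multivariable logistic model over a handful of binary risk factors versus a cruder legacy index, can be sketched as below. All data are simulated; the ~30% prevalences and the coefficients are assumptions, and the unweighted factor count merely stands in for an index like CCI.

```python
# Hedged sketch: weighted multivariable logistic model vs an unweighted
# risk-factor count, compared by AUC on simulated binary risk factors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 5000
# Binary flags mirroring the reported factors: age>65, obesity grade II+,
# CAD, dependent functional status, prolonged operative time
X = rng.binomial(1, 0.3, size=(n, 5)).astype(float)
# Synthetic assumption: rare outcome (~a few percent) with modest effects
logit = -4.5 + X @ np.array([0.9, 0.7, 0.8, 0.6, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

weighted = LogisticRegression().fit(X, y)
auc_weighted = roc_auc_score(y, weighted.predict_proba(X)[:, 1])
# Crude comparator: simply count how many risk factors are present
auc_count = roc_auc_score(y, X.sum(axis=1))
print(f"logistic AUC={auc_weighted:.3f}  count AUC={auc_count:.3f}")
```

Because the logistic model learns per-factor weights instead of treating every factor as equal, its in-sample AUC should match or exceed the unweighted count, which is the intuition behind comparing the derived model against ASA and CCI.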


2020 ◽  
Vol 152 ◽  
pp. S169-S170
Author(s):  
K. Unger ◽  
D.F. Fleischmann ◽  
V. Ruf ◽  
J. Felsberg ◽  
D. Piehlmaier ◽  
...  
