Machine Learning Models with Preoperative Risk Factors and Intraoperative Hypotension Parameters Predict Mortality After Cardiac Surgery

Author(s):  
Marta Priscila Bento Fernandes ◽  
Miguel Armengol de la Hoz ◽  
Valluvan Rangasamy ◽  
Balachundhar Subramaniam


2016 ◽  
Vol 23 (2) ◽  
pp. 99-109 ◽  
Author(s):  
Donata Ringaitienė ◽  
Dalia Gineitytė ◽  
Vaidas Vicka ◽  
Tadas Žvirblis ◽  
Jūratė Šipylaitė ◽  
...  

Background. Malnutrition (MN) is prevalent in cardiac surgery, but there are no specific preoperative risk factors for MN. The aim of this study was to assess the clinically relevant risk factors for MN in cardiac surgery patients. Materials and methods. The nutritional state of the patients was evaluated one day prior to surgery using the bioelectrical impedance analysis phase angle (PA). Two groups of patients were formed according to low PA: malnourished and well nourished. Risk factors for MN were divided into three clinically relevant groups: psychosocial and lifestyle factors, laboratory findings, and disease-associated factors. The variables in each group were entered into separate multivariate logistic regression models. Results. A total of 712 patients were included in the study. The majority were 65-year-old men who had undergone a CABG procedure. Low PA was present in 22.9% (163) of patients. The analysis of disease-related factors for MN revealed the importance of heart function (NYHA class IV, OR: 3.073, CI95%: 1.416–6.668, p = 0.007), valve pathology (OR: 1.825, CI95%: 1.182–2.819, p = 0.007), and renal insufficiency (OR: 4.091, CI95%: 1.995–8.389, p ...
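As an illustrative sketch of the group-wise modelling described above (not the authors' code), the snippet below fits a separate multivariate logistic regression for each clinically defined group of risk factors and reports odds ratios with 95% confidence intervals. The file name and column names are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): one multivariate logistic regression
# per clinically defined group of risk factors, outcome = low phase angle.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cardiac_preop_cohort.csv")  # hypothetical dataset
outcome = "low_phase_angle"                   # 1 = malnourished, 0 = well nourished

risk_factor_groups = {
    "psychosocial_lifestyle": ["lives_alone", "smoking", "alcohol_use"],
    "laboratory": ["albumin", "hemoglobin", "lymphocyte_count"],
    "disease_associated": ["nyha_class_iv", "valve_pathology", "renal_insufficiency"],
}

for group_name, predictors in risk_factor_groups.items():
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df[outcome], X).fit(disp=False)
    # Odds ratios with 95% confidence intervals, as reported in the abstract
    or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
    or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
    print(f"\n{group_name}\n{or_table.round(3)}")
```

Fitting one model per group keeps clinically related predictors together and limits the number of covariates entered into any single model.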


2021 ◽  
Vol 28 (1) ◽  
pp. e100439
Author(s):  
Lukasz S Wylezinski ◽  
Coleman R Harris ◽  
Cody N Heiser ◽  
Jamieson D Gray ◽  
Charles F Spurlock

Introduction: The SARS-CoV-2 (COVID-19) pandemic has exposed health disparities throughout the USA, particularly among racial and ethnic minorities. As a result, there is a need for data-driven approaches to pinpoint the unique constellation of clinical and social determinants of health (SDOH) risk factors that give rise to poor patient outcomes following infection in US communities. Methods: We combined county-level COVID-19 testing data, COVID-19 vaccination rates and SDOH information in Tennessee. Between February and May 2021, we trained machine learning models on a semimonthly basis using these datasets to predict COVID-19 incidence in Tennessee counties. We then analyzed SDOH data features at each time point to rank the impact of each feature on model performance. Results: Our results indicate that COVID-19 vaccination rates play a crucial role in determining future COVID-19 disease risk. Beginning in mid-March 2021, higher vaccination rates significantly correlated with lower COVID-19 case growth predictions. Further, as the relative importance of COVID-19 vaccination data features grew, demographic SDOH features such as age, race and ethnicity decreased while the impact of socioeconomic and environmental factors, including access to healthcare and transportation, increased. Conclusion: Incorporating a data framework to track the evolving patterns of community-level SDOH risk factors could provide policy-makers with additional data resources to improve health equity and resilience to future public health emergencies.
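A hedged sketch of this kind of pipeline (not the authors' implementation): a model is retrained on each semimonthly county-level snapshot, and permutation importance ranks how much each vaccination or SDOH feature contributes to held-out predictive performance. The snapshot file names, target column, and feature columns are assumptions for illustration.

```python
# Illustrative sketch only: retrain per time point, then rank feature impact
# on held-out performance with permutation importance.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

time_points = ["2021-02-15", "2021-03-01", "2021-03-15", "2021-04-01"]

for snapshot in time_points:
    df = pd.read_csv(f"tn_county_features_{snapshot}.csv")  # hypothetical snapshot
    y = df.pop("covid19_case_growth")                       # target: future case growth
    X = df.drop(columns=["county_fips"])                    # vaccination + SDOH features

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

    # Rank features by how much shuffling each one degrades held-out performance
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    ranking = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
    print(snapshot, ranking.head(5), sep="\n")
```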


2012 ◽  
Vol 27 (2) ◽  
pp. 203-210 ◽  
Author(s):  
Marcos Gradim Tiveron ◽  
Alfredo Inácio Fiorelli ◽  
Eduardo Moeller Mota ◽  
Omar Asdrúbal Vilca Mejia ◽  
Carlos Manuel de Almeida Brandão ◽  
...  

2021 ◽  
Vol 8 ◽  
Author(s):  
Hong Zhao ◽  
Jiaming You ◽  
Yuexing Peng ◽  
Yi Feng

Background: Elderly patients undergoing hip fracture repair surgery are at increased risk of delirium due to aging, comorbidities, and frailty, but current methods for identifying hospitalized patients at high risk of delirium have only moderate accuracy and require extra questionnaires. Artificial intelligence makes it possible to establish machine learning models that predict incident delirium risk from electronic health data. Methods: We conducted a retrospective case-control study of elderly patients (≥65 years of age) who underwent orthopedic repair of hip fracture under spinal or general anesthesia between June 1, 2018, and May 31, 2019. Anesthesia records and medical charts were reviewed to collect demographic, surgical, and anesthetic features as well as the frailty index, in order to explore potential risk factors for postoperative delirium. Delirium was assessed by trained nurses using the Confusion Assessment Method (CAM) every 12 h during the hospital stay. Four machine learning risk models were constructed to predict the incidence of postoperative delirium: random forest, eXtreme Gradient Boosting (XGBoost), support vector machine (SVM), and multilayer perceptron (MLP). K-fold cross-validation was used for internal validation and performance evaluation. Results: A total of 245 patients were included, and postoperative delirium affected 12.2% (30/245) of them. Multiple logistic regression revealed that dementia/history of stroke [OR 3.063, 95% CI (1.231, 7.624)], blood transfusion [OR 2.631, 95% CI (1.055, 6.559)], and preparation time [OR 1.476, 95% CI (1.170, 1.862)] were associated with postoperative delirium, achieving an area under the receiver operating characteristic curve (AUC) of 0.779, 95% CI (0.703, 0.856). The accuracy of the machine learning models for predicting the occurrence of postoperative delirium ranged from 83.67% to 87.75%. The machine learning methods detected 16 risk factors contributing to the development of delirium; preparation time, frailty index, use of vasopressors during surgery, dementia/history of stroke, duration of surgery, and duration of anesthesia were the six most important. Conclusion: Electronic chart-derived machine learning models can generate hospital-specific delirium prediction models and quantify the contribution of each risk factor to the occurrence of delirium. Further research is needed to evaluate the significance and applicability of electronic chart-derived machine learning models for detecting delirium risk in elderly patients undergoing hip fracture repair surgery.
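A minimal sketch of the evaluation described above, assuming a prepared feature matrix and CAM-derived delirium labels; this is not the study's code, and the file names are hypothetical. It runs the four named classifiers through stratified k-fold cross-validation and reports mean accuracy.

```python
# Hedged sketch: four classifiers evaluated with stratified k-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

X = np.load("hip_fracture_features.npy")   # hypothetical: demographic/surgical/anesthetic features
y = np.load("postop_delirium_labels.npy")  # hypothetical: CAM-positive delirium (0/1)

models = {
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "xgboost": XGBClassifier(eval_metric="logloss", random_state=0),
    "svm": SVC(kernel="rbf"),
    "mlp": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in models.items():
    # Accuracy via internal cross-validation, mirroring the evaluation described above
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```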


2021 ◽  
Author(s):  
Zhongjun Chen ◽  
Haowen Luo ◽  
Lijun Xu

Abstract Objective: To identify the risk factors for hemorrhage/ischemia in patients with moyamoya disease, establish models using logistic regression (LR), XGBoost, and a multilayer perceptron (MLP), and evaluate and compare the performance of these models, providing a theoretical basis for preventing stroke recurrence in patients with moyamoya disease. Methods: This retrospective study used data from the database of the Jiang Xi Province Medical Big Data Engineering & Technology Research Center; the data of patients with moyamoya disease admitted to the Second Affiliated Hospital of Nanchang University from January 1, 2012 to December 31, 2019 were collected. A total of 994 patients with moyamoya disease were screened, including 496 patients with cerebral infarction and 498 patients with cerebral hemorrhage. LR, XGBoost, and MLP were used to build models for hemorrhage/ischemia in moyamoya disease, and the performance of the different models was validated and compared. Results: The LR, XGBoost, and MLP models all showed good discrimination (AUC > 0.75), with AUC values of 0.9227 (95% CI: 0.9215-0.9239), 0.9677 (95% CI: 0.9657-0.9696), and 0.9672 (95% CI: 0.9643-0.9701), respectively. Compared with the LR model, the predictive ability of the XGBoost and MLP models improved in both the training and test sets, increasing by 18.11% and 14.34%, respectively, in the training set, and the difference was significant. Conclusion: Compared with the traditional LR model, the machine learning models are more effective in predicting hemorrhage/ischemia in moyamoya disease.
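An illustrative sketch (not the authors' code) of the comparison reported above: LR, XGBoost, and MLP are fitted on the same cohort and compared by test-set AUC with a percentile-bootstrap 95% CI. The input file and the "hemorrhage" label column are assumptions.

```python
# Illustrative sketch: compare LR, XGBoost and MLP by test-set AUC with a bootstrap CI.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

df = pd.read_csv("moyamoya_cohort.csv")   # hypothetical cohort export
y = df.pop("hemorrhage")                  # 1 = cerebral hemorrhage, 0 = cerebral infarction
X_train, X_test, y_train, y_test = train_test_split(df, y, stratify=y, random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=42),
}

rng = np.random.default_rng(42)
for name, clf in models.items():
    clf.fit(X_train, y_train)
    prob = clf.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, prob)
    # Percentile bootstrap over the test set for an approximate 95% CI
    boot = []
    for _ in range(1000):
        idx = rng.integers(0, len(y_test), len(y_test))
        if y_test.iloc[idx].nunique() < 2:   # skip degenerate resamples
            continue
        boot.append(roc_auc_score(y_test.iloc[idx], prob[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"{name}: AUC {auc:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```

The bootstrap here is taken over test-set resamples only; it approximates, but does not reproduce, the confidence intervals reported in the abstract.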


Author(s):  
A.A. Shevchenko ◽  
N.G. Zhila ◽  
E.A. Kashkarov ◽  
K.S. Shevchenko ◽  
...  

Median sternotomy remains the most common access in cardiac surgery, while postoperative sternomediastinitis is one of the most severe complications of the transsternal approach. The article analyzes the preoperative risk factors for the development of this complication, including concomitant pathology, constitutional features, harmful habits, length of hospital stay, and the urgency of the operation. It also notes that intraoperative risk factors consist of technical errors in performing the operation, intraoperative features of the course of surgery, the choice of graft during myocardial revascularization, and the final stage of the operation. Postoperative risk factors include the specific management of the postoperative period in cardiac surgery patients, which can lead to the development of sternomediastinitis. The measures taken by cardiac surgeons to prevent the development of this complication are also analyzed.


2021 ◽  
pp. 100712
Author(s):  
Junjie Liu ◽  
Yiyang Sun ◽  
Jing Ma ◽  
Jiachen Tu ◽  
Yuhui Deng ◽  
...  

Blood ◽  
2021 ◽  
Vol 138 (Supplement 1) ◽  
pp. 2113-2113
Author(s):  
Zhuo-Yu An ◽  
Ye-Jun Wu ◽  
Yun He ◽  
Xiao-Lu Zhu ◽  
Yan Su ◽  
...  

Abstract Introduction: Allogeneic haematopoietic stem cell transplantation (allo-HSCT) has been demonstrated to be the most effective therapy for various malignant as well as nonmalignant haematological diseases. The wide use of allo-HSCT has inevitably led to a variety of complications after transplantation, including bleeding complications such as disseminated intravascular coagulation (DIC). DIC accounts for a significant proportion of life-threatening bleeding cases occurring after allo-HSCT. However, information on markers for early identification remains limited, and no predictive tools for DIC after allo-HSCT are available. This research aimed to identify the risk factors for DIC after allo-HSCT and to establish models to predict its occurrence. Methods: The definition of DIC was based on the International Society on Thrombosis and Haemostasis (ISTH) scoring system. Overall, 197 patients with DIC after allo-HSCT at Peking University People's Hospital and 7 other centers in China from January 2010 to June 2021 were retrospectively identified. Each patient was randomly matched to 3 controls based on the time of allo-HSCT (±3 months) and length of follow-up (±6 months). A lasso regression model was used for data dimension reduction and feature selection. Multivariable logistic regression analysis was used to develop the prediction model, which incorporated the clinical risk factors and was presented as a nomogram. The performance of the nomogram was assessed with respect to its calibration, discrimination, and clinical usefulness, and internal and external validation were performed. Various machine learning models were further used for the classification task, including XGBClassifier, LogisticRegression, MLPClassifier, RandomForestClassifier, and AdaBoostClassifier. Results: A total of 7280 patients received allo-HSCT from January 2010 to June 2021, and DIC occurred in 197 of these patients (incidence of 2.7%). The derivation cohort included 120 DIC patients and 360 control patients who received allo-HSCT at Peking University People's Hospital, and the validation cohort included the remaining 77 DIC patients and 231 controls from the other 7 centers. The median time to a DIC event was 99.0 (IQR, 46.8-220) days after allo-HSCT. The overall survival of patients with DIC was significantly reduced (P < 0.0001). By lasso regression, the 10 variables with the highest importance were prothrombin time activity (PTA), shock, C-reactive protein, international normalized ratio, bacterial infection, oxygenation, fibrinogen, blood creatinine, white blood cell count, and acute respiratory distress syndrome (from highest to lowest). In the multivariate analysis, the independent risk factors for DIC were PTA, bacterial infection, and shock (P < 0.001), and these predictors were included in the clinical prediction nomogram. The model showed good discrimination, with a C-index of 0.975 (95% CI, 0.939 to 0.987 on internal validation) and good calibration. Application of the nomogram in the validation cohort still gave good discrimination (C-index, 0.778 [95% CI, 0.759 to 0.766]) and good calibration. Decision curve analysis demonstrated that the nomogram was clinically useful.
The ROC curves of the different machine learning models showed that XGBClassifier was the best-performing model for this dataset, with an area under the curve of 0.86. Conclusions: Risk factors for DIC after allo-HSCT were identified, and a nomogram model and various machine learning models were established to predict the occurrence of DIC after allo-HSCT. Combined, these can help recognize high-risk patients and provide timely treatment. In the future, we will further refine the prognostic model utilizing nationwide multicenter data and conduct prospective clinical trials to reduce the incidence of DIC after allo-HSCT and improve the prognosis. Disclosures: No relevant conflicts of interest to declare.
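A minimal sketch of the two-step workflow described above, assuming a prepared numeric case-control table: lasso (L1-penalised) logistic regression for feature selection, followed by a multivariable logistic regression on the selected predictors to obtain the odds ratios that would feed a nomogram. The file name, column names, and penalty strength are hypothetical; this is not the study's code.

```python
# Minimal sketch: lasso for feature selection, then multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("allo_hsct_dic_cohort.csv")   # hypothetical matched case-control data (numeric)
y = df.pop("dic")                              # 1 = DIC after allo-HSCT, 0 = matched control

# Step 1: L1-penalised logistic regression (lasso) for dimension reduction / feature selection
X_scaled = StandardScaler().fit_transform(df)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_scaled, y)
selected = df.columns[lasso.coef_[0] != 0]
print("Lasso-selected predictors:", list(selected))

# Step 2: multivariable logistic regression on the selected predictors (nomogram inputs)
X = sm.add_constant(df[selected])
fit = sm.Logit(y, X).fit(disp=False)
or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_ci.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(or_ci.round(3))
```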

