Machine Learning Methods for Prediction of Hospital Mortality in Patients with Coronary Heart Disease after Coronary Artery Bypass Grafting

Kardiologiia ◽  
2020 ◽  
Vol 60 (10) ◽  
pp. 38-46
Author(s):  
B. I. Geltser ◽  
K. J. Shahgeldyan ◽  
V. Y. Rublev ◽  
V. N. Kotelnikov ◽  
A. B. Krieger ◽  
...  

Aim: To compare the accuracy of predicting an in-hospital fatal outcome among models based on current machine-learning methods in patients with ischemic heart disease (IHD) after coronary bypass (CB) surgery. Material and Methods: A retrospective analysis of 866 electronic medical records was performed for patients (685 men and 181 women) who had undergone CB surgery for IHD in 2008–2018. Results of clinical, laboratory, and instrumental evaluations obtained prior to CB surgery were analyzed. Patients were divided into two groups: group 1 included 35 (4%) patients who died within the first 20 days after CB, and group 2 consisted of 831 (96%) patients with a favorable outcome of the surgery. Predictors of the in-hospital fatal outcome were identified by a multistep selection procedure with testing of statistical hypotheses and calculation of weight coefficients. Machine-learning methods, including multivariate logistic regression (LR), random forest (RF), and artificial neural networks (ANN), were used to build the models and verify the predictors. Model accuracy was evaluated by three metrics: area under the ROC curve (AUC), sensitivity, and specificity. Cross-validation of the models was performed on test samples, and control validation was performed on a cohort of patients with IHD after CB whose data were not used in developing the models. Results: Seven risk factors for an in-hospital fatal outcome with the greatest predictive potential were isolated from the EuroSCORE II scale: ejection fraction (EF) <30%, EF 30–50%, age, recent myocardial infarction, peripheral arterial disease, urgency of CB, and functional class III–IV chronic heart failure; 5 additional predictors were also identified, including heart rate, systolic blood pressure, presence of aortic stenosis, posterior left ventricular (LV) wall relative thickness index (RTI), and LV relative mass index (LVRMI). The models developed by the authors with the LR, RF, and ANN methods had higher AUC values and sensitivity than the classical EuroSCORE II scale. The ANN models including the RTI and LVRMI predictors demonstrated the highest prognostic accuracy, with quality metrics of AUC 93%, sensitivity 90%, and specificity 96%. The predictive robustness of the models was confirmed by the results of the control validation. Conclusion: Current machine-learning methods enabled the development of a novel predictor-selection algorithm and highly accurate models for predicting an in-hospital fatal outcome after CB.
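The abstract does not include code, but the comparison it describes (fitting LR, RF, and an ANN on one predictor set and scoring each by AUC, sensitivity, and specificity) can be sketched as follows. This is a minimal illustration on synthetic data: the feature matrix, the roughly 4% event rate, and the 0.5 decision threshold are assumptions, and the columns merely stand in for predictors such as EF, heart rate, and systolic blood pressure.

```python
# Minimal sketch (synthetic data, not the authors' pipeline): compare
# logistic regression, random forest, and a small neural network on a
# binary in-hospital mortality label, scored by the study's three metrics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
# 866 "patients" with 12 placeholder predictors (standing in for EF,
# heart rate, systolic BP, LV wall indices, etc.) and a ~4% event rate.
X = rng.normal(size=(866, 12))
y = (rng.random(866) < 0.04).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)          # assumed decision threshold
    tn, fp, fn, tp = confusion_matrix(y_te, pred, labels=[0, 1]).ravel()
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp)
    print(f"{name}: AUC={roc_auc_score(y_te, proba):.3f}, "
          f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```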

2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
G Sng ◽  
D Y Z Lim ◽  
C H Sia ◽  
J S W Lee ◽  
X Y Shen ◽  
...  

Abstract. Background/Introduction: Classic electrocardiographic (ECG) criteria for left ventricular hypertrophy (LVH) have been well studied in Western populations, particularly in hypertensive patients. However, their utility in Asian populations is not well established, and their applicability to young pre-participation cohorts is unclear. Aims: We sought to evaluate the performance of classical criteria against that of novel machine learning models in the identification of LVH. Methodology: Between November 2009 and December 2014, pre-participation screening ECGs and subsequent echocardiographic data were collected from 13,954 males aged 16 to 22 who reported for medical screening prior to military conscription. The final diagnosis of LVH was made on echocardiography, with LVH defined as a left ventricular mass index >115 g/m2. The continuous and binary forms of the classical criteria were compared against machine learning models using receiver-operating characteristic (ROC) curve analysis. An 80:20 split was used to divide the data into training and test sets for the machine learning models, and threefold cross-validation was used in training the models. We also compared the important variables identified by the machine learning models with the input variables of the classical criteria. Results: The prevalence of echocardiographic LVH in this population was 0.91% (127 cases). Classical ECG criteria performed poorly in predicting LVH, with the best predictions achieved by the continuous Sokolow-Lyon (AUC = 0.63, 95% CI = 0.58–0.68) and the continuous Modified Cornell (AUC = 0.63, 95% CI = 0.58–0.68) criteria. Machine learning methods achieved superior performance: Random Forest (AUC = 0.74, 95% CI = 0.66–0.82), Gradient Boosting Machines (AUC = 0.70, 95% CI = 0.61–0.79), and GLMNet (AUC = 0.78, 95% CI = 0.70–0.86). Novel and less recognized ECG parameters identified by the machine learning models as predictive of LVH included mean QT interval, mean QRS interval, R in V4, and R in I. (Figure: ROC curves of the models studied.) Conclusion: The prevalence of LVH in our population is lower than that previously reported in other similar populations. Classical ECG criteria perform poorly in this context. Machine learning methods show superior predictive performance and identify non-traditional ECG predictors of LVH. Further research is required to improve the predictive ability of machine learning models and to understand the underlying pathology of the novel ECG predictors identified.
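For context, a hedged sketch of the comparison performed here: scoring a classical continuous criterion against a GLMNet-style penalized logistic model by AUC. The Sokolow-Lyon voltage (S in V1 plus the taller R in V5 or V6, with a 3.5 mV cutoff in its binary form) is standard; the amplitude distributions, prevalence, and extra ECG columns below are invented for illustration and are not the study's data.

```python
# Hedged sketch: a classical continuous ECG criterion vs an elastic-net
# logistic model (scikit-learn analogue of GLMNet), both scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
ecg = {
    "s_v1_mv": rng.gamma(2.0, 0.4, n),   # hypothetical S-wave depth in V1 (mV)
    "r_v5_mv": rng.gamma(2.0, 0.5, n),   # hypothetical R-wave height in V5 (mV)
    "r_v6_mv": rng.gamma(2.0, 0.5, n),   # hypothetical R-wave height in V6 (mV)
    "qrs_ms": rng.normal(95, 10, n),     # mean QRS duration (ms), assumed
    "qt_ms": rng.normal(400, 25, n),     # mean QT interval (ms), assumed
}
lvh = (rng.random(n) < 0.01).astype(int)  # ~1% prevalence, as reported

# Continuous Sokolow-Lyon voltage: S in V1 + taller R in V5/V6.
sokolow = ecg["s_v1_mv"] + np.maximum(ecg["r_v5_mv"], ecg["r_v6_mv"])
print("Sokolow-Lyon AUC:", roc_auc_score(lvh, sokolow))

X = np.column_stack(list(ecg.values()))
glmnet_like = LogisticRegressionCV(penalty="elasticnet", solver="saga",
                                   l1_ratios=[0.5], cv=3, max_iter=5000)
glmnet_like.fit(X, lvh)
print("Elastic-net LR AUC:", roc_auc_score(lvh, glmnet_like.predict_proba(X)[:, 1]))
```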


2005 ◽  
Vol 17 (2) ◽  
pp. 158-164 ◽  
Author(s):  
Christine S. Hotz ◽  
Steven J. Templeton ◽  
Mary M. Christopher

A rule-based expert system written in the CLIPS programming language was created to classify body cavity effusions as transudates, modified transudates, exudates, chylous effusions, or hemorrhagic effusions. The diagnostic accuracy of the rule-based system was compared with that of 2 machine-learning methods: Rosetta, a rough-set algorithm, and RIPPER, a rule-induction method. Results of 508 body cavity fluid analyses (canine, feline, equine) obtained from the University of California–Davis Veterinary Medical Teaching Hospital computerized patient database were used to test CLIPS and to train and test RIPPER and Rosetta. The CLIPS system, using 17 rules, achieved an accuracy of 93.5% compared with pathologist consensus diagnoses. Rosetta accurately classified 91% of effusions using 5,479 rules. RIPPER achieved the greatest accuracy (95.5%) using only 10 rules. When the original rules of the CLIPS application were replaced with those of RIPPER, the accuracy rates were identical. These results suggest that both rule-based expert systems and machine-learning methods hold promise for the preliminary classification of body fluids in the clinical laboratory.
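To make the rule-based approach concrete, here is a minimal sketch of how such classification rules can be expressed. It is not the published 17-rule CLIPS system, and the cutoff values are illustrative placeholders; real criteria vary by species and laboratory.

```python
# Minimal illustration of a rule-based effusion classifier. All cutoffs
# below are assumed placeholder values, not the published CLIPS rules.
def classify_effusion(protein_g_dl, cells_per_ul, triglyceride_mg_dl, pcv_pct):
    """Return a preliminary effusion class from fluid-analysis results."""
    if pcv_pct > 10:                       # grossly bloody fluid
        return "hemorrhagic effusion"
    if triglyceride_mg_dl > 100:           # milky, lipid-rich fluid
        return "chylous effusion"
    if protein_g_dl < 2.5 and cells_per_ul < 1500:
        return "transudate"
    if protein_g_dl > 3.0 and cells_per_ul > 7000:
        return "exudate"
    return "modified transudate"           # intermediate findings

print(classify_effusion(2.0, 800, 20, 1))    # -> transudate
print(classify_effusion(4.1, 9000, 30, 2))   # -> exudate
```

Rule-induction methods such as RIPPER learn decision lists of exactly this shape from labeled cases, which is why the induced rules could be dropped into the CLIPS application directly.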


10.2196/17257 ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. e17257
Author(s):  
Zhenzhen Du ◽  
Yujie Yang ◽  
Jing Zheng ◽  
Qi Li ◽  
Denan Lin ◽  
...  

Background: Predictions of cardiovascular disease risk based on health records have long attracted broad research interest. Despite extensive efforts, prediction accuracy has remained unsatisfactory. This raises the question of whether data insufficiency, the statistical and machine-learning methods used, or intrinsic noise has hindered the performance of previous approaches, and how these issues can be alleviated. Objective: Based on a large population of patients with hypertension in Shenzhen, China, we aimed to establish a high-precision coronary heart disease (CHD) prediction model through big data and machine learning. Methods: Data from a large cohort of 42,676 patients with hypertension, including 20,156 patients with CHD onset, were extracted from electronic health records (EHRs) 1-3 years prior to CHD onset (for CHD-positive cases) or during a disease-free follow-up period of more than 3 years (for CHD-negative cases). The population was divided evenly into independent training and test datasets. Various machine-learning methods were applied to the training set to obtain high-accuracy prediction models, and the results were compared with those of traditional statistical methods and well-known risk scales. Comparison analyses were performed to investigate the effects of training sample size, factor sets, and modeling approaches on prediction performance. Results: An ensemble method, XGBoost, achieved high accuracy in predicting 3-year CHD onset on the independent test dataset, with an area under the receiver operating characteristic curve (AUC) of 0.943. Comparison analysis showed that nonlinear models (K-nearest neighbor, AUC 0.908; random forest, AUC 0.938) outperformed linear models (logistic regression, AUC 0.865) on the same datasets, and that machine-learning methods significantly surpassed traditional risk scales and fixed models (eg, the Framingham cardiovascular disease risk models). Further analyses revealed that using time-dependent features obtained from multiple records, including both statistical variables and changing-trend variables, improved performance compared with using only static features. Subpopulation analysis showed that feature design had a more significant effect on model accuracy than population size did. Marginal effect analysis showed that both traditional and EHR factors exhibited highly nonlinear characteristics with respect to the risk scores. Conclusions: We demonstrated that accurate risk prediction of CHD from EHRs is possible given a sufficiently large training population. Sophisticated machine-learning methods played an important role in tackling the heterogeneity and nonlinear nature of disease prediction. Moreover, EHR data accumulated over multiple time points provided additional features that were valuable for risk prediction. Our study highlights the importance of accumulating big data from EHRs for accurate disease prediction.
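A hedged sketch of the pipeline this abstract describes follows: an even train/test split and an XGBoost classifier scored by test-set AUC. The feature names (per-patient means and trend slopes of repeated measurements) are hypothetical stand-ins for the static and changing-trend variables the study derives from EHRs, and all data below are synthetic.

```python
# Hedged sketch of the modeling pipeline: even train/test split, XGBoost,
# test-set AUC. Feature columns are assumed stand-ins for static and
# changing-trend EHR variables; the data are synthetic, not the cohort's.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 42676                                  # cohort size from the abstract
X = np.column_stack([
    rng.normal(140, 15, n),   # sbp_mean: mean systolic BP (mmHg), assumed
    rng.normal(0.5, 2.0, n),  # sbp_trend: slope over visits (mmHg/yr), assumed
    rng.normal(5.6, 1.2, n),  # glucose_mean (mmol/L), assumed
    rng.normal(60, 10, n),    # age (years), assumed
])
y = (rng.random(n) < 0.47).astype(int)     # ~20,156/42,676 CHD-positive

# Even split into independent training and test sets, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                        eval_metric="logloss")
clf.fit(X_tr, y_tr)
print(f"Test AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```

On real data the study reports an AUC of 0.943 for this model family; the synthetic features here carry no signal, so the printed value will hover near 0.5.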


2019 ◽  
Vol 25 (5) ◽  
pp. 716-742 ◽  
Author(s):  
Gang Kou ◽  
Xiangrui Chao ◽  
Yi Peng ◽  
Fawaz E. Alsaadi ◽  
Enrique Herrera-Viedma

Financial systemic risk is an important issue in economics and financial systems. To detect and respond to systemic risk using the growing amounts of data produced in financial markets and systems, researchers have increasingly employed machine learning methods. These methods can model the mechanisms of outbreak and contagion of systemic risk in financial networks and can help improve the current regulation of financial markets and industry. In this paper, we survey existing research and methodologies on the assessment and measurement of financial systemic risk combined with machine learning technologies, including big data analysis, network analysis, and sentiment analysis. In addition, we identify future challenges and suggest further research topics. The main purpose of this paper is to introduce current research on financial systemic risk with machine learning methods and to propose directions for future work.


Healthcare ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 547
Author(s):  
Yen-Chun Huang ◽  
Shao-Jung Li ◽  
Mingchih Chen ◽  
Tian-Shyug Lee ◽  
Yu-Ning Chien

Coronary artery bypass grafting (CABG) is a common and effective treatment for patients with coronary artery disease. Although underlying disease and advancing age are known to be related to survival, no previous research has used factors from the year before surgery together with operation-associated factors as predictors. This research used different machine-learning methods to select features and predict the survival of older adults (aged over 65 years). This nationwide population-based cohort study used the National Health Insurance Research Database (NHIRD), the largest and most complete dataset in Taiwan. We extracted the data of older patients who received their first CABG surgery between January 2008 and December 2009 (n = 3728), and we used five different machine-learning methods to select features and predict survival rates. The results show that, without variable selection, XGBoost had the best predictive ability. Upon selecting variables with XGBoost and adding the CHA2DS score, acute pancreatitis, and acute kidney failure for further predictive analysis, MARS had the best prediction performance, and it needed only 10 variables. The advantages of this study are its innovation and usefulness for clinical decision-making, showing that machine learning can achieve better prediction with fewer variables. If we could predict patients' survival risk before a CABG operation, early prevention and disease management would be possible.
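The two-stage idea (rank variables with one model, then refit a compact model on the selected set) can be sketched as follows, using XGBoost's feature importances for the selection step. The data, label rate, and feature count are synthetic assumptions, and XGBoost stands in for the study's final MARS model, which could instead be fit with a MARS implementation such as pyearth.

```python
# Hedged sketch of the two-stage idea: rank variables with a full XGBoost
# model, keep the top 10, and refit on the reduced set. XGBoost stands in
# for the study's MARS model; data and feature count are synthetic.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, p = 3728, 40                            # n from the abstract; p assumed
X = rng.normal(size=(n, p))
y = (rng.random(n) < 0.15).astype(int)     # placeholder survival label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

full = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
full.fit(X_tr, y_tr)

top10 = np.argsort(full.feature_importances_)[::-1][:10]  # keep 10 variables
slim = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
slim.fit(X_tr[:, top10], y_tr)

print("full-model AUC:", roc_auc_score(y_te, full.predict_proba(X_te)[:, 1]))
print("10-variable AUC:", roc_auc_score(y_te, slim.predict_proba(X_te[:, top10])[:, 1]))
```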

