P4622 Prediction of in-hospital bleeding for AMI patients undergoing PCI using machine learning method

2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
X Y Zhao ◽  
J G Yang ◽  
T G Chen ◽  
J M Wang ◽  
X Li ◽  
...  

Abstract Background Prediction of in-hospital bleeding is critical for clinical decision making for acute myocardial infarction (AMI) patients undergoing percutaneous coronary intervention (PCI). Machine learning methods can automatically select a combination of important features and learn their underlying relationship with the outcome. Objective We aim to evaluate the value of machine learning methods for predicting in-hospital bleeding in AMI patients. Methods We used data from the multicenter China Acute Myocardial Infarction (CAMI) registry. We randomly partitioned the cohort into a derivation set (75%) and a validation set (25%). Using data from the derivation set, we applied a state-of-the-art machine learning algorithm, XGBoost, to automatically select features from 106 candidate variables and train a risk prediction model to predict in-hospital bleeding (BARC 3, 5 definition). Results 16,736 AMI patients who underwent PCI were consecutively included in the analysis; 70 (0.42%) patients had in-hospital bleeding according to the BARC 3, 5 definition. Fifty-nine features were automatically selected from the candidates and used to construct the prediction model. The area under the curve (AUC) of the XGBoost model was 0.816 (95% CI: 0.745–0.887) on the validation set, while the AUC of the CRUSADE risk score was 0.723 (95% CI: 0.619–0.828).

Relative contribution of the 12 most important features:
  Feature            Relative importance
  Direct bilirubin   0.078
  Heart rate         0.077
  CKMB               0.076
  Creatinine         0.064
  GPT                0.052
  Age                0.048
  SBP                0.036
  TG                 0.035
  Glucose            0.035
  HCT                0.031
  Total bilirubin    0.030
  Neutrophil         0.030

[Figure: ROC curves of the XGBoost model and the CRUSADE score]

Conclusion The XGBoost model derived from the CAMI cohort accurately predicts in-hospital bleeding among Chinese AMI patients undergoing PCI. Acknowledgement/Funding The CAMS Innovation Fund for Medical Sciences (CIFMS) (2016-12M-1-009); the Twelfth Five-Year Planning Project of China (2011BAI11B02)
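The workflow described above (train on a derivation split, evaluate discrimination by AUC on a held-out split, rank features by importance) can be sketched as follows. This is a minimal illustration on synthetic, imbalanced data standing in for the CAMI registry; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and all variable names are illustrative.

```python
# Gradient-boosted risk model for a rare binary outcome, evaluated by AUC
# on a held-out 25% split (mirroring the paper's 75/25 partition).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic outcome, loosely echoing the low bleeding rate
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.97],
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25,
                                            stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.3f}")

# Feature importances give the "relative contribution" ranking
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top feature indices:", top)
```

In practice the xgboost package itself would be used, and feature importance would be read off the trained booster in the same way.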

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Imogen Schofield ◽  
David C. Brodbelt ◽  
Noel Kennedy ◽  
Stijn J. M. Niessen ◽  
David B. Church ◽  
...  

Abstract Cushing's syndrome is an endocrine disease in dogs that negatively impacts the quality of life of affected animals. Cushing's syndrome can be a challenging diagnosis to confirm, so new methods to aid diagnosis are warranted. Four machine learning algorithms were applied to predict a future diagnosis of Cushing's syndrome, using structured clinical data from the VetCompass programme in the UK. Dogs suspected of having Cushing's syndrome were included in the analysis and classified based on the final reported diagnosis within their clinical records. Demographic and clinical features available at the point of first suspicion by the attending veterinarian were included in the models. The machine learning methods were able to classify the recorded Cushing's syndrome diagnoses with good predictive performance. The LASSO penalised regression model showed the best overall performance when applied to the test set, with AUROC = 0.85 (95% CI 0.80–0.89), sensitivity = 0.71, specificity = 0.82, PPV = 0.75 and NPV = 0.78. The findings of our study indicate that machine learning methods could predict the future diagnosis of a practicing veterinarian. New approaches using these methods could support clinical decision-making and contribute to improved diagnosis of Cushing's syndrome in dogs.
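The best-performing model above, LASSO-penalised logistic regression evaluated by AUROC, sensitivity, specificity, PPV and NPV, can be sketched as follows. Synthetic data stands in for the VetCompass clinical features; the penalty strength and all names are illustrative assumptions.

```python
# L1-penalised (LASSO) logistic regression with the metrics reported above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# penalty="l1" gives LASSO-style coefficient shrinkage/selection
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
pred = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()

auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"AUROC={auroc:.2f} Se={sensitivity:.2f} Sp={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```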


2019 ◽  
Author(s):  
Rohan Khera ◽  
Julian Haimovich ◽  
Nate Hurley ◽  
Robert McNamara ◽  
John A Spertus ◽  
...  

ABSTRACT Introduction Accurate prediction of risk of death following acute myocardial infarction (AMI) can guide the triage of care services and shared decision-making. Contemporary machine learning may improve risk prediction by identifying complex relationships between predictors and outcomes. Methods and Results We studied 993,905 patients in the American College of Cardiology Chest Pain-MI Registry hospitalized with AMI (mean age 64 ± 13 years, 34% women) between January 2011 and December 2016. We developed and validated three machine learning models to predict in-hospital mortality and compared their performance characteristics with a logistic regression model. In an independent validation cohort, we compared logistic regression with lasso regularization (c-statistic, 0.891 [95% CI, 0.890-0.892]), gradient descent boosting (c-statistic, 0.902 [0.901-0.903]), and meta-classification that combined gradient descent boosting with a neural network (c-statistic, 0.904 [0.903-0.905]) against traditional logistic regression (c-statistic, 0.882 [0.881-0.883]). Each of the three methods improved classification of individuals across the spectrum of patient risk; the meta-classifier model (our best performing model) appropriately reclassified 20.9% of individuals deemed high-risk for mortality by logistic regression as low-to-moderate risk, and 8.2% of those deemed low-risk as moderate-to-high risk, consistent with the actual event rates. Conclusions Machine learning methods improved the prediction of in-hospital mortality for AMI compared with logistic regression. Machine learning methods enhance the utility of risk models developed using traditional statistical approaches through additional exploration of the relationship between variables and outcomes.
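A meta-classifier that combines gradient boosting with a neural network, compared against plain logistic regression by c-statistic, can be sketched with scikit-learn's stacking ensemble. This is an illustrative analogue on synthetic data, not the registry pipeline; model sizes and all names are assumptions.

```python
# Stacking "meta-classifier" (gradient boosting + small neural network)
# versus plain logistic regression, compared by c-statistic (AUC).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=25, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
meta = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=2)),
                ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                     random_state=2))],
    final_estimator=LogisticRegression()).fit(X_tr, y_tr)

c_base = roc_auc_score(y_te, base.predict_proba(X_te)[:, 1])
c_meta = roc_auc_score(y_te, meta.predict_proba(X_te)[:, 1])
print(f"logistic c-statistic: {c_base:.3f}, meta-classifier: {c_meta:.3f}")
```

The stacking layer learns how to weight the base learners' predictions, which is one common way to realize the "meta-classification" idea described above.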


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7078
Author(s):  
Yueting Wang ◽  
Minzan Li ◽  
Ronghua Ji ◽  
Minjuan Wang ◽  
Lihua Zheng

Visible-near-infrared (Vis-NIR) spectroscopy is one of the most important methods for non-destructive and rapid detection of soil total nitrogen (STN) content. To find a practical way to build an STN content prediction model, three conventional machine learning methods and one deep learning approach are investigated and their predictive performances compared and analyzed using a public dataset, LUCAS Soil (19,019 samples). The three conventional machine learning methods are ordinary least square estimation (OLSE), random forest (RF), and extreme learning machine (ELM); for the deep learning method, three different structures of convolutional neural network (CNN) incorporating Inception modules are constructed and investigated. To clarify the effectiveness of different pre-treatments on predicting STN content, the three conventional machine learning methods are combined with four pre-processing approaches (baseline correction, smoothing, dimensional reduction, and feature selection), and the combinations are investigated, compared, and analyzed. The results indicate that the baseline-corrected and smoothed ELM model reaches practical precision (coefficient of determination (R2) = 0.89, root mean square error of prediction (RMSEP) = 1.60 g/kg, and residual prediction deviation (RPD) = 2.34), while among the three CNN structures, the one with more 1 × 1 convolutions performs better (R2 = 0.93, RMSEP = 0.95 g/kg, and RPD = 3.85 in the optimal case).
In addition, to evaluate the influence of dataset characteristics on the model, the LUCAS dataset was divided into subsets according to dataset size, organic carbon (OC) content, and country. The results show that the deep learning method is more effective and practical than conventional machine learning methods and, given enough data samples, can be used to build a robust, highly accurate STN content prediction model for the same type of soil with similar agricultural treatment.
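The smooth-then-regress pipeline with the metrics used above (R2, RMSEP, and RPD = SD of the reference values divided by RMSEP) can be sketched as follows. Synthetic spectra stand in for the LUCAS Vis-NIR data, a random forest stands in for the paper's models, and the Savitzky-Golay window settings are illustrative assumptions.

```python
# Spectral regression sketch: Savitzky-Golay smoothing, random forest fit,
# then R2, RMSEP and RPD on a held-out split.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 200)).cumsum(axis=1)          # fake smooth spectra
y = X[:, 50] - X[:, 150] + rng.normal(scale=0.5, size=600)

# Smoothing pre-treatment across each spectrum
X_smooth = savgol_filter(X, window_length=11, polyorder=2, axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X_smooth, y, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmsep = mean_squared_error(y_te, pred) ** 0.5
rpd = y_te.std() / rmsep                                 # RPD = SD / RMSEP
print(f"R2={r2_score(y_te, pred):.2f} RMSEP={rmsep:.2f} RPD={rpd:.2f}")
```

By the convention used in the abstract, RPD above roughly 2 is treated as practically useful precision.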


2021 ◽  
Author(s):  
Jincheng Yang

BACKGROUND Diabetes mellitus and cancer are amongst the leading causes of death worldwide; hyperglycemia plays a major contributory role in the risk of neoplastic transformation. Support Vector Machine (SVM) is a type of supervised learning method that analyzes data and recognizes patterns, mainly used for statistical classification and regression. OBJECTIVE From reported adverse events of PD-1 or PD-L1 (programmed death 1 or ligand 1) inhibitors in post-marketing monitoring, we aimed to construct an effective machine learning algorithm to efficiently and rapidly predict the probability of hyperglycemic adverse reactions in patients treated with PD-1/PD-L1 inhibitors. METHODS Raw data were downloaded from the US Food and Drug Administration Adverse Event Reporting System (FDA FAERS). Signals of the relationship between drug and adverse reaction were assessed based on disproportionality analysis and Bayesian analysis. A multivariate SVM pattern classifier was constructed to separate patients with adverse hyperglycemic reactions. A 10-fold, 3-repeat cross-validation on the training data (80% of the data) output the best SVM parameter values in R software. The model was validated on each testing data set (20% of the data) and on the two total drug data sets, using two predictor parameters: gamma and nu. RESULTS In total, 95,918 case files were downloaded for 7 relevant drugs (cemiplimab, avelumab, durvalumab, atezolizumab, pembrolizumab, ipilimumab, nivolumab). The number-type/number-optimization method was selected to optimize the model. Both gamma and nu correlated with case number, showing high adjusted r2 in curve regressions (both r2 > 0.95). Indexes of accuracy, F1 score, kappa, and sensitivity were greatly improved by the prediction model in the training data and the two total drug data sets. CONCLUSIONS The SVM prediction model established here can non-invasively and precisely predict the occurrence of hyperglycemic adverse drug reactions (ADRs) in patients treated with PD-1/PD-L1 inhibitors.
Such information is vital for managing ADRs and improving outcomes by distinguishing patients at high risk of hyperglycemia, and this machine learning algorithm can eventually add value to clinical decision making. CLINICALTRIAL N/A
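The parameter-tuning step, selecting gamma and nu for a nu-SVM via repeated 10-fold cross-validation on an 80/20 split, can be sketched as follows. The original analysis was done in R; this is a Python analogue on synthetic data standing in for the FAERS case reports, and the candidate parameter grid is an illustrative assumption.

```python
# Tuning a nu-SVM's gamma and nu by 10-fold, 3-repeat cross-validation,
# then scoring on a held-out 20% test split.
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     train_test_split)
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=500, n_features=10, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=3)
search = GridSearchCV(NuSVC(),
                      {"gamma": [0.01, 0.1, 1.0], "nu": [0.1, 0.3, 0.5]},
                      cv=cv).fit(X_tr, y_tr)
print("best params:", search.best_params_)
print(f"held-out accuracy: {search.score(X_te, y_te):.2f}")
```

In the nu-SVM formulation, nu bounds the fraction of margin errors and support vectors, while gamma sets the RBF kernel width; both are the parameters the abstract reports as correlating with case number.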


2020 ◽  
Author(s):  
Jincheng Yang ◽  
Weilong Lin ◽  
Liming Shi ◽  
Ming Deng ◽  
Wenjing Yang

Abstract Background: Diabetes mellitus and cancer are amongst the leading causes of death worldwide; hyperglycemia plays a major contributory role in the risk of neoplastic transformation. From reported adverse events of PD-1 or PD-L1 (programmed death 1 or ligand 1) inhibitors in post-marketing monitoring, we aimed to construct an effective machine learning algorithm to efficiently and rapidly predict the probability of hyperglycemic adverse reactions in patients treated with PD-1/PD-L1 inhibitors. Methods: Raw data were downloaded from the US Food and Drug Administration Adverse Event Reporting System (FDA FAERS). Signals of the relationship between drug and adverse reaction were assessed based on disproportionality analysis and Bayesian analysis. A multivariate Support Vector Machine (SVM) pattern classifier was constructed to separate patients with adverse hyperglycemic reactions. A 10-fold, 3-repeat cross-validation on the training data (80% of the data) output the best SVM parameter values in R software. The model was validated on each testing data set (20% of the data) and on the two total drug data sets, using two predictor parameters: gamma and nu. Results: In total, 95,918 case files were downloaded for 7 relevant drugs (cemiplimab, avelumab, durvalumab, atezolizumab, pembrolizumab, ipilimumab, nivolumab). The number-type/number-optimization method was selected to optimize the model. Both gamma and nu correlated with case number, showing high adjusted r2 in curve regressions (both r2 > 0.95). Indexes of accuracy, F1 score, kappa, and sensitivity were greatly improved by the prediction model in the training data and the two total drug data sets. Conclusions: The SVM prediction model established here can non-invasively and precisely predict the occurrence of hyperglycemic adverse drug reactions (ADRs) in patients treated with PD-1/PD-L1 inhibitors.
Such information is vital for managing ADRs and improving outcomes by distinguishing patients at high risk of hyperglycemia, and this machine learning algorithm can eventually add value to clinical decision making.


Author(s):  
Kevin Matsuno ◽  
Vidya Nandikolla

Abstract Brain computer interface (BCI) systems are developed in biomedical fields to increase quality of life. The development of a six-class BCI controller to operate a semi-autonomous robotic arm is presented. The controller uses the following mental tasks: imagined left/right hand squeeze, imagined left/right foot tap, rest, one physical task, and jaw clench. To design the controller, the locations of active electrodes are verified and an appropriate machine learning algorithm is determined. Three subjects, aged 22-27, participated in five sessions of motor imagery experiments to record their brainwaves. These recordings were analyzed using event-related potential plots and topographical maps to determine active electrodes. BCILAB was used to train two-, three-, five-, and six-class BCI controllers using linear discriminant analysis (LDA) and relevance vector machine (RVM) machine learning methods. The subjects' data were used to compare the two methods' performance in terms of error rate percentage. While the two-class BCI controller showed the same accuracy for both methods, the three- and five-class BCI controllers showed the RVM approach having higher accuracy than the LDA approach. For the five-class controller, the error rate was 33.3% for LDA and 29.2% for RVM. The six-class BCI controller error rate for both LDA and RVM was 34.5%. Although these values are the same, RVM was chosen as the desired machine learning algorithm based on the trend seen in the three- and five-class controller performances.
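The error-rate comparison above can be sketched for the LDA side with scikit-learn. RVM is not part of scikit-learn (third-party packages such as sklearn-rvm provide it), so only LDA is shown here; random features stand in for the subjects' EEG epochs, and the five-class setup echoes the five-class controller.

```python
# Cross-validated error rate of an LDA classifier on a 5-class problem,
# standing in for BCILAB's LDA evaluation of the five-class BCI controller.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           n_classes=5, random_state=4)
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
error_rate = (1 - acc) * 100
print(f"LDA error rate: {error_rate:.1f}%")
```

The same cross-validation loop would be run with the RVM implementation to reproduce the head-to-head comparison reported above.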

