Abstract 17133: The Use of Novel Hematological Markers in Predicting Stroke After TAVR Using a Machine Learning Algorithm

Circulation · 2020 · Vol 142 (Suppl_3)
Author(s): Aliaskar Z Hasani, Kusha Rahgozar, Aaron Wengrofsky, Narasimha Kuchimanchi, Mohammad Hashim Mustehsan, ...

Introduction: Aortic stenosis is the most common valvular disorder, predominantly affecting the elderly. Transcatheter Aortic Valve Replacement (TAVR) is an effective procedure that markedly improves patients' quality of life, yet it carries a small but clinically significant risk of stroke. The neutrophil-lymphocyte ratio (NLR) and platelet-lymphocyte ratio (PLR) are increasingly used as novel markers of systemic inflammation. We investigated the ability of a machine learning algorithm (LightGBM) to weight these ratios, together with other clinical parameters, in predicting stroke after TAVR. Objective: To demonstrate the efficacy of the supervised machine learning algorithm LightGBM in identifying important variables for predicting stroke after TAVR. Methods: We performed a retrospective analysis of 291 patients who underwent TAVR from 2015 to 2019 at Montefiore Medical Center (age 80±8 years, 50.2% female, BMI 28.7±6.3). Clinical data were collected from our hospital EMR. Average NLR and PLR were obtained as the mean of the baseline (prior to the procedure), immediate post-operative, and post-operative day 1 values. The supervised machine learning algorithm LightGBM used decision-tree learners with both level-wise and leaf-wise growth. The algorithm was trained on 80% of the data and internally validated on the remaining 20%. Results: NLR and PLR were identified as the second and third most important features (Table 1); important clinical and demographic features included BMI, age, and sex. On internal validation, the model yielded a sensitivity of 75.0%, specificity of 91.5%, accuracy of 91.5%, and F1 score of 0.75. The AUC for the model was 0.84. Conclusions: Using novel hematological parameters in conjunction with a machine learning algorithm highlighted important variables for predicting stroke after TAVR. Extrapolated, average NLR and PLR could be an inexpensive tool for stratifying the patients most at risk.
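
A minimal sketch of the kind of workflow the abstract describes (80/20 split, LightGBM classifier, feature importance ranking) is shown below. The feature names and synthetic data are illustrative assumptions, not the authors' Montefiore cohort or code.

```python
# Illustrative sketch only: synthetic data stands in for the TAVR cohort.
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 291
X = pd.DataFrame({
    "nlr_avg": rng.normal(4.0, 2.0, n),     # mean of baseline, immediate post-op, POD1 NLR
    "plr_avg": rng.normal(150.0, 60.0, n),  # mean of baseline, immediate post-op, POD1 PLR
    "bmi": rng.normal(28.7, 6.3, n),
    "age": rng.normal(80.0, 8.0, n),
    "sex": rng.integers(0, 2, n),
})
y = rng.integers(0, 2, n)  # 1 = stroke after TAVR (placeholder labels)

# 80% training / 20% internal validation, as described in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, proba), 3))
print("Feature importances:", dict(zip(X.columns, model.feature_importances_)))
```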

2021 · Vol 2021 · pp. 1-18
Author(s): Aurelle Tchagna Kouanou, Thomas Mih Attia, Cyrille Feudjio, Anges Fleurio Djeumo, Adèle Ngo Mouelas, ...

Background and Objective. To mitigate the spread of the virus responsible for COVID-19, SARS-CoV-2, there is an urgent need for massive population testing. Because of the constant shortage of reagents for PCR (polymerase chain reaction) tests, the reference tests for COVID-19, several medical centers have opted for immunological tests that look for antibodies produced against this virus. However, these tests have high rates of false positives (positive but actually negative results) and false negatives (negative but actually positive results) and are therefore not always reliable. In this paper, we propose a solution based on data analysis and machine learning to detect COVID-19 infections. Methods. Our analysis and machine learning algorithm are based on the two most cited clinical datasets in the literature: one from San Raffaele Hospital, Milan, Italy, and the other from Hospital Israelita Albert Einstein, São Paulo, Brazil. The datasets were processed to select the features that most influence the target, and almost all of them turned out to be blood parameters. Exploratory data analysis (EDA) methods were applied to the datasets, and a comparative study of supervised machine learning models was carried out, after which the support vector machine (SVM) was selected as the best-performing model. Results. Being the best performer, SVM was used as our proposed supervised machine learning algorithm. An accuracy of 99.29%, sensitivity of 92.79%, and specificity of 100% were obtained with the dataset from Kaggle (https://www.kaggle.com/einsteindata4u/covid19) after applying optimization to the SVM. The same procedure was applied to the dataset from San Raffaele Hospital (https://zenodo.org/record/3886927#.YIluB5AzbMV). Once more, SVM showed the best performance among the machine learning algorithms, with an accuracy of 92.86%, sensitivity of 93.55%, and specificity of 90.91%. Conclusion. The obtained results are superior to others in the literature based on the same datasets, leading us to conclude that the proposed solution is reliable for COVID-19 diagnosis.
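
As a rough illustration of the described pipeline (blood-parameter features, model comparison, and an "optimized" SVM), the hedged sketch below uses synthetic blood-count columns rather than the Einstein or San Raffaele data; the column names, hyperparameter grid, and scoring choice are assumptions.

```python
# Hedged sketch: synthetic blood-parameter features, not the Kaggle/Zenodo datasets.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import recall_score, accuracy_score

rng = np.random.default_rng(1)
n = 600
X = pd.DataFrame({
    "leukocytes": rng.normal(7.0, 2.0, n),
    "lymphocytes": rng.normal(2.0, 0.7, n),
    "platelets": rng.normal(250.0, 60.0, n),
    "eosinophils": rng.normal(0.2, 0.1, n),
})
y = rng.integers(0, 2, n)  # 1 = SARS-CoV-2 RT-PCR positive (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=1)

# Grid search stands in for the "optimization applied to SVM" mentioned in the abstract.
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
    scoring="recall",  # prioritize sensitivity
    cv=5,
)
grid.fit(X_tr, y_tr)

pred = grid.predict(X_te)
sensitivity = recall_score(y_te, pred)
specificity = recall_score(y_te, pred, pos_label=0)
print(accuracy_score(y_te, pred), sensitivity, specificity)
```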


2021
Author(s): Omar Alfarisi, Zeyar Aung, Mohamed Sassi

Choosing the optimal machine learning algorithm is not an easy decision. To help future researchers, we describe in this paper the optimal choice among the best-performing algorithms. We built a synthetic dataset and performed supervised machine learning runs with five different algorithms. For heterogeneity, we identified Random Forest, among others, as the best algorithm.
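
A hedged sketch of the kind of comparison described (five supervised algorithms run on a synthetic dataset) might look like the following; the particular algorithms and the generated data are assumptions, since the abstract does not list them.

```python
# Illustrative comparison on synthetic data; the five algorithms shown are assumed.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=42)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=42),
    "GradientBoosting": GradientBoostingClassifier(random_state=42),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=42),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```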


Author(s): Shahadat Uddin, Arif Khan, Md Ekramul Hossain, Mohammad Ali Moni

Background: Supervised machine learning algorithms have been a dominant method in the data mining field, and disease prediction from health data has recently emerged as a potential application area for these methods. This study aims to identify the key trends among different types of supervised machine learning algorithms and their performance and usage for disease risk prediction. Methods: Extensive efforts were made to identify studies that applied more than one supervised machine learning algorithm to the prediction of a single disease. Two databases (Scopus and PubMed) were searched using different types of search terms. In total, 48 articles were selected for the comparison of variants of supervised machine learning algorithms for disease prediction. Results: The Support Vector Machine (SVM) algorithm was applied most frequently (in 29 studies), followed by the Naïve Bayes algorithm (in 23 studies). However, the Random Forest (RF) algorithm showed comparatively superior accuracy: of the 17 studies in which it was applied, RF showed the highest accuracy in 9 (53%). It was followed by SVM, which performed best in 41% of the studies in which it was considered. Conclusion: This study provides a broad overview of the relative performance of different variants of supervised machine learning algorithms for disease prediction. This information on relative performance can aid researchers in selecting an appropriate supervised machine learning algorithm for their studies.
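
The review's central comparison (SVM vs. Naïve Bayes vs. Random Forest on a single-disease prediction task) can be reproduced in miniature as sketched below; the dataset and cross-validation setup are assumptions for illustration only, not the protocol of any reviewed study.

```python
# Miniature version of the reviewed comparison on a public disease dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```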


2018 · Vol 7 (4.15) · pp. 400
Author(s): Thuy Nguyen Thi Thu, Vuong Dang Xuan

The exchange rate of each currency pair can be predicted using a machine learning classification algorithm. With the help of a supervised machine learning model, the predicted uptrend or downtrend of a FoRex rate may help traders make the right decisions on FoRex transactions. Deploying machine learning algorithms in the online FoRex market can automate buy/sell transactions. All transactions in the experiment were performed by scripts added to the trading application. The capital and profit results obtained using support vector machine (SVM) models were higher than those of the baseline (without SVM).
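
A hedged sketch of an SVM uptrend/downtrend classifier of the kind described is given below; the lagged-return features and the synthetic price series are assumptions, and the actual study executed trades through scripts in the trading application rather than in Python.

```python
# Illustrative sketch: classify next-period direction of a synthetic FX rate with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
# Synthetic exchange-rate series (random walk) stands in for real currency-pair quotes.
rate = 1.10 + np.cumsum(rng.normal(0, 0.001, 2000))
returns = np.diff(rate) / rate[:-1]

lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)  # 1 = uptrend next period, 0 = downtrend

split = int(0.8 * len(X))  # chronological split to avoid look-ahead bias
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

model = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
model.fit(X_tr, y_tr)
print("Directional accuracy:", accuracy_score(y_te, model.predict(X_te)))
```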


2019
Author(s): Massimiliano Grassi, Nadine Rouleaux, Daniela Caldirola, David Loewenstein, Koen Schruers, ...

Background: Despite the increasing availability of brain-health-related data, clinically translatable methods to predict the conversion from Mild Cognitive Impairment (MCI) to Alzheimer's disease (AD) are still lacking. Although MCI typically precedes AD, only 20-40% of MCI individuals progress to dementia within 3 years of the initial diagnosis. As currently available and emerging therapies likely have the greatest impact when provided at the earliest disease stage, prompt identification of subjects at high risk of conversion to full AD is of great importance in the fight against this disease. In this work, we propose a highly predictive machine learning algorithm, based only on predictors that can be collected non-invasively and easily in the clinic, to identify MCI subjects at risk of conversion to full AD. Methods: The algorithm was developed using the open dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), employing a sample of 550 MCI subjects whose diagnostic follow-up was available for at least 3 years after the baseline assessment. A restricted set of sociodemographic and clinical characteristics and neuropsychological test scores was used as predictors, and several different supervised machine learning algorithms were developed and ensembled into a final algorithm. A site-independent stratified train/test split protocol was used to estimate the generalized performance of the algorithm. Results: The final algorithm demonstrated an AUROC of 0.88, sensitivity of 77.7%, and specificity of 79.9% on the held-out test data. The specificity of the algorithm was 40.2% at 100% sensitivity. Discussion: The algorithm achieved high prognostic performance in predicting AD conversion using easily obtainable clinical information, which makes it straightforward to translate into practice. This suggests beneficial applications in improving recruitment for clinical trials and in more selectively prescribing new and emerging early interventions to patients at high risk of AD.
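
A minimal sketch of the general approach (several supervised learners ensembled, a stratified split, AUROC on held-out data) follows; the predictor names, synthetic values, and the soft-voting ensemble are assumptions, not the ADNI-based model itself.

```python
# Hedged sketch: ensembled supervised learners for MCI-to-AD conversion, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 550
X = pd.DataFrame({
    "age": rng.normal(73, 7, n),
    "education_years": rng.normal(15, 3, n),
    "mmse": rng.normal(27, 2, n),          # neuropsychological scores (illustrative)
    "adas_cog_13": rng.normal(16, 6, n),
    "ravlt_immediate": rng.normal(32, 10, n),
})
y = rng.integers(0, 2, n)  # 1 = converted to AD within 3 years (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=3)

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=3)),
        ("gb", GradientBoostingClassifier(random_state=3)),
    ],
    voting="soft",  # average predicted probabilities across learners
)
ensemble.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```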


2019
Author(s): Eunhyung Lee, Sanghyun Kim

Abstract. Time series of soil moisture were measured at 30 points over 396 rainfall events on a steep, forested hillslope between 2007 and 2016. We then analyzed the dataset using an unsupervised machine learning algorithm to cluster the hydrologic events based on the dissimilarity distances between the weighting components of a self-organizing map (SOM). The generation patterns of two primary hillslope hydrological processes, vertical flow and lateral flow, in the upslope and downslope areas were responsible for the distinction between hydrologic events. Two-dimensional spatial weighting patterns in the SOM explained the relationships between rainfall characteristics and hydrological processes at different locations and depths. High reliability of the hydrologic classification was achieved for both the driest and wettest events, as assessed through k-fold cross-validation using 10 years of data. Representative soil moisture monitoring points were found through temporal stability analysis of the event structure delineated by the machine learning classification. Application of a supervised machine learning algorithm provided a scheme that uses soil moisture to identify the hydrologic event cluster even without rainfall data, which is useful for characterizing hillslope hydrologic processes at the lowest data-acquisition cost.
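
The clustering step described (a self-organizing map whose weight vectors are then grouped by dissimilarity) could be sketched roughly as below using the MiniSom package; the event feature matrix, map size, and the agglomerative grouping step are assumptions, not the authors' configuration.

```python
# Rough sketch: SOM on event feature vectors, then clustering of the SOM weights.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Placeholder event matrix: 396 rainfall events x 30 soil-moisture monitoring points.
events = StandardScaler().fit_transform(rng.normal(size=(396, 30)))

som = MiniSom(6, 6, input_len=events.shape[1], sigma=1.0, learning_rate=0.5, random_seed=5)
som.random_weights_init(events)
som.train_random(events, 5000)

# Group SOM nodes by the dissimilarity of their weight vectors.
weights = som.get_weights().reshape(-1, events.shape[1])  # (36 nodes, 30 features)
node_labels = AgglomerativeClustering(n_clusters=4).fit_predict(weights)

# Assign each event the cluster of its best-matching SOM node.
def event_cluster(v):
    i, j = som.winner(v)
    return node_labels[i * 6 + j]

clusters = np.array([event_cluster(v) for v in events])
print(np.bincount(clusters))  # number of events per hydrologic cluster
```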


2020
Author(s): Lydia Chougar, Johann Faouzi, Nadya Pyatigorskaya, Rahul Gaurav, Emma Biondetti, ...

Background: Several studies have shown that machine learning algorithms using MRI data can accurately discriminate parkinsonian syndromes, but validation under clinical conditions is missing. Objectives: To evaluate the accuracy, for the categorization of parkinsonian syndromes, of a machine learning algorithm trained on a research cohort and tested on an independent clinical replication cohort. Methods: 361 subjects were recruited, including 94 healthy controls, 139 patients with PD, 60 with PSP with Richardson's syndrome, 41 with MSA of the parkinsonian variant (MSA-P), and 27 with MSA of the cerebellar variant (MSA-C). They were divided into a training cohort (n=179), scanned in a research environment, and a replication cohort (n=182), scanned under clinical conditions on different MRI systems. Volumes and DTI metrics in 13 brain regions were used as input for a supervised machine learning algorithm. Results: High accuracy was achieved using volumetry in the classification of PD versus PSP, PD versus MSA-P, PD versus MSA-C, PD versus atypical parkinsonian syndromes, and PSP versus MSA-C in both cohorts, although it was slightly lower in the replication cohort (balanced accuracy: 0.800 to 0.915 in the training cohort; 0.741 to 0.928 in the replication cohort). Performance was lower in the classification of PSP versus MSA-P and MSA-P versus MSA-C. When DTI metrics were added, performance tended to increase in the training cohort but not in the replication cohort. Conclusions: A machine learning approach based on volumetric and DTI data can accurately classify subjects with early-stage parkinsonism, scanned on different MRI systems, in the setting of their clinical workup.
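
A hedged sketch of this kind of binary discrimination (regional volumes, optionally with DTI metrics, feeding a supervised classifier evaluated by balanced accuracy) appears below; the region names, synthetic data, and classifier choice are assumptions rather than the study's actual pipeline.

```python
# Illustrative sketch: PD vs. PSP classification from regional features, scored by balanced accuracy.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(9)
n = 199  # e.g., PD + PSP subjects (placeholder size)
X = pd.DataFrame({
    "midbrain_vol": rng.normal(6.5, 0.8, n),    # regional volumes (illustrative units)
    "pons_vol": rng.normal(14.0, 1.5, n),
    "putamen_vol": rng.normal(4.0, 0.5, n),
    "cerebellum_vol": rng.normal(120.0, 12.0, n),
    "putamen_fa": rng.normal(0.35, 0.05, n),    # optional DTI metric
})
y = rng.integers(0, 2, n)  # 0 = PD, 1 = PSP (placeholder labels)

# The training/replication cohort design is approximated here by a simple 50/50 split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=9)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("Balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```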

