Performance of a machine-learning algorithm for fully automatic LGE scar quantification in the large multi-national Derivate registry

2021 ◽  
Vol 22 (Supplement_2) ◽  
Author(s):  
F Ghanbari ◽  
T Joyce ◽  
S Kozerke ◽  
AI Guaricci ◽  
PG Masci ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: Other. Main funding source(s): J. Schwitter receives research support from "Bayer Schweiz AG". C.N.C. received a grant from Siemens. Gianluca Pontone received institutional fees from General Electric, Bracco, Heartflow, Medtronic, and Bayer. U.J.S. received grants from Astellas, Bayer, and General Electric. This work was supported by the Italian Ministry of Health, Rome, Italy (RC 2017 R659/17-CCM698), and by Gyrotools, Zurich, Switzerland. Background Late gadolinium enhancement (LGE) scar quantification is generally recognized as an accurate and reproducible technique, but it is observer-dependent and time-consuming. Machine learning (ML) offers a potential solution. Purpose To develop and validate an ML algorithm that allows for scar quantification while fully avoiding observer variability, and to apply this algorithm to the prospective international multicentre Derivate cohort. Method The Derivate Registry collected heart failure patients with LV ejection fraction <50% in 20 European and US centres. In the post-myocardial-infarction patients (n = 689), the quality of the LGE short-axis breath-hold images was graded (good, acceptable, sufficient, borderline, poor, excluded) and ground truth (GT) was produced (endo- and epicardial contours, 2 remote reference regions, artefact elimination) to determine the mass of non-infarcted myocardium, of dense scar (≥5 SD above mean remote), and of non-dense scar (>2 SD to <5 SD above mean remote). Data were divided into a learning set (total n = 573; training: n = 289; testing: n = 284) and a validation set (n = 116). A Ternaus network (loss function = average of Dice and binary cross-entropy) produced 4 outputs (initial prediction, test-time augmentation (TTA), threshold-based prediction (TB), and TTA + TB) representing normal myocardium, non-dense scar, and dense scar (Figure 1). Outputs were evaluated by Dice metrics, Bland-Altman analysis, and correlations.
Results In the validation and test data sets, neither of which was used for training, the dense scar GT was 20.8 ± 9.6% and 21.9 ± 13.3% of LV mass, respectively. The TTA network yielded the best results, with small biases vs GT (-2.2 ± 6.1%, p < 0.02; -1.7 ± 6.0%, p < 0.003, respectively) and 95% CI vs GT in the range of inter-human comparisons: TTA yielded SDs of the differences vs GT in the validation and test data of 6.1 and 6.0 percentage points (%p), respectively (Fig 2), comparable to the 7.7 %p of the inter-observer comparison (n = 40). For non-dense scar, TTA performance was similar, with small biases (-1.9 ± 8.6%, p < 0.0005; -1.4 ± 8.2%, p < 0.0001, in the validation and test sets, respectively; GT 39.2 ± 13.8% and 42.1 ± 14.2%) and acceptable 95% CI, with SDs of the differences of 8.6 and 8.2 %p for TTA vs GT, respectively, and 9.3 %p for inter-observer. Conclusions In the large Derivate cohort from 20 centres, the performance of the presented ML algorithm in quantifying dense and non-dense scar fully automatically is comparable to that of experienced humans, with small bias and acceptable 95% CI. Such a tool could facilitate scar quantification in clinical routine, as it eliminates human observer variability and can handle large data sets.
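The 2 SD / 5 SD thresholding against the remote reference region described above can be sketched in a few lines of NumPy. This is a minimal illustration only; the function name and flat intensity arrays are hypothetical stand-ins for the registry's contoured short-axis images:

```python
import numpy as np

def classify_scar(myo, remote):
    """Threshold-based scar classification per the abstract's definitions:
    dense scar >= 5 SD above the remote-region mean, non-dense scar
    between 2 SD and 5 SD above it; everything else is normal myocardium.
    `myo` holds myocardial pixel intensities, `remote` the reference region."""
    mu, sd = remote.mean(), remote.std()
    dense = myo >= mu + 5 * sd
    nondense = (myo > mu + 2 * sd) & ~dense
    normal = ~(dense | nondense)
    return normal, nondense, dense
```

Scar mass then follows from the per-class pixel counts and the slice geometry.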

2021 ◽  
Author(s):  
Diti Roy ◽  
Md. Ashiq Mahmood ◽  
Tamal Joyti Roy

<p>Heart disease is the most dominant disease, claiming a large number of lives every year. A 2016 WHO report indicated that at least 17 million people die of heart disease each year. This number is increasing steadily, and WHO estimates that the death toll will peak at 75 million by 2030. Despite modern technology and health-care systems, predicting heart disease remains beyond our limitations. Because machine learning algorithms are a vital means of making predictions from available data sets, we used a machine learning approach to predict heart disease. We collected data from the UCI repository and applied the Random Forest, ZeroR, Voted Perceptron, and K* classifiers. We obtained the best result with the Random Forest classifier, with an accuracy of 97.69%.</p>
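Of the classifiers compared, ZeroR is the trivial baseline that any useful model must beat: it ignores the features and always predicts the majority class. A minimal sketch (class and attribute names are mine, not from the study):

```python
from collections import Counter

class ZeroR:
    """ZeroR baseline: always predicts the training set's majority class."""

    def fit(self, X, y):
        # Features X are ignored entirely; only the label distribution matters.
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        return [self.majority_] * len(X)
```

Its accuracy equals the prevalence of the most common class, which is why the 97.69% Random Forest result is only meaningful relative to such a floor.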


2022 ◽  
Vol 12 ◽  
Author(s):  
Bin Zhu ◽  
Jianlei Zhao ◽  
Mingnan Cao ◽  
Wanliang Du ◽  
Liuqing Yang ◽  
...  

Background: Thrombolysis with r-tPA is recommended for patients with acute ischemic stroke (AIS) within 4.5 h of symptom onset. However, only a few patients benefit from this therapeutic regimen. We therefore aimed to develop an interpretable machine learning (ML)–based model to predict the thrombolysis effect of r-tPA at the super-early stage. Methods: A total of 353 patients with AIS were divided into training and test data sets. We then used six ML algorithms and a recursive feature elimination (RFE) method to explore the relationship between the clinical variables and the NIH Stroke Scale score 1 h after thrombolysis treatment. Shapley additive explanations and local interpretable model–agnostic explanation algorithms were applied to interpret the ML models and determine the importance of the selected features. Results: Altogether, 353 patients with an average age of 63.0 (56.0–71.0) years were enrolled in the study. Of these patients, 156 showed a favorable thrombolysis effect and 197 an unfavorable effect. A total of 14 variables entered the modeling, and 6 ML algorithms were used to predict the thrombolysis effect. After RFE screening, seven variables under the gradient boosting decision tree (GBDT) model (area under the curve = 0.81, specificity = 0.61, sensitivity = 0.9, F1 score = 0.79) gave the best performance. Of the seven variables, activated partial thromboplastin time, B-type natriuretic peptide, and fibrin degradation products were the three most important clinical characteristics that might influence r-tPA efficiency. Conclusion: This study demonstrated that the GBDT model with the seven variables could better predict the early thrombolysis effect of r-tPA.
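The RFE step works by repeatedly dropping the least useful feature until the desired count remains. A toy version is sketched below; it ranks features by absolute correlation with the target rather than by the study's model-based importances, so treat it as an illustration of the elimination loop only:

```python
import numpy as np

def rfe(X, y, n_keep):
    """Toy recursive feature elimination: repeatedly drop the surviving
    feature whose absolute Pearson correlation with the target is weakest
    (a stand-in for the paper's model-derived feature ranking)."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in keep]
        keep.pop(int(np.argmin(scores)))
    return keep
```

In the study this loop ran inside each of the six candidate models, and the GBDT variant with seven surviving variables scored best.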


2021 ◽  
Vol 79 (1) ◽  
Author(s):  
Romana Haneef ◽  
Sofiane Kab ◽  
Rok Hrzic ◽  
Sonsoles Fuentes ◽  
Sandrine Fosse-Edorh ◽  
...  

Abstract Background The use of machine learning techniques is increasing in healthcare, making it possible to estimate and predict health outcomes from large administrative data sets more efficiently. The main objective of this study was to develop a generic machine learning (ML) algorithm to estimate the incidence of diabetes based on the number of reimbursements over the last 2 years. Methods We selected a final data set from a population-based epidemiological cohort (i.e., CONSTANCES) linked with the French National Health Database (i.e., SNDS). To develop this algorithm, we adopted a supervised ML approach. The following steps were performed: (i) selection of the final data set; (ii) target definition; (iii) coding variables for a given window of time; (iv) splitting the final data into training and test data sets; (v) variable selection; (vi) model training; (vii) validation of the model with the test data set; and (viii) selection of the model. We used the area under the receiver operating characteristic curve (AUC) to select the best algorithm. Results The final data set used to develop the algorithm included 44,659 participants from CONSTANCES. Of the 3468 SNDS variables linked to the CONSTANCES cohort that were coded, 23 were selected to train the different algorithms. The final algorithm to estimate the incidence of diabetes was a Linear Discriminant Analysis model based on the number of reimbursements of selected variables related to biological tests, drugs, medical acts, and hospitalization without a procedure over the last 2 years. This algorithm has a sensitivity of 62%, a specificity of 67% and an accuracy of 67% [95% CI: 0.66–0.68]. Conclusions Supervised ML is an innovative tool for the development of new methods to exploit large health administrative databases. In the context of the InfAct project, we developed and applied for the first time a generic ML algorithm to estimate the incidence of diabetes for public health surveillance. The ML algorithm we developed has moderate performance.
The next step is to apply this algorithm on the SNDS to estimate the incidence of type 2 diabetes cases. More research is needed to apply various machine learning techniques to estimate the incidence of various health conditions.
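Model selection in the study hinged on the AUC. The statistic itself reduces to the Mann-Whitney rank formulation, the probability that a randomly chosen case outscores a randomly chosen control, which can be computed directly (a plain-Python sketch, unrelated to the authors' tooling):

```python
def auc(y_true, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the fraction of
    positive/negative pairs where the positive gets the higher score,
    counting ties as half a win."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the classifier is no better than chance; the best candidate here was selected by maximizing this value on held-out data.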


Author(s):  
Du Zhang ◽  
Meiliu Lu

One of the long-term research goals in machine learning is how to build never-ending learners. The state of the practice in the field of machine learning thus far is still dominated by the one-time learner paradigm: some learning algorithm is applied to data sets to produce a model or target function, and then the learner is put away and the model or function is put to work. Such a learn-once-apply-next (or LOAN) approach may not be adequate for many real-world problems and is in sharp contrast with humans' lifelong learning process. On the other hand, learning can often be brought on by overcoming some inconsistent circumstance. This paper proposes a framework for perpetual learning agents that are capable of continuously refining or augmenting their knowledge by overcoming inconsistencies encountered during their problem-solving episodes. The never-ending nature of a perpetual learning agent is embodied in the framework as the agent's continuous inconsistency-induced belief revision process. The framework hinges on agents recognizing inconsistency in data, information, knowledge, or meta-knowledge, identifying its cause, and revising or augmenting beliefs to explain, resolve, or accommodate it. The authors believe that inconsistency can serve as one of the important learning stimuli toward building perpetual learning agents that incrementally improve their performance over time.
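The inconsistency-driven loop can be caricatured in a few lines: a belief store plus an observe step that detects contradiction and revises. This is my own toy sketch, far simpler than the paper's framework, which also covers diagnosis of the inconsistency's cause and belief augmentation:

```python
class PerpetualAgent:
    """Toy inconsistency-induced belief revision: beliefs map a proposition
    to a truth value; a contradicting observation triggers revision."""

    def __init__(self):
        self.beliefs = {}

    def observe(self, prop, value):
        if prop in self.beliefs and self.beliefs[prop] != value:
            # Inconsistency detected: here we simply side with the newer
            # evidence; a real agent would first identify the cause.
            self.beliefs[prop] = value
            return "revised"
        self.beliefs[prop] = value
        return "consistent"
```

Unlike a LOAN learner, such an agent never finishes training; every contradiction it meets is another learning episode.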


2021 ◽  
Author(s):  
Satchit Ramnath ◽  
Jiachen Ma ◽  
Jami J. Shah ◽  
Duane Detwiler

Abstract Automotive body structure design is critical to achieving light weight and crashworthiness, and currently relies on engineers' experience. In the current design process, designers frequently evolve a previous-generation design to meet new targets. In this process, however, adapting design ideas from other models is unlikely. The uniqueness of each design and the presence of non-uniform parameters further make it difficult to compare two or more designs and extract useful feature information. There is a need for a method that assists designers with better design options. This paper aims to fill this gap by introducing an innovative approach that combines a non-uniform parametric study with machine learning in order to make valuable suggestions to the designer. The proposed method uses data sets produced from design of experiments to reduce the number of parameters, perform parameter correlation studies, and run finite element analysis (FEA) for a given set of loads. The response data generated from the FEA is then used in a machine learning algorithm to make predictions on the ideal features to be used in the design. The method can be applied to any component that has a feature-based parametric design.
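The parameter-reduction step amounts to screening out design parameters that carry redundant information before the expensive FEA runs. A minimal correlation-based sketch (function name, threshold, and greedy keep-first policy are my assumptions, not the paper's method):

```python
import numpy as np

def screen_params(samples, threshold=0.95):
    """Drop design parameters that are nearly collinear with an
    already-kept one (|Pearson r| > threshold), keeping earlier columns
    first. `samples` is a (runs x parameters) design-of-experiments matrix."""
    keep = []
    for j in range(samples.shape[1]):
        if all(abs(np.corrcoef(samples[:, j], samples[:, k])[0, 1]) <= threshold
               for k in keep):
            keep.append(j)
    return keep
```

Only the surviving parameters would then feed the FEA runs and the downstream learning step.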


2020 ◽  
Author(s):  
Mareen Lösing ◽  
Jörg Ebbing ◽  
Wolfgang Szwillus

Improving the understanding of geothermal heat flux in Antarctica is crucial for ice-sheet modelling and glacial isostatic adjustment. It affects the ice rheology and can lead to basal melting, thereby promoting ice flow. Direct measurements are sparse, and models inferred from, e.g., magnetic or seismological data differ immensely. By Bayesian inversion, we evaluated the uncertainties of some of these models and studied the interdependencies of the thermal parameters. In contrast to previous studies, our method allows the parameters to vary laterally, which leads to a heterogeneous West Antarctica and a slightly more homogeneous East Antarctica with overall lower surface heat flux. The Curie isotherm depth and radiogenic heat production have the strongest impact on our results, but both parameters have a high uncertainty.

To overcome such shortcomings, we adopt a machine learning approach, more specifically a Gradient Boosted Regression Tree model, in order to find an optimal predictor for locations with sparse measurements. However, this approach relies largely on global data sets, which are notoriously unreliable in Antarctica. Therefore, the validity and quality of the data sets are reviewed and discussed. Using regional and more detailed data sets of Antarctica's Gondwana neighbors might improve the predictions due to their similar tectonic history. The performance of the machine learning algorithm can then be examined by comparing the predictions to the existing measurements. From our study, we expect to gain new insights into the geothermal structure of Antarctica, which will help with future studies on the coupling of Solid Earth and Cryosphere.
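A Gradient Boosted Regression Tree model builds an ensemble by repeatedly fitting a small tree to the residuals of the current prediction and adding a damped correction. A one-feature, stump-based toy version captures the mechanism (all names are mine; the real predictor uses many geophysical features and full trees):

```python
import numpy as np

def fit_stump(x, y):
    """Best single-split regression stump on one predictor (min SSE)."""
    best = None
    for t in np.unique(x)[:-1]:
        pred = np.where(x <= t, y[x <= t].mean(), y[x > t].mean())
        sse = ((y - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, y[x <= t].mean(), y[x > t].mean())
    return best[1:]

def gbrt_fit_predict(x, y, n_rounds=20, lr=0.3):
    """Toy gradient boosting for squared loss: start from the mean and,
    each round, fit a stump to the residuals and add a shrunken step."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        t, lo, hi = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lo, hi)
    return pred
```

The learning rate `lr` trades off convergence speed against overfitting, which matters here given how few direct heat-flux measurements exist to validate against.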


Author(s):  
Lakshmi Prayaga ◽  
Krishna Devulapalli ◽  
Chandra Prayaga

Wearable devices are contributing heavily to the proliferation of data and creating a rich minefield for data analytics. Recent trends in the design of wearable devices include several embedded sensors, which also provide useful data for many applications. This research presents results obtained from studying human-activity data collected from wearable devices. The activities considered for this study were working at the computer; standing and walking; standing; walking; walking up and down stairs; and talking while walking. The research entails using a portion of the data to train machine learning algorithms and build a model; the rest of the data is used as test data for predicting the activity of an individual. Details of data collection, processing, and presentation are also discussed. After studying the literature and the data sets, a Random Forest machine learning algorithm was determined to be the most applicable for analyzing data from wearable devices. The software used in this research includes the R statistical package and the SensorLog app.
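Before any classifier sees sensor data, the raw accelerometer stream is typically sliced into fixed-length windows and summarized into per-window features. A minimal sketch of that preprocessing step (window length and feature choice are illustrative assumptions, not taken from the study, whose analysis was done in R):

```python
import numpy as np

def window_features(signal, win=50):
    """Slice a 1-D accelerometer stream into non-overlapping fixed-length
    windows and extract simple per-window features (mean, std) suitable
    as classifier input; trailing samples that don't fill a window drop."""
    n = len(signal) // win
    wins = np.asarray(signal[: n * win]).reshape(n, win)
    return np.column_stack([wins.mean(axis=1), wins.std(axis=1)])
```

Each row of the result is one training example; the activity label of the window becomes its target class.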




Author(s):  
Muhammad Sholih Fajri ◽  
Nizar Septian ◽  
Edy Sanjaya

Abstract In this paper we evaluate the implementation of the k-Nearest Neighbors (kNN) machine learning algorithm on low-resolution gamma spectroscopy data. The aim is to determine how well the algorithm performs in learning these data. We varied the number of training and test data, the distance metric used, and the value of k in order to find the algorithm's best performance. The gamma spectroscopy data were taken using a Leybold Didactic NaI(Tl) scintillator with an energy resolution of 10.9 keV per channel. The variations show that the kNN algorithm produces highly fluctuating accuracy in predicting the radioisotope class.
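The two knobs the study varied, k and the distance metric, are visible directly in a bare-bones kNN implementation (a sketch with made-up radioisotope labels; the default metric is squared Euclidean distance):

```python
def knn_predict(train_X, train_y, x, k=3,
                metric=lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))):
    """Plain kNN: rank training points by distance to `x` under `metric`,
    then return the majority label among the k nearest."""
    ranked = sorted(zip(train_X, train_y), key=lambda p: metric(p[0], x))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

With spectra as feature vectors, small changes to k or the metric can flip borderline votes, which is consistent with the fluctuating accuracy the authors report.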

