P6568 Forecasting atrial fibrillation using machine learning techniques

2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
J.-M Gregoire ◽  
N Subramanian ◽  
D Papazian ◽  
H Bersini

Abstract Background Forecasting atrial fibrillation (AF) a few minutes before its onset has been studied before, mainly based on heart rate variability parameters derived from 24-hour Holter ECG recordings. However, these studies have shown conflicting, non-clinically applicable results. Machine learning algorithms have since proven their ability to anticipate events, so forecasting AF before its onset should be (re)assessed using machine learning techniques. Reliable forecasting could improve the results of preventive pacing in patients with cardiac electronic implanted devices (CEIDs). Purpose To forecast an oncoming AF episode in individual patients using machine learning techniques, and to evaluate whether the onset of an AF episode can be forecasted over longer time frames. Methods The raw data of a set of 10,484 Holter ECG recordings were retrospectively analyzed and all AF episodes were annotated. The onset of each AF episode was determined with a precision of 5 ms. Only AF episodes lasting longer than 30 seconds were taken into consideration. Of all patients in the dataset, 140 presented paroxysmal AF (286 recorded AF episodes). Only RR intervals were used to predict the presence of AF. We developed two types of machine learning algorithms with different computational power requirements: a “dynamic” deep recurrent neural network (RNN) and a “static” AdaBoost decision-tree ensemble (boosting trees), the latter more suitable for embedded devices. These algorithms were trained on one set of patients (around 90%) and tested on the remaining patients (around 10%). Results The performance figures are summarized in the table. Both algorithms can be tuned to increase their specificity (at a loss of sensitivity) or vice versa, depending on the objective.
Performance of forecasting algorithms (AUC = area under the ROC curve):

RR-distance before the AF event    Boosting trees AUC    RNN AUC
30–1 RR-intervals                  97.1%                 98.77%
60–31 RR-intervals                 97.5%                 99.1%
90–61 RR-intervals                 96.9%                 99.1%
120–91 RR-intervals                98.2%                 98.9%

Conclusion Based upon this retrospective study, we show that AF can be forecasted on an individual level with high predictive power using machine learning algorithms, with little drop-off in predictive value within the studied distances (1–120 RR intervals before a potential AF episode). We believe that embedding our new algorithm(s) in CEIDs could open the way to innovative therapies that significantly decrease AF burden in selected implanted patients.
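As a hedged illustration of the RR-interval windowing described above, the sketch below splits the RR series preceding an annotated AF onset into the four 30-interval windows of the table. The HRV-style summary features (mean, SDNN, RMSSD) are illustrative assumptions; the abstract does not specify the exact inputs fed to the RNN and boosting models.

```python
import math

def rr_window_features(rr_ms):
    """Summary features for one window of RR intervals (milliseconds).
    Illustrative HRV-style features, not the authors' exact feature set."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean": mean, "sdnn": sdnn, "rmssd": rmssd}

def windows_before_onset(rr_ms, onset_idx, width=30):
    """Split the RR series preceding an AF onset into the 30-interval
    windows of the table: 1-30, 31-60, 61-90, 91-120 before onset."""
    out = []
    for k in range(4):
        hi = onset_idx - k * width
        lo = hi - width
        if lo < 0:  # not enough history before the onset
            break
        out.append(rr_window_features(rr_ms[lo:hi]))
    return out
```

Each window's feature vector would then be labeled positive or negative depending on whether an AF onset follows, giving the per-window classification task the AUC table reports.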

Author(s):  
Jayashree M. Kudari

Developments in machine learning techniques for classification and regression have made it possible to detect sophisticated patterns in data from a wide range of domains. In biomedical applications, enormous amounts of medical data are produced and collected to predict the type and stage of a disease. Detecting and predicting diseases such as diabetes, lung cancer, brain cancer, heart disease, and liver disease requires many tests, which increases the size of patient medical data. Robust prediction of a patient's disease from such large data sets is the central agenda of this chapter. The challenge in applying machine learning is selecting the best algorithm within the disease-prediction framework. This chapter selects robust machine learning algorithms for various diseases through case studies, analyzing each dimension of a disease independently and checking whether the identified value lies within limits to monitor the condition of the disease.
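The model-selection step described above can be sketched as a hold-out accuracy comparison. The candidate "models" below are toy threshold rules on a single hypothetical glucose-level feature, standing in for the chapter's actual trained case-study classifiers:

```python
def holdout_accuracy(predict, data):
    """Accuracy of a predict(features) -> label function on labelled data."""
    return sum(predict(x) == y for x, y in data) / len(data)

def select_model(candidates, heldout):
    """Return the candidate name with the best held-out accuracy,
    plus the full score table for inspection."""
    scores = {name: holdout_accuracy(f, heldout) for name, f in candidates.items()}
    return max(scores, key=scores.get), scores
```

In a real framework the candidates would be fitted classifiers and the hold-out set would come from a proper train/validation split (or cross-validation), but the selection logic is the same.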


2021 ◽  
Vol 266 ◽  
pp. 02001
Author(s):  
Li Eckart ◽  
Sven Eckart ◽  
Margit Enke

Machine learning is a popular way to find patterns and relationships in highly complex datasets. With today's advancements in storage and computational capabilities, some machine learning techniques are becoming suitable for real-world applications. The aim of this work is to conduct a comparative analysis of machine learning algorithms and conventional statistical techniques. These methods have long been used for clustering large amounts of data and extracting knowledge in a wide variety of scientific fields. However, correct use of these methods is hindered when practitioners lack core knowledge of the different methods, their specific requirements for the data set, and their individual limitations. New machine learning algorithms could be integrated even more strongly into current evaluations if the right choice of method were easier to make. In the present work, several machine learning algorithms are listed. Four methods (artificial neural network, regression method, self-organizing map, k-means algorithm) are compared in detail and possible selection criteria are pointed out. Finally, an estimation of the fields of application and possible limitations is provided, which should help in making choices for specific interdisciplinary analyses.
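Of the four compared methods, k-means is the simplest to sketch. The minimal one-dimensional implementation below uses naive seeding (the first k distinct values) and a fixed iteration count; a production run would want smarter initialization (e.g. k-means++) and a convergence check, which is exactly the kind of method-specific requirement the text warns about:

```python
def kmeans(points, k, iters=20):
    """Minimal 1-D k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster, and repeat."""
    centroids = sorted(set(points))[:k]  # naive seeding, illustration only
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[j].append(p)
        # keep an old centroid if its cluster emptied out
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```

Run on two well-separated groups of values, the centroids settle on the group means, which is the behaviour the comparative analysis would benchmark against, say, a self-organizing map on the same data.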


Materials ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1089
Author(s):  
Sung-Hee Kim ◽  
Chanyoung Jeong

This study aims to demonstrate the feasibility of applying eight machine learning algorithms to predict the classification of the surface characteristics of titanium oxide (TiO2) nanostructures produced with different anodization processes. We produced a total of 100 samples and assessed changes in the thicknesses of the TiO2 nanostructures caused by anodization. We successfully grew TiO2 films with different thicknesses by one-step anodization in ethylene glycol containing NH4F and H2O, at applied voltages ranging from 10 V to 100 V and various anodization durations. We found that the thicknesses of the TiO2 nanostructures depend on the anodization voltage and duration. We therefore tested the feasibility of applying machine learning algorithms to predict the deformation of TiO2. As the characteristics of TiO2 changed with the experimental conditions, we classified its surface pore structure into two categories and four groups. For the classification based on granularity, we assessed layer creation, roughness, pore creation, and pore height. We applied eight machine learning techniques to both binary and multiclass classification. For binary classification, the random forest and gradient boosting algorithms had relatively high performance; however, all eight algorithms scored higher than 0.93, which indicates high accuracy in predicting the presence of pores. In contrast, the decision tree and three ensemble methods had relatively higher performance for multiclass classification, with accuracy rates greater than 0.79. The weakest algorithm for both binary and multiclass classification was k-nearest neighbors. We believe these results show that machine learning techniques can be applied to predict surface quality improvement, leading to smart manufacturing technology for better control of color appearance, super-hydrophobicity, super-hydrophilicity, or better efficiency.
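A decision stump is the building block of the tree-based ensembles (random forest, gradient boosting) that performed best in this study. The sketch below fits a single one-split stump; the voltages and pore labels are synthetic stand-ins, not the study's data:

```python
def best_stump(xs, ys):
    """Fit a one-split decision stump on a single feature.
    xs: feature values (e.g. anodization voltage), ys: 0/1 class labels
    (e.g. pore absent/present). Returns (training accuracy, threshold,
    direction), where direction +1 predicts 1 when x >= threshold."""
    best = None
    for t in sorted(set(xs)):
        for sign in (1, -1):
            pred = [1 if sign * x >= sign * t else 0 for x in xs]
            acc = sum(p == y for p, y in zip(pred, ys)) / len(ys)
            if best is None or acc > best[0]:
                best = (acc, t, sign)
    return best
```

An ensemble method grows many such splits, either on bootstrapped samples with random feature subsets (random forest) or sequentially on the residual errors (gradient boosting), which is why they tolerate the mixed voltage/time dependence better than a single rule.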


2018 ◽  
Vol 7 (2.8) ◽  
pp. 684 ◽  
Author(s):  
V V. Ramalingam ◽  
Ayantan Dandapath ◽  
M Karthik Raja

Heart-related diseases, or cardiovascular diseases (CVDs), have been the main cause of a huge number of deaths in the world over the last few decades and have emerged as the most life-threatening disease, not only in India but worldwide. There is therefore a need for a reliable, accurate, and feasible system to diagnose such diseases in time for proper treatment. Machine learning algorithms and techniques have been applied to various medical datasets to automate the analysis of large and complex data. Many researchers, in recent times, have been using several machine learning techniques to help the health care industry and professionals diagnose heart-related diseases. This paper presents a survey of various models based on such algorithms and techniques and analyzes their performance. Models based on supervised learning algorithms such as Support Vector Machines (SVM), K-Nearest Neighbour (KNN), Naïve Bayes, Decision Trees (DT), Random Forest (RF), and ensemble models are found to be very popular among researchers.
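One of the surveyed supervised models, k-nearest neighbour, can be sketched in a few lines. The two-feature records and risk labels below are invented stand-ins for a real heart-disease dataset (which would carry features like blood pressure, cholesterol, and age):

```python
import math

def knn_predict(train, query, k=3):
    """Minimal k-nearest-neighbour classifier.
    train: list of (feature_tuple, label); query: feature tuple.
    Predicts the majority label among the k closest training points."""
    nearest = sorted(train, key=lambda fx: math.dist(fx[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)
```

KNN's simplicity is also its weakness on medical data: every prediction scans the whole training set, and unscaled features (e.g. cholesterol in mg/dL vs. age in years) dominate the distance unless normalized first.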


Author(s):  
M. M. Ata ◽  
K. M. Elgamily ◽  
M. A. Mohamed

The presented paper proposes an algorithm for palmprint recognition using seven different machine learning algorithms. First, we propose a region of interest (ROI) extraction methodology based on a two-key-point technique. Secondly, we perform image enhancement techniques such as edge detection and morphological operations to make the ROI image more suitable for the Hough transform. We then apply the Hough transform to extract all possible principal lines in the ROI images, and we extract the most salient morphological features of those lines: slope and length. Furthermore, we apply the invariant moments algorithm to produce the seven Hu moments of interest. Finally, after forming complete hybrid feature vectors, we apply different machine learning algorithms to recognize palmprints effectively. Recognition accuracy has been tested by calculating precision, sensitivity, specificity, accuracy, Dice and Jaccard coefficients, correlation coefficients, and training time. Seven different supervised machine learning algorithms have been implemented and utilized, and the effect of forming the proposed hybrid feature vectors from the Hough transform and invariant moments has been tested. Experimental results show that the feed-forward neural network with backpropagation achieved about 99.99% recognition accuracy among all tested machine learning techniques.
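The first of the seven Hu invariant moments can be computed directly from normalized central image moments. The sketch below handles small binary images only and illustrates the feature family's translation invariance; it is not the paper's exact pipeline (a real system would use a library routine such as OpenCV's `HuMoments` on the enhanced ROI):

```python
def hu_phi1(img):
    """First Hu invariant moment, phi1 = eta20 + eta02, of a binary
    image given as a list of rows of 0/1 pixel values."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00  # centroid
    mu20 = sum(v * (x - cx) ** 2
               for y, row in enumerate(img) for x, v in enumerate(row))
    mu02 = sum(v * (y - cy) ** 2
               for y, row in enumerate(img) for x, v in enumerate(row))
    # normalization: eta_pq = mu_pq / m00^(1 + (p+q)/2), here p+q = 2
    return mu20 / m00 ** 2 + mu02 / m00 ** 2
```

Because the moments are centered on the shape's centroid and normalized by its mass, phi1 is unchanged when the shape is translated, which is what makes the seven moments useful alongside the line slope/length features.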


2020 ◽  
Vol 7 (10) ◽  
pp. 380-389
Author(s):  
Asogwa D.C ◽  
Anigbogu S.O ◽  
Anigbogu G.N ◽  
Efozia F.N

Author age prediction is the task of determining an author's age by studying the texts they have written. Predicting an author's age can be enlightening about the trends, opinions, and social and political views of an age group. Marketers often use this to promote a product or service to an age group in line with its conveyed interests and opinions. Methodologies in natural language processing have made it possible to predict an author's age from text by examining the variation of linguistic characteristics, and many machine learning algorithms have been used for the task. However, in social networks, computational linguists are challenged with numerous issues, just as machine learning techniques, though performance-driven, face their own challenges in realistic scenarios. This work developed a model that predicts an author's age from text with a machine learning algorithm (Naïve Bayes) using three types of features: content-based, style-based, and topic-based. The trained model gave a prediction accuracy of 80%.
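A minimal multinomial Naïve Bayes with Laplace smoothing, trained on bag-of-words counts, sketches the content-based part of such a model; the age-group documents below are invented examples, and the work's full feature set would also include the style- and topic-based features:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train multinomial Naive Bayes on (text, age_group) pairs."""
    word_counts, class_counts, vocab = defaultdict(Counter), Counter(), set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Pick the age group maximizing log P(class) + sum log P(word|class)."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, None
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if best_lp is None or lp > best_lp:
            best, best_lp = label, lp
    return best
```

The add-one smoothing in the denominator is what keeps an unseen word from zeroing out a class's probability, which matters on short social-network texts.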


Author(s):  
Virendra Tiwari ◽  
Balendra Garg ◽  
Uday Prakash Sharma

Machine learning algorithms are capable of managing multi-dimensional data in dynamic environments. Despite their many vital features, there are challenges to overcome. Machine learning algorithms still require additional mechanisms or procedures to predict a large number of new classes while preserving privacy. These deficiencies show that the reliable use of a machine learning algorithm depends on human experts, because raw data may complicate the learning process and generate inaccurate results. Interpreting outcomes therefore demands expertise in machine learning mechanisms, which remains a significant challenge. Machine learning techniques suffer from issues of high dimensionality, adaptability, distributed computing, scalability, streaming data, and duplicity. A main issue of machine learning algorithms is their vulnerability to errors, and they are also found to lack variability. This paper studies how the computational complexity of machine learning algorithms can be reduced, and how predictions can be made using an improved algorithm.


Author(s):  
P. Priakanth ◽  
S. Gopikrishnan

The idea of an intelligent, independent learning machine has fascinated humans for decades. The philosophy behind machine learning is to automate the creation of analytical models so that algorithms can learn continuously from the available data. Since IoT will be among the major sources of new data, data science will contribute greatly to making IoT applications more intelligent. Machine learning can be applied where the desired outcome is known (supervised learning), where the data is not labeled beforehand (unsupervised learning), or where learning results from interaction between a model and the environment (reinforcement learning). This chapter answers the questions: How can machine learning algorithms be applied to IoT smart data? What is the taxonomy of machine learning algorithms that can be adopted in IoT? And what are the characteristics of real-world IoT data that require data analytics?



2022 ◽  
pp. 123-145
Author(s):  
Pelin Yildirim Taser ◽  
Vahid Khalilpour Akram

GPS signals are not available inside buildings; hence, indoor localization systems rely on indoor technologies such as Bluetooth, WiFi, and RFID. These signals are used to estimate the distance between a target and the available reference points; by combining the estimated distances, the location of the target node is determined. The wide spread of the internet and the exponential increase in small-hardware diversity allow the creation of internet of things (IoT)-based indoor localization systems. This chapter reviews traditional and machine learning-based methods for IoT-based positioning systems. The traditional methods include various distance estimation and localization approaches; however, these approaches have limitations. Because of their high prediction performance, machine learning algorithms have been applied to indoor localization problems in recent years. The chapter focuses on presenting an overview of the application of machine learning algorithms to indoor localization problems where traditional methods remain incapable.
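One common machine learning approach to this problem is RSSI fingerprinting. The sketch below estimates a position by averaging the coordinates of the k reference points whose stored signal-strength vectors are closest to the live reading; the fingerprint map and RSSI values are synthetic:

```python
import math

def knn_locate(fingerprints, rssi, k=3):
    """Fingerprinting-style localisation.
    fingerprints: list of ((x, y), rssi_vector) reference entries;
    rssi: the live RSSI vector measured at the unknown position.
    Returns the average position of the k most similar references."""
    nearest = sorted(fingerprints, key=lambda pv: math.dist(pv[1], rssi))[:k]
    xs = [pos[0] for pos, _ in nearest]
    ys = [pos[1] for pos, _ in nearest]
    return (sum(xs) / k, sum(ys) / k)
```

Unlike the traditional approach, no explicit distance-from-signal model is needed: the fingerprint map absorbs walls and multipath effects, which is a large part of why learned methods outperform pure trilateration indoors.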

