Abstract P437: Deep Learning-based Models For Complete Atrioventricular Block Heart Rhythm Analysis

2021 ◽  
Vol 129 (Suppl_1) ◽  
Author(s):  
Dahim Choi ◽  
Nam Kyun Kim ◽  
Young H Son ◽  
Yuming Gao ◽  
Christina Sheng ◽  
...  

Atrioventricular block (AVB), caused by impairment of the heart's conduction system, presents extreme diversity and is associated with other complications. Only half of AVB patients require a permanent pacemaker, and the process of determining whether to implant one adds cost and contributes to patient morbidity and mortality. Thus, there is a need for models capable of accurately identifying transient or reversible causes of conduction disturbances and predicting patient risk and the necessity of a pacemaker. Deep learning (DL) has come to the forefront due to its prediction accuracy, and DL-based electrocardiogram (ECG) analysis could be a breakthrough for analyzing massive amounts of data. However, current DL models are unsuitable for AVB-ECG, where the P waves are decoupled from the QRS/T waves, and the black-box nature of DL models lowers their credibility with physicians. Here, we present a real-time-capable DL-based algorithm that can identify AVB-ECG waves and automate AVB phenotyping for arrhythmogenic risk assessment. Our algorithm can analyze unformatted ECG records with abnormal patterns by integrating two representative DL architectures: convolutional neural networks (CNN) and recurrent neural networks (RNN). This hybrid CNN/RNN network can memorize local patterns, spatial hierarchies, and long-range temporal dependencies of ECG signals. Furthermore, by integrating parameters derived from dimension-reduction analysis and heart rate variability into the hybrid layers, the algorithm can capture the P/QRS/T-specific morphological and temporal features of ECG waveforms. We evaluated the algorithm using six AVB porcine models, in which TBX18, a pacemaker transcription factor, was transduced into the ventricular myocardium to form a biological pacemaker, and an electronic pacemaker was implanted as a backup. We achieved high sensitivity (95% true positive rate) and quantified the potential risks of various pathological ECG patterns. This study may be a starting point for both retrospective and prospective patient studies and will help physicians understand the model's decision-making workflow and identify incorrect recommendations for AVB patients.
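The abstract does not include the network itself; below is a minimal sketch, in PyTorch, of the hybrid idea it describes: a 1-D CNN front end for local wave morphology, a bidirectional GRU for long-range rhythm dependencies, and auxiliary hand-crafted parameters (e.g. heart rate variability, dimension-reduction components) concatenated before the classifier head. All layer sizes and the names `HybridECGNet` and `aux` are hypothetical.

```python
import torch
import torch.nn as nn

class HybridECGNet(nn.Module):
    """Minimal CNN/RNN hybrid: convolutions capture local wave
    morphology, a GRU models long-range rhythm dependencies."""
    def __init__(self, n_classes=4, n_aux=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.GRU(64, 64, batch_first=True, bidirectional=True)
        # aux = hand-crafted features (e.g. HRV, dimension-reduction params)
        self.head = nn.Linear(2 * 64 + n_aux, n_classes)

    def forward(self, ecg, aux):
        # ecg: (batch, 1, samples); aux: (batch, n_aux)
        z = self.cnn(ecg).transpose(1, 2)       # (batch, time, 64)
        _, h = self.rnn(z)                      # h: (2, batch, 64)
        h = torch.cat([h[0], h[1]], dim=1)      # (batch, 128)
        return self.head(torch.cat([h, aux], dim=1))

model = HybridECGNet()
logits = model(torch.randn(2, 1, 2000), torch.randn(2, 8))
```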

Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 443
Author(s):  
Chyan-long Jan

Because of financial information asymmetry, stakeholders usually do not know a company's real financial condition until financial distress occurs. Financial distress not only influences a company's operational sustainability and damages the rights and interests of its stakeholders, it may also harm the national economy and society; hence, it is very important to build high-accuracy financial distress prediction models. The purpose of this study is to build high-accuracy and effective financial distress prediction models using two representative deep learning algorithms: deep neural networks (DNN) and convolutional neural networks (CNN). In addition, important variables are selected by the chi-squared automatic interaction detector (CHAID). In this study, data on Taiwan's listed and OTC sample companies are taken from the Taiwan Economic Journal (TEJ) database for the period from 2000 to 2019, comprising 86 companies in financial distress and 258 not in financial distress, for a total of 344 companies. According to the empirical results, with the important variables selected by CHAID and modeling by CNN, the CHAID-CNN model has the highest financial distress prediction accuracy rate of 94.23% and the lowest type I and type II error rates, 0.96% and 4.81%, respectively.
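CHAID has no standard Python implementation, so the sketch below approximates the study's two-stage pipeline with a univariate chi-squared screen (scikit-learn's `chi2`, standing in for CHAID's chi-squared-based variable selection) followed by a small fully connected network standing in for the paper's DNN/CNN; the data are random placeholders shaped like the 344-firm sample.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Hypothetical stand-in data: 344 firms, 20 financial ratios,
# 86 distressed (1) vs. 258 healthy (0), mirroring the study's sample.
rng = np.random.default_rng(0)
X = rng.random((344, 20))
y = np.array([1] * 86 + [0] * 258)

# chi2 requires non-negative inputs, hence the MinMax scaling;
# SelectKBest here only approximates CHAID's screening step.
clf = make_pipeline(
    MinMaxScaler(),
    SelectKBest(chi2, k=8),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
clf.fit(X, y)
print(clf.score(X, y))
```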


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dipendra Jha ◽  
Vishu Gupta ◽  
Logan Ward ◽  
Zijiang Yang ◽  
Christopher Wolverton ◽  
...  

The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, due to their impressive ability to efficiently extract data-driven linkages from various input materials representations to their output properties. While the application of traditional ML techniques has become quite ubiquitous, there have been limited applications of more advanced deep learning (DL) techniques, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is tempting to use deeper neural networks in a bid to boost model performance, but in practice this leads to performance degradation due to the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data are available. We present a general deep learning framework based on Individual Residual learning (IRNet), composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property prediction models. We find that the proposed IRNet models can not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also lead to significantly (up to 47%) better model accuracy compared to plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
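The published IRNet architecture is not reproduced here; the following is a minimal PyTorch sketch of the core idea, individual residual learning, where every fully connected layer carries its own shortcut connection so gradients can bypass each layer separately. The widths and the 145-dimensional input are illustrative only.

```python
import torch
import torch.nn as nn

class IRBlock(nn.Module):
    """One fully connected layer with its own (individual) shortcut,
    so the gradient can bypass every layer separately."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim_in, dim_out),
                                nn.BatchNorm1d(dim_out), nn.ReLU())
        # project the shortcut when the layer width changes
        self.skip = (nn.Identity() if dim_in == dim_out
                     else nn.Linear(dim_in, dim_out))

    def forward(self, x):
        return self.fc(x) + self.skip(x)

def irnet(widths, n_in):
    layers, d = [], n_in
    for w in widths:
        layers.append(IRBlock(d, w))
        d = w
    layers.append(nn.Linear(d, 1))  # scalar property regression head
    return nn.Sequential(*layers)

# e.g. a 145-dimensional composition vector as input (hypothetical)
model = irnet([128, 128, 64, 64, 32], n_in=145)
y = model(torch.randn(16, 145))
```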


Author(s):  
Ruofan Liao ◽  
Paravee Maneejuk ◽  
Songsak Sriboonchitta

In the past, in many areas, the best prediction models were linear and nonlinear parametric models. In the last decade, in many application areas, deep learning has been shown to lead to more accurate predictions than the parametric models. Deep learning-based predictions are reasonably accurate, but not perfect. How can we achieve better accuracy? To achieve this objective, we propose to combine neural networks with a parametric model: namely, to train neural networks not on the original data, but on the differences between the actual data and the predictions of the parametric model. Using the example of predicting currency exchange rates, we show that this idea indeed leads to more accurate predictions.
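A minimal sketch of the proposed residual-combination idea, assuming a linear autoregressive model as the parametric baseline and a small scikit-learn MLP as the correcting network; the synthetic series and lag construction are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Hypothetical exchange-rate series turned into 5-lag features.
rng = np.random.default_rng(1)
rate = np.cumsum(rng.normal(0, 0.01, 500)) + 1.2
X = np.column_stack([rate[i:-5 + i] for i in range(5)])
y = rate[5:]

# Step 1: parametric (here linear autoregressive) baseline.
base = LinearRegression().fit(X, y)

# Step 2: train the network on the baseline's residuals only.
resid = y - base.predict(X)
corr = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, resid)

# Combined forecast: parametric prediction plus learned correction.
y_hat = base.predict(X) + corr.predict(X)
```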


Author(s):  
Tahani Aljohani ◽  
Alexandra I. Cristea

Massive Open Online Courses (MOOCs) have become universal learning resources, and the COVID-19 pandemic is rendering these platforms even more necessary. In this paper, we seek to improve Learner Profiling (LP), i.e. estimating the demographic characteristics of learners on MOOC platforms. We focus on models which show promise elsewhere but were never examined in the LP area (deep learning models) based on effective textual representations. As the first LP characteristic, we predict the employment status of learners. We compare sequential and parallel ensemble deep learning architectures based on Convolutional Neural Networks and Recurrent Neural Networks, obtaining a high average accuracy of 96.3% for our best method. Next, we predict the gender of learners based on syntactic knowledge from the text. We compare different tree-structured Long Short-Term Memory models (as state-of-the-art candidates) and provide our novel version of a Bi-directional composition function for existing architectures. In addition, we evaluate 18 different combinations of word-level and sentence-level encoding functions. Based on these results, we show that our Bi-directional model outperforms all other models, and the highest accuracy among our models is achieved by the combination of a FeedForward Neural Network and the Stack-augmented Parser-Interpreter Neural Network (82.60% prediction accuracy). We argue that the prediction models we recommend for both demographic characteristics examined in this study can achieve high accuracy. This is also the first time a sound methodological approach to improving accuracy for learner demographics classification on MOOCs has been proposed.
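As an illustration of the parallel-ensemble idea (not the authors' exact architecture), the sketch below runs a CNN branch and an LSTM branch over the same embedded text and concatenates their features for the demographic prediction; the vocabulary size, widths, and the `ParallelCNNRNN` name are hypothetical.

```python
import torch
import torch.nn as nn

class ParallelCNNRNN(nn.Module):
    """Parallel ensemble: a CNN branch and an LSTM branch read the
    same embedded post; their features are concatenated for the
    final demographic prediction (illustrative sizes throughout)."""
    def __init__(self, vocab=20000, emb=100, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 64, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(emb, 64, batch_first=True)
        self.out = nn.Linear(64 + 64, n_classes)

    def forward(self, tokens):                  # tokens: (batch, seq)
        e = self.emb(tokens)                    # (batch, seq, emb)
        c = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values
        _, (h, _) = self.lstm(e)                # h: (1, batch, 64)
        return self.out(torch.cat([c, h[0]], dim=1))

model = ParallelCNNRNN()
logits = model(torch.randint(0, 20000, (8, 120)))
```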


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Moisés Lodeiro-Santiago ◽  
Pino Caballero-Gil ◽  
Ricardo Aguasca-Colomo ◽  
Cándido Caballero-Gil

This work presents a system to detect small boats (pateras) to help tackle the problem of this type of perilous immigration. The proposal makes extensive use of emerging technologies such as Unmanned Aerial Vehicles (UAV) combined with a top-performing algorithm from the field of artificial intelligence: Deep Learning through Convolutional Neural Networks. This algorithm improves on current detection systems based on image processing with hand-crafted filters, because the network learns to distinguish the aforementioned objects through patterns without depending on where they are located in the frame. The main result of the proposal is a classifier that works in real time, allowing the detection of pateras and people (who may need to be rescued) kilometres away from the coast. This could be very useful for Search and Rescue teams in order to plan a rescue before an emergency occurs. Given the high sensitivity of the managed information, the proposed system includes cryptographic protocols to protect the security of communications.
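The paper's trained network is not available here, but a rough sense of the inference step can be given with an off-the-shelf COCO-pretrained detector from torchvision as a stand-in (in the COCO label set, 1 is "person" and 9 is "boat"); the threshold and frame size are illustrative.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Off-the-shelf COCO detector as a stand-in for the paper's custom
# network, filtered to the two classes of interest.
model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

frame = torch.rand(3, 480, 640)          # one aerial frame in [0, 1]
with torch.no_grad():
    out = model([frame])[0]

keep = (out["scores"] > 0.7) & ((out["labels"] == 1) | (out["labels"] == 9))
print(out["boxes"][keep])                # candidate people/boat boxes
```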


2020 ◽  
Author(s):  
Joshua Levy ◽  
Carly Bobak ◽  
Brock Christensen ◽  
Louis Vaickus ◽  
James O’Malley

Network analysis methods are useful to better understand and contextualize relationships between entities. While statistical and machine learning prediction models generally assume independence between actors, network-based statistical methods for social network data allow for dyadic dependence between actors. While numerous methods have been developed for the R statistical software to analyze such data, deep learning methods have not been implemented in this language. Here, we introduce GCN4R, an R library for fitting graph neural networks on independent networks to aggregate actor covariate information into meaningful embeddings for a variety of network-based tasks (e.g. community detection, peer effects models, social influence). We provide an extensive overview of insights and methods used by the deep learning community for learning on social and biological networks, followed by a tutorial that demonstrates some of the capabilities of the GCN4R framework, making these methods more accessible to the R research community.
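GCN4R itself is an R library; purely to illustrate the aggregation step such graph neural networks perform, here is a minimal graph-convolution layer written from scratch in Python, in which each actor's covariates are averaged with its neighbours' via a row-normalised adjacency before a learned transform.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: mix each node's features with its
    neighbours' (normalised adjacency), then apply a learned linear map."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))          # add self-loops
        deg = a.sum(dim=1, keepdim=True)
        return torch.relu(self.lin(a @ x / deg))  # row-normalised mixing

# 5 actors, 3 covariates each, a toy symmetric friendship matrix
x = torch.randn(5, 3)
adj = torch.tensor([[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]], dtype=torch.float)
emb = GCNLayer(3, 8)(x, adj)   # 8-dimensional node embeddings
```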


2021 ◽  
Author(s):  
L Jakaite ◽  
M Ciemny ◽  
S Selitskiy ◽  
Vitaly Schetinin

The Efficient Market Hypothesis (EMH) was introduced by Fama to analyse financial markets. The EMH has been examined in real cases under different conditions, including financial crises and frauds. Testing the EMH involves examining the prediction accuracy of models designed on retrospective data. Such prediction models can be designed in different ways, which motivated us to explore Machine Learning (ML) methods known for building models with high prediction performance. In this study we propose a "deep" learning method for building high-performance prediction models. The proposed method is based on the Group Method of Data Handling (GMDH), a deep learning paradigm capable of building multilayer neural-network models of near-optimal complexity from given data. We show that the developed GMDH-type neural network has outperformed the models built by conventional ML methods on Warsaw Stock Exchange data. Importantly, the complexity of the designed GMDH-type neural networks is defined by the number of layers and the connections between neurons. The performance of the models was compared in terms of prediction error. We report a significantly smaller prediction error for the proposed method than for the conventional autoregressive and "shallow" neural-network models. This allows us to conclude that traders will be advantaged by the proposed method.
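A bare-bones sketch of the GMDH idea on synthetic data: each layer fits a quadratic polynomial neuron for every pair of inputs, keeps the few neurons with the smallest validation error, and feeds their outputs to the next layer, so model complexity grows only as far as the data justify. All sizes are illustrative, not the authors' configuration.

```python
import numpy as np
from itertools import combinations

def quad_features(a, b):
    """GMDH neuron basis: full quadratic polynomial of two inputs."""
    return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])

def gmdh_fit(X, y, Xv, yv, n_layers=3, keep=4):
    """Each layer fits one quadratic neuron per input pair and keeps
    the `keep` best by validation error (external criterion)."""
    models = []
    for _ in range(n_layers):
        cands = []
        for i, j in combinations(range(X.shape[1]), 2):
            F = quad_features(X[:, i], X[:, j])
            Fv = quad_features(Xv[:, i], Xv[:, j])
            w, *_ = np.linalg.lstsq(F, y, rcond=None)
            cands.append((np.mean((Fv @ w - yv) ** 2), i, j, w))
        cands.sort(key=lambda c: c[0])
        best = cands[:keep]
        models.append(best)
        X = np.column_stack([quad_features(X[:, i], X[:, j]) @ w
                             for _, i, j, w in best])
        Xv = np.column_stack([quad_features(Xv[:, i], Xv[:, j]) @ w
                              for _, i, j, w in best])
    return models, Xv[:, 0]   # best surviving neuron's validation output

rng = np.random.default_rng(0)
X, Xv = rng.normal(size=(200, 6)), rng.normal(size=(50, 6))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)
yv = Xv[:, 0] * Xv[:, 1]
_, pred = gmdh_fit(X, y, Xv, yv)
print(np.mean((pred - yv) ** 2))
```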


Images are an important medium for physicians to monitor the treatment responses of patients' diseases. Organizing and retrieving images in a structured manner is a tough task given the incredible increase in the number of images in hospitals. Text-based image retrieval is prone to human error and may show large deviation across different images. A Content-Based Medical Image Retrieval (CBMIR) system plays a major role in retrieving the required images from a huge database. Recent advances in Deep Learning (DL) have brought great achievements in solving complex problems in computer vision, graphics, and image processing. The deep architecture of Convolutional Neural Networks (CNN) can combine low-level features into high-level features and thus learn a semantic representation of images. Deep learning can help extract, select, and classify image features, measure the predictive target, and provide prediction models to assist physicians efficiently. The motivation of this paper is to provide an analysis of a medical image retrieval system using a CNN algorithm.
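A minimal sketch of the CNN-based retrieval pipeline such a CBMIR system rests on: a pretrained network (here ResNet-18 with its classifier head removed, as an assumed stand-in) maps each image to an embedding, and retrieval is nearest-neighbour search by cosine similarity. The database and query tensors are random placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained CNN as a feature extractor: drop the classifier head
# and use the pooled activations as the image's semantic signature.
resnet = models.resnet18(pretrained=True)
resnet.fc = torch.nn.Identity()
resnet.eval()

@torch.no_grad()
def embed(batch):                        # batch: (N, 3, 224, 224)
    return F.normalize(resnet(batch), dim=1)

# Hypothetical database of 100 medical images plus one query.
db = embed(torch.rand(100, 3, 224, 224))
q = embed(torch.rand(1, 3, 224, 224))

scores = q @ db.T                        # cosine similarities
top5 = scores.topk(5).indices            # indices of the best matches
```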


Artnodes ◽  
2020 ◽  
Author(s):  
Bruno Caldas Vianna

This article uses the exhibition "Infinite Skulls", which took place in Paris at the beginning of 2019, as a starting point to discuss art created by artificial intelligence and, by extension, unique pieces of art generated by algorithms. We detail the development of DCGAN, the deep learning neural network used in the show, from its cybernetics origins. The show and its creation process are described, identifying elements of creativity and technique, as well as questions about the authorship of the works. The article then frames these works in the context of generative art, pointing out affinities and differences, and the issues of representing through procedures and abstractions. It describes the major breakthrough of neural networks for technical images as the ability to represent categories through an abstraction, rather than through the images themselves. Finally, it tries to understand neural networks more as a tool for artists than as an autonomous art creator.


Computers ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 104
Author(s):  
Evgeny Ponomarev ◽  
Sergey Matveev ◽  
Ivan Oseledets ◽  
Valery Glukhov

Many deep learning applications are intended to run on mobile devices, and both accuracy and inference time matter for a lot of them. While the number of FLOPs is usually used as a proxy for neural network latency, it may not be the best choice. To obtain a better approximation of latency, the research community uses lookup tables of all possible layers for calculating the inference time on a mobile CPU, which requires only a small number of experiments. Unfortunately, on a mobile GPU this method is not applicable in a straightforward way and shows low precision. In this work, we treat latency approximation on a mobile GPU as a data- and hardware-specific problem. Our main goal is to construct a convenient Latency Estimation Tool for Investigation (LETI) of neural network inference and to build robust and accurate latency prediction models for each specific task. To achieve this goal, we provide tools for conducting massive experiments on different target devices, focusing on mobile GPUs. After evaluating the dataset, one can train a regression model on the experimental data and use it for future latency prediction and analysis. We experimentally demonstrate the applicability of such an approach on a subset of the popular NAS-Benchmark 101 dataset for two different mobile GPUs.
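The LETI tooling itself is not reproduced here; the sketch below shows only the final modeling step under stated assumptions: a generic regression model fitted to hypothetical per-network descriptors (depth, width, kernel size, resolution, parameter count) against latencies measured on the target GPU.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical benchmark log: one row of descriptors per measured
# network; target = latency in ms on the target mobile GPU.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
latency_ms = 2 + 30 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, 1000)

Xtr, Xte, ytr, yte = train_test_split(X, latency_ms, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(reg.score(Xte, yte))   # R^2 on held-out measurements
```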

