ECGNET: Learning Where to Attend for Detection of Atrial Fibrillation (AF) with Deep Visual Attention

Author(s):  
Sajad Mousavi

The complexity of the patterns associated with Atrial Fibrillation (AF) and the high level of noise affecting these patterns have significantly limited the ability of current signal processing and shallow machine learning approaches to produce accurate AF detection results. Deep neural networks have been shown to be very powerful at learning the non-linear patterns in the data. While a deep learning approach attempts to learn the complex patterns related to the presence of AF in the ECG, it can benefit from knowing which parts of the signal are more important to focus on during learning. In this paper, we introduce a two-channel deep neural network to more accurately detect AF present in the ECG signal. The first channel takes in a preprocessed ECG signal and automatically learns where to attend for detection of AF. The second channel simultaneously takes in the preprocessed ECG signal to consider all features of the entire signal. The model shows via visualization which parts of the given ECG signal are important to attend to while trying to detect atrial fibrillation. In addition, this combination significantly improves the performance of atrial fibrillation detection, achieving a sensitivity of 99.53%, specificity of 99.26% and accuracy of 99.40% on the MIT-BIH atrial fibrillation database with 5-s ECG segments.
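The attention channel's core operation can be pictured as soft attention pooling over time steps: a learned scoring vector assigns each segment of the signal a weight, and the weights highlight where the model attends. The sketch below is an illustrative NumPy reconstruction; the feature matrix and scoring vector are random stand-ins, not the paper's trained parameters.

```python
import numpy as np

def soft_attention_pool(features, w):
    """Weight each time step by an attention score and pool.

    features: (T, D) per-time-step feature vectors from an ECG encoder
    w:        (D,)   attention scoring vector (learned during training)
    """
    scores = features @ w                          # (T,) raw attention scores
    scores = scores - scores.max()                 # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over time
    return alpha, alpha @ features                 # attention map, pooled feature

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))                   # 10 time steps, 4 features
alpha, pooled = soft_attention_pool(feats, rng.normal(size=4))
```

The attention map `alpha` is what the paper visualizes: large entries mark the parts of the ECG the model focuses on.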

2020 ◽  
Vol 44 (6) ◽  
Author(s):  
S. K. Ghosh ◽  
R. K. Tripathy ◽  
Mario R. A. Paternina ◽  
Juan J. Arrieta ◽  
Alejandro Zamora-Mendez ◽  
...  

2017 ◽  
Author(s):  
Michael F. Bonner ◽  
Russell A. Epstein

ABSTRACT
Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the complex internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we developed a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes: that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that the CNN was highly predictive of OPA representations, and, importantly, that it accounted for the portion of OPA variance that reflected the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal computations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high-spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA.
Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithmic implementations.
AUTHOR SUMMARY
How does visual cortex compute behaviorally relevant properties of the local environment from sensory inputs? For decades, computational models have been able to explain only the earliest stages of biological vision, but recent advances in the engineering of deep neural networks have yielded a breakthrough in the modeling of high-level visual cortex. However, these models are not explicitly designed for testing neurobiological theories, and, like the brain itself, their complex internal operations remain poorly understood. Here we examined a deep neural network for insights into the cortical representation of the navigational affordances of visual scenes. In doing so, we developed a set of high-throughput techniques and statistical tools that are broadly useful for relating the internal operations of neural networks with the information processes of the brain. Our findings demonstrate that a deep neural network with purely feedforward computations can account for the processing of navigational layout in high-level visual cortex.
We next performed a series of experiments and visualization analyses on this neural network, which characterized a set of stimulus input features that may be critical for computing navigationally related cortical representations and identified a set of high-level, complex scene features that may serve as a basis set for the cortical coding of navigational layout. These findings suggest a computational mechanism through which high-level visual cortex might encode the spatial structure of the local navigational environment, and they demonstrate an experimental approach for leveraging the power of deep neural networks to understand the visual computations of the brain.


Author(s):  
Ding Liu ◽  
Bihan Wen ◽  
Xianming Liu ◽  
Zhangyang Wang ◽  
Thomas Huang

Conventionally, image denoising and high-level vision tasks are handled separately in computer vision. In this paper, we cope with the two jointly and explore the mutual influence between them. First, we propose a convolutional neural network for image denoising that achieves state-of-the-art performance. Second, we propose a deep neural network solution that cascades two modules for image denoising and various high-level tasks, respectively, and uses a joint loss for updating only the denoising network via back-propagation. We demonstrate that, on the one hand, the proposed denoiser has the generality to overcome the performance degradation of different high-level vision tasks. On the other hand, with the guidance of high-level vision information, the denoising network can generate more visually appealing results. To the best of our knowledge, this is the first work investigating the benefit of exploiting image semantics simultaneously for image denoising and high-level vision tasks via deep learning.
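The training scheme of the cascade can be sketched with linear stand-ins for both modules: the joint loss back-propagates through the frozen high-level module, but only the denoiser's weights are updated. A minimal NumPy illustration; all shapes, data, and the linear modules are synthetic simplifications of the paper's CNNs.

```python
import numpy as np

rng = np.random.default_rng(1)
W_d = rng.normal(scale=0.1, size=(8, 8))   # denoiser weights (trainable)
W_c = rng.normal(scale=0.1, size=(8, 1))   # high-level module (frozen)
W_c_frozen = W_c.copy()

x_clean = rng.normal(size=(16, 8))         # synthetic clean signals
x_noisy = x_clean + 0.3 * rng.normal(size=(16, 8))
y = rng.normal(size=(16, 1))               # synthetic high-level targets

loss_before = np.mean((x_noisy @ W_d - x_clean) ** 2)

lr, lam = 0.05, 0.5
for _ in range(200):
    x_hat = x_noisy @ W_d                  # denoised estimate
    pred = x_hat @ W_c                     # cascaded high-level output
    # Joint loss = denoising MSE + lam * task MSE. The task gradient
    # flows back through the frozen W_c, but only W_d is updated.
    g_denoise = 2 * x_noisy.T @ (x_hat - x_clean) / len(x_noisy)
    g_task = 2 * x_noisy.T @ ((pred - y) @ W_c.T) / len(x_noisy)
    W_d -= lr * (g_denoise + lam * g_task)

loss_after = np.mean((x_noisy @ W_d - x_clean) ** 2)
```

The key design choice mirrored here is that the high-level module acts only as a source of gradient signal: its own parameters never move, so its task knowledge steers the denoiser without being degraded.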


Author(s):  
M. Madhumalini ◽  
T. Meera Devi

The article has been withdrawn on the request of the authors and the editor of the journal Current Signal Transduction Therapy. Bentham Science apologizes to the readers of the journal for any inconvenience this may have caused. BENTHAM SCIENCE DISCLAIMER: It is a condition of publication that manuscripts submitted to this journal have not been published and will not be simultaneously submitted or published elsewhere. Furthermore, any data, illustration, structure or table that has been published elsewhere must be reported, and copyright permission for reproduction must be obtained. Plagiarism is strictly forbidden, and by submitting the article for publication the authors agree that the publishers have the legal right to take appropriate action against the authors, if plagiarism or fabricated information is discovered. By submitting a manuscript the authors agree that the copyright of their article is transferred to the publishers, if and when the article is accepted for publication.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Florian Stelzer ◽  
André Röhm ◽  
Raul Vicente ◽  
Ingo Fischer ◽  
Serhiy Yanchuk

Abstract
Deep neural networks are among the most widely applied machine learning tools, showing outstanding performance in a broad range of tasks. We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops. This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals. The network states emerge in time as a temporal unfolding of the neuron's dynamics. By adjusting the feedback modulation within the loops, we adapt the network's connection weights. These connection weights are determined via a back-propagation algorithm, where both the delay-induced and local network connections must be taken into account. Our approach can fully represent standard deep neural networks (DNNs), encompasses sparse DNNs, and extends the DNN concept toward dynamical systems implementations. The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
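The folding idea can be illustrated with a toy equivalence: a dense layer evaluated all at once matches a single neuron evaluated at successive time steps, where each delayed feedback loop carries one weight as its modulation. The sketch below is only a discrete-time caricature; it omits the continuous-time dynamics and delay coupling of the actual Fit-DNN.

```python
import numpy as np

def f(a):                      # the single nonlinearity, reused every step
    return np.tanh(a)

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 4))    # layer weights, realized as feedback modulations
x = rng.normal(size=3)

# Standard dense layer: all 4 neurons evaluated in parallel.
h_parallel = f(x @ W)

# Folded-in-time version: one neuron, evaluated at 4 successive time steps.
# At step j, the delayed input x[i] arrives through a feedback loop whose
# modulation equals W[i, j]; only one nonlinearity f is ever instantiated.
h_serial = np.empty(4)
for j in range(4):
    h_serial[j] = f(sum(W[i, j] * x[i] for i in range(3)))
```

Both computations produce the same layer activations, which is the sense in which the network is "folded in time" onto one neuron.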


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Mohammed Aliy Mohammed ◽  
Fetulhak Abdurahman ◽  
Yodit Abebe Ayalew

Abstract
Background: Automating cytology-based cervical cancer screening could alleviate the shortage of skilled pathologists in developing countries. Up until now, computer vision experts have attempted numerous semi- and fully automated approaches to address the need, and leveraging the astonishing accuracy and reproducibility of deep neural networks has become common among them. In this regard, the purpose of this study is to classify single-cell Pap smear (cytology) images using pre-trained deep convolutional neural network (DCNN) image classifiers. We fine-tuned the top ten pre-trained DCNN image classifiers and evaluated them using five-class single-cell Pap smear images from the SIPaKMeD dataset. The pre-trained DCNN image classifiers were selected from Keras Applications based on their top-1 accuracy.
Results: Our experimental results demonstrate that, of the selected top-ten pre-trained DCNN image classifiers, DenseNet169 performed best, with an average accuracy, precision, recall, and F1-score of 0.990, 0.974, 0.974, and 0.974, respectively. Moreover, it surpassed the benchmark accuracy proposed by the creators of the dataset by 3.70%.
Conclusions: Even though DenseNet169 is small compared to the other experimented pre-trained DCNN image classifiers, it is still not suitable for mobile or edge devices. Further experimentation with mobile or small-size DCNN image classifiers is required to extend the applicability of the models to real-world demands. In addition, since all experiments used the SIPaKMeD dataset, additional experiments will be needed using new datasets to assess the generalizability of the models.
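The fine-tuning recipe described here amounts to keeping a pre-trained backbone frozen and retraining only a classification head for the five Pap smear classes. A schematic NumPy sketch, with a fixed random projection standing in for the pre-trained DCNN feature extractor and synthetic linearly separable labels; none of this is the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a pre-trained backbone: a fixed (frozen) random projection.
W_backbone = rng.normal(size=(32, 16)) / np.sqrt(32)
def backbone(x):                         # frozen feature extractor
    return np.maximum(x @ W_backbone, 0.0)   # ReLU features

x = rng.normal(size=(64, 32))            # toy "images" as flat vectors
feats = backbone(x)                      # computed once: backbone never updates

# Synthetic labels made linearly separable in feature space.
W_true = rng.normal(size=(16, 5))
y = (feats @ W_true).argmax(axis=1)

# Trainable classification head (softmax regression) for 5 classes.
W_head = np.zeros((16, 5))
for _ in range(300):
    logits = feats @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)    # softmax probabilities
    p[np.arange(64), y] -= 1.0           # gradient of cross-entropy w.r.t. logits
    W_head -= 0.1 * feats.T @ p / 64     # update the head only

acc = np.mean((feats @ W_head).argmax(axis=1) == y)
```

Fine-tuning a real Keras Applications model follows the same pattern, typically with some backbone layers unfrozen as well; the frozen-backbone case shown here is the simplest variant.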


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
M Jacobsen ◽  
T.A Dembek ◽  
A.P Ziakos ◽  
G Kobbe ◽  
M Kollmann ◽  
...  

Abstract
Background: Atrial fibrillation (A-fib) is the most common arrhythmia; however, detection of A-fib is a challenge due to its irregular occurrence.
Purpose: To evaluate the feasibility and performance of a non-invasive medical wearable for detection of A-fib.
Methods: In the CoMMoD-A-fib trial, admitted patients with a high risk for A-fib carried the wearable and an ECG Holter (control) in parallel over a period of 24 hours under physically unrestricted conditions. The wearable, with a tight-fit upper arm band, employs photoplethysmography (PPG) technology enabling a high sampling rate. Different algorithms (including a deep neural network) were applied to 5-min PPG datasets for detection of A-fib. The proportion of monitoring time automatically interpretable by the algorithms (interpretable time) was analyzed for influencing factors.
Results: In 102 inpatients (age 71.0±11.9 years; 52% male), 2306 hours of parallel recording time were obtained; 1781 hours (77.2%) of these were automatically interpretable by an algorithm analyzing PPG-derived intervals. Detection of A-fib was possible with a sensitivity of 92.7% and specificity of 92.4% (AUC 0.96). Detection of A-fib also remained sufficiently possible during physical activity (sensitivity 90.1% and specificity 91.2%). Use of the deep neural network improved detection of A-fib further (sensitivity 95.4% and specificity 96.2%). A higher prevalence of heart failure with reduced ejection fraction was observed in patients with a low interpretable time (p=0.080).
Conclusion: Detection of A-fib by means of a high-resolution, non-invasive upper arm medical wearable is reliably possible under inpatient conditions.
Funding Acknowledgement: Type of funding source: Public Institution(s). Main funding source(s): Internal grant program (PhD and Dr. rer. nat. Program Biomedicine) of the Faculty of Health at Witten/Herdecke University, Germany. HELIOS Kliniken GmbH (Grant-ID 047476), Germany.
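The sensitivity and specificity figures reported here follow directly from per-segment confusion counts. A small illustrative helper; the labels below are made up, not trial data.

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary A-fib labels (1 = A-fib)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# One toy label per 5-min PPG segment: reference (Holter) vs. algorithm.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
se, sp = sens_spec(y_true, y_pred)   # 0.75 and 0.75 for this toy data
```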


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 807
Author(s):  
Carlos M. Castorena ◽  
Itzel M. Abundez ◽  
Roberto Alejo ◽  
Everardo E. Granda-Gutiérrez ◽  
Eréndira Rendón ◽  
...  

The problem of gender-based violence in Mexico has increased considerably. Many social associations and governmental institutions have addressed this problem in different ways. In the context of computer science, some effort has been made to deal with this problem through the use of machine learning approaches to strengthen strategic decision making. In this work, a deep neural network application to identify gender-based violence in Twitter messages is presented. A total of 1,857,450 messages (generated in Mexico) were downloaded from Twitter; 61,604 of them were manually tagged by human volunteers as negative, positive or neutral messages, to serve as training and test data sets. Results presented in this paper show the effectiveness of the deep neural network (about 80% of the area under the receiver operating characteristic curve) in detecting gender violence in Twitter messages. The main contribution of this investigation is that the data set was minimally pre-processed (in contrast to most state-of-the-art approaches). Thus, the original messages were converted into numerical vectors according to the frequency of word appearance, and only adverbs, conjunctions and prepositions were deleted (these occur very frequently in text, and we believe they do not contribute to discriminatory messages on Twitter). Finally, this work contributes to dealing with gender violence in Mexico, an issue that needs to be faced immediately.
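The described minimal pre-processing, counting word frequencies after dropping function words, can be sketched as follows. The function-word list and vocabulary here are tiny hypothetical stand-ins for the paper's adverb, conjunction and preposition lists.

```python
from collections import Counter

# Hypothetical function-word list standing in for the adverbs,
# conjunctions and prepositions removed in the paper.
FUNCTION_WORDS = {"and", "or", "but", "in", "on", "very"}

def frequency_vector(message, vocabulary):
    """Map a message to word counts over a fixed vocabulary."""
    tokens = [w for w in message.lower().split() if w not in FUNCTION_WORDS]
    counts = Counter(tokens)
    return [counts[w] for w in vocabulary]

vocab = ["report", "violence", "support"]
vec = frequency_vector("Violence and violence on the street", vocab)
```

The resulting count vectors, one per message, are what a downstream network would consume; no stemming, lemmatization or embedding step is applied, matching the minimal pre-processing claim.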


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Xiaoling Wei ◽  
Jimin Li ◽  
Chenghao Zhang ◽  
Ming Liu ◽  
Peng Xiong ◽  
...  

In this paper, an R-wave peak interval-independent atrial fibrillation detection algorithm is proposed, based on analysis of the synchronization features of the electrocardiogram signal by a deep neural network. First, the synchronization feature of each heartbeat of the electrocardiogram signal is constructed by a Recurrence Complex Network. Then, a convolutional neural network is used to detect atrial fibrillation by analyzing the eigenvalues of the Recurrence Complex Network. Finally, a voting algorithm is developed to improve the performance of beat-wise atrial fibrillation detection. The MIT-BIH atrial fibrillation database is used to evaluate the performance of the proposed method. Experimental results show that the sensitivity, specificity, and accuracy of the algorithm reach 94.28%, 94.91%, and 94.59%, respectively. Remarkably, the proposed method is more effective than traditional algorithms in handling individual variation in atrial fibrillation detection.
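The beat-wise voting step is not specified in detail in the abstract; one plausible reading is a sliding-window majority vote that smooths isolated misclassified beats. A sketch with made-up per-beat predictions (1 = atrial fibrillation):

```python
from collections import Counter

def vote(beat_predictions, window=5):
    """Smooth beat-wise AF predictions by majority vote over a sliding window."""
    n = len(beat_predictions)
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        smoothed.append(Counter(beat_predictions[lo:hi]).most_common(1)[0][0])
    return smoothed

preds = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]   # noisy per-beat CNN outputs
smoothed = vote(preds)                    # isolated flips are voted away
```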


2021 ◽  
Author(s):  
Luke Gundry ◽  
Gareth Kennedy ◽  
Alan Bond ◽  
Jie Zhang

The use of Deep Neural Networks (DNNs) for the classification of electrochemical mechanisms, based on training with simulations of the initial cycle of potential, has been reported. In this paper,...

