HeartNet: Self Multi-Head Attention Mechanism via Convolutional Network with Adversarial Data Synthesis for ECG-based Arrhythmia Classification

Author(s):  
Taki Hasan Rafi ◽  
Young Woong-Ko

Cardiovascular disease is now one of the leading causes of morbidity and mortality in humans. The electrocardiogram (ECG) is a reliable tool for monitoring the health of the cardiovascular system. Accurately categorizing heartbeats has recently received considerable attention, and there is high demand for automatic ECG classification systems to assist medical professionals. In this paper, we propose a new deep learning method, HeartNet, for building an automatic ECG classifier. The proposed method combines a multi-head self-attention mechanism with a CNN backbone. The main challenge of insufficiently labeled data is addressed by adversarial data synthesis: a generative adversarial network (GAN) generates additional training samples, which improves the overall performance of the proposed method by 5-10% on each under-represented label category. We evaluated the proposed method on the MIT-BIH dataset, where it achieved 99.67 ± 0.11 accuracy and 89.24 ± 1.71 MCC when trained on the adversarially synthesized dataset. We also evaluated the model on two additional datasets, the Atrial Fibrillation Detection Database and the PTB Diagnostic Database, to assess its ECG classification performance. The effectiveness and robustness of the proposed method are validated by extensive experiments, comparisons, and analysis. Finally, we highlight some limitations of this work.
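
A minimal sketch of the kind of architecture described above (not the authors' exact HeartNet): a 1D-CNN front end followed by multi-head self-attention over the time axis for single-lead ECG beat classification. The beat length of 187 samples, the 5-class output, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionECGClassifier(nn.Module):
    def __init__(self, num_classes: int = 5, channels: int = 64, heads: int = 4):
        super().__init__()
        # Convolutional front end: turns the raw beat into a sequence of local features.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Multi-head self-attention over the time axis of the CNN feature map.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                          batch_first=True)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, beat_length)
        feats = self.cnn(x)                 # (batch, channels, T)
        seq = feats.transpose(1, 2)         # (batch, T, channels)
        attended, _ = self.attn(seq, seq, seq)
        pooled = attended.mean(dim=1)       # global average over time
        return self.head(pooled)

# Example: a batch of 8 beats of 187 samples each.
logits = AttentionECGClassifier()(torch.randn(8, 1, 187))
print(logits.shape)  # torch.Size([8, 5])
```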

Author(s):  
Oleksii Prykhodko ◽  
Simon Viet Johansson ◽  
Panagiotis-Christos Kotsias ◽  
Esben Jannik Bjerrum ◽  
Ola Engkvist ◽  
...  

Deep learning methods have recently been used to generate novel molecular structures. In the current study, we propose a new deep learning method, LatentGAN, which combines an autoencoder and a generative adversarial network for de novo molecular design. We applied the method to structure generation in two scenarios: generating random drug-like compounds and generating target-biased compounds. Our results show that the method works well in both cases: compounds sampled from the trained model largely occupy the same chemical space as the training set, while a substantial fraction of the generated compounds are novel. The distribution of drug-likeness scores for compounds sampled from LatentGAN is also similar to that of the training set.
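
A minimal sketch of the LatentGAN idea described above: a GAN is trained on latent vectors produced by a (pretrained) molecular autoencoder, and sampled latent vectors would then be decoded back into structures. The encoder/decoder are not shown, and the latent size, layer widths, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

LATENT = 128

generator = nn.Sequential(           # noise -> synthetic latent vector
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, LATENT),
)
discriminator = nn.Sequential(       # latent vector -> real/fake score
    nn.Linear(LATENT, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_latents: torch.Tensor) -> None:
    """One adversarial step on a batch of encoder-produced latent vectors."""
    noise = torch.randn(real_latents.size(0), 64)
    fake = generator(noise)

    # Discriminator: real latents -> 1, generated latents -> 0.
    d_loss = bce(discriminator(real_latents), torch.ones(real_latents.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(real_latents.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(real_latents.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# In the real pipeline, real_latents = encoder(molecule batch) and sampled latents
# are passed through the decoder to obtain new molecules; random data is used here.
train_step(torch.randn(32, LATENT))
```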


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2158
Author(s):  
Juan Du ◽  
Kuanhong Cheng ◽  
Yue Yu ◽  
Dabao Wang ◽  
Huixin Zhou

Panchromatic (PAN) images contain abundant spatial information that is useful for earth observation, but they typically suffer from low resolution (LR) due to sensor limitations and the large-scale view field. Current super-resolution (SR) methods based on traditional attention mechanisms have shown remarkable advantages but remain imperfect at reconstructing the edge details of SR images. To address this problem, an improved SR model involving a self-attention augmented Wasserstein generative adversarial network (SAA-WGAN) is designed to mine the reference information among multiple features for detail enhancement. We use an encoder-decoder network followed by a fully convolutional network (FCN) as the backbone to extract multi-scale features and reconstruct the high-resolution (HR) results. To exploit the relevance between multi-layer feature maps, we first integrate a convolutional block attention module (CBAM) into each skip connection of the encoder-decoder subnet, generating weighted maps that automatically enhance both channel-wise and spatial-wise feature representations. In addition, because the HR results and LR inputs are highly similar in structure, yet this similarity cannot be fully captured by traditional attention mechanisms, we designed a self augmented attention (SAA) module in which the attention weights are produced dynamically via a similarity function between hidden features; this design allows the network to flexibly adjust the relevance among multi-layer features and retain long-range dependency information, which helps preserve details. The pixel-wise loss is also combined with perceptual and gradient losses to achieve comprehensive supervision. Experiments on benchmark datasets demonstrate that the proposed method outperforms other SR methods in terms of both objective evaluation and visual quality.
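
A minimal sketch of a self-attention block in the spirit of the SAA module described above: attention weights come from a dot-product similarity between hidden features, letting every spatial position attend to every other one, with a learned residual weight so original details are kept. This does not reproduce the exact SAA-WGAN design; channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw) similarity weights
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual keeps original details

# Example: attend over a 64-channel, 32x32 feature map.
y = SelfAttention2d(64)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```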


2021 ◽  
Author(s):  
Wenxiang Deng ◽  
Adam Hedberg-Buenz ◽  
Dana A Soukup ◽  
Sima Taghizadeh ◽  
Michael G Anderson ◽  
...  

Purpose: Optic nerve damage is the principal feature of glaucoma and contributes to vision loss in many diseases. In animal models, nerve health has traditionally been assessed by human experts who grade damage qualitatively or manually quantify axons by sampling limited areas of histologic cross sections of the nerve. Both approaches are prone to variability and are time consuming. Automated approaches have begun to emerge, but shortcomings have limited widespread application. Here, we seek improvements through the use of deep-learning approaches for segmenting and quantifying axons from cross sections of mouse optic nerve. Methods: Two deep-learning approaches were developed and evaluated: (1) a traditional supervised approach using a fully convolutional network trained with only labeled data and (2) a semi-supervised approach trained with both labeled and unlabeled data using a generative-adversarial-network framework. Results: In comparisons with an independent test set of images with manually marked axon centers and boundaries, both deep-learning approaches outperformed an existing baseline automated approach and performed similarly to two independent experts. The semi-supervised approach performed best and was implemented into AxonDeep. Conclusions: AxonDeep performs automated quantification and segmentation of axons comparably to experts without the time and labor constraints of manual analysis. The quantitative and objective nature of AxonDeep reduces variability arising from differences in model, methodology, and user that often compromise manual performance of these tasks. Translational Relevance: Use of deep learning for axon quantification provides rapid, objective, and higher-throughput analysis of the optic nerve that would otherwise not be possible.
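
A minimal sketch of the semi-supervised idea described above: a segmentation network is trained with a supervised loss on labeled images plus an adversarial loss that pushes its predictions on unlabeled images toward the distribution of expert masks. The networks, image sizes, and the 0.1 loss weighting are placeholders, not the AxonDeep implementation.

```python
import torch
import torch.nn as nn

segmenter = nn.Sequential(            # image -> per-pixel axon logits
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
discriminator = nn.Sequential(        # mask -> real (expert) vs. predicted score
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1),
)
bce = nn.BCEWithLogitsLoss()

def losses(labeled_img, labeled_mask, unlabeled_img):
    pred_l = segmenter(labeled_img)
    pred_u = segmenter(unlabeled_img)

    # Supervised term on labeled data.
    sup = bce(pred_l, labeled_mask)

    # Adversarial term: predictions on unlabeled data should look like real masks.
    adv = bce(discriminator(torch.sigmoid(pred_u)), torch.ones(pred_u.size(0), 1))

    # Discriminator term: expert masks -> 1, predicted masks -> 0.
    d_loss = bce(discriminator(labeled_mask), torch.ones(labeled_mask.size(0), 1)) + \
             bce(discriminator(torch.sigmoid(pred_u).detach()),
                 torch.zeros(pred_u.size(0), 1))
    return sup + 0.1 * adv, d_loss   # 0.1 is an illustrative weighting

seg_loss, disc_loss = losses(torch.randn(2, 1, 64, 64),
                             torch.rand(2, 1, 64, 64).round(),
                             torch.randn(2, 1, 64, 64))
```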


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 5007
Author(s):  
Yuan He ◽  
Xinyu Li ◽  
Runlong Li ◽  
Jianping Wang ◽  
Xiaojun Jing

Radio frequency interference, which makes it difficult to produce high-quality radar spectrograms, is a major issue for micro-Doppler-based human activity recognition (HAR). In this paper, we propose a deep-learning-based method to detect and cut out the interference in spectrograms and then restore the cut-out region. First, a fully convolutional neural network (FCN) is employed to detect and remove the interference. Then, a coarse-to-fine generative adversarial network (GAN) is proposed to restore the part of the spectrogram affected by the interference. Simulated motion capture (MOCAP) spectrograms and measured radar spectrograms with interference are used to verify the proposed method. Experimental results from both qualitative and quantitative perspectives show that the proposed method can mitigate the interference and restore high-quality radar spectrograms. The comparison experiments further demonstrate the efficiency of the proposed approach.
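
A minimal sketch of the two-stage pipeline described above: an FCN predicts an interference mask on the spectrogram, the masked region is cut out, and a generator network fills it back in. Both networks are illustrative placeholders rather than the paper's architectures, and the 0.5 mask threshold is an assumption.

```python
import torch
import torch.nn as nn

fcn = nn.Sequential(                    # spectrogram -> interference-mask logits
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
inpainter = nn.Sequential(              # (masked spectrogram, mask) -> restored patch
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def restore(spectrogram: torch.Tensor) -> torch.Tensor:
    mask = (torch.sigmoid(fcn(spectrogram)) > 0.5).float()   # 1 where interference
    cut = spectrogram * (1.0 - mask)                          # remove interference
    filled = inpainter(torch.cat([cut, mask], dim=1))         # generator (coarse stage)
    return cut + filled * mask                                # only replace the cut-out region

out = restore(torch.randn(1, 1, 128, 128))
print(out.shape)  # torch.Size([1, 1, 128, 128])
```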


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, several image processing methods have been developed to determine the cell cycle stage of individual cells. However, most of these methods require cells to be segmented and features to be extracted, and important information may be lost during feature extraction, resulting in lower classification accuracy. We therefore used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation, together with a residual network (ResNet), one of the most widely used deep learning classification networks, for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%, an increase of 4.48% over the 79.40% obtained in previous experiments. On another dataset used to verify the model, our accuracy increased by 12.52% over previous results. These results show that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for classifying imbalanced images. Moreover, our method could potentially address the low classification accuracy in biomedical images caused by insufficient numbers and imbalanced distributions of original images.
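
A minimal sketch of the WGAN-GP gradient penalty used for the augmentation described above: the critic's gradient norm is penalized on points interpolated between real and generated images. The critic here is a placeholder network, and the image size and penalty weight of 10 are standard but assumed values.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1),
)

def gradient_penalty(real: torch.Tensor, fake: torch.Tensor, lam: float = 10.0) -> torch.Tensor:
    """WGAN-GP term: lam * E[(||grad critic(x_hat)|| - 1)^2]."""
    eps = torch.rand(real.size(0), 1, 1, 1)           # per-sample mixing coefficient
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

gp = gradient_penalty(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
print(gp.item())
```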


2021 ◽  
Author(s):  
Tham Vo

Abstract: In the abstractive summarization task, most proposed models adopt a deep recurrent neural network (RNN)-based encoder-decoder architecture to learn and generate a meaningful summary for a given input document. However, most recent RNN-based models tend to capture high-frequency/repetitive phrases in long documents during training, which leads to trivial and generic summaries. Moreover, the lack of thorough analysis of the sequential and long-range dependency relationships between words in different contexts while learning the textual representation also makes the generated summaries unnatural and incoherent. To deal with these challenges, in this paper we propose a novel semantic-enhanced generative adversarial network (GAN)-based approach for abstractive text summarization, called SGAN4AbSum. We use an adversarial training strategy in which the generator and discriminator are trained simultaneously to generate summaries and to distinguish the generated summaries from the ground-truth ones. The input to the generator is the joint rich-semantic and global-structural latent representation of the training documents, obtained by applying a combined BERT and graph convolutional network (GCN) textual embedding mechanism. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed SGAN4AbSum, which achieves competitive ROUGE scores compared with state-of-the-art abstractive text summarization baselines.
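
A minimal sketch of the combined semantic and structural generator input described above: contextual token embeddings (e.g., from BERT, represented here by a placeholder tensor) are fused with features propagated over a document graph by a simple GCN layer, and the joint representation would feed the summarization generator. All dimensions, the random stand-in graph, and the fusion-by-concatenation choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(row-normalized adjacency @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu((adj / deg) @ self.linear(h))   # propagate neighbor features

n_nodes, bert_dim, gcn_dim = 12, 768, 128
bert_embeddings = torch.randn(n_nodes, bert_dim)          # stand-in for BERT outputs
adjacency = (torch.rand(n_nodes, n_nodes) > 0.7).float()  # stand-in document graph

structural = GCNLayer(bert_dim, gcn_dim)(bert_embeddings, adjacency)
joint = torch.cat([bert_embeddings, structural], dim=-1)  # generator input: (12, 896)
print(joint.shape)
```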

