Multichannel Multiscale Two-Stage Convolutional Neural Network for the Detection and Localization of Myocardial Infarction Using Vectorcardiogram Signal

2021 ◽  
Vol 11 (17) ◽  
pp. 7920
Author(s):  
Jay Karhade ◽  
Samit Kumar Ghosh ◽  
Pranjali Gajbhiye ◽  
Rajesh Kumar Tripathy ◽  
U. Rajendra Acharya

Myocardial infarction (MI) occurs when blood flow to part of the heart decreases, damaging the heart muscle. The 12-channel electrocardiogram (ECG) has been widely used to detect and localize MI pathology in clinical studies. The vectorcardiogram (VCG) is a three-channel recording system that measures the heart’s electrical activity in the sagittal, transverse, and frontal planes, and it offers advantages over the 12-channel ECG for localizing posterior MI pathology. Detecting and localizing MI from VCG signals is therefore vital in clinical practice. This paper proposes a multi-channel multi-scale two-stage deep-learning-based approach to detect and localize MI using VCG signals. In the first stage, multivariate variational mode decomposition (MVMD) decomposes each beat of the three-channel VCG signal into five components per channel. A multi-channel multi-scale VCG tensor is formulated from the modes of each channel and used as the input to a deep convolutional neural network (CNN) that discriminates MI from normal sinus rhythm (NSR). In the second stage, a multi-class deep CNN categorizes the MI-detected multi-channel multi-scale VCG instances from the first stage into anterior MI (AMI), anterior-lateral MI (ALMI), anterior-septal MI (ASMI), inferior MI (IMI), inferior-lateral MI (ILMI), and inferior-posterior-lateral MI (IPLMI) classes. The proposed approach is developed using VCG data obtained from a public database. The results reveal that the approach achieves accuracy, sensitivity, and specificity of 99.58%, 99.18%, and 99.87%, respectively, for MI detection, and an overall accuracy of 99.86% for MI localization in the second stage. The proposed approach demonstrates superior classification performance compared with existing VCG-signal-based MI detection and localization techniques.
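The tensor construction described above can be sketched as follows. This is a minimal illustration of the (channels × modes × samples) layout only: the crude moving-average band decomposition stands in for MVMD, whose actual implementation is not given in the abstract, and the beat length of 400 samples is an arbitrary assumption.

```python
import numpy as np

def multiscale_modes(channel, n_modes=5):
    """Stand-in for one channel's MVMD output: a crude multi-band
    decomposition via moving-average differences. The real method uses
    multivariate variational mode decomposition (MVMD); this placeholder
    only illustrates the tensor layout and sums back to the input."""
    modes = []
    prev = channel
    for k in range(1, n_modes):
        win = 2 ** k + 1
        kernel = np.ones(win) / win
        smooth = np.convolve(channel, kernel, mode="same")
        modes.append(prev - smooth)   # band between two smoothing scales
        prev = smooth
    modes.append(prev)                # residual low-frequency mode
    return np.stack(modes)            # shape: (n_modes, n_samples)

def vcg_tensor(beat, n_modes=5):
    """Build the multi-channel multi-scale tensor from one VCG beat.
    `beat` has shape (3, n_samples): sagittal, transverse, frontal planes."""
    return np.stack([multiscale_modes(ch, n_modes) for ch in beat])

beat = np.random.randn(3, 400)        # one synthetic 3-channel VCG beat
tensor = vcg_tensor(beat)
print(tensor.shape)                   # (3, 5, 400): channels x modes x samples
```

Because each mode is the difference of two successive smoothings, the five modes of this sketch sum exactly back to the original channel, mirroring the (approximate) reconstruction property of variational mode decompositions.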

2019 ◽  
Vol 11 (14) ◽  
pp. 1678 ◽  
Author(s):  
Yongyong Fu ◽  
Ziran Ye ◽  
Jinsong Deng ◽  
Xinyu Zheng ◽  
Yibo Huang ◽  
...  

Marine aquaculture plays an important role in seafood supply, economic development, and coastal ecosystem service provision. The precise delineation of marine aquaculture areas from high spatial resolution (HSR) imagery is vital for the sustainable development and management of coastal marine resources. However, the various sizes and detailed structures of marine objects make accurate mapping from HSR images difficult with conventional methods. Therefore, this study attempts to extract marine aquaculture areas by using an automatic labeling method based on the convolutional neural network (CNN), i.e., an end-to-end hierarchical cascade network (HCNet). Specifically, for marine objects of various sizes, we propose to improve the classification performance by utilizing multi-scale contextual information. Technically, based on the output of a CNN encoder, we employ atrous convolutions to capture multi-scale contextual information and aggregate it in a hierarchical cascade manner. Meanwhile, for marine objects with detailed structures, we propose to refine the detailed information gradually by using a series of long-span connections with fine-resolution features from the shallow layers. In addition, to decrease the semantic gaps between features at different levels, we propose to refine the feature space (i.e., channel and spatial dimensions) using an attention-based module. Experimental results show that our proposed HCNet can effectively identify and distinguish different kinds of marine aquaculture, with an overall accuracy of 98%. It also achieves better classification performance than object-based support vector machines and state-of-the-art CNN-based methods such as FCN-32s, U-Net, and DeeplabV2. Our developed method lays a solid foundation for the intelligent monitoring and management of coastal marine resources.
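The atrous-convolution cascade described above can be sketched in one dimension. This is an illustrative toy, not the exact HCNet wiring: the shared 3-tap kernel, the dilation rates (1, 2, 4, 8), and the feed-forward summation are all assumptions made for the sketch.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Naive 1-D atrous (dilated) convolution with zero padding so the
    output length matches the input (the usual choice in segmentation nets)."""
    k = len(kernel)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

# Hierarchical cascade aggregation (illustrative): each branch sees the sum
# of the previous branch's output and the encoder feature, so larger atrous
# rates refine the context already captured by smaller ones.
feat = np.random.randn(64)               # one encoder feature row
kernel = np.array([0.25, 0.5, 0.25])     # shared 3-tap kernel for the sketch
agg = np.zeros_like(feat)
for rate in (1, 2, 4, 8):                # increasing atrous rates
    agg = dilated_conv1d(feat + agg, kernel, rate)
print(agg.shape)                         # (64,): same length as the input
```

The dilation inserts (rate − 1) zeros between kernel taps, widening the receptive field without adding parameters, which is what lets a cascade of such branches cover objects of very different sizes.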


2021 ◽  
Vol 12 ◽  
Author(s):  
Ricardo Salinas-Martínez ◽  
Johannes de Bie ◽  
Nicoletta Marzocchi ◽  
Frida Sandberg

Background: Brief episodes of atrial fibrillation (AF) may evolve into longer AF episodes, increasing the chances of thrombus formation, stroke, and death. Classical methods for AF detection investigate rhythm irregularity or P-wave absence in the ECG, while deep learning approaches profit from the availability of annotated ECG databases to learn discriminatory features linked to different diagnoses. However, some deep learning approaches do not provide analysis of the features used for classification. This paper introduces a convolutional neural network (CNN) approach for automatic detection of brief AF episodes based on electrocardiomatrix images (ECM-images), aiming to link deep learning to features with clinical meaning.

Materials and Methods: The CNN is trained using two databases, the Long-Term Atrial Fibrillation and the MIT-BIH Normal Sinus Rhythm, and tested on three databases: the MIT-BIH Atrial Fibrillation, the MIT-BIH Arrhythmia, and the Monzino-AF. Detection of AF is done using a sliding window of 10 beats plus 3 s. Performance is quantified using both standard classification metrics and the EC57 standard for arrhythmia detection. Layer-wise relevance propagation analysis was applied to link the decisions made by the CNN to clinical characteristics in the ECG.

Results: Across all three testing databases, episode sensitivity was greater than 80.22% for AF episodes shorter than 15 s, greater than 89.66% for episodes shorter than 30 s, and greater than 97.45% for all episodes.

Conclusions: Rhythm and morphological characteristics of the electrocardiogram can be learned by a CNN from ECM-images for the detection of brief episodes of AF.
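The "10 beats plus 3 s" sliding window can be made concrete with sample indices. In this sketch the one-beat step size and the 250 Hz sampling rate are assumptions; the abstract specifies only the window content.

```python
import numpy as np

def ecm_windows(r_peaks, fs, n_beats=10, extra_s=3.0):
    """Sample-index ranges for the detector's sliding windows: each window
    spans 10 consecutive beats plus 3 s of signal after the last of them,
    stepping one beat at a time (the step size is an assumption here)."""
    extra = int(round(extra_s * fs))
    windows = []
    for i in range(len(r_peaks) - n_beats + 1):
        start = r_peaks[i]
        stop = r_peaks[i + n_beats - 1] + extra
        windows.append((start, stop))
    return windows

fs = 250                                        # sampling rate in Hz (assumed)
r_peaks = np.arange(100, 100 + 20 * 200, 200)   # 20 synthetic beats, 0.8 s apart
wins = ecm_windows(r_peaks, fs)
print(len(wins), wins[0])                       # 11 windows; first covers beats 0-9 + 3 s
```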


Author(s):  
Yao-Mei Chen ◽  
Yenming J. Chen ◽  
Yun-Kai Tsai ◽  
Wen-Hsien Ho ◽  
Jinn-Tsong Tsai

A multi-layer convolutional neural network (MCNN) with hyperparameter optimization (HyperMCNN) is proposed for classifying human electrocardiograms (ECGs). For performance tests of the HyperMCNN, ECG recordings for patients with cardiac arrhythmia (ARR), congestive heart failure (CHF), and normal sinus rhythm (NSR) were obtained from three PhysioNet databases: the MIT-BIH Arrhythmia Database, the BIDMC Congestive Heart Failure Database, and the MIT-BIH Normal Sinus Rhythm Database, respectively. The MCNN hyperparameters in the convolutional layers included the number of filters, filter size, padding, and filter stride; those in the max-pooling layers were pooling size and pooling stride. The gradient method used to train the MCNN model was also treated as a hyperparameter. A uniform experimental design approach was used to optimize the hyperparameter combination for the MCNN. In performance tests, the resulting 16-layer CNN with an appropriate hyperparameter combination (16-layer HyperMCNN) was used to distinguish among ARR, CHF, and NSR. The experimental results showed that the average correct rate and standard deviation obtained by the 16-layer HyperMCNN were superior to those obtained by a 16-layer CNN with a hyperparameter combination given by Matlab examples. Furthermore, in distinguishing among ARR, CHF, and NSR, the 16-layer HyperMCNN was superior to the 25-layer AlexNet, the neural network with the best image identification performance in the 2012 ImageNet Large Scale Visual Recognition Challenge.
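The hyperparameter search space named above can be enumerated as a grid. The candidate values below are illustrative, not from the paper, and the even subsampling of the grid is only a crude stand-in for a true uniform experimental design, which would select runs from a published design table rather than by slicing.

```python
import itertools

# Hypothetical search space for the convolutional-layer hyperparameters
# named in the abstract (the candidate values are assumptions).
space = {
    "n_filters":   [8, 16, 32, 64],
    "filter_size": [3, 5, 7, 9],
    "stride":      [1, 2],
    "padding":     ["same", "valid"],
}

full_grid = list(itertools.product(*space.values()))

# Crude stand-in for a uniform experimental design: evaluate a small number
# of runs spread evenly over the enumerated grid instead of testing all of it.
n_runs = 8
design = full_grid[:: max(1, len(full_grid) // n_runs)][:n_runs]
print(len(full_grid), len(design))   # 64 candidate combinations, 8 selected runs
```

The point of a uniform design is exactly this reduction: a handful of well-spread runs stands in for the full factorial grid, which grows multiplicatively with each hyperparameter.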


2021 ◽  
Vol 30 (5) ◽  
pp. 833-842
Author(s):  
LIU Jikui ◽  
WANG Ruxin ◽  
WEN Bo ◽  
LIU Zengding ◽  
MIAO Fen ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2022
Author(s):  
Yongmei Ren ◽  
Jie Yang ◽  
Zhiqiang Guo ◽  
Qingnian Zhang ◽  
Hui Cao

Visible-image quality is very susceptible to changes in illumination, and ship classification using images acquired by a single sensor has inherent limitations. This study proposes a ship classification method based on an attention mechanism and a multi-scale convolutional neural network (MSCNN) for visible and infrared images. First, the features of visible and infrared images are extracted by a two-stream symmetric multi-scale convolutional neural network module and then concatenated to make full use of the complementary features present in the multi-modal images. The attention mechanism is then applied to the concatenated fusion features to emphasize salient local regions in the feature map, further improving the feature representation capability of the model. Lastly, the attention-weighted features and the original concatenated fusion features are added element-wise and fed into fully connected layers and a Softmax output layer for the final classification. The effectiveness of the proposed method is verified on the visible and infrared spectra (VAIS) dataset, achieving 93.81% classification accuracy. Compared with other state-of-the-art methods, the proposed method extracts features more effectively and has better overall classification performance.
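The fusion-then-attention-then-residual-add pipeline above can be sketched as a channel-attention step. This is a schematic under stated assumptions: the per-channel weight vector `w` stands in for learned attention parameters, and the squeeze-and-excitation-style pooling is one plausible reading of the abstract, not the paper's exact module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention_fuse(vis_feat, ir_feat, w):
    """Sketch of the fusion step: concatenate the two-stream features along
    the channel axis, weight channels with an attention vector derived from
    global average pooling, then add the attended map back to the original
    element-wise, as the abstract describes."""
    fused = np.concatenate([vis_feat, ir_feat], axis=0)  # (2C, H, W)
    pooled = fused.mean(axis=(1, 2))                     # global average pool
    attn = sigmoid(w * pooled)[:, None, None]            # per-channel weights
    return fused * attn + fused                          # attended + residual

C, H, W = 4, 8, 8
vis = np.random.randn(C, H, W)
ir = np.random.randn(C, H, W)
out = channel_attention_fuse(vis, ir, w=np.ones(2 * C))
print(out.shape)                         # (8, 8, 8): 2C channels preserved
```

The residual addition means attention can only amplify (never zero out) a channel, which keeps the raw complementary visible/infrared information available to the fully connected classifier.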


2019 ◽  
Vol 9 (9) ◽  
pp. 1879 ◽  
Author(s):  
Kai Feng ◽  
Xitian Pi ◽  
Hongying Liu ◽  
Kai Sun

Myocardial infarction is one of the most threatening cardiovascular diseases for human beings. With the rapid development of wearable devices and portable electrocardiogram (ECG) medical devices, it has become feasible to detect and monitor myocardial infarction ECG signals in time. This paper proposes a multi-channel automatic classification algorithm combining a 16-layer convolutional neural network (CNN) and a long short-term memory (LSTM) network for lead-I myocardial infarction ECG signals. The algorithm first preprocesses the raw data to extract heartbeat segments; the multi-channel CNN and LSTM are then trained to automatically learn the acquired features and complete the myocardial infarction ECG classification. We utilized the Physikalisch-Technische Bundesanstalt (PTB) database for algorithm verification and obtained an accuracy of 95.4%, a sensitivity of 98.2%, a specificity of 86.5%, and an F1 score of 96.8%, indicating that the model achieves good classification performance without complex handcrafted features.
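The heartbeat-segment preprocessing step can be sketched as fixed-length windows around detected R-peaks. The window bounds (0.25 s before, 0.45 s after) are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

def heartbeat_segments(ecg, r_peaks, fs, before_s=0.25, after_s=0.45):
    """Extract fixed-length heartbeat segments around detected R-peaks,
    the preprocessing step before the CNN-LSTM. Beats too close to the
    record edges are skipped so every segment has the same length."""
    before = int(before_s * fs)
    after = int(after_s * fs)
    segs = [ecg[r - before : r + after]
            for r in r_peaks
            if r - before >= 0 and r + after <= len(ecg)]
    return np.stack(segs)                # (n_beats, before + after)

fs = 1000                                # PTB records are sampled at 1 kHz
ecg = np.random.randn(10 * fs)           # 10 s of synthetic lead-I ECG
r_peaks = np.arange(500, len(ecg) - 500, 800)   # synthetic R-peak locations
beats = heartbeat_segments(ecg, r_peaks, fs)
print(beats.shape)                       # (n_beats, 700)
```

Fixed-length segments like these give the CNN a uniform input shape, while their temporal ordering is what the downstream LSTM exploits.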

