P-wave Detection Using a Parallel Convolutional Neural Network in Electrocardiogram

Author(s):  
Jinlei Liu ◽  
Yunqing Liu ◽  
Yanrui Jin ◽  
Xiaojun Chen ◽  
Liqun Zhao ◽  
...  
2020 ◽  
Vol 10 (3) ◽  
pp. 976


Author(s):  
Rana N. Costandy ◽  
Safa M. Gasser ◽  
Mohamed S. El-Mahallawy ◽  
Mohamed W. Fakhr ◽  
Samir Y. Marzouk

Electrocardiogram (ECG) signal analysis is a critical task in diagnosing the presence of any cardiac disorder. Studies on detecting P-waves in atrial arrhythmias, such as atrial fibrillation (AFIB), atrial flutter, and junctional rhythm, are limited because the P-wave is variable and absent in many cases. Thus, there is a growing need for an efficient automated algorithm that annotates P-waves on 2D printed versions of the well-known ECG signal databases for validation purposes. To our knowledge, no one has annotated P-waves in the MIT-BIH atrial fibrillation database. Therefore, it is a challenge to manually annotate P-waves in the MIT-BIH AF database and to develop an automated algorithm to detect the absence and presence of different shapes of P-waves. In this paper, we present the manual annotation of P-waves in the well-known MIT-BIH AF database with the aid of a cardiologist. In addition, we provide automatic P-wave segmentation for the same database using a fully convolutional neural network model (U-Net). This algorithm works on 2D imagery of printed ECG signals, as this type of imagery is the most commonly used in developing countries. The proposed automatic P-wave detection method obtained an accuracy and sensitivity of 98.56% and 98.78%, respectively, over the first 5 min of the second lead of the MIT-BIH AF database (a total of 8280 beats). Moreover, the proposed method is validated on the well-known automatically and manually annotated QT database (a total of 11,201 and 3194 automatically and manually annotated beats, respectively), yielding accuracies of 98.98% and 98.9%, and sensitivities of 98.97% and 97.24%, for the automatically and manually annotated beats, respectively. These results indicate that the proposed automatic method can be used for analyzing long printed ECG signals on mobile battery-driven devices using only images of the ECG signals, without the need for a cardiologist.
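The segmentation model named in the abstract is a U-Net, an encoder-decoder with skip connections that outputs a per-pixel mask. The PyTorch sketch below shows that shape of model at toy scale; the channel widths, depth, and the single-channel 256x256 input size are illustrative assumptions, not the authors' configuration.

```python
# Minimal U-Net-style encoder/decoder for P-wave segmentation, sketched in PyTorch.
# Layer widths, depth, and the 1-channel 256x256 input size are assumptions made
# for illustration; the paper's exact U-Net configuration is not reproduced here.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel P-wave / background logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

if __name__ == "__main__":
    model = TinyUNet()
    img = torch.randn(1, 1, 256, 256)   # one grayscale crop of a printed ECG (toy data)
    mask_logits = model(img)            # (1, 1, 256, 256) segmentation logits
    print(mask_logits.shape)
```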


2021 ◽  
Vol 12 ◽  
Author(s):  
Ricardo Salinas-Martínez ◽  
Johannes de Bie ◽  
Nicoletta Marzocchi ◽  
Frida Sandberg

Background: Brief episodes of atrial fibrillation (AF) may evolve into longer AF episodes, increasing the chances of thrombus formation, stroke, and death. Classical methods for AF detection investigate rhythm irregularity or P-wave absence in the ECG, while deep learning approaches benefit from the availability of annotated ECG databases to learn discriminatory features linked to different diagnoses. However, some deep learning approaches do not provide an analysis of the features used for classification. This paper introduces a convolutional neural network (CNN) approach for automatic detection of brief AF episodes based on electrocardiomatrix images (ECM-images), aiming to link deep learning to features with clinical meaning. Materials and Methods: The CNN is trained using two databases, the Long-Term Atrial Fibrillation and the MIT-BIH Normal Sinus Rhythm, and tested on three databases: the MIT-BIH Atrial Fibrillation, the MIT-BIH Arrhythmia, and the Monzino-AF. Detection of AF is done using a sliding window of 10 beats plus 3 s. Performance is quantified using both standard classification metrics and the EC57 standard for arrhythmia detection. Layer-wise relevance propagation analysis was applied to link the decisions made by the CNN to clinical characteristics in the ECG. Results: For all three testing databases, episode sensitivity was greater than 80.22%, 89.66%, and 97.45% for AF episodes shorter than 15 s, shorter than 30 s, and for all episodes, respectively. Conclusions: Rhythm and morphological characteristics of the electrocardiogram can be learned by a CNN from ECM-images for the detection of brief episodes of AF.
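The detector operates on electrocardiomatrix images built from a sliding window of 10 beats plus 3 s. The NumPy sketch below illustrates one plausible way to stack beat-aligned rows into such an image; the 250 Hz sampling rate, the 0.5 s pre-R offset, and the synthetic signal are assumptions for illustration, not the paper's exact ECM construction.

```python
# Building an electrocardiomatrix-style (ECM) image from an ECG segment, sketched
# with NumPy. The window of 10 beats plus 3 s follows the abstract; the 250 Hz
# sampling rate and the fixed alignment offsets are assumptions for illustration.
import numpy as np

def ecm_image(ecg, r_peaks, fs=250, n_beats=10, extra_s=3.0, pre_s=0.5):
    """Stack n_beats consecutive beats row by row, each row starting pre_s seconds
    before its R-peak and spanning extra_s seconds, so that aligned P/QRS/T waves
    form vertical structures a 2D CNN can learn from."""
    row_len = int(extra_s * fs)
    pre = int(pre_s * fs)
    rows = []
    for r in r_peaks[:n_beats]:
        start = max(r - pre, 0)
        row = ecg[start:start + row_len]
        row = np.pad(row, (0, row_len - len(row)))      # pad short rows at the record edge
        rows.append(row)
    m = np.vstack(rows)
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)      # normalise to [0, 1] grey levels
    return m

if __name__ == "__main__":
    fs = 250
    t = np.arange(0, 15, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t)                   # toy stand-in for an ECG trace
    r_peaks = np.arange(int(0.2 * fs), len(ecg), int(0.83 * fs))  # ~72 bpm toy R-peaks
    img = ecm_image(ecg, r_peaks, fs=fs)
    print(img.shape)                                    # (10, 750) ECM-style image
```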


Author(s):  
C. Vasquez ◽  
A.I. Hernandez ◽  
G. Carrault ◽  
F.A. Mora ◽  
G. Passariello
Keyword(s):  
P Wave ◽  

2019 ◽  
Vol 7 (3) ◽  
pp. SE161-SE174 ◽  
Author(s):  
Reetam Biswas ◽  
Mrinal K. Sen ◽  
Vishal Das ◽  
Tapan Mukerji

An inversion algorithm is commonly used to estimate the elastic properties, such as P-wave velocity (VP), S-wave velocity (VS), and density (ρ), of the earth's subsurface. Generally, the seismic inversion problem is solved using one of the traditional optimization algorithms. These algorithms start with a given model and update the model at each iteration, following a physics-based rule. The algorithm is applied at each common depth point (CDP) independently to estimate the elastic parameters. Here, we have developed a technique using a convolutional neural network (CNN) to solve the same problem. We perform two critical steps to take advantage of the generalization capability of the CNN and of the physics used to generate synthetic data for a meaningful representation of the subsurface. First, rather than using the CNN for a classification-type problem, which is the standard approach, we modified the CNN to solve a regression problem and estimate the elastic properties. Second, again unlike a conventional CNN, which is trained by supervised learning with predetermined label (elastic parameter) values, we use the physics of our forward problem to train the weights. The network has two parts: the first is the convolutional network, which takes seismic data as input and predicts the elastic parameters, the desired intermediate result. In the second part of the network, we apply wave-propagation physics to the CNN output to generate the predicted seismic data, compare them with the actual data, and calculate the error. This error between the true and predicted seismograms is then used to calculate gradients and update the weights in the CNN. After the network is trained, only its first part is needed to estimate elastic properties at the remaining CDPs directly. We demonstrate the application of the physics-guided CNN on prestack and poststack inversion problems. To explain how the algorithm works, we also examine a conventional CNN workflow without any physics guidance. We first implement the algorithm on a synthetic data set for prestack and poststack data and then apply it to a real data set from the Cana field. In all the training examples, we use at most 20% of the data. Our approach offers a distinct advantage over a conventional machine-learning approach in that we circumvent the need for labeled data sets for training.
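The two-part training idea, a CNN that predicts elastic parameters followed by a differentiable forward model whose data misfit supplies the gradients, can be sketched for the poststack case as follows. The PyTorch code below is an illustrative toy that assumes a 1D impedance parameterization, a Ricker wavelet, and random traces standing in for field data; it is not the authors' network or forward operator.

```python
# Physics-guided training loop for poststack inversion, sketched in PyTorch.
# A 1D CNN maps a seismic trace to an impedance profile; a differentiable
# convolutional forward model (reflectivity convolved with a Ricker wavelet)
# turns that profile back into a synthetic trace, and the data misfit trains
# the CNN without any impedance labels. Network width, wavelet frequency, and
# trace length are assumptions, not the paper's settings.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def ricker(freq=25.0, dt=0.002, n=65):
    t = (torch.arange(n) - n // 2) * dt
    a = (math.pi * freq * t) ** 2
    return (1 - 2 * a) * torch.exp(-a)

class InversionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, 9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, 9, padding=4), nn.Softplus(),  # impedance stays positive
        )

    def forward(self, seismic):
        return self.net(seismic)

def forward_model(impedance, wavelet):
    # Reflectivity from impedance contrasts, then convolution with the wavelet.
    z0, z1 = impedance[..., :-1], impedance[..., 1:]
    refl = (z1 - z0) / (z1 + z0 + 1e-8)
    refl = F.pad(refl, (0, 1))
    return F.conv1d(refl, wavelet.view(1, 1, -1), padding=wavelet.numel() // 2)

if __name__ == "__main__":
    torch.manual_seed(0)
    wavelet = ricker()
    observed = torch.randn(8, 1, 256)            # toy stand-in for 8 CDP traces
    model = InversionCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(5):                        # a few illustrative iterations
        z = model(observed)                      # predicted impedance (intermediate result)
        synth = forward_model(z, wavelet)        # predicted seismic from the physics
        loss = F.mse_loss(synth, observed)       # seismogram misfit drives the CNN weights
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(step, float(loss))
```

After training, only the CNN part would be evaluated at new CDPs, which is the inexpensive inference step the abstract describes.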


2021 ◽  
Vol 9 ◽  
Author(s):  
Jingbao Zhu ◽  
Shanyou Li ◽  
Jindong Song ◽  
Yuan Wang

Magnitude estimation is a vital task within earthquake early warning (EEW) systems (EEWSs). To improve the magnitude determination accuracy after P-wave arrival, we introduce an advanced magnitude prediction model that uses a deep convolutional neural network for earthquake magnitude estimation (DCNN-M). In this paper, we use the inland strong-motion data obtained from the Japan Kyoshin Network (K-NET) to calculate the input parameters of the DCNN-M model. The DCNN-M model uses 12 parameters extracted from 3 s of seismic data recorded after P-wave arrival as the input, four convolutional layers, four pooling layers, four batch normalization layers, three fully connected layers, the Adam optimizer, and an output. Our results show that the standard deviation of the magnitude estimation error of the DCNN-M model is 0.31, which is significantly less than the values of 1.56 and 0.42 for the τc method and Pd method, respectively. In addition, the magnitude prediction error of the DCNN-M model is not affected by variations in the epicentral distance. The DCNN-M model has considerable potential application in EEWSs in Japan.
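The abstract specifies the ingredients of DCNN-M: 12 P-wave-derived input parameters, four convolutional, pooling, and batch-normalization stages, three fully connected layers, a single magnitude output, and the Adam optimizer. The PyTorch sketch below wires those ingredients together at toy scale; the channel counts, kernel sizes, and ceil-mode pooling (so a length-12 input survives four pooling steps) are assumptions rather than the published configuration.

```python
# Sketch of a DCNN-M-style magnitude regressor in PyTorch: the 12 P-wave-derived
# parameters enter as a 1-channel vector and pass through four conv + batch-norm +
# pooling stages and three fully connected layers. Channel counts, kernel sizes,
# and ceil-mode pooling are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class DCNNM(nn.Module):
    def __init__(self, n_features=12):
        super().__init__()
        blocks, ch_in = [], 1
        for ch_out in (16, 32, 64, 64):                       # four conv stages
            blocks += [
                nn.Conv1d(ch_in, ch_out, kernel_size=3, padding=1),
                nn.BatchNorm1d(ch_out),
                nn.ReLU(),
                nn.MaxPool1d(2, ceil_mode=True),
            ]
            ch_in = ch_out
        self.features = nn.Sequential(*blocks)                # length 12 -> 6 -> 3 -> 2 -> 1
        self.regressor = nn.Sequential(                       # three fully connected layers
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),                                 # single magnitude output
        )

    def forward(self, x):                                     # x: (batch, 1, 12)
        h = self.features(x).flatten(1)
        return self.regressor(h)

if __name__ == "__main__":
    model = DCNNM()
    params = torch.randn(4, 1, 12)        # 4 records x 12 P-wave parameters (toy values)
    magnitude = model(params)             # (4, 1) predicted magnitudes
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the abstract
    print(magnitude.shape)
```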


2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has been an important issue recently. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image deblurring problem, which assumes that the point spread function (PSF) is known and does not vary across the image. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes for unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing in a deblurred image. Our method has three key points. The first is that our network architecture is able to preserve both large and small features in the image. The second is that the training dataset is created to preserve the details. The third is that we extend the images to minimize the effects of large ringing on the image borders. In our experiments, we used three kinds of large PSFs and were able to observe high-precision results from our method both quantitatively and qualitatively.
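The third key point, extending the image before deconvolution so that border discontinuities do not produce large ringing, can be illustrated without the authors' CNN by a classical Wiener deconvolution applied to a reflection-padded image. In the NumPy sketch below, the padding margin, the noise-to-signal constant, and the box PSF are assumptions chosen for the toy example.

```python
# Border extension before deconvolution, illustrated with plain NumPy and a
# classical Wiener filter rather than the authors' CNN. Reflection padding hides
# the hard jump the periodic FFT model would otherwise see at the image borders,
# which is a major source of boundary ringing; the margin, noise-to-signal
# constant, and box PSF are assumptions for this toy example.
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2, margin=32):
    # 1) Extend the blurred image by reflection.
    padded = np.pad(blurred, margin, mode="reflect")
    h, w = padded.shape
    # 2) Zero-pad the PSF to the extended size and centre it at the origin.
    psf_pad = np.zeros((h, w))
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    # 3) Wiener filter in the frequency domain.
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(padded)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    restored = np.real(np.fft.ifft2(F_hat))
    # 4) Crop the extension away, returning an image the size of the input.
    return restored[margin:margin + blurred.shape[0], margin:margin + blurred.shape[1]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))
    k = np.ones((15, 15)) / 225.0                       # large box PSF (toy stand-in)
    # Blur by circular convolution for the toy example.
    kp = np.zeros_like(sharp)
    kp[:15, :15] = k
    kp = np.roll(kp, (-7, -7), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kp)))
    restored = wiener_deconvolve(blurred, k)
    print(restored.shape, float(np.abs(restored - sharp).mean()))
```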

