Contribution of Different Subbands of ECG in Sleep Apnea Detection Evaluated Using Filter Bank Decomposition and a Convolutional Neural Network

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 510
Author(s):  
Cheng-Yu Yeh ◽  
Hung-Yu Chang ◽  
Jiy-Yao Hu ◽  
Chun-Cheng Lin

A variety of feature extraction and classification approaches have been proposed using electrocardiogram (ECG) and ECG-derived signals to improve the performance of detecting apnea events and diagnosing patients with obstructive sleep apnea (OSA). The purpose of this study is to further evaluate whether reducing the lower-frequency P and T waves can increase the accuracy of apnea event detection. This study proposed a filter bank decomposition that splits the ECG signal into 15 subband signals, with a one-dimensional (1D) convolutional neural network (CNN) model cooperating independently with each subband to extract and classify the features of that subband signal. One-minute ECG signals obtained from the MIT PhysioNet Apnea-ECG database were used to train the CNN models and to test the accuracy of apnea event detection for the different subbands. The results show that using the newly selected subject-independent datasets avoids overestimating the accuracy of apnea event detection and allows the differences in accuracy between subbands to be tested. The 31.25–37.5 Hz band achieved 100% per-recording accuracy with 85.8% per-minute accuracy on the newly selected subject-independent datasets and is recommended as a promising ECG subband to pair with the proposed 1D CNN model for the diagnosis of OSA.
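The abstract does not describe the filter design used for the decomposition, so the sketch below stands in with zero-phase FFT masking that splits a signal into contiguous, equal-width subbands up to the Nyquist frequency. The function names, band edges, and sampling rate are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Ideal zero-phase band-pass filter via FFT bin masking."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return np.fft.irfft(X * mask, n=len(x))

def filter_bank(x, fs, n_bands):
    """Split the signal into n_bands contiguous, equal-width subbands."""
    edges = np.linspace(0.0, fs / 2, n_bands + 1)
    return [fft_bandpass(x, fs, lo, hi) for lo, hi in zip(edges[:-1], edges[1:])]
```

Because the illustrative bands are non-overlapping and jointly cover the spectrum, the subband signals sum back to the original (up to the Nyquist bin), which is a convenient sanity check for any decomposition of this kind.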

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 129586-129599 ◽  
Author(s):  
Sheikh Shanawaz Mostafa ◽  
Fabio Mendonca ◽  
Antonio G. Ravelo-Garcia ◽  
Gabriel Julia-Serda ◽  
Fernando Morgado-Dias

Author(s):  
Satoru Tsuiki ◽  
Takuya Nagaoka ◽  
Tatsuya Fukuda ◽  
Yuki Sakamoto ◽  
Fernanda R. Almeida ◽  
...  

Abstract
Purpose: In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx than non-OSA subjects. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA from 2-dimensional images.
Methods: A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea-hypopnea index > 30 events/h of sleep) or non-OSA (n = 522; apnea-hypopnea index < 5 events/h of sleep) at a single center for sleep disorders. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison.
Results: Sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was highest for the main region (0.92), with 0.89 for the full image, 0.70 for head only, and 0.75 for manual cephalometric analysis.
Conclusions: A deep convolutional neural network identified individuals with severe OSA with high accuracy. These findings encourage further research on AI-based image analysis for the triage of OSA.
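The sensitivity, specificity, and ROC-AUC figures above can be computed from raw model outputs. A minimal numpy sketch, with purely illustrative data and function names (not the study's code):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation.

    Equals the probability that a random positive case scores higher
    than a random negative case. Tied scores are broken arbitrarily.
    """
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = np.sum(y_true == 1)
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Unlike sensitivity and specificity, which depend on one decision threshold, the AUC summarizes ranking quality over all thresholds, which is why the study can report both per image set.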


SLEEP ◽  
2020 ◽  
Vol 43 (12) ◽  
Author(s):  
Sami Nikkonen ◽  
Henri Korkalainen ◽  
Samu Kainulainen ◽  
Sami Myllymaa ◽  
Akseli Leino ◽  
...  

Abstract
A common symptom of obstructive sleep apnea (OSA) is excessive daytime sleepiness (EDS). The gold-standard test for EDS is the multiple sleep latency test (MSLT). However, due to its high cost, the MSLT is not routinely conducted for OSA patients, and EDS is instead evaluated using sleep questionnaires. This is problematic, however, since sleep questionnaires are subjective and correlate poorly with the MSLT. Therefore, new objective tools are needed for reliable evaluation of EDS. The aim of this study was to test our hypothesis that EDS can be estimated with neural network analysis of previous-night polysomnographic signals. We trained a convolutional neural network (CNN) classifier using electroencephalography, electrooculography, and chin electromyography signals from 2,014 patients with suspected OSA. The CNN was trained to classify the patients into four sleepiness categories based on their mean sleep latency (MSL): severe (MSL < 5 min), moderate (5 ≤ MSL < 10), mild (10 ≤ MSL < 15), and normal (MSL ≥ 15). The CNN classified patients into the four sleepiness categories with an overall accuracy of 60.6% and a Cohen's kappa of 0.464. In a two-group classification scheme with sleepy (MSL < 10 min) and non-sleepy (MSL ≥ 10) patients, the CNN achieved an accuracy of 77.2%, with a sensitivity of 76.5% and a specificity of 77.9%. Our results show that the previous night's polysomnographic signals can be used for objective estimation of EDS with at least moderate accuracy. Since the diagnosis of OSA is currently confirmed by polysomnography, the classifier could be used simultaneously to obtain an objective estimate of daytime sleepiness with minimal extra workload.
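The four-class target and the chance-corrected agreement score used above are both simple to state precisely. A sketch using the MSL thresholds given in the abstract; the function names and the Cohen's kappa implementation are illustrative, not the authors' code:

```python
import numpy as np

def msl_category(msl):
    """Map mean sleep latency (minutes) to the abstract's four classes:
    0 = severe (<5), 1 = moderate (5-10), 2 = mild (10-15), 3 = normal (>=15)."""
    return int(np.digitize(msl, [5.0, 10.0, 15.0]))

def cohens_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    cm = np.zeros((n_classes, n_classes))
    np.add.at(cm, (y_true, y_pred), 1)      # confusion matrix
    n = cm.sum()
    p_obs = np.trace(cm) / n                # observed agreement
    p_exp = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is informative here because the four sleepiness classes are imbalanced: a classifier that always predicts the majority class can reach a decent raw accuracy but scores near zero on kappa.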


2020 ◽  
Vol 10 (6) ◽  
pp. 1265-1273
Author(s):  
Lili Chen ◽  
Huoyao Xu

Sleep apnea (SA) is a common sleep disorder that degrades sleep quality, so automatic SA detection has far-reaching implications for patients and physicians. In this paper, a novel approach based on a deep neural network (DNN) is developed for the automatic diagnosis of SA. To this end, five features are extracted from electrocardiogram (ECG) signals through wavelet decomposition and sample entropy. The deep neural network consists of a two-layer stacked sparse autoencoder (SSAE) network and one softmax layer; the softmax layer is added at the top of the SSAE network to diagnose SA, while the SSAE network learns more effective high-level features from the raw features. The experimental results reveal that the deep neural network achieves an accuracy of 96.66%, a sensitivity of 96.25%, and a specificity of 97%, outperforming comparison models including a support vector machine (SVM), random forest (RF), and extreme learning machine (ELM). These results indicate that the proposed method can be validly applied to automatic SA event detection.
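Sample entropy, one of the two feature families named above, measures the irregularity of a time series: it is the negative log of the conditional probability that sequences matching for m points also match for m + 1 points. A minimal numpy sketch; the abstract does not give the parameters used, so m and r here are the conventional defaults, assumed for illustration:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -log(A / B), where B counts pairs of length-m
    templates within Chebyshev tolerance r and A counts the same for
    length m + 1. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # conventional tolerance: 20% of the SD

    def count_matches(length):
        # All overlapping templates of the given length.
        templ = np.lib.stride_tricks.sliding_window_view(x, length)
        n = len(templ)
        # Pairwise Chebyshev (max-abs) distances between templates.
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        # Pairs within tolerance, excluding the diagonal (self-matches).
        return (np.sum(d <= r) - n) / 2

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A strictly periodic signal yields a value near zero (matches of length m almost always extend to m + 1), while white noise yields a much larger value, which is what makes SampEn useful for separating regular from disturbed physiological signals.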


2020 ◽  
Vol 10 (3) ◽  
pp. 976
Author(s):  
Rana N. Costandy ◽  
Safa M. Gasser ◽  
Mohamed S. El-Mahallawy ◽  
Mohamed W. Fakhr ◽  
Samir Y. Marzouk

Electrocardiogram (ECG) signal analysis is a critical task in diagnosing the presence of any cardiac disorder. There are limited studies on detecting P-waves in various atrial arrhythmias, such as atrial fibrillation (AFIB), atrial flutter, junctional rhythm, and other arrhythmias, owing to P-wave variability and absence in various cases. Thus, there is a growing need to develop an efficient automated algorithm that annotates a 2D printed version of P-waves in the well-known ECG signal databases for validation purposes. To our knowledge, no one has annotated P-waves in the MIT-BIH atrial fibrillation database. Therefore, it is a challenge both to manually annotate P-waves in the MIT-BIH AF database and to develop an automated algorithm that detects the absence and presence of different shapes of P-waves. In this paper, we present the manual annotation of P-waves in the well-known MIT-BIH AF database with the aid of a cardiologist. In addition, we provide an automatic P-wave segmentation for the same database using a fully convolutional neural network model (U-Net). This algorithm works on 2D imagery of printed ECG signals, as this type of imagery is the most commonly used in developing countries. The proposed automatic P-wave detection method obtained an accuracy and sensitivity of 98.56% and 98.78%, respectively, over the first 5 min of the second lead of the MIT-BIH AF database (a total of 8280 beats). Moreover, the proposed method is validated using the well-known automatically and manually annotated QT database (a total of 11,201 and 3194 automatically and manually annotated beats, respectively). This results in accuracies of 98.98% and 98.90%, and sensitivities of 98.97% and 97.24%, for the automatically and manually annotated QT databases, respectively. Thus, these results indicate that the proposed automatic method can be used for analyzing long printed ECG signals on mobile battery-driven devices using only images of the ECG signals, without the need for a cardiologist.
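The defining idea of the U-Net used above is an encoder-decoder with skip connections: encoder feature maps are concatenated onto the upsampled decoder path so fine spatial detail (needed to localize narrow P-waves) survives the downsampling. A toy numpy walk-through of one such level, with the learned convolutions omitted; shapes and names are illustrative, not the paper's architecture:

```python
import numpy as np

def maxpool2(x):
    """2x2 max-pooling over a (C, H, W) feature map (encoder downsampling)."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map (decoder)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_level(x):
    """One encoder/decoder level with a skip connection: downsample,
    upsample back, then concatenate the saved encoder features on the
    channel axis (doubling the channel count)."""
    skip = x                      # encoder features kept for the skip path
    down = maxpool2(x)            # encoder: halve spatial resolution
    up = upsample2(down)          # decoder: restore resolution
    return np.concatenate([skip, up], axis=0)
```

In the full network, each concatenation is followed by learned convolutions and the final layer produces a per-pixel P-wave/background mask; this sketch only shows why the skip path preserves the original resolution through the bottleneck.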

