Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network

2018
Vol 42 (6)
Author(s):
Erdenebayar Urtnasan
Jong-Uk Park
Eun-Yeon Joo
Kyoung-Joung Lee
Author(s):  
Satoru Tsuiki
Takuya Nagaoka
Tatsuya Fukuda
Yuki Sakamoto
Fernanda R. Almeida
...  

Abstract Purpose In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx than non-OSA subjects. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA from 2-dimensional images. Methods A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea hypopnea index > 30 events/h of sleep) or non-OSA (n = 522; apnea hypopnea index < 5 events/h of sleep) at a single center for sleep disorders. Three data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison. Results The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was highest for the main region (0.92), compared with 0.89 for the full image, 0.70 for head only, and 0.75 for the manual cephalometric analysis. Conclusions A deep convolutional neural network identified individuals with severe OSA with high accuracy. These findings encourage further research on the use of AI and imaging for the triage of OSA.
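
The classifier described in this abstract is a deep convolutional neural network that maps a lateral cephalometric radiograph to a severe-OSA/non-OSA prediction. The abstract does not report the architecture, so the sketch below is only a minimal illustration of this kind of model, assuming grayscale inputs resized to 224 × 224 and a hypothetical three-block convolutional stack in PyTorch; it is not the authors' network.

```python
# Minimal sketch (not the authors' architecture): a small convolutional network
# for binary severe-OSA vs. non-OSA classification of grayscale cephalometric
# radiographs resized to 224x224. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CephalometricCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 1),            # single logit; sigmoid gives the predictive score
        )

    def forward(self, x):                 # x: (batch, 1, 224, 224)
        return torch.sigmoid(self.classifier(self.features(x)))

model = CephalometricCNN()
dummy = torch.randn(4, 1, 224, 224)      # four hypothetical radiograph crops
scores = model(dummy)                    # values in (0, 1), one predictive score per image
print(scores.shape)                      # torch.Size([4, 1])
```

Here the sigmoid output plays the role of a predictive score for the severe-OSA class; training, data augmentation, and the three region-of-interest croppings (full image, main region, head only) are omitted from the sketch.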


IEEE Access
2020
Vol 8
pp. 129586-129599
Author(s):
Sheikh Shanawaz Mostafa
Fabio Mendonca
Antonio G. Ravelo-Garcia
Gabriel Julia-Serda
Fernando Morgado-Dias

Author(s):  
Fernando Vaquerizo-Villar
Daniel Alvarez
Leila Kheirandish-Gozal
Gonzalo Cesar Gutierrez-Tobal
Veronica Barroso-Garcia
...  

PLoS ONE
2021
Vol 16 (4)
pp. e0250618
Author(s):  
S. M. Isuru Niroshana
Xin Zhu
Keijiro Nakamura
Wenxi Chen

Obstructive sleep apnea (OSA) is a common chronic sleep disorder that disrupts breathing during sleep and is associated with many other medical conditions, including hypertension, coronary heart disease, and depression. Clinically, the standard for diagnosing OSA is nocturnal polysomnography (PSG). However, PSG requires expert human intervention and considerable time, which limits the availability of OSA diagnosis in public health sectors. Therefore, electrocardiogram (ECG)-based methods for OSA detection have been proposed to automate detection and reduce the discomfort associated with polysomnography. So far, most of the proposed approaches rely on feature engineering, which calls for advanced expert knowledge and experience. This paper proposes a novel fused-image-based technique that detects OSA using only a single-lead ECG signal. In the proposed approach, a convolutional neural network extracts features automatically from images created from one-minute ECG segments. The proposed network comprises 37 layers, including four residual blocks, a dense layer, a dropout layer, and a soft-max layer. In this study, three time–frequency representations, namely the scalogram, the spectrogram, and the Wigner–Ville distribution, were used to investigate the effectiveness of the fused-image-based approach. We found that blending scalogram and spectrogram images further improved the system's discriminative characteristics. Seventy ECG recordings from the PhysioNet Apnea-ECG database were used to train and evaluate the proposed model with 10-fold cross-validation. The results demonstrate that the proposed classifier can perform OSA detection with an average accuracy, recall, and specificity of 92.4%, 92.3%, and 92.6%, respectively, for the fused spectral images.
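
To make the fused-image idea concrete, the sketch below computes a spectrogram and a Morlet-wavelet scalogram from the same one-minute ECG segment and blends them into a single fixed-size image of the kind a CNN could ingest. The sampling rate (100 Hz), window sizes, wavelet scales, 128 × 128 output size, and the 50/50 blend are illustrative assumptions, not parameters reported by the paper.

```python
# Minimal sketch of a fused time-frequency image from one minute of ECG,
# under assumed parameters (100 Hz sampling, 128x128 target image).
import numpy as np
import pywt
from scipy.signal import spectrogram
from scipy.ndimage import zoom

fs = 100                                   # assumed sampling rate (Hz)
ecg = np.random.randn(60 * fs)             # placeholder one-minute ECG segment

# Spectrogram: short-time Fourier transform power, log-compressed
freqs, times, Sxx = spectrogram(ecg, fs=fs, nperseg=128, noverlap=64)
spec = np.log1p(Sxx)

# Scalogram: continuous wavelet transform with a Morlet wavelet
scales = np.arange(1, 129)
coef, _ = pywt.cwt(ecg, scales, "morl", sampling_period=1 / fs)
scal = np.abs(coef)

def to_image(x, size=128):
    """Resize a 2-D time-frequency map to size x size and scale it to [0, 1]."""
    x = zoom(x, (size / x.shape[0], size / x.shape[1]))
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

# Simple 50/50 blend of the two normalized representations
fused = 0.5 * to_image(spec) + 0.5 * to_image(scal)
print(fused.shape)                         # (128, 128), ready to feed to a CNN
```

Stacking the two maps as separate image channels would be an equally plausible reading of "fusing"; the paper's exact fusion rule and preprocessing are not specified in this abstract.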


SLEEP
2020
Vol 43 (Supplement_1)
pp. A227-A227
Author(s):  
S Tsuiki
T Nagaoka
T Fukuda
Y Sakamoto
F R Almeida
...  

Abstract Introduction Lateral cephalometric radiography is a simple way to provide craniofacial soft/hard tissue profiles specific to patients with obstructive sleep apnea (OSA) and may thus offer diagnostic information on the disease. We hypothesized that a machine learning technology, a deep convolutional neural network (DCNN), could make it possible to detect OSA from lateral cephalometric radiographs alone, without the need for either large amounts of subjective/laboratory data or skilled analyses. Methods In this diagnostic study, a DCNN was developed (n=1,258) and tested (n=131) using data from 1,389 lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n=867; apnea hypopnea index > 30 events/hour) or non-OSA (n=522; apnea hypopnea index < 5 events/hour) at a single center for sleep disorders from March 2006 to February 2017. Three data sets were prepared by changing the area of interest within a single image: the original image without any modification (Full Image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (Main Region), and an image containing part of the occipital region (upper left corner of the image; Head Only). A radiologist and an orthodontist also performed a manual cephalometric analysis of the Full Image for comparison. Observers were blinded to the patient groupings. Data analysis was performed from April 2018 to August 2019. When the predictive score obtained from the DCNN analysis exceeded the threshold (0.50), the patient was judged to have OSA. The primary outcome was diagnostic accuracy in terms of the area under the receiver-operating characteristic curve. Results The sensitivity/specificity was 0.87/0.82 for Full Image, 0.88/0.75 for Main Region, 0.71/0.63 for Head Only, and 0.54/0.80 for the manual analysis. The area under the curve was highest for Main Region (0.92), compared with 0.89 for Full Image, 0.70 for Head Only, and 0.75 for the manual analysis. Conclusion A DCNN identified individuals with OSA with high accuracy. This is a useful approach that does not require laborious analyses in a primary care setting or in remote areas where an initial specialized OSA diagnosis is not feasible. Support This study was supported in part by the Japan Society for the Promotion of Science (grant numbers 17K11793 and 19K10236).
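
The evaluation described above reduces to thresholding the DCNN's predictive score at 0.50 to assign OSA/non-OSA and then summarizing the decisions with sensitivity, specificity, and the area under the ROC curve. A minimal sketch of that computation with scikit-learn follows; the labels and scores are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: threshold DCNN predictive scores at 0.50, then compute
# sensitivity, specificity, and ROC-AUC. Labels/scores are hypothetical.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])                            # 1 = severe OSA, 0 = non-OSA
y_score = np.array([0.91, 0.72, 0.38, 0.65, 0.12, 0.55, 0.80, 0.25])   # DCNN predictive scores

y_pred = (y_score >= 0.50).astype(int)                                 # decision threshold of 0.50
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)                  # true-positive rate at the 0.50 threshold
specificity = tn / (tn + fp)                  # true-negative rate at the 0.50 threshold
auc = roc_auc_score(y_true, y_score)          # threshold-independent primary outcome

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  AUC={auc:.2f}")
```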

