Sleep Apnea Event Detection from Sub-frame Based Feature Variation in EEG Signal Using Deep Convolutional Neural Network

Author(s):  
Tanvir Mahmud ◽  
Ishtiaque Ahmed Khan ◽  
Talha Ibn Mahmud ◽  
Shaikh Anowarul Fattah ◽  
Wei-Ping Zhu ◽  
...  
Author(s):  
Satoru Tsuiki ◽  
Takuya Nagaoka ◽  
Tatsuya Fukuda ◽  
Yuki Sakamoto ◽  
Fernanda R. Almeida ◽  
...  

Abstract
Purpose: In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx than non-OSA patients. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA based on 2-dimensional images.
Methods: A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea hypopnea index > 30 events/h sleep) or non-OSA (n = 522; apnea hypopnea index < 5 events/h sleep) at a single center for sleep disorders. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison.
Results: The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was 0.92 for the main region (the highest), 0.89 for the full image, 0.70 for head only, and 0.75 for the manual cephalometric analysis.
Conclusions: A deep convolutional neural network identified individuals with severe OSA with high accuracy. These results encourage further research on AI-based image analysis for the triage of OSA.
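As a rough illustration of this kind of approach, the sketch below shows a small binary-classification CNN for grayscale radiographs in PyTorch. The architecture, input size (224×224), and layer widths are assumptions chosen for illustration, not the network described in the paper.

```python
# Minimal sketch (not the authors' exact architecture): a small binary-classification
# CNN for grayscale lateral cephalometric radiographs, assuming inputs resized to
# 224x224 and labels 1 = severe OSA, 0 = non-OSA.
import torch
import torch.nn as nn

class CephalometricCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # single logit: severe OSA vs. non-OSA
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CephalometricCNN()
x = torch.randn(8, 1, 224, 224)               # batch of 8 hypothetical radiographs
logits = model(x)
probs = torch.sigmoid(logits)                 # probability of severe OSA per image
loss = nn.BCEWithLogitsLoss()(logits, torch.ones(8, 1))  # dummy labels for illustration
```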


2019 ◽  
Vol 9 (11) ◽  
pp. 2302 ◽  
Author(s):  
Inkyu Choi ◽  
Soo Hyun Bae ◽  
Nam Soo Kim

Audio event detection (AED) is the task of recognizing the types of audio events in an audio stream and estimating their temporal positions. AED is typically based on fully supervised approaches, requiring strong labels that include both the presence and the temporal position of each audio event. However, fully supervised datasets are not easily available due to the heavy cost of human annotation. Recently, weakly supervised approaches for AED have been proposed, utilizing large-scale datasets with weak labels that include only the occurrence of events in recordings. In this work, we introduce a deep convolutional neural network (CNN) model called DSNet, based on densely connected convolutional networks (DenseNets) and squeeze-and-excitation networks (SENets), for weakly supervised AED. DSNet alleviates the vanishing-gradient problem, strengthens feature propagation, and models interdependencies between channels. We also propose a structured prediction method for weakly supervised AED: a recurrent neural network (RNN) based framework with a prediction smoothness cost function that considers long-term contextual information with reduced error propagation. In post-processing, conditional random fields (CRFs) are applied to account for the dependency between segments and to delineate the boundaries of audio events precisely. We evaluated the proposed models on the DCASE 2017 task 4 dataset and obtained state-of-the-art results on both the audio tagging and event detection tasks.
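The channel re-weighting that DSNet borrows from SENets can be illustrated with a generic squeeze-and-excitation block. The PyTorch sketch below is a standard SE layer with an assumed reduction ratio and feature-map shape, not the paper's exact DSNet configuration.

```python
# Hedged sketch: a generic squeeze-and-excitation (SE) block of the kind DSNet combines
# with densely connected convolutions. Spatial information is squeezed into a per-channel
# descriptor, which is then used to re-weight each channel of the feature map.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)            # global average pool per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                    # squeeze: (B, C)
        w = self.excite(w).view(b, c, 1, 1)               # excitation: channel weights in [0, 1]
        return x * w                                      # re-scale the feature maps

# Example: re-weight the channels of a time-frequency feature map (hypothetical shape)
features = torch.randn(4, 64, 128, 250)   # (batch, channels, mel bins, frames)
out = SEBlock(64)(features)
```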


2019 ◽  
Vol 13 (4) ◽  
pp. 261-266 ◽  
Author(s):  
Hnin Thiri Chaw ◽  
Sinchai Kamolphiwong ◽  
Krongthong Wongsritrang

Sleep apnea is a breathing disorder in which airflow ceases for at least 10 seconds during sleep. The proposed model uses a type 4 sleep study, which emphasizes portability and a reduced number of recorded signals. The main limitations of type 1 full-night polysomnography are that it is time consuming and requires dedicated space for sleep recording, such as a sleep lab, compared with type 4 sleep studies. Detecting sleep apnea with a deep convolutional neural network based on the SpO2 sensor is a valid, portable, and cost-effective alternative to full polysomnography. A total of 190,000 samples from the SpO2 sensors of 50 patients were used in this study. The deep convolutional neural network achieved an overall sleep apnea detection accuracy of 91.3085% with a cross-entropy loss of 2.3.
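A minimal sketch of such an SpO2-based classifier is shown below in PyTorch, assuming fixed-length windows of 60 samples and two classes (apnea / normal); the window length, layer sizes, and labels are hypothetical and not the configuration reported in the study.

```python
# Minimal sketch: a 1D CNN over fixed-length SpO2 segments trained with a
# cross-entropy cost, as described in the abstract. Window length and layer
# sizes are assumptions for illustration.
import torch
import torch.nn as nn

class SpO2ApneaCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, window_length)
        return self.net(x)

model = SpO2ApneaCNN()
segments = torch.randn(16, 1, 60)                       # 16 hypothetical SpO2 windows
labels = torch.randint(0, 2, (16,))                     # dummy apnea / normal labels
loss = nn.CrossEntropyLoss()(model(segments), labels)   # cross-entropy cost function
```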

