SalientSleepNet: Multimodal Salient Wave Detection Network for Sleep Staging

Author(s):  
Ziyu Jia ◽  
Youfang Lin ◽  
Jing Wang ◽  
Xuehui Wang ◽  
Peiyi Xie ◽  
...  

Sleep staging is fundamental for sleep assessment and disease diagnosis. Although previous attempts to classify sleep stages have achieved high classification performance, several challenges remain open: 1) How to effectively extract salient waves in multimodal sleep data; 2) How to capture the multi-scale transition rules among sleep stages; 3) How to adaptively capture the key role of a specific modality for sleep staging. To address these challenges, we propose SalientSleepNet, a multimodal salient wave detection network for sleep staging. Specifically, SalientSleepNet is a temporal fully convolutional network based on the $U^2$-Net architecture originally proposed for salient object detection in computer vision. It is mainly composed of two independent $U^2$-like streams that extract salient features from the respective modalities. Meanwhile, a multi-scale extraction module is designed to capture multi-scale transition rules among sleep stages. In addition, a multimodal attention module is proposed to adaptively capture valuable information from multimodal data for a specific sleep stage. Experiments on two datasets demonstrate that SalientSleepNet outperforms the state-of-the-art baselines. It is worth noting that this model has the fewest parameters of the existing deep neural network models.
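As a rough illustration of the multimodal attention idea described above, here is a minimal PyTorch sketch (not the authors' implementation; the module name, gating design, and tensor shapes are assumptions) that adaptively reweights two modality streams, e.g. EEG and EOG features, before fusion.

```python
# Minimal sketch (assumed design, not the SalientSleepNet code): a squeeze-and-excitation
# style multimodal attention that reweights EEG and EOG feature streams before fusion.
import torch
import torch.nn as nn

class MultimodalAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # One gating branch over the concatenated modality features.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, (2 * channels) // reduction),
            nn.ReLU(inplace=True),
            nn.Linear((2 * channels) // reduction, 2 * channels),
            nn.Sigmoid(),
        )

    def forward(self, eeg_feat: torch.Tensor, eog_feat: torch.Tensor) -> torch.Tensor:
        # eeg_feat, eog_feat: (batch, channels, time)
        fused = torch.cat([eeg_feat, eog_feat], dim=1)   # (B, 2C, T)
        weights = self.gate(fused.mean(dim=-1))          # squeeze over time -> (B, 2C)
        return fused * weights.unsqueeze(-1)             # per-modality-channel reweighting

# Example: two 64-channel feature streams over one 30 s epoch sampled at 100 Hz.
att = MultimodalAttention(channels=64)
out = att(torch.randn(8, 64, 3000), torch.randn(8, 64, 3000))  # -> (8, 128, 3000)
```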

2010 ◽  
Vol 49 (05) ◽  
pp. 467-472 ◽  
Author(s):  
V. C. Helland ◽  
A. Gapelyuk ◽  
A. Suhrbier ◽  
M. Riedl ◽  
T. Penzel ◽  
...  

Summary Objectives: Scoring sleep visually based on polysomnography is an important but time-consuming element of sleep medicine. Although computer software assists human experts in assigning sleep stages to polysomnogram epochs, its performance is usually insufficient. This study evaluates the possibility of fully automating sleep staging, taking into account the reliability of the sleep stages provided by human expert sleep scorers. Methods: We obtain features from the EEG, ECG, and respiratory signals of polysomnograms from ten healthy subjects. Using the sleep stages provided by three human experts, we evaluate the performance of linear discriminant analysis on the entire polysomnogram and on only those epochs where the three experts agree in their sleep stage scoring. Results: We show that in polysomnogram intervals to which all three scorers assign the same sleep stage, our algorithm achieves 90% accuracy. This high rate of agreement with the human experts is accomplished with only a small set of three frequency features from the EEG. Including ECG and respiration features increases the performance to 93%. In contrast, on intervals of ambiguous sleep stage, the classification obtained from our algorithm agrees with the human consensus scorer in approximately 61% of cases. Conclusions: These findings suggest that machine classification is highly consistent with human sleep staging, and that errors in the algorithm's assignments reflect a lack of well-defined criteria for human experts to judge certain polysomnogram epochs rather than an insufficiency of the computational procedures.
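A minimal scikit-learn sketch of the evaluation protocol described above, on stand-in data: fit linear discriminant analysis on a few spectral features and score only the epochs where all three scorers agree. The feature names and random arrays are illustrative, not the study's data or code.

```python
# Illustrative sketch (stand-in data, not the study's pipeline): LDA on a few spectral
# features, evaluated only on epochs where all three human scorers agree.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs = 3000
X = rng.normal(size=(n_epochs, 3))                 # e.g. three EEG frequency features per epoch
scores = rng.integers(0, 5, size=(n_epochs, 3))    # stage labels from three scorers (stand-in)

consensus = (scores[:, 0] == scores[:, 1]) & (scores[:, 1] == scores[:, 2])
X_c, y_c = X[consensus], scores[consensus, 0]      # keep only unanimously scored epochs

clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X_c, y_c, cv=5).mean())  # accuracy restricted to consensus epochs
```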


SLEEP ◽  
2020 ◽  
Vol 43 (Supplement_1) ◽  
pp. A171-A171
Author(s):  
S Æ Jónsson ◽  
E Gunnlaugsson ◽  
E Finssonn ◽  
D L Loftsdóttir ◽  
G H Ólafsdóttir ◽  
...  

Abstract Introduction Sleep stage classifications are of central importance when diagnosing various sleep-related diseases. Performing a full PSG recording can be time-consuming and expensive, and often requires an overnight stay at a sleep clinic. Furthermore, the manual sleep staging process is tedious and subject to scorer variability. Here we present an end-to-end deep learning approach to robustly classify sleep stages from Self Applied Somnography (SAS) studies with frontal EEG and EOG signals. This setup allows patients to self-administer EEG and EOG leads in a home sleep study, which reduces cost and is more convenient for the patients. However, self-administration of the leads increases the risk of loose electrodes, which the algorithm must be robust to. The model structure was inspired by ResNet (He, Zhang, Ren, Sun, 2015), which has been highly successful in image recognition tasks. The ResTNet is comprised of the characteristic Residual blocks with an added Temporal component. Methods The ResTNet classifies sleep stages from the raw signals using convolutional neural network (CNN) layers, which avoids manual feature extraction, residual blocks, and a gated recurrent unit (GRU). This significantly reduces sleep stage prediction time and allows the model to learn more complex relations as the size of the training data increases. The model was developed and validated on over 400 manually scored sleep studies using the novel SAS setup. In developing the model, we used data augmentation techniques to simulate loose electrodes and distorted signals to increase model robustness with regards to missing signals and low quality data. Results The study shows that applying the robust ResTNet model to SAS studies gives accuracy > 0.80 and F1-score > 0.80. It outperforms our previous model which used hand-crafted features and achieves similar performance to a human scorer. Conclusion The ResTNet is fast, gives accurate predictions, and is robust to loose electrodes. The end-to-end model furthermore promises better performance with more data. Combined with the simplicity of the SAS setup, it is an attractive option for large-scale sleep studies. Support This work was supported by the Icelandic Centre for Research RANNÍS (175256-0611).
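To make the residual-plus-temporal structure concrete, the following PyTorch sketch (assumed architecture, not the ResTNet code; layer sizes and names are hypothetical) stacks 1-D residual blocks over raw EEG/EOG and adds a GRU across epochs for temporal context.

```python
# Minimal sketch (not the authors' ResTNet): 1-D residual blocks over raw signals,
# followed by a GRU that adds temporal context across successive 30 s epochs.
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=7, padding=3),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=7, padding=3),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))            # identity shortcut

class TinyResTNet(nn.Module):
    def __init__(self, in_ch: int = 3, feat: int = 32, n_stages: int = 5):
        super().__init__()
        self.stem = nn.Conv1d(in_ch, feat, kernel_size=7, stride=4, padding=3)
        self.res = nn.Sequential(ResidualBlock1d(feat), ResidualBlock1d(feat))
        self.gru = nn.GRU(feat, feat, batch_first=True)
        self.head = nn.Linear(feat, n_stages)

    def forward(self, x):                               # x: (batch, epochs, channels, samples)
        b, e, c, t = x.shape
        z = self.res(self.stem(x.reshape(b * e, c, t))).mean(dim=-1)  # per-epoch features
        z, _ = self.gru(z.reshape(b, e, -1))            # temporal context across the night
        return self.head(z)                             # per-epoch stage logits

logits = TinyResTNet()(torch.randn(2, 20, 3, 3000))     # 2 nights x 20 epochs -> (2, 20, 5)
```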


SLEEP ◽  
2020 ◽  
Vol 43 (Supplement_1) ◽  
pp. A460-A461
Author(s):  
E P Pollet ◽  
D P Pollet ◽  
B Long ◽  
A A Qutub

Abstract Introduction Fitness-based wearables and other emerging sensor technologies have the potential to track sleep across large populations longitudinally in at-home environments. To understand how these devices can inform research studies, the limitations of available trackers need to be compared to traditional polysomnography (PSG). Here we assessed discrepancies in sleep staging between activity trackers and PSG in subjects with various sleep disorders. Methods Twelve subjects (age 41-78, 7 female, 5 male) wore a Fitbit Charge 3 while undergoing a scheduled sleep study. Six subjects had previously been diagnosed with a sleep disorder (5 OSA, 1 CSA). Four subjects used CPAP throughout the night, 2 had a split night (CPAP during the second half of the night), and 6 had a PSG only. Activity tracker staging was compared to the staging of two RPSGTs. Results The activity tracker detected sleep in eight of the 12 subjects, whose staging was compared across sleep stages to the PSG (7 female, 1 male, ages 41-78, AHI 0.3-87, RDI 0.5-94.4, sleep efficiency 74% +/- 18%, 4 PSG, 1 split, 3 CPAP). The activity tracker matched either scoring technologist's staging 52% (+/- 13%) of the time. The average difference between the scoring technologists' and the activity tracker's staging was 16 +/- 15 minutes for sleep onset (SO) and 43.5 +/- 44 minutes for wake after sleep onset. Sensitivity, specificity, and balanced accuracy were found for each sleep stage, respectively: Wake: 0.45+/-0.27, 0.97+/-0.03, 0.71+/-0.12; REM: 0.41+/-0.30, 0.90+/-0.06, 0.60+/-0.28; Light: 0.71+/-0.09, 0.58+/-0.19, 0.65+/-0.10; Deep: 0.63+/-0.52, 0.88+/-0.05, 0.59+/-0.49. Conclusion In this study of 12 subjects seen at a sleep clinic for suspected sleep disorders, the activity tracker performed best on wake, REM, and deep sleep specificity (>=88%), while it lacked sensitivity in the REM and wake stages (<=45%). The tracker did not detect sleep in 4 subjects who had elevated AHI or low sleep efficiency. Further analysis can identify whether discrepancies between the Fitbit and PSG can be predicted by distinct patterns in sleep staging and/or identify subject exclusion criteria for activity tracking studies. Support This project is ongoing with the support of Academy Diagnostics Sleep and EEG Center and staff.
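The per-stage metrics reported above are one-vs-rest comparisons of tracker staging against PSG. A small NumPy sketch of that computation (illustrative labels, not the study's data) is shown here.

```python
# Sketch of the per-stage comparison described above: sensitivity, specificity, and
# balanced accuracy of tracker staging against PSG, one-vs-rest for each stage.
import numpy as np

def stage_metrics(psg: np.ndarray, tracker: np.ndarray, stage: str):
    tp = np.sum((psg == stage) & (tracker == stage))
    fn = np.sum((psg == stage) & (tracker != stage))
    tn = np.sum((psg != stage) & (tracker != stage))
    fp = np.sum((psg != stage) & (tracker == stage))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec, (sens + spec) / 2                # balanced accuracy

psg     = np.array(["Wake", "Light", "Light", "Deep", "REM", "REM", "Light", "Wake"])
tracker = np.array(["Wake", "Light", "Deep",  "Deep", "Light", "REM", "Light", "Light"])
for s in ["Wake", "REM", "Light", "Deep"]:
    print(s, stage_metrics(psg, tracker, s))
```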


Author(s):  
Ziyu Jia ◽  
Youfang Lin ◽  
Jing Wang ◽  
Ronghao Zhou ◽  
Xiaojun Ning ◽  
...  

Sleep stage classification is essential for sleep assessment and disease diagnosis. However, how to effectively utilize brain spatial features and transition information among sleep stages remains challenging. In particular, owing to the limited knowledge of the human brain, predefining a suitable spatial brain connection structure for sleep stage classification remains an open question. In this paper, we propose a novel deep graph neural network, named GraphSleepNet, for automatic sleep stage classification. The main advantage of GraphSleepNet is that it adaptively learns the intrinsic connections among different electroencephalogram (EEG) channels, represented by an adjacency matrix, thereby best serving the spatial-temporal graph convolution network (ST-GCN) for sleep stage classification. The ST-GCN consists of graph convolutions for extracting spatial features and temporal convolutions for capturing the transition rules among sleep stages. Experiments on the Montreal Archive of Sleep Studies (MASS) dataset demonstrate that GraphSleepNet outperforms the state-of-the-art baselines.
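The core of the adaptive connection learning is a graph convolution whose adjacency matrix over EEG channels is itself a trainable parameter. A minimal PyTorch sketch of that idea follows (assumed layer design, not the released GraphSleepNet code).

```python
# Illustrative sketch (not the GraphSleepNet implementation): a graph convolution layer
# with a learnable EEG-channel adjacency matrix trained jointly with the classifier.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, n_channels: int, in_feat: int, out_feat: int):
        super().__init__()
        # Learnable adjacency among EEG channels, initialised near the identity.
        self.adj = nn.Parameter(torch.eye(n_channels) + 0.01 * torch.randn(n_channels, n_channels))
        self.lin = nn.Linear(in_feat, out_feat)

    def forward(self, x):                               # x: (batch, n_channels, in_feat)
        a = torch.softmax(self.adj, dim=-1)             # row-normalised soft adjacency
        return torch.relu(self.lin(a @ x))              # propagate features over the learned graph

gcn = AdaptiveGraphConv(n_channels=20, in_feat=9, out_feat=16)
out = gcn(torch.randn(4, 20, 9))                        # -> (4, 20, 16) spatially mixed features
```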


2011 ◽  
Vol 138-139 ◽  
pp. 1096-1101
Author(s):  
Xue Li Shen ◽  
Ying Le Fan

Research on automatic sleep staging based on EEG signals is significant for the objective evaluation of sleep quality. In this paper, an improved Hilbert-Huang transform (HHT) method was applied to the time-frequency analysis of non-stationary EEG signals for sleep staging. To address the frequency-overlapping problem of the intrinsic mode functions obtained from the traditional HHT, the wavelet packet transform was introduced to refine the EEG bandwidth before empirical mode decomposition was performed. This method effectively improved the time-frequency resolution. The intrinsic mode functions and their marginal spectra were then calculated, and six common spectral energies (or spectral energy ratios) were selected as characteristic parameters. Finally, a probabilistic nearest neighbor method for statistical pattern recognition was applied to make the final decision. The experimental data were from the Sleep-EDF database of MIT-BIH. The classification results showed that the automatic sleep staging decisions based on this method agreed broadly with the manual staging results and were clearly better than those obtained from the traditional HHT. Therefore, the method in this paper can be applied to extract features of sleep stages and provides a necessary basis for automatic sleep staging.
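For readers unfamiliar with the Hilbert marginal spectrum used above, the sketch below computes one for a single narrow-band component; a bandpass-filtered signal stands in for an intrinsic mode function, and the band limits and bin widths are illustrative rather than the paper's settings.

```python
# Rough sketch of a Hilbert marginal spectrum for one narrow-band component
# (a bandpass-filtered signal stands in for an IMF; all parameters are illustrative).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0
t = np.arange(0, 30, 1 / fs)                                       # one 30 s epoch
x = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.default_rng(6).normal(size=t.size)

b, a = butter(4, [0.5 / (fs / 2), 4.0 / (fs / 2)], btype="band")   # delta-band component
imf_like = filtfilt(b, a, x)

analytic = hilbert(imf_like)
amp = np.abs(analytic)                                             # instantaneous amplitude
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

# Marginal spectrum: accumulate amplitude into frequency bins, then take a band energy ratio.
bins = np.arange(0.0, 10.5, 0.5)
marginal, _ = np.histogram(inst_freq, bins=bins, weights=amp[:-1])
delta_ratio = marginal[(bins[:-1] >= 0.5) & (bins[:-1] < 4.0)].sum() / marginal.sum()
print(round(delta_ratio, 3))
```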


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Xiangwei Zheng ◽  
Xiaochun Yin ◽  
Xuexiao Shao ◽  
Yalin Li ◽  
Xiaomei Yu

Sleep-related diseases seriously affect patients' quality of life. Sleep stage classification (or sleep staging), which studies the human sleep process and classifies the sleep stages, is an important reference for the diagnosis and study of sleep disorders. Many scholars have conducted a series of sleep staging studies, but the correlation between different sleep stages and the accuracy of classification still need to be improved. Therefore, this paper proposes an automatic sleep stage classification method based on EEG. An improved empirical mode decomposition and K-means experimental model is constructed, and the concept of a "frequency-domain correlation coefficient" is defined. In the feature extraction process, the feature vector with the best correlation in the time-frequency domain is selected. Extraction and classification of EEG features are realized based on the K-means clustering algorithm. Experimental results demonstrate that the classification accuracy is significantly improved and that the proposed algorithm has a positive impact on sleep staging compared with other algorithms.
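A minimal sketch of the clustering-based classification step (stand-in data and feature dimensions, not the paper's pipeline): cluster epoch feature vectors with K-means and map each cluster to the majority sleep stage it contains.

```python
# Minimal sketch (illustrative, not the paper's method): K-means over epoch feature vectors,
# with each cluster mapped to the majority reference sleep stage among its members.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 6))          # e.g. six time-frequency features per 30 s epoch
stages = rng.integers(0, 5, size=500)         # reference stages (W, N1, N2, N3, REM), stand-in

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
cluster_to_stage = {c: np.bincount(stages[km.labels_ == c]).argmax() for c in range(5)}
predicted = np.array([cluster_to_stage[c] for c in km.labels_])
print("agreement with reference:", (predicted == stages).mean())
```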


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Yu Zhang ◽  
Bei Wang ◽  
Jin Jing ◽  
Jian Zhang ◽  
Junzhong Zou ◽  
...  

Feature extraction from EEG (electroencephalogram) signals is an essential part of sleep staging. In this study, multidomain feature extraction was investigated based on time-domain analysis, nonlinear analysis, and frequency-domain analysis. Unlike traditional time-domain feature calculation, a sequence merging method was developed as a preprocessing procedure. The objective is to eliminate clutter waveforms and highlight characteristic waveforms for further analysis. The counts of characteristic activities were extracted as time-domain features. The contributions of features from different domains to the sleep stages were compared, and their effectiveness was further analyzed by automatic sleep stage classification and compared with visual inspection. Overnight clinical sleep EEG recordings of three patients after Continuous Positive Airway Pressure (CPAP) treatment were tested. The obtained results showed that the developed method can highlight characteristic activity that is useful for both automatic sleep staging and visual inspection. Furthermore, it can serve as a training tool for better understanding the appearance of characteristic waveforms in raw sleep EEG, which is mixed and complex in the time domain.
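As a concrete illustration of multidomain features per epoch, the sketch below computes one time-domain, one frequency-domain, and one simple nonlinear measure; the specific features (variance, relative delta power, Petrosian fractal dimension) are assumed examples, not the study's feature set.

```python
# Sketch of multidomain epoch features (illustrative choices, not the paper's exact set):
# a time-domain statistic, a spectral band power, and a simple nonlinear measure.
import numpy as np
from scipy.signal import welch

def epoch_features(x: np.ndarray, fs: float = 100.0) -> np.ndarray:
    # Time domain: variance of the epoch.
    var = np.var(x)
    # Frequency domain: relative delta-band (0.5-4 Hz) power from the Welch spectrum.
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 4 * int(fs)))
    delta = pxx[(f >= 0.5) & (f < 4.0)].sum() / pxx.sum()
    # Nonlinear: Petrosian fractal dimension from sign changes of the first derivative.
    n = len(x)
    n_delta = np.sum(np.diff(np.sign(np.diff(x))) != 0)
    pfd = np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))
    return np.array([var, delta, pfd])

print(epoch_features(np.random.default_rng(2).normal(size=3000)))  # one 30 s epoch at 100 Hz
```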


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Niranjan Sridhar ◽  
Ali Shoeb ◽  
Philip Stephens ◽  
Alaa Kharbouch ◽  
David Ben Shimol ◽  
...  

Abstract Clinical sleep evaluations currently require multimodal data collection and manual review by human experts, making them expensive and unsuitable for longer term studies. Sleep staging using cardiac rhythm is an active area of research because it can be measured much more easily using a wide variety of both medical and consumer-grade devices. In this study, we applied deep learning methods to create an algorithm for automated sleep stage scoring using the instantaneous heart rate (IHR) time series extracted from the electrocardiogram (ECG). We trained and validated an algorithm on over 10,000 nights of data from the Sleep Heart Health Study (SHHS) and Multi-Ethnic Study of Atherosclerosis (MESA). The algorithm has an overall performance of 0.77 accuracy and 0.66 kappa against the reference stages on a held-out portion of the SHHS dataset for classifying every 30 s of sleep into four classes: wake, light sleep, deep sleep, and rapid eye movement (REM). Moreover, we demonstrate that the algorithm generalizes well to an independent dataset of 993 subjects labeled by American Academy of Sleep Medicine (AASM) licensed clinical staff at Massachusetts General Hospital that was not used for training or validation. Finally, we demonstrate that the stages predicted by our algorithm can reproduce previous clinical studies correlating sleep stages with comorbidities such as sleep apnea and hypertension as well as demographics such as age and gender.
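The IHR time series used as the model input is derived from R-peak times in the ECG. The sketch below shows one common way to build such a series (assumed preprocessing, not the authors' code; the output rate and interpolation choice are illustrative).

```python
# Minimal sketch of IHR preprocessing (assumed, not the authors' pipeline): convert R-peak
# times from the ECG into an evenly sampled instantaneous heart-rate series.
import numpy as np

def instantaneous_heart_rate(r_peak_times_s: np.ndarray, fs_out: float = 2.0) -> np.ndarray:
    rr = np.diff(r_peak_times_s)                      # RR intervals in seconds
    ihr = 60.0 / rr                                   # beats per minute for each interval
    t_mid = r_peak_times_s[:-1] + rr / 2              # timestamp of each interval
    t_uniform = np.arange(t_mid[0], t_mid[-1], 1.0 / fs_out)
    return np.interp(t_uniform, t_mid, ihr)           # uniformly resampled IHR series

# Example: a slightly jittered ~60 bpm rhythm over roughly five minutes.
r_peaks = np.cumsum(1.0 + 0.05 * np.random.default_rng(3).normal(size=300))
print(instantaneous_heart_rate(r_peaks).shape)
```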


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1988
Author(s):  
Hui Huang ◽  
Jianhai Zhang ◽  
Li Zhu ◽  
Jiajia Tang ◽  
Guang Lin ◽  
...  

Sleep staging is important in sleep research since it is the basis for sleep evaluation and disease diagnosis. Related works have achieved many desirable outcomes. However, most current studies use time-domain or frequency-domain measures as classification features from single or very few channels, which capture only local features and ignore the global information exchanged between different brain regions. Meanwhile, brain functional connectivity is considered to be closely related to brain activity and can be used to study the interactions between brain areas. To explore the electroencephalography (EEG)-based brain mechanisms of sleep stages through functional connectivity, especially across different frequency bands, we applied the phase-locking value (PLV) to build functional connectivity networks and analyze brain interactions during sleep stages in different frequency bands. We then performed feature-level, decision-level, and hybrid fusion to compare the performance of different frequency bands for sleep staging. The results show that (1) PLV increases in the lower frequency bands (delta and alpha) and vice versa during different stages of non-rapid eye movement (NREM) sleep; (2) the alpha band shows better discriminative ability for sleep stages; and (3) the classification accuracy of feature-level fusion (six frequency bands) reaches 96.91% and 96.14% for intra-subject and inter-subject evaluation, respectively, outperforming the decision-level and hybrid fusion methods.
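The PLV between two channels in a given band is the magnitude of the average phase difference on the unit circle. A compact sketch of that computation follows (illustrative filter order, band limits, and sampling rate).

```python
# Sketch of the phase-locking value (PLV) between two EEG channels in one frequency band,
# using the analytic signal from the Hilbert transform (parameters are illustrative).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x: np.ndarray, y: np.ndarray, fs: float, band=(8.0, 13.0)) -> float:
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))  # 1 = perfectly phase locked

rng = np.random.default_rng(4)
sig = rng.normal(size=3000)                                    # 30 s of EEG at 100 Hz
print(plv(sig, sig + 0.5 * rng.normal(size=3000), fs=100.0))   # alpha-band PLV of two channels
```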


2020 ◽  
Vol 10 (5) ◽  
pp. 1797 ◽  
Author(s):  
Mera Kartika Delimayanti ◽  
Bedy Purnama ◽  
Ngoc Giang Nguyen ◽  
Mohammad Reza Faisal ◽  
Kunti Robiatul Mahmudah ◽  
...  

Manual classification of sleep stages is a time-consuming but necessary step in the diagnosis and treatment of sleep disorders, and its automation has been an area of active study. Previous works have applied low-dimensional fast Fourier transform (FFT) features with many machine learning algorithms. In this paper, we demonstrate that features extracted from EEG signals via the FFT can improve the performance of automated sleep stage classification with machine learning methods. Unlike previous works using the FFT, we incorporate thousands of FFT features in order to classify the sleep stages into 2-6 classes. Using the expanded version of the Sleep-EDF dataset with 61 recordings, our method outperformed other state-of-the-art methods. This result indicates that high-dimensional FFT features, in combination with simple feature selection, are effective for improving automated sleep stage classification.
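The high-dimensional FFT-feature idea can be sketched in a few lines of scikit-learn: thousands of spectral magnitudes per epoch, a simple univariate feature selection, then a standard classifier. The data, the selector, and the SVM below are stand-ins, not the paper's exact configuration.

```python
# Sketch of high-dimensional FFT features with simple feature selection (stand-in data,
# assumed selector and classifier, not the paper's configuration).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
epochs = rng.normal(size=(300, 3000))          # 300 epochs of 30 s EEG at 100 Hz (stand-in)
labels = rng.integers(0, 5, size=300)          # stand-in stage labels
X = np.abs(np.fft.rfft(epochs, axis=1))        # 1501 FFT magnitude features per epoch

clf = make_pipeline(SelectKBest(f_classif, k=200), SVC())
print(cross_val_score(clf, X, labels, cv=5).mean())
```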

