SCL-SSC: Supervised Contrastive Learning for Sleep Stage Classification

2022 ◽  
Author(s):  
Chandra Bhushan Kumar

In this study, we propose SCL-SSC (Supervised Contrastive Learning for Sleep Stage Classification), a deep learning-based framework that performs sleep stage classification in two stages: 1) feature representation learning and 2) classification. The feature learner is trained separately to map raw EEG signals into a feature space in which the Euclidean distance between embeddings of EEG signals from the same sleep stage is smaller than the distance between embeddings of EEG signals from different sleep stages. On top of the feature learner, we train a classifier to perform the classification task. The distribution of sleep stages in PSG data is not uniform: the wake (W) and N2 stages appear far more frequently than the other stages, which leads to a class-imbalance problem. We address this issue with a weighted softmax cross-entropy loss function and with an oversampling technique that produces synthetic data points for minority sleep stages, approximately balancing the number of epochs per stage in the training dataset. The performance of the proposed model is evaluated on the publicly available PhysioNet Sleep-EDF datasets (2013 and 2018 versions). We train and evaluate the model separately on two EEG channels (Fpz-Cz and Pz-Oz) of these datasets. To the best of our knowledge, SCL-SSC outperforms existing state-of-the-art deep learning algorithms, achieving an overall accuracy of 94.1071%, a macro F1 score of 92.6416, and a Cohen's kappa coefficient (κ) of 0.9197. Our ablation studies show that both the triplet-loss-based pre-training of the feature learner and the oversampling of minority classes contribute to the improved performance of SCL-SSC.
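
A minimal sketch of the two-stage idea described above, assuming a PyTorch implementation: pre-train an embedding network with a triplet loss so that epochs of the same sleep stage lie closer in Euclidean space than epochs of different stages, then train a classifier head with class-weighted cross-entropy. The network sizes, margin, and class weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureLearner(nn.Module):
    """Toy 1D-CNN encoder mapping a 30 s single-channel EEG epoch to an embedding."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):          # x: (batch, 1, n_samples)
        return self.net(x)

encoder = FeatureLearner()
triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)       # Euclidean distance

# Stage 1: anchor/positive share a sleep stage, negative comes from another stage.
anchor, positive, negative = (torch.randn(8, 1, 3000) for _ in range(3))
loss_stage1 = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))

# Stage 2: encoder + classifier head trained with class-weighted softmax
# cross-entropy to counter the W/N2 imbalance (weights are illustrative).
classifier = nn.Linear(64, 5)                              # 5 sleep stages
class_weights = torch.tensor([0.5, 2.0, 0.7, 1.5, 1.2])
ce_loss = nn.CrossEntropyLoss(weight=class_weights)
labels = torch.randint(0, 5, (8,))
loss_stage2 = ce_loss(classifier(encoder(anchor)), labels)
```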


2020 ◽  
Vol 10 (5) ◽  
pp. 1797 ◽  
Author(s):  
Mera Kartika Delimayanti ◽  
Bedy Purnama ◽  
Ngoc Giang Nguyen ◽  
Mohammad Reza Faisal ◽  
Kunti Robiatul Mahmudah ◽  
...  

Manual sleep stage classification is a time-consuming but necessary step in the diagnosis and treatment of sleep disorders, and its automation has been an area of active study. Previous works have applied low-dimensional fast Fourier transform (FFT) features together with a variety of machine learning algorithms. In this paper, we demonstrate that features extracted from EEG signals via the FFT can improve the performance of automated sleep stage classification with machine learning methods. Unlike previous FFT-based works, we incorporate thousands of FFT features in order to classify the sleep stages into 2–6 classes. Using the expanded version of the Sleep-EDF dataset with 61 recordings, our method outperforms other state-of-the-art methods. This result indicates that high-dimensional FFT features combined with simple feature selection are effective for improving automated sleep stage classification.
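
A minimal sketch of the high-dimensional FFT feature idea: each 30 s EEG epoch is turned into thousands of spectral magnitude features, which a standard classifier can then use after simple feature selection. The 100 Hz sampling rate, the univariate selection step, and the classifier choice are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

FS = 100                      # assumed sampling rate (Hz)
EPOCH_LEN = 30 * FS           # 3000 samples per 30 s epoch

def fft_features(epochs: np.ndarray) -> np.ndarray:
    """epochs: (n_epochs, EPOCH_LEN) raw EEG -> (n_epochs, EPOCH_LEN//2 + 1) FFT magnitudes."""
    return np.abs(np.fft.rfft(epochs, axis=1))

# Dummy data standing in for Sleep-EDF epochs and their stage labels.
X_raw = np.random.randn(200, EPOCH_LEN)
y = np.random.randint(0, 6, size=200)      # up to 6 classes

model = make_pipeline(
    SelectKBest(f_classif, k=1000),        # simple selection over ~1500 FFT bins
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(fft_features(X_raw), y)
```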


Author(s):  
Asma Salamatian ◽  
Ali Khadem

Purpose: Sleep is one of the body's necessities, like eating and drinking, and it affects many aspects of human life. Sleep monitoring and sleep stage classification play an important role in the diagnosis of sleep-related diseases and neurological disorders. In practice, classification of sleep stages is a time-consuming, tedious, and complex task that depends heavily on the experience of experts. As a result, there is a crucial need for an automatic, efficient sleep staging system. Materials and Methods: This study develops a 13-layer 1D Convolutional Neural Network (CNN) that uses a single-channel Electroencephalogram (EEG) signal to extract features automatically and classify the sleep stages. To overcome the negative effect of an imbalanced dataset, we use the Synthetic Minority Oversampling Technique (SMOTE). In our study, the single-channel EEG signal is given to the 1D CNN without any separate feature extraction/selection process; the deep network learns the discriminative features from the EEG signal on its own. Results: Applying the proposed method to the Sleep-EDF dataset resulted in an overall accuracy, sensitivity, specificity, and precision of 94.09%, 74.73%, 96.43%, and 71.02%, respectively, for classifying five sleep stages. Using a single-channel EEG and requiring fewer trainable parameters than most of the available deep learning-based methods are the main advantages of the proposed method. Conclusion: In this study, a 13-layer 1D CNN model was proposed for sleep stage classification. The model has a complete end-to-end architecture and does not require any separate feature extraction/selection and classification stages. Its low number of network parameters and layers, combined with high classification accuracy, is its main advantage over most previous deep learning-based approaches.
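
An illustrative sketch (not the paper's 13-layer network) of the two key ingredients described above: SMOTE oversampling of minority sleep stages followed by an end-to-end 1D CNN on single-channel EEG epochs. Layer sizes and the 100 Hz sampling rate are assumptions.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
import torch
import torch.nn as nn

EPOCH_LEN = 3000                       # 30 s at an assumed 100 Hz
X = np.random.randn(300, EPOCH_LEN).astype(np.float32)
y = np.random.randint(0, 5, size=300)  # 5 sleep stages

# SMOTE operates on flat feature vectors, so each epoch is resampled as a 1D vector.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

model = nn.Sequential(                 # small stand-in for the 13-layer 1D CNN
    nn.Conv1d(1, 8, kernel_size=64, stride=8), nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=8, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 5),
)
batch = torch.from_numpy(X_res[:32].astype(np.float32)).unsqueeze(1)  # (32, 1, 3000)
logits = model(batch)                                                  # (32, 5) class scores
```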


2021 ◽  
Author(s):  
Charles A Ellis ◽  
Robyn L Miller ◽  
Vince Calhoun

The frequency domain of electroencephalography (EEG) data has developed into a particularly important area of EEG analysis, and EEG spectra have been analyzed with explainable machine learning and deep learning methods. However, as deep learning has matured, most studies use raw EEG data, which is not well suited to traditional explainability methods. Several studies have introduced methods that provide spectral insight into classifiers trained on raw EEG data. These studies offer global insight into the frequency bands that are generally important to a classifier, but they do not provide local insight into the frequency bands that matter for the classification of individual samples. Such local explainability could be particularly helpful for EEG analysis domains, like sleep stage classification, that feature multiple evolving states. We present a novel local spectral explainability approach and use it to explain a convolutional neural network trained for automated sleep stage classification. We use our approach to show how the relative importance of different frequency bands varies over time, and even within the same sleep stage. Furthermore, to better understand how our approach compares to existing methods, we compare a global estimate of spectral importance generated from our local results with an existing global spectral importance approach. We find that the δ band is most important for most sleep stages, though β is most important for the non-rapid eye movement 2 (NREM2) sleep stage. Additionally, θ is particularly important for identifying Awake and NREM1 samples. Our study represents the first approach developed for local spectral insight into deep learning classifiers trained on raw EEG time series.
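
The abstract does not spell out the mechanics of the authors' local explainability method. As a generic point of reference, the sketch below shows one common perturbation-style probe of per-sample spectral importance: remove a frequency band from a single EEG epoch in the Fourier domain and measure the drop in the classifier's confidence for its predicted stage. The band edges, the assumed 100 Hz sampling rate, and the `model` object (any classifier exposing an sklearn-style `predict_proba`) are illustrative assumptions, not the authors' approach.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}
FS = 100  # assumed sampling rate (Hz)

def band_importance(model, epoch: np.ndarray) -> dict:
    """epoch: (n_samples,) raw EEG. Returns a per-band importance score for this one sample."""
    base_probs = model.predict_proba(epoch[None, :])[0]       # hypothetical classifier interface
    pred = int(np.argmax(base_probs))
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / FS)
    importance = {}
    for name, (lo, hi) in BANDS.items():
        spectrum = np.fft.rfft(epoch)
        spectrum[(freqs >= lo) & (freqs < hi)] = 0.0          # ablate one frequency band
        perturbed = np.fft.irfft(spectrum, n=epoch.size)
        new_prob = model.predict_proba(perturbed[None, :])[0][pred]
        importance[name] = base_probs[pred] - new_prob        # confidence drop = local importance
    return importance
```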


2019 ◽  
Vol 20 (S16) ◽  
Author(s):  
Ye Yuan ◽  
Kebin Jia ◽  
Fenglong Ma ◽  
Guangxu Xun ◽  
Yaqing Wang ◽  
...  

Background: Sleep is a complex and dynamic biological process characterized by different sleep patterns. Significant effort has been devoted to comprehensive sleep monitoring and analysis using multivariate polysomnography (PSG) records in order to prevent sleep-related disorders. To reduce the time consumed by manual visual inspection of PSG, automatic multivariate sleep stage classification has become an important research topic in medical informatics and bioinformatics. Results: We present a unified hybrid self-attention deep learning framework, HybridAtt, that automatically classifies sleep stages by capturing channel and temporal correlations from multivariate PSG records. We construct a new multi-view convolutional representation module to learn channel-specific and global-view features from the heterogeneous PSG inputs. A hybrid attention mechanism is designed to further fuse the multi-view features by inferring their dependencies without any additional supervision. The learned attentional representation is subsequently fed through a softmax layer to train an end-to-end deep learning model. Conclusions: We empirically evaluate the proposed HybridAtt model on a benchmark PSG dataset in two feature domains, the time and frequency domains. Experimental results show that HybridAtt consistently outperforms ten baseline methods in both feature spaces, demonstrating its effectiveness for sleep stage classification.
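
An illustrative PyTorch sketch of attention-weighted fusion of per-channel ("multi-view") PSG features, in the spirit of the fusion step described above; it is not the HybridAtt architecture itself. Feature sizes and the number of views are assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Learns a scalar attention weight per view and returns their weighted sum."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, feat_dim) channel-specific + global-view features
        weights = torch.softmax(self.score(views), dim=1)     # (batch, n_views, 1)
        return (weights * views).sum(dim=1)                   # (batch, feat_dim)

fusion = AttentionFusion()
views = torch.randn(16, 7, 128)        # e.g. 6 PSG channels + 1 global view
classifier = nn.Linear(128, 5)         # softmax layer over 5 sleep stages
logits = classifier(fusion(views))     # (16, 5)
```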


2013 ◽  
Vol 23 (03) ◽  
pp. 1350012 ◽  
Author(s):  
L. J. HERRERA ◽  
C. M. FERNANDES ◽  
A. M. MORA ◽  
D. MIGOTINA ◽  
R. LARGO ◽  
...  

This work proposes a methodology for sleep stage classification based on two main approaches: the combination of features extracted from the electroencephalogram (EEG) signal by different extraction methods, and the use of stacked sequential learning to incorporate predicted information from nearby sleep stages into the final classifier. The feature extraction methods used in this work cover three representative ways of extracting information from EEG signals: Hjorth features, wavelet transformation, and symbolic representation. Feature selection was then used to evaluate the relevance of the individual features produced by this set of methods. Stacked sequential learning uses a second-layer classifier that improves the classification by taking the first-layer predictions for the preceding and following stages as additional input features. Results show that both approaches enhance the sleep stage classification accuracy, leading to a closer approximation of the experts' opinion.
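
A minimal sketch of the stacked sequential learning step described above: first-layer stage probabilities for neighbouring epochs are appended to each epoch's features and fed to a second-layer classifier. The ±1-epoch window, the dummy feature matrix, and the classifier choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X = np.random.randn(500, 40)            # e.g. Hjorth/wavelet/symbolic features per epoch
y = np.random.randint(0, 5, size=500)   # 5 sleep stages, epochs kept in temporal order

# First layer: out-of-fold predicted probabilities to avoid leaking labels.
first = RandomForestClassifier(n_estimators=200, random_state=0)
proba = cross_val_predict(first, X, y, cv=5, method="predict_proba")   # (500, 5)

# Append the previous and next epochs' predicted probabilities as extra features
# (edge epochs are padded with their own predictions).
prev_proba = np.vstack([proba[:1], proba[:-1]])
next_proba = np.vstack([proba[1:], proba[-1:]])
X_stacked = np.hstack([X, prev_proba, next_proba])                      # (500, 40 + 2*5)

second = RandomForestClassifier(n_estimators=200, random_state=0)
second.fit(X_stacked, y)
```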


2017 ◽  
Vol 29 (01) ◽  
pp. 1750007 ◽  
Author(s):  
Malihe Hassani ◽  
Mohammad-Reza Karami

This paper presents a new method for sleep scoring based on nonlinear Volterra features of EEG signals, using only a single EEG channel. The Volterra features are extracted from characteristic waves of the EEG signal that individually characterize the different sleep stages. A recurrent neural classifier takes all the features extracted from 30 s epochs of the EEG signal and assigns each epoch to one of five possible stages: Wakefulness, NREM 1, NREM 2, SWS, and REM. Eight sleep recordings obtained from Caucasian males and females without any medication are used to validate the proposed method. Moreover, the performance of the proposed classifier is compared with that of other classifiers. The classification rate of the proposed classifier is better than that of a comparable classifier that does not use nonlinear Volterra features. The results demonstrate that the proposed classifier, with nonlinear Volterra features of the characteristic EEG waves, can classify sleep stages more efficiently and accurately using only a single EEG channel.
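
The abstract does not detail the exact Volterra feature construction, so the sketch below computes one generic instance of nonlinear features from a truncated Volterra-style expansion: first-order summary terms plus averaged second- and third-order lag products of the EEG epoch. The lag choices, and applying it to a whole 30 s epoch rather than to extracted characteristic waves, are illustrative assumptions.

```python
import numpy as np

def volterra_features(epoch: np.ndarray, lags=(1, 2, 4, 8, 16, 32)) -> np.ndarray:
    """epoch: (n_samples,) single-channel EEG. Returns a small nonlinear feature vector."""
    feats = [epoch.mean(), epoch.std()]                      # first-order summary terms
    for tau in lags:                                         # second-order terms: E[x[n] * x[n-tau]]
        feats.append(np.mean(epoch[tau:] * epoch[:-tau]))
    for tau1, tau2 in [(1, 2), (2, 4), (4, 8)]:              # a few third-order (cubic) terms
        m = max(tau1, tau2)
        feats.append(np.mean(epoch[m:]
                             * epoch[m - tau1:epoch.size - tau1]
                             * epoch[m - tau2:epoch.size - tau2]))
    return np.asarray(feats)

feats = volterra_features(np.random.randn(3000))   # one 30 s epoch at an assumed 100 Hz
```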


2020 ◽  
Vol 10 (24) ◽  
pp. 8963
Author(s):  
Hui Wen Loh ◽  
Chui Ping Ooi ◽  
Jahmunah Vicnesh ◽  
Shu Lih Oh ◽  
Oliver Faust ◽  
...  

Sleep is vital for one's general well-being, but it is often neglected, which has led to an increase in sleep disorders worldwide. Indicators of sleep disorders, such as sleep interruptions, extreme daytime drowsiness, or snoring, can be detected with sleep analysis. However, sleep analysis relies on visual inspection by experts and is susceptible to inter- and intra-observer variability. One way to overcome these limitations is to support experts with a programmed diagnostic tool (PDT) based on artificial intelligence for the timely detection of sleep disturbances. Artificial intelligence technology, such as deep learning (DL), ensures that data are fully utilized with little to no information loss during training. This paper provides a comprehensive review of 36 studies, published between March 2013 and August 2020, that employed DL models to analyze overnight polysomnogram (PSG) recordings for the classification of sleep stages. Our analysis shows that more than half of the studies employed convolutional neural networks (CNNs) on electroencephalography (EEG) recordings for sleep stage classification and achieved high performance. Our study also underscores that CNN models, particularly one-dimensional CNN models, are advantageous in yielding higher classification accuracies. More importantly, we noticed that EEG alone is not sufficient to achieve robust classification results. Future automated detection systems should consider other PSG recordings, such as electrooculogram (EOG) and electromyogram (EMG) signals, along with input from human experts, to achieve the required robustness of sleep stage classification. Hence, for DL methods to be fully realized as a practical PDT for sleep stage scoring in clinical applications, the inclusion of other PSG recordings besides the EEG is necessary. In this respect, our report includes methods published in the last decade that use DL models with other PSG recordings for the scoring of sleep stages.


2021 ◽  
Vol 2 (4) ◽  
Author(s):  
Sarun Paisarnsrisomsuk ◽  
Carolina Ruiz ◽  
Sergio A. Alvarez

Deep neural networks can provide accurate automated classification of human sleep signals into sleep stages, enabling more effective diagnosis and treatment of sleep disorders. We develop a deep convolutional neural network (CNN) that attains state-of-the-art sleep stage classification performance on input data consisting of human sleep EEG and EOG signals. Nested cross-validation is used for optimal model selection and reliable estimation of out-of-sample classification performance. The resulting network attains a classification accuracy of 84.50 ± 0.13%; its performance exceeds human expert inter-scorer agreement, even on single-channel EEG input data, therefore providing more objective and consistent labeling than human experts achieve as a group. We focus on analyzing the learned internal data representations of our network, with the aim of understanding how class differentiation ability develops across the layers of processing units as a function of layer depth. We approach this problem visually, using t-distributed Stochastic Neighbor Embedding (t-SNE), and propose a pooling variant of Centered Kernel Alignment (CKA) that provides an objective, quantitative measure of how sleep stage specialization and differentiation develop with layer depth. The results reveal a monotonic progression of both of these sleep stage modeling abilities as layer depth increases.
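
The pooled CKA variant is the paper's contribution; as a reference point, this is a minimal sketch of standard linear Centered Kernel Alignment (CKA) between two layers' activation matrices, the quantity such a variant builds on. The shapes (same samples, different layer widths) are illustrative.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X: (n_samples, d1), Y: (n_samples, d2) activations for the same inputs."""
    X = X - X.mean(axis=0)                       # column-centre both representations
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2   # ||Y^T X||_F^2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Similarity between, say, activations of a shallow and a deep CNN layer.
shallow = np.random.randn(256, 64)
deep = np.random.randn(256, 32)
print(linear_cka(shallow, deep))    # 1.0 means identical up to rotation and isotropic scaling
```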


2021 ◽  
Vol 11 (4) ◽  
pp. 456
Author(s):  
Wenpeng Neng ◽  
Jun Lu ◽  
Lei Xu

In the inference process of existing deep learning models, the input data are usually processed level by level, with a corresponding relational inductive bias imposed at each level. This relational inductive bias determines the theoretical upper limit on the performance of the deep learning method. In the field of sleep stage classification, mainstream deep learning methods adopt only a single relational inductive bias at each level, which makes the feature extraction incomplete and limits performance. In view of these problems, this paper proposes CCRRSleepNet, a novel deep learning model based on hybrid relational inductive biases. The model divides single-channel electroencephalogram (EEG) data into three levels: frame, epoch, and sequence, and applies hybrid relational inductive biases across these three levels. In addition, a multiscale atrous convolution block (MSACB) is adopted in CCRRSleepNet to learn features of different attributes. In practice, however, the actual performance of a deep learning model also depends on its nonrelational inductive biases, so a variety of matching nonrelational inductive biases are adopted to optimize CCRRSleepNet. CCRRSleepNet is tested on the Fpz-Cz and Pz-Oz channel data of the Sleep-EDF dataset. The experimental results show that the proposed method is superior to many existing methods.
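
An illustrative sketch of a multiscale atrous (dilated) convolution block of the kind named above: parallel 1D convolutions with different dilation rates over the same frame-level input, concatenated along the channel axis. The filter counts, kernel size, dilation rates, and frame length are assumptions, not the paper's MSACB.

```python
import torch
import torch.nn as nn

class MultiScaleAtrousBlock(nn.Module):
    def __init__(self, in_ch: int = 1, out_ch: int = 16, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding = 3*d keeps the output the same length as the input
                nn.Conv1d(in_ch, out_ch, kernel_size=7, dilation=d, padding=3 * d),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            )
            for d in dilations
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_ch, frame_len); each branch sees a different receptive field
        return torch.cat([branch(x) for branch in self.branches], dim=1)

block = MultiScaleAtrousBlock()
frames = torch.randn(4, 1, 300)    # e.g. 3 s frames of 100 Hz single-channel EEG
features = block(frames)           # (4, 16 * 4, 300)
```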

