sleep stage classification
Recently Published Documents

TOTAL DOCUMENTS: 293 (five years: 144)
H-INDEX: 23 (five years: 7)

2022 ◽  
Author(s):  
Chandra Bhushan Kumar

<div>In this study, we propose SCL-SSC (Supervised Contrastive Learning for Sleep Stage Classification), a deep-learning framework that performs sleep stage classification in two stages: 1) feature representation learning and 2) classification. The feature learner is trained separately to embed raw EEG signals in a feature space such that the Euclidean distance between embeddings of EEG signals from the same sleep stage is smaller than the distance between embeddings of signals from different sleep stages. A classifier is then trained on top of the feature learner to perform the classification task. The distribution of sleep stages in PSG data is not uniform: the wake (W) and N2 stages appear far more frequently than the others, which leads to an imbalanced-dataset problem. We address this issue with a weighted softmax cross-entropy loss function and with an oversampling technique that produces synthetic data points for minority sleep stages, approximately balancing the stage counts in the training set. The performance of the proposed model is evaluated on the publicly available PhysioNet Sleep-EDF datasets (2013 and 2018 versions); we train and evaluate on two EEG channels (Fpz-Cz and Pz-Oz) separately. To the best of our knowledge, SCL-SSC outperforms existing state-of-the-art deep learning algorithms, with an overall accuracy of 94.1071%, a macro F1 score of 92.6416, and a Cohen's kappa coefficient (κ) of 0.9197. Our ablation studies on SCL-SSC show that both the triplet-loss-based pre-training of the feature learner and the oversampling of minority classes contribute to the model's performance.</div>
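The weighted softmax cross-entropy loss mentioned in this abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation; the function name, toy logits, and inverse-frequency-style weights are all hypothetical:

```python
import numpy as np

def weighted_softmax_cross_entropy(logits, labels, class_weights):
    """Cross-entropy in which each sample's loss is scaled by its class
    weight, so minority sleep stages (e.g. N1) contribute more strongly."""
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    n = len(labels)
    nll = -np.log(probs[np.arange(n), labels])             # per-sample loss
    w = class_weights[labels]                              # weight per sample
    return (w * nll).sum() / w.sum()                       # weighted mean

# Toy batch: 4 epochs, 5 stages (W, N1, N2, N3, REM); rare N1 is up-weighted.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
labels = np.array([0, 1, 2, 4])
weights = np.array([1.0, 3.0, 0.5, 1.0, 1.0])  # inverse-frequency style
loss = weighted_softmax_cross_entropy(logits, labels, weights)
```

Note that scaling all class weights by the same constant leaves the loss unchanged, because the weighted sum is normalized by the total weight; only the relative weighting of stages matters.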



2022 ◽  
Author(s):  
Charles A Ellis ◽  
Mohammad SE Sendi ◽  
Rongen Zhang ◽  
Darwin A Carbajal ◽  
May D Wang ◽  
...  

Multimodal classification is increasingly common in biomedical informatics studies. Many such studies use deep learning classifiers with raw data, which makes explainability difficult. As such, only a few studies have applied explainability methods, and new methods are needed. In this study, we propose sleep stage classification as a testbed for method development and train a convolutional neural network with electroencephalogram (EEG), electrooculogram, and electromyogram data. We then present a global approach that is uniquely adapted for electrophysiology analysis. We further present two local approaches that can identify subject-level differences in explanations that would be obscured by global methods, and that can provide insight into the effects of clinical and demographic variables upon the patterns learned by the classifier. We find that EEG is globally the most important modality for all sleep stages except non-rapid eye movement stage 1, and that local, subject-level differences in importance arise. We further show that sex, followed by medication and age, had significant effects upon the patterns learned by the classifier. Our novel methods enhance explainability for the growing field of multimodal classification, provide avenues for the advancement of personalized medicine, and yield novel insights into the effects of demographic and clinical variables upon classifiers.
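A global, ablation-style estimate of modality importance can be sketched without the paper's actual model: zero out each modality's channels and measure the drop in accuracy. This toy NumPy sketch assumes a zero-ablation baseline; the classifier and channel layout are invented for illustration:

```python
import numpy as np

def modality_importance(predict_fn, X, y, modality_channels):
    """Global ablation importance: zero out each modality's channels and
    measure the resulting accuracy drop. A larger drop suggests the
    classifier relies more heavily on that modality."""
    base_acc = (predict_fn(X) == y).mean()
    drops = {}
    for name, idx in modality_channels.items():
        X_abl = X.copy()
        X_abl[:, idx] = 0.0                     # ablate this modality
        drops[name] = base_acc - (predict_fn(X_abl) == y).mean()
    return drops

# Toy classifier that only looks at channel 0 (standing in for "EEG"):
predict = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                   # channels: EEG, EOG, EMG
y = (X[:, 0] > 0).astype(int)
drops = modality_importance(predict, X, y, {"EEG": [0], "EOG": [1], "EMG": [2]})
```

Because the toy classifier ignores channels 1 and 2, ablating them produces no accuracy drop, while ablating channel 0 degrades the classifier to chance; real electrophysiology models show graded versions of the same pattern.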


2022 ◽  
Vol 70 (3) ◽  
pp. 4619-4633
Author(s):  
Saadullah Farooq Abbasi ◽  
Harun Jamil ◽  
Wei Chen

2021 ◽  
Author(s):  
Nicolò Pini ◽  
Ju Lynn Ong ◽  
Gizem Yilmaz ◽  
Nicholas I. Y. N. Chee ◽  
Zhao Siting ◽  
...  

Study Objectives: To validate an HR-based deep-learning algorithm for sleep staging named Neurobit-HRV (Neurobit Inc., New York, USA). Methods: The algorithm can perform classification at 2 levels (Wake; Sleep), 3 levels (Wake; NREM; REM), or 4 levels (Wake; Light; Deep; REM) in 30-second epochs. The algorithm was validated using an open-source dataset of PSG recordings (Physionet CinC dataset, n=994 participants) and a proprietary dataset (Z3Pulse, n=52 participants) composed of HR recordings collected with a chest-worn, wireless sensor. A simultaneous PSG was collected using SOMNOtouch. We evaluated the performance of the models in both datasets using Accuracy (A), Cohen's kappa (K), Sensitivity (SE), and Specificity (SP). Results: CinC - The highest accuracy was achieved by the 2-level model (0.8797), while the 3-level model obtained the best value of K (0.6025). The 4-level model obtained the lowest SE (0.3812) and the highest SP (0.9744) for the classification of Deep sleep segments. AHI and biological sex did not affect sleep scoring, while a significant decrease in performance with age was reported across the models. Z3Pulse - The highest accuracy was achieved by the 2-level model (0.8812), whereas the 3-level model obtained the best value of K (0.611). For classification of the sleep states, the lowest SE (0.6163) and the highest SP (0.9606) were obtained for the classification of Deep sleep segments. Conclusions: The results demonstrate the feasibility of accurate HR-based sleep staging. The combination of the illustrated sleep staging algorithm with an inexpensive HR device provides a cost-effective and non-invasive solution that is easily deployable in the home.
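The four reported metrics (accuracy, Cohen's kappa, sensitivity, specificity) can all be derived from epoch-level labels; sensitivity and specificity are computed one-vs-rest for a given stage. A small self-contained sketch, with an illustrative function name and toy labels:

```python
import numpy as np

def staging_metrics(y_true, y_pred, positive_stage):
    """Accuracy, Cohen's kappa, and one-vs-rest sensitivity/specificity
    for one sleep stage, computed from per-epoch labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = (y_true == y_pred).mean()
    # Cohen's kappa: observed agreement corrected for chance agreement
    stages = np.union1d(y_true, y_pred)
    p_e = sum((y_true == s).mean() * (y_pred == s).mean() for s in stages)
    kappa = (acc - p_e) / (1 - p_e)
    tp = ((y_true == positive_stage) & (y_pred == positive_stage)).sum()
    fn = ((y_true == positive_stage) & (y_pred != positive_stage)).sum()
    tn = ((y_true != positive_stage) & (y_pred != positive_stage)).sum()
    fp = ((y_true != positive_stage) & (y_pred == positive_stage)).sum()
    se = tp / (tp + fn)          # sensitivity (recall for this stage)
    sp = tn / (tn + fp)          # specificity
    return acc, kappa, se, sp

# Toy 4-level scoring over six 30-second epochs:
y_true = ["W", "W", "Light", "Deep", "REM", "Deep"]
y_pred = ["W", "Light", "Light", "Deep", "REM", "Light"]
acc, kappa, se, sp = staging_metrics(y_true, y_pred, "Deep")
```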


2021 ◽  
Author(s):  
Charles A Ellis ◽  
Robyn L Miller ◽  
Vince D Calhoun

Recent years have shown a growth in the application of deep learning architectures, such as convolutional neural networks (CNNs), to electrophysiology analysis. However, using neural networks with raw time-series data makes explainability a significant challenge. Multiple explainability approaches have been developed for insight into the spectral features learned by CNNs from EEG. However, across electrophysiology modalities, and even within EEG, there are many unique waveforms of clinical relevance, and existing methods that provide insight into the waveforms learned by CNNs are of questionable utility. In this study, we present a novel model-visualization-based approach that analyzes the filters in the first convolutional layer of the network. To our knowledge, this is the first method focused on extracting explainable information from EEG waveforms learned by CNNs while also providing insight into the learned spectral features. We demonstrate the viability of our approach within the context of automated sleep stage classification, a well-characterized domain that can help validate our approach. We identify 3 subgroups of filters with distinct spectral properties, determine the relative importance of each group of filters, and identify several unique waveforms learned by the classifier that were vital to its performance. Our approach represents a significant step forward in explainability for electrophysiology classifiers, and we hope it will also be useful for providing insights in future studies.
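Grouping first-layer filters by spectral properties rests on a simple operation: take each learned kernel's magnitude spectrum and find its dominant frequency. A minimal NumPy sketch, with synthetic sinusoids standing in for learned kernels (the kernel length and sampling rate are illustrative):

```python
import numpy as np

def filter_peak_frequencies(filters, fs):
    """Dominant frequency (Hz) of each first-layer conv filter, taken from
    the magnitude spectrum of its coefficients. Filters with similar peaks
    can then be grouped into spectrally distinct subgroups."""
    n = filters.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)        # frequency axis (Hz)
    spectra = np.abs(np.fft.rfft(filters, axis=1))
    return freqs[np.argmax(spectra, axis=1)]

fs = 100.0                       # sampling rate (Hz), common for sleep EEG
t = np.arange(200) / fs          # 2-second "kernels", exaggerated for clarity
# Synthetic "filters": one delta-like (2 Hz) and one alpha-like (10 Hz)
filters = np.stack([np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 10 * t)])
peaks = filter_peak_frequencies(filters, fs)
```

With 200 samples at 100 Hz the frequency resolution is 0.5 Hz, so both synthetic kernels land exactly on an FFT bin and the recovered peaks match their construction frequencies.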


Processes ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 2265
Author(s):  
Cheng-Hua Su ◽  
Li-Wei Ko ◽  
Jia-Chi Juang ◽  
Chung-Yao Hsu

Automatic bio-signal processing and scoring have been a popular topic in recent years. This includes sleep stage classification, which is time-consuming when carried out by hand. Multiple sleep stage classification methods have been proposed in recent years. While effective, most of them are trained and validated against a single dataset with uniform pre-processing, whereas in a clinical environment, polysomnography (PSG) may come from different PSG systems that use different signal processing methods. In this study, we present a generalized sleep stage classification method that uses power spectra and entropy. To test its generality, we first trained our system on a uniform dataset and then validated it against another dataset with PSGs from different PSG systems. We found that the system achieved an accuracy of 0.80 and that it is highly consistent across most PSG records. A few samples of NREM3 sleep were classified poorly, and further inspection showed that these samples had lost crucial NREM3 features due to aggressive filtering, meaning the system's failures can be checked against human domain knowledge. Overall, our classification system shows consistent performance on PSG records collected from different PSG systems, which gives it high potential in a clinical environment.
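Power-spectrum and entropy features of the kind described above can be derived directly from a 30-second epoch's FFT. A minimal NumPy sketch; the band edges, the normalization, and the toy delta-dominated signal are illustrative, not the paper's exact feature set:

```python
import numpy as np

def band_powers_and_entropy(epoch, fs, bands):
    """Relative band power and normalized spectral entropy of one 30-s
    EEG epoch: system-agnostic, hand-crafted features."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    total = psd[1:].sum()                         # skip the DC component
    rel = {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
           for name, (lo, hi) in bands.items()}
    p = psd[1:] / total                           # spectral "distribution"
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum() / np.log2(len(psd) - 1)  # in [0, 1]
    return rel, entropy

fs = 100.0
t = np.arange(int(30 * fs)) / fs                  # one 30-s epoch at 100 Hz
epoch = np.sin(2 * np.pi * 2 * t)                 # delta-dominated toy signal
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
rel, ent = band_powers_and_entropy(epoch, fs, bands)
```

For the toy 2 Hz signal, nearly all power falls in the delta band and the spectral entropy is close to zero; a broadband, desynchronized wake epoch would instead spread power across bands and push the entropy toward one.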


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Qinghua Zhong ◽  
Haibo Lei ◽  
Qianru Chen ◽  
Guofu Zhou

Sleep disorder is a serious public health problem. An unobtrusive home sleep quality monitoring system can better open the way for sleep-disorder-related disease screening and health monitoring. In this work, a sleep stage classification algorithm based on a multiscale residual convolutional neural network (MRCNN) was proposed to detect the characteristics of electroencephalogram (EEG) signals recorded by wearable systems and classify sleep stages. EEG signals were analyzed in 30-second epochs, and a 5-class sleep stage label was output: wake (W), rapid eye movement sleep (REM), and non-rapid eye movement sleep (NREM) stages N1, N2, and N3. Good results (accuracy rates of 92.06% and 91.13%, Cohen's kappa of 0.7360 and 0.7001) were achieved with 5-fold cross-validation and independent-subject cross-validation, respectively, performed on the European Data Format (EDF) dataset containing 197 whole-night polysomnographic sleep recordings. Compared with several representative deep learning methods, this method can easily obtain sleep stage information from single-channel EEG signals without specialized feature extraction, which is closer to clinical application. Experiments based on the CinC2018 dataset also proved that the method performs well on a large dataset and can support sleep-disorder-related disease screening and health surveillance based on automatic sleep staging.
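The multiscale intuition behind an MRCNN, kernels of different widths capturing both fast and slow EEG structure, can be illustrated without a neural network. Below is a hypothetical NumPy sketch using moving-average kernels and global pooling; it is not the paper's architecture, only a demonstration of the multiscale idea:

```python
import numpy as np

def multiscale_features(signal, kernel_sizes):
    """Convolve one 30-s EEG epoch with smoothing kernels of several
    widths and pool each result, so downstream classification sees both
    fast (short-kernel) and slow (long-kernel) structure."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                     # moving-average "filter"
        conv = np.convolve(signal, kernel, mode="valid")
        feats.extend([conv.mean(), conv.std()])     # global average/std pooling
    return np.array(feats)

rng = np.random.default_rng(2)
epoch = rng.normal(size=3000)                       # 30 s at 100 Hz
feats = multiscale_features(epoch, kernel_sizes=[5, 25, 125])
```

For white noise the standard deviation after a width-k moving average shrinks roughly as 1/sqrt(k), so the short-kernel features retain far more high-frequency variance than the long-kernel ones; that contrast is exactly what multiple scales buy the classifier.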


2021 ◽  
Author(s):  
Nikhil Vyas ◽  
Kelly Ryoo ◽  
Hosanna Tesfaye ◽  
Ruhan Yi ◽  
Marjorie Skubic

2021 ◽  
Vol 72 ◽  
pp. 1163-1214
Author(s):  
Konstantinos Nikolaidis ◽  
Stein Kristiansen ◽  
Thomas Plagemann ◽  
Vera Goebel ◽  
Knut Liestøl ◽  
...  

Good training data is a prerequisite for developing useful machine learning applications. However, in many domains existing data sets cannot be shared due to privacy regulations (e.g., from medical studies). This work investigates a simple yet unconventional approach to anonymized data synthesis that enables third parties to benefit from such data. We explore the feasibility of learning implicitly from visually unrealistic, task-relevant stimuli, which are synthesized by exciting the neurons of a trained deep neural network; these stimuli are then used to train new classification models. Furthermore, we extend this framework to inhibit representations that are associated with specific individuals. We use sleep monitoring data from both an open and a large closed clinical study, and electroencephalogram sleep stage classification data, to evaluate whether (1) end-users can create and successfully use customized classification models, and (2) the identity of participants in the study is protected. Extensive comparative empirical investigation shows that different algorithms trained on the stimuli are able to generalize successfully on the same task as the original model. Architectural and algorithmic similarity between the new and original models plays an important role in performance; for similar architectures, performance is close to that obtained with the original data (e.g., an accuracy difference of 0.56%-3.82% and a kappa coefficient difference of 0.02-0.08). Further experiments show that the stimuli can provide state-of-the-art resilience against adversarial association and membership inference attacks.
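Synthesizing stimuli by exciting a neuron amounts to gradient ascent on the input. The toy NumPy sketch below does this for a single linear neuron; the shrinkage term, step size, and iteration count are invented for illustration, and a real system would ascend through a trained deep network rather than a dot product:

```python
import numpy as np

def synthesize_stimulus(weight, steps=100, lr=0.1):
    """Toy activation maximization: gradient-ascend an input so that it
    excites one linear neuron (activation = w . x), with L2 shrinkage
    to keep the stimulus bounded."""
    x = np.zeros_like(weight)
    for _ in range(steps):
        grad = weight                   # d(w . x)/dx for a linear neuron
        x = x + lr * (grad - 0.1 * x)   # ascent step with weight decay
    return x

w = np.array([1.0, -2.0, 0.5])          # hypothetical neuron weights
stim = synthesize_stimulus(w)           # stimulus aligned with w
```

The fixed point of this update is proportional to the weight vector itself, so the synthetic stimulus reveals what the neuron responds to while looking nothing like a realistic input, which is precisely the privacy-preserving property the abstract exploits.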

