sleep staging
Recently Published Documents


TOTAL DOCUMENTS

362
(FIVE YEARS 190)

H-INDEX

26
(FIVE YEARS 4)

2022 ◽  
Vol 74 ◽  
pp. 103486
Author(s):  
Huafeng Wang ◽  
Chonggang Lu ◽  
Qi Zhang ◽  
Zhimin Hu ◽  
Xiaodong Yuan ◽  
...  
Keyword(s):  

2022 ◽  
Author(s):  
Jiahao Fan ◽  
Hangyu Zhu ◽  
Xinyu Jiang ◽  
Long Meng ◽  
Chen Chen ◽  
...  

Deep sleep staging networks have reached top performance on large-scale datasets. However, these models perform worse when trained and tested on small sleep cohorts because of data inefficiency. Transferring well-trained models from large-scale datasets (the source domain) to small sleep cohorts (the target domain) is a promising solution but remains challenging due to domain shift. In this work, an unsupervised domain adaptation approach, domain statistics alignment (DSA), is developed to bridge the gap between the data distributions of the source and target domains. DSA adapts the source models to the target domain by modulating the domain-specific statistics of deep features stored in the Batch Normalization (BN) layers. Furthermore, we have extended DSA by introducing cross-domain statistics in each BN layer to perform DSA adaptively (AdaDSA). The proposed methods need only the well-trained source model, without access to the source data, which may be proprietary and inaccessible. DSA and AdaDSA are universally applicable to deep sleep staging networks that contain BN layers. We validated the proposed methods through extensive experiments on two state-of-the-art deep sleep staging networks, DeepSleepNet+ and U-time. Performance was evaluated on various transfer tasks across six sleep databases, with two large-scale databases, MASS and SHHS, as the source domain and four small sleep databases as the target domain; among the latter, clinical sleep records acquired at Huashan Hospital, Shanghai, were used. The results show that both DSA and AdaDSA significantly improve the performance of source models on target domains, providing novel insights into the domain generalization problem in sleep staging tasks.
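As an illustration of the statistics-alignment idea described in this abstract (not the authors' released implementation), a minimal PyTorch sketch can re-estimate the running mean and variance of every BN layer on unlabeled target-domain recordings while keeping all learned weights frozen; the loader name and device argument below are assumptions for the example.

```python
import torch
import torch.nn as nn

def align_bn_statistics(model: nn.Module, target_loader, device="cpu"):
    """Re-estimate BN running statistics on unlabeled target-domain data."""
    model.to(device)
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()   # discard source-domain statistics
            m.momentum = None         # switch to a cumulative moving average
    model.train()                     # BN updates running stats only in train mode
    with torch.no_grad():             # no labels, no gradient updates
        for x in target_loader:       # batches of unlabeled target-domain epochs
            model(x.to(device))
    model.eval()
    return model
```

No gradients flow and no labels are consumed, which matches the source-free, unsupervised setting the abstract describes.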


2022 ◽  
Vol 71 ◽  
pp. 103047
Author(s):  
Miriam Goldammer ◽  
Sebastian Zaunseder ◽  
Moritz D. Brandt ◽  
Hagen Malberg ◽  
Felix Gräßer

2021 ◽  
Author(s):  
Nicolò Pini ◽  
Ju Lynn Ong ◽  
Gizem Yilmaz ◽  
Nicholas I. Y. N. Chee ◽  
Zhao Siting ◽  
...  

Study Objectives: To validate an HR-based deep-learning algorithm for sleep staging, Neurobit-HRV (Neurobit Inc., New York, USA). Methods: The algorithm performs classification at 2 levels (Wake; Sleep), 3 levels (Wake; NREM; REM), or 4 levels (Wake; Light; Deep; REM) in 30-second epochs. It was validated using an open-source dataset of PSG recordings (Physionet CinC dataset, n=994 participants) and a proprietary dataset (Z3Pulse, n=52 participants) composed of HR recordings collected with a chest-worn, wireless sensor; a simultaneous PSG was collected using SOMNOtouch. We evaluated the performance of the models on both datasets using accuracy (A), Cohen's kappa (K), sensitivity (SE), and specificity (SP). Results: CinC: the highest accuracy was achieved by the 2-level model (0.8797), while the 3-level model obtained the best K (0.6025). The 4-level model obtained the lowest SE (0.3812) and the highest SP (0.9744) for the classification of deep sleep segments. AHI and biological sex did not affect sleep scoring, while a significant decrease in performance with age was observed across the models. Z3Pulse: the highest accuracy was achieved by the 2-level model (0.8812), whereas the 3-level model obtained the best K (0.611). For the classification of the sleep states, the lowest SE (0.6163) and the highest SP (0.9606) were obtained for deep sleep segments. Conclusions: The results demonstrate the feasibility of accurate HR-based sleep staging. Combining the illustrated sleep staging algorithm with an inexpensive HR device provides a cost-effective, non-invasive solution that is easily deployable in the home.
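The reported metrics can be reproduced from epoch-by-epoch stage labels with standard tooling; the sketch below (hypothetical label arrays, scikit-learn assumed available) computes accuracy, Cohen's kappa, and per-stage sensitivity/specificity from 30-second epoch predictions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def staging_metrics(y_true, y_pred, labels):
    """Accuracy, Cohen's kappa, and per-stage sensitivity/specificity."""
    acc = accuracy_score(y_true, y_pred)
    kappa = cohen_kappa_score(y_true, y_pred, labels=labels)
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    per_stage = {}
    for i, stage in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        per_stage[stage] = {"SE": tp / (tp + fn), "SP": tn / (tn + fp)}
    return acc, kappa, per_stage

# Toy example with the 4-level scheme (one label per 30-second epoch).
y_true = ["Wake", "Light", "Deep", "REM", "Light", "Deep"]
y_pred = ["Wake", "Light", "Light", "REM", "Light", "Deep"]
print(staging_metrics(y_true, y_pred, labels=["Wake", "Light", "Deep", "REM"]))
```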


2021 ◽  
Author(s):  
Jiahao Fan ◽  
Hangyu Zhu ◽  
Xinyu Jiang ◽  
Long Meng ◽  
Cong Fu ◽  
...  

Deep sleep staging networks have reached top performance on large-scale datasets. However, these models perform worse when trained and tested on small sleep cohorts because of data inefficiency. Transferring well-trained models from large-scale datasets (the source domain) to small sleep cohorts (the target domain) is a promising solution but remains challenging due to domain shift. In this work, an unsupervised domain adaptation approach, domain statistics alignment (DSA), is developed to bridge the gap between the data distributions of the source and target domains. DSA adapts the source models to the target domain by modulating the domain-specific statistics of deep features stored in the Batch Normalization (BN) layers. Furthermore, we have extended DSA by introducing cross-domain statistics in each BN layer to perform DSA adaptively (AdaDSA). The proposed methods need only the well-trained source model, without access to the source data, which may be proprietary and inaccessible. DSA and AdaDSA are universally applicable to deep sleep staging networks that contain BN layers. We validated the proposed methods through extensive experiments on two state-of-the-art deep sleep staging networks, DeepSleepNet+ and U-time. Performance was evaluated on various transfer tasks across six sleep databases, with two large-scale databases, MASS and SHHS, as the source domain and four small sleep databases as the target domain; among the latter, clinical sleep records acquired at Huashan Hospital, Shanghai, were used. The results show that both DSA and AdaDSA significantly improve the performance of source models on target domains, providing novel insights into the domain generalization problem in sleep staging tasks.
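For the adaptive variant (AdaDSA) mentioned in this abstract, which introduces cross-domain statistics in each BN layer, one hedged way to realize the idea is to blend source and target BN statistics; the mixing weight `alpha` and the pre-computed `target_stats` dictionary below are illustrative assumptions, not the paper's exact formulation.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def blend_bn_statistics(source_model: nn.Module, target_stats: dict, alpha: float = 0.5):
    """Blend source BN statistics (stored in the model) with target statistics
    estimated beforehand on unlabeled target data:
    new_stat = (1 - alpha) * source_stat + alpha * target_stat."""
    model = copy.deepcopy(source_model)
    for name, m in model.named_modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm) and name in target_stats:
            t_mean, t_var = target_stats[name]
            m.running_mean.mul_(1 - alpha).add_(alpha * t_mean)
            m.running_var.mul_(1 - alpha).add_(alpha * t_var)
    return model
```

With alpha = 1 this reduces to the plain statistics replacement sketched after the earlier entry, and with alpha = 0 the source model is left untouched.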


2021 ◽  
Author(s):  
Ziwei Yang ◽  
Dong Wang ◽  
Zheng Chen ◽  
Ming Huang ◽  
Naoaki Ono ◽  
...  
Keyword(s):  
