Sleep Apnea Detection with Polysomnography and Depth Sensors

Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1360 ◽  
Author(s):  
Martin Schätz ◽  
Aleš Procházka ◽  
Jiří Kuchyňka ◽  
Oldřich Vyšata

This paper pursues two goals: to show that various depth sensors can record breathing rate with the same accuracy as the contact sensors used in polysomnography (PSG), and to prove that breathing signals from depth sensors are as sensitive to breathing changes as PSG records. The breathing signal from depth sensors can then be used to classify sleep apnea events with the same success rate as PSG data. Recent developments in computational technology have led to a big leap in the usability of range-imaging sensors: new depth sensors are smaller, sample faster, and offer better resolution and greater precision. They are widely used for computer vision in robotics, but they can also serve as non-contact, non-invasive systems for monitoring breathing and its features, with the breathing rate easily represented as the dominant frequency of the recorded signal. All tested depth sensors (MS Kinect v2, RealSense SR300, R200, D415 and D435) record depth data with sufficient depth precision and sampling frequency (20–35 frames per second (FPS)) to capture the breathing rate. Spectral analysis shows a breathing rate between 0.2 Hz and 0.33 Hz, which corresponds to the breathing rate of an adult during sleep. To test the quality of the breathing signal produced by the proposed workflow, a neural network classifier (a simple competitive NN) was trained on a set of 57 whole-night polysomnographic records with sleep apneas classified by a sleep specialist. The resulting classifier marks all apnea events with 100% accuracy relative to the sleep specialist's classification, which is useful for estimating the number of events per hour.
When compared to the sleep specialist's classification of polysomnographic breathing-signal segments, which is used for calculating the length of each event, the classifier achieves an F1 score of 92.2% and an accuracy of 96.8% (sensitivity 89.1%, specificity 98.8%). The classifier also proves successful when tested on breathing signals from the MS Kinect v2 and RealSense R200 with simulated sleep apnea events. The whole process can be fully automatic once automatic chest-area segmentation of the depth data is implemented.
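The abstract does not give implementation details, but its central signal-processing step (recovering the breathing rate as the dominant spectral component of a chest-depth signal in the 0.1–0.5 Hz band) can be sketched as follows. The synthetic signal and the grid-search spectral estimator below are illustrative assumptions, not the authors' workflow:

```python
import math

def breathing_rate_hz(signal, fps, f_lo=0.1, f_hi=0.5, step=0.005):
    """Estimate breathing rate as the strongest spectral component in [f_lo, f_hi] Hz."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_f, best_power = f_lo, 0.0
    f = f_lo
    while f <= f_hi:
        # Project the centered signal onto a complex exponential at frequency f
        re = sum(c * math.cos(2 * math.pi * f * i / fps) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * f * i / fps) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_power, best_f = power, f
        f += step
    return best_f

# Synthetic chest-depth signal: 0.25 Hz breathing sampled at 30 FPS for 60 s
fps = 30
depth = [math.sin(2 * math.pi * 0.25 * i / fps) for i in range(60 * fps)]
rate = breathing_rate_hz(depth, fps)
print(round(rate, 2))  # prints 0.25, i.e. 15 breaths per minute
```

The recovered 0.25 Hz falls inside the 0.2–0.33 Hz band the paper reports for sleeping adults; a real pipeline would first segment the chest area from each depth frame to obtain the input signal.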

2021 ◽  
Vol 11 (15) ◽  
pp. 6888
Author(s):  
Georgia Korompili ◽  
Lampros Kokkalas ◽  
Stelios A. Mitilineos ◽  
Nicolas-Alexander Tatlas ◽  
Stelios M. Potirakis

The most common index for diagnosing Sleep Apnea Syndrome (SAS) is the Apnea-Hypopnea Index (AHI), defined as the average count of apnea/hypopnea events per sleeping hour. Despite its broad use in automated systems for SAS severity estimation, researchers now focus on detecting the time of individual events rather than the insufficient classification of the patient into SAS severity groups. To this end, in this work we aim at detecting the exact time location of apnea/hypopnea events. We particularly examine the hypothesis of employing a standard Voice Activity Detection (VAD) algorithm to extract breathing segments during sleep and to identify respiratory events from the severely altered breathing amplitude within the event. The algorithm, which is tested only on patients with severe or moderate apnea, is applied to recordings from a tracheal and an ambient microphone. It shows good sensitivity for apneas, reaching 81% and 70.4% for the two microphones, respectively, and moderate sensitivity to hypopneas (approximately 50% were identified). The algorithm also provides an adequate estimate of the Mean Apnea Duration index, defined as the average duration of the detected events, for patients with severe or moderate apnea, with mean errors of 1.7 s and 3.2 s for the two microphones, respectively.
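The two indices named above are simple arithmetic over the detected events; a minimal sketch (the event counts, durations, and sleep time below are hypothetical inputs, not data from the study):

```python
def apnea_hypopnea_index(event_count, total_sleep_seconds):
    """AHI = average number of apnea/hypopnea events per hour of sleep."""
    hours = total_sleep_seconds / 3600.0
    return event_count / hours

def mean_apnea_duration(event_durations_s):
    """Mean Apnea Duration index = average duration of the detected events, in seconds."""
    return sum(event_durations_s) / len(event_durations_s)

# Hypothetical night: 48 detected events over 6 h of sleep
print(apnea_hypopnea_index(48, 6 * 3600))       # prints 8.0 events/hour
print(mean_apnea_duration([18.0, 22.0, 20.0]))  # prints 20.0 (seconds)
```

An AHI of 8.0 would fall in the mild range under common clinical cut-offs, which is why the paper's per-event time detection matters: both indices depend directly on how accurately individual events are located.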


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training-data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
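The abstract does not show BLAINDER's export API, but the kind of output it describes, a point cloud where every point carries a semantic label, can be illustrated with a plain ASCII PLY writer. PLY is one common 3D format for such data; the extra `label` property and the class ids below are illustrative assumptions, not the add-on's actual schema:

```python
def write_labeled_ply(path, points):
    """Write (x, y, z, label_id) tuples as an ASCII PLY file with a scalar label property."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar label",   # per-point semantic class id
        "end_header",
    ]
    body = [f"{x} {y} {z} {label}" for x, y, z, label in points]
    with open(path, "w") as f:
        f.write("\n".join(header + body) + "\n")

# Two points labeled "ground" (0) and "building" (1); the label ids are illustrative
write_labeled_ply("cloud.ply", [(0.0, 0.0, 0.0, 0), (1.0, 2.0, 3.0, 1)])
```

Keeping the label as a per-vertex property means the file can be consumed directly by point-cloud segmentation pipelines without a separate annotation file.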


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Jun Jiang ◽  
Lianping Guo ◽  
Kuojun Yang ◽  
Huiqing Pan

Vertical resolution is an essential performance indicator of a digital storage oscilloscope (DSO), and the key to improving it is to increase the number of digitizing bits and to lower noise. Averaging is a typical method to improve the signal-to-noise ratio (SNR) and the effective number of bits (ENOB). Existing averaging algorithms tend to be restricted by the repetitiveness of the signal and influenced by gross quantization errors, so their effect on suppressing noise and improving resolution is limited. An information-entropy-based data fusion and average-based decimation filtering algorithm, which improves on plain averaging by drawing on information-entropy theory, is proposed in this paper to improve oscilloscope resolution. For a single acquired signal, under the premise of oversampling, gross quantization errors are eliminated by using the maximum entropy of the sample data; after data fusion of the valid samples, noise is further filtered via average-based decimation. Throughout the process, no subjective assumptions or constraints are imposed on the signal under test, and the analog bandwidth of the oscilloscope at the actual sampling rate is unaffected.
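The entropy-based fusion step is specific to the paper, but the underlying principle it builds on (oversample, then trade the excess samples for amplitude resolution via average-based decimation) can be illustrated numerically. The constant-signal and Gaussian-noise model below is an assumption for demonstration only:

```python
import math
import random

def decimate_by_averaging(samples, factor):
    """Replace each block of `factor` consecutive samples with its mean."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

def noise_std(samples, true_value):
    """Root-mean-square deviation of the samples from the known true value."""
    return math.sqrt(sum((s - true_value) ** 2 for s in samples) / len(samples))

# Oversampled constant 1.0 V signal with additive Gaussian noise of std 0.05 V
random.seed(0)
raw = [1.0 + random.gauss(0.0, 0.05) for _ in range(80000)]
dec = decimate_by_averaging(raw, 16)

# Averaging 16 samples should cut the noise std by about sqrt(16) = 4,
# i.e. gain roughly 0.5 * log2(16) = 2 effective bits
ratio = noise_std(raw, 1.0) / noise_std(dec, 1.0)
print(round(ratio, 1))  # close to 4
```

This sqrt(N) gain is exactly what the paper says plain averaging is limited by: it assumes the noise is well behaved, so gross quantization errors must be removed (here, the role of the entropy-based fusion) before decimation for the full resolution improvement to materialize.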


Author(s):  
Hehe Fan ◽  
Zhongwen Xu ◽  
Linchao Zhu ◽  
Chenggang Yan ◽  
Jianjun Ge ◽  
...  

We aim to significantly reduce the computational cost of classifying temporally untrimmed videos while retaining similar accuracy. Existing video classification methods sample frames at a predefined frequency over the entire video. In contrast, we propose an end-to-end deep reinforcement learning approach that enables an agent to classify videos by watching only a very small portion of the frames, much as humans do. We make two main contributions. First, information is not equally distributed across video frames over time. An agent needs to watch more carefully when a clip is informative and skip frames that are redundant or irrelevant. The proposed approach enables the agent to adapt its sampling rate to the video content and skip most of the frames without loss of information. Second, the number of frames an agent must watch to reach a confident decision varies greatly from one video to another. We incorporate an adaptive stop network that measures a confidence score and generates a timely trigger to stop the agent from watching further frames, which improves efficiency without loss of accuracy. Our approach reduces the computational cost significantly on the large-scale YouTube-8M dataset, while the accuracy remains the same.
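The adaptive-stop idea can be sketched as a simple control loop: accumulate per-frame class evidence and stop as soon as the leading class is confident enough. The running-mean scoring and fixed threshold below are toy assumptions standing in for the paper's learned networks:

```python
def classify_with_early_stop(frame_scores, threshold=0.9):
    """Watch frames one by one, accumulate per-class evidence, and stop
    as soon as the leading class's share of evidence exceeds `threshold`.
    Returns (predicted_class, frames_watched)."""
    totals = {}
    watched = 0
    best = None
    for scores in frame_scores:           # scores: per-class dict for one frame
        watched += 1
        for cls, s in scores.items():
            totals[cls] = totals.get(cls, 0.0) + s
        best = max(totals, key=totals.get)
        confidence = totals[best] / sum(totals.values())
        if confidence >= threshold:       # adaptive stop: trigger fires early
            return best, watched
    return best, watched

# Toy video: 100 frames whose scores strongly favour "cat" from the start
frames = [{"cat": 0.95, "dog": 0.05}] * 100
label, watched = classify_with_early_stop(frames)
print(label, watched)  # prints: cat 1
```

On this unambiguous toy input the agent stops after a single frame; on a harder video the confidence share would rise more slowly and the trigger would fire later, which is the efficiency/accuracy trade-off the stop network learns.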


2019 ◽  
pp. 29-80
Author(s):  
Nancy Foldvary-Schaefer ◽  
Madeleine Grigg-Damberger ◽  
Reena Mehra

This chapter provides an overview of sleep testing performed inside or outside the sleep laboratory. The chapter reviews the classification of sleep studies and methodology of in-lab polysomnography and home sleep apnea testing. Specifically, the indications for and relative contraindications and limitations of both procedures are discussed. Positive airway pressure (PAP) titration procedures are reviewed and the PAP-NAP, an abbreviated daytime study for patients with sleep apnea and PAP intolerance or hesitancy, is described. The authors also discuss the methodology of, indications for, and interpretation of the multiple sleep latency test and the maintenance of wakefulness test, which are daytime studies performed to evaluate excessive daytime sleepiness. Finally, the role of actigraphy in the evaluation of sleep disorders is discussed.


2019 ◽  
pp. 418-434
Author(s):  
Maha Alattar

This chapter covers the relationship between sleep-related headaches and sleep disorders such as obstructive sleep apnea (OSA). Sleep apnea headache (SAH), a type of sleep-related headache classified in the International Classification of Headache Disorders, is a distinct subset of headache that is caused by OSA and characteristically occurs on awakening. Once recognized, treatment of OSA is associated with significant improvement in, and often resolution of, SAH. Given the high prevalence of headaches in the general population, sleep disorders must be considered in the evaluation of patients with headaches, and a comprehensive sleep evaluation should be an integral part of the assessment of headache disorders. Sleep apnea headache and other types of headaches associated with sleep are reviewed in this chapter.


Author(s):  
Fernando Merchan ◽  
Martin Poveda ◽  
Danilo E. Cáceres-Hernández ◽  
Javier E. Sanchez-Galan

This chapter focuses on the contributions made in the development of assistive technologies for the navigation of blind and visually impaired (BVI) individuals. A special interest is placed on vision-based systems that make use of image (RGB) and depth (D) information to assist their indoor navigation. Many commercial RGB-D cameras exist on the market, but for many years the Microsoft Kinect has been the tool of choice for research in this field. Therefore, first-hand experience and advances in the use of Kinect for the development of an indoor navigation aid system for BVI individuals are presented. Limitations that can be encountered in building such a system are addressed at length. Finally, an overview of novel avenues of research in indoor navigation for BVI individuals, such as the integration of computer vision algorithms, deep learning for the classification of objects, and recent developments in stereo depth vision, is discussed.

