Proposing a Three-Stage Model to Quantify Bradykinesia on a Symptom Severity Level Using Deep Learning

2021 ◽  
pp. 428-438
Author(s):  
R. Jaber ◽  
Rami Qahwaji ◽  
Amr Abdullatif ◽  
J. Buckley ◽  
R. Abd-Alhameed
Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5207
Author(s):  
Febryan Setiawan ◽  
Che-Wei Lin

Conventional approaches to diagnosing Parkinson’s disease (PD) and rating its severity are based on medical specialists’ clinical assessment of symptoms, which is subjective and can be inaccurate. These techniques are not very reliable, particularly in the early stages of the disease. In this research, a novel detection and severity classification algorithm using deep learning approaches was developed to classify PD severity levels based on vertical ground reaction force (vGRF) signals. Variations in the force patterns generated by irregularities in vGRF signals, which arise from the gait abnormalities of PD patients, can indicate disease severity. The main purpose of this research is to aid physicians in detecting early stages of PD, planning efficient treatment, and monitoring disease progression. The detection algorithm comprises preprocessing, feature transformation, and classification processes. In preprocessing, the vGRF signal is divided into successive time windows of 10, 15, and 30 s. In the feature transformation process, the time-domain vGRF signal in each window is converted into a time–frequency spectrogram using a continuous wavelet transform (CWT). Principal component analysis (PCA) is then used for feature enhancement. Finally, different types of convolutional neural networks (CNNs) are employed as deep learning classifiers. Algorithm performance was evaluated using k-fold cross-validation (kfoldCV). The best average accuracy in classifying PD severity stages was 96.52%, achieved with ResNet-50 on vGRF data from the PhysioNet database. The proposed detection algorithm can effectively differentiate gait patterns based on time–frequency spectrograms of vGRF signals associated with different PD severity levels.
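The feature-extraction stages described in the abstract (windowing → CWT spectrogram → PCA) can be sketched in plain NumPy. The sampling rate, window length, wavelet shape, and component count below are illustrative assumptions, not the paper's exact settings, and the CNN classifier that follows these stages is omitted:

```python
import numpy as np

def window_signal(vgrf, fs=100, win_sec=10):
    """Split a 1-D vGRF signal into successive non-overlapping windows."""
    n = fs * win_sec
    k = len(vgrf) // n
    return vgrf[:k * n].reshape(k, n)

def cwt_spectrogram(x, scales, fs=100):
    """Continuous wavelet transform with a real Morlet-like wavelet,
    returning a |coefficient| time-frequency map, shape (len(scales), len(x))."""
    out = np.empty((len(scales), len(x)))
    t = np.arange(-(len(x) // 2), len(x) // 2) / fs
    for i, s in enumerate(scales):
        wav = np.exp(-(t / s) ** 2 / 2) * np.cos(5 * t / s) / np.sqrt(s)
        out[i] = np.abs(np.convolve(x, wav, mode="same"))
    return out

def pca_enhance(features, n_components):
    """Project flattened spectrogram features onto the top principal components."""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

One spectrogram per window is flattened into a feature vector; PCA then reduces those vectors before classification.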


2021 ◽  
pp. 398-410
Author(s):  
Lov Kumar ◽  
Triyasha Ghosh Dastidar ◽  
Lalitha Bhanu Murthy Neti ◽  
Shashank Mouli Satapathy ◽  
Sanjay Misra ◽  
...  

2018 ◽  
Vol 19 (2) ◽  
pp. 393-408 ◽  
Author(s):  
Yumeng Tao ◽  
Kuolin Hsu ◽  
Alexander Ihler ◽  
Xiaogang Gao ◽  
Soroosh Sorooshian

Abstract Compared to ground precipitation measurements, satellite-based precipitation estimation products have the advantage of global coverage and high spatiotemporal resolution. However, the accuracy of satellite-based precipitation products is still insufficient for many weather, climate, and hydrologic applications at high resolutions. In this paper, the authors develop a state-of-the-art deep learning framework for precipitation estimation using bispectral satellite information from the infrared (IR) and water vapor (WV) channels. Specifically, a two-stage framework for precipitation estimation from bispectral information is designed, consisting of an initial rain/no-rain (R/NR) binary classification followed by a second stage estimating the nonzero precipitation amount. In the first stage, the model aims to eliminate the large fraction of NR pixels and to delineate precipitation regions precisely. In the second stage, the model aims to estimate the pointwise precipitation amount accurately while preserving its heavily skewed distribution. Stacked denoising autoencoders (SDAEs), a commonly used deep learning method, are applied in both stages. Performance is evaluated on a number of common measures, covering both R/NR and real-valued precipitation accuracy, and compared with an operational product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Cloud Classification System (PERSIANN-CCS). For R/NR binary classification, the proposed two-stage model outperforms PERSIANN-CCS by 32.56% in the critical success index (CSI). For real-valued precipitation estimation, the two-stage model has 23.40% lower average bias, 44.52% lower average mean squared error, and a 27.21% higher correlation coefficient. Hence, the two-stage deep learning framework has the potential to serve as a more accurate and more reliable satellite-based precipitation estimation product. The authors also provide future directions for the development of satellite-based precipitation estimation products, both in incorporating auxiliary information and in improving retrieval algorithms.
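The two-stage composition (an R/NR gate, then amount regression applied only to rain pixels) can be sketched generically. The pluggable models here are stand-ins, not the paper's stacked denoising autoencoders, and the log transform is one common way to preserve a heavily skewed amount distribution, not necessarily the authors' choice; the CSI used in the comparison is hits / (hits + misses + false alarms):

```python
import numpy as np

class TwoStageEstimator:
    """Stage 1 flags rain/no-rain; stage 2 regresses the amount, but only
    for pixels flagged as rain (all other pixels are set to zero)."""
    def __init__(self, classifier, regressor):
        self.clf, self.reg = classifier, regressor

    def fit(self, X, y):
        rain = y > 0
        self.clf.fit(X, rain.astype(int))
        self.reg.fit(X[rain], np.log1p(y[rain]))  # log1p tames the skew
        return self

    def predict(self, X):
        rain = self.clf.predict(X).astype(bool)
        out = np.zeros(len(X))
        if rain.any():
            out[rain] = np.maximum(np.expm1(self.reg.predict(X[rain])), 0.0)
        return out

def csi(pred_rain, true_rain):
    """Critical success index: hits / (hits + misses + false alarms)."""
    hits = np.sum(pred_rain & true_rain)
    misses = np.sum(~pred_rain & true_rain)
    false_alarms = np.sum(pred_rain & ~true_rain)
    return hits / (hits + misses + false_alarms)
```

Any classifier/regressor pair exposing `fit`/`predict` can be plugged in, which keeps the staging logic separate from the choice of model.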


2021 ◽  
pp. 744-751
Author(s):  
Lov Kumar ◽  
Triyasha Ghosh Dastidar ◽  
Anjali Goyal ◽  
Lalita Bhanu Murthy ◽  
Sanjay Misra ◽  
...  

SLEEP ◽  
2020 ◽  
Vol 43 (11) ◽  
Author(s):  
Henri Korkalainen ◽  
Juhani Aakko ◽  
Brett Duce ◽  
Samu Kainulainen ◽  
Akseli Leino ◽  
...  

Abstract Study Objectives Accurate identification of sleep stages is essential in the diagnosis of sleep disorders (e.g. obstructive sleep apnea [OSA]) but relies on labor-intensive electroencephalogram (EEG)-based manual scoring. Furthermore, long-term assessment of sleep relies on actigraphy, which differentiates only between wake and sleep periods, does not identify specific sleep stages, and has low reliability in identifying wake periods after sleep onset. To address these issues, we aimed to develop an automatic method for identifying the sleep stages from the photoplethysmogram (PPG) signal obtained with a simple finger pulse oximeter. Methods PPG signals from the diagnostic polysomnographies of suspected OSA patients (n = 894) were utilized to develop a combined convolutional and recurrent neural network. The deep learning model was trained individually for three-stage (wake/NREM/REM), four-stage (wake/N1+N2/N3/REM), and five-stage (wake/N1/N2/N3/REM) classification of sleep. Results The three-stage model achieved an epoch-by-epoch accuracy of 80.1% with Cohen’s κ of 0.65. The four- and five-stage models achieved 68.5% (κ = 0.54) and 64.1% (κ = 0.51) accuracies, respectively. With the five-stage model, the total sleep time was underestimated with a mean bias error (SD) of 7.5 (55.2) minutes. Conclusion The PPG-based deep learning model enabled accurate estimation of sleep time and differentiation between sleep stages with a moderate agreement to manual EEG-based scoring. As PPG is already included in ambulatory polygraphic recordings, applying the PPG-based sleep staging could improve their diagnostic value by enabling simple, low-cost, and reliable monitoring of sleep and help assess otherwise overlooked conditions such as REM-related OSA.
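The epoch-by-epoch agreement statistics reported in the Results (accuracy and Cohen's κ against manual EEG-based scoring) can be computed as below; this is the standard κ formula, not code from the study:

```python
import numpy as np

def epoch_accuracy(y_true, y_pred):
    """Fraction of epochs scored identically by both methods."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def cohens_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa: observed epoch-by-epoch agreement corrected for chance."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # confusion matrix: rows true, cols predicted
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement
    pe = cm.sum(axis=1) @ cm.sum(axis=0) / n ** 2  # agreement expected by chance
    return (po - pe) / (1 - pe)
```

With class labels such as 0 = wake, 1 = NREM, 2 = REM for the three-stage model, `cohens_kappa(manual, model, 3)` reproduces the kind of κ values quoted above.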


2021 ◽  
Vol 2071 (1) ◽  
pp. 012003
Author(s):  
M A Markom ◽  
S Mohd Taha ◽  
A H Adom ◽  
A S Abdull Sukor ◽  
A S Abdul Nasir ◽  
...  

Abstract COVID19 chest X-rays have been used as a supplementary tool to support COVID19 severity diagnosis. However, researchers around the world face challenges in making these chest X-ray samples truly useful for detecting the disease. This paper presents a review of COVID19 chest X-ray classification using deep learning approaches. The study discusses the image sources and the deep learning models used, as well as their performance. Finally, the challenges and future work on COVID19 chest X-ray classification are discussed.


Author(s):  
Dmytro Tkachenko ◽  
Ihor Krush ◽  
Vitalii Mykhalko ◽  
Anatolii Petrenko

This paper reviews and analyzes applications of modern machine learning approaches to sleep apnea severity detection through localization of apnea episodes and prediction of subsequent episodes. We demonstrate that signals provided by inexpensive wearable devices can be used to solve typical sleep apnea detection tasks. We review the major publicly available datasets that can be used to train the respective deep learning models, and we analyze the usage options of these datasets. In particular, we show that deep learning can improve the accuracy of sleep apnea classification, localization, and prediction, especially with more complex models using multimodal data from several sensors.

