Parkinson's Detection Based On Combined CNN And LSTM Using Enhanced Speech Signals With Variational Mode Decomposition

Author(s):  
Mehmet Bilal ER ◽  
Esme ISIK ◽  
Ibrahim ISIK

Abstract The dysfunction of the brain cells that produce dopamine, the substance that enables brain cells to communicate with each other, results in Parkinson's disease (PD). PD can cause many motor and non-motor symptoms, such as impairments of speech and smell. One of the difficulties that Parkinson's patients may experience is a change in speech or difficulty speaking. Therefore, a correct diagnosis at an early stage is important for reducing the possible effects of the speech disorders caused by the disease. The speech signals of Parkinson's patients differ markedly from those of healthy people. In this study, a new approach for detecting PD from speech is proposed, based on pre-trained deep networks and long short-term memory (LSTM) applied to mel-spectrograms obtained from speech signals denoised with variational mode decomposition (VMD). The proposed model consists of four stages. In the first stage, noise is removed by applying VMD to the signals. In the second stage, mel-spectrograms are extracted from the VMD-enhanced speech signals. In the third stage, pre-trained deep networks (ResNet-18, ResNet-50 and ResNet-101) are used to extract deep features from the mel-spectrograms. In the last stage, classification is performed by feeding these features to an LSTM model designed to capture sequential information in the extracted features. Experiments are performed on the PC-GITA dataset, which consists of two classes and is widely used in the literature. When the results of the proposed method are compared with the latest methods in the literature, the proposed method achieves better classification performance.
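A minimal sketch of such a four-stage pipeline is given below, assuming the vmdpy, librosa, PyTorch and torchvision packages; the mode count, the denoising rule (dropping the highest-frequency mode), the choice of ResNet-18 and the sequence arrangement fed to the LSTM are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch only: VMD denoising -> mel-spectrogram -> pretrained
# ResNet-18 features -> LSTM classifier. Hyperparameters are assumptions.
import numpy as np
import librosa
import torch
import torch.nn as nn
from torchvision import models            # torchvision >= 0.13 for the weights enum
from vmdpy import VMD                     # pip install vmdpy

def vmd_denoise(signal, K=5, alpha=2000, tau=0.0, tol=1e-7):
    """Decompose the signal into K band-limited modes and reconstruct it
    without the highest-frequency mode (a crude denoising heuristic)."""
    u, _, _ = VMD(signal, alpha, tau, K, DC=0, init=1, tol=tol)
    return u[:-1].sum(axis=0)

def mel_input(signal, sr=16000, n_mels=128):
    """Log-mel spectrogram replicated to 3 channels for the ImageNet-pretrained CNN
    (ImageNet normalization omitted for brevity)."""
    S = librosa.power_to_db(
        librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=n_mels), ref=np.max)
    x = torch.tensor(S, dtype=torch.float32).unsqueeze(0)   # (1, n_mels, T)
    return x.repeat(3, 1, 1).unsqueeze(0)                   # (1, 3, n_mels, T)

class ResNetLSTM(nn.Module):
    """Pretrained ResNet-18 as a feature extractor, LSTM over the time axis on top."""
    def __init__(self, hidden=128, n_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep conv maps
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                   # x: (B, 3, n_mels, T)
        f = self.features(x)                # (B, 512, H, W)
        f = f.mean(dim=2).permute(0, 2, 1)  # (B, W, 512): sequence over time frames
        _, (h, _) = self.lstm(f)
        return self.fc(h[-1])               # (B, n_classes): PD vs. healthy

# Usage with a dummy 3-second recording:
wav = np.random.randn(3 * 16000)
logits = ResNetLSTM()(mel_input(vmd_denoise(wav)))
```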

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Xiaomei Sun ◽  
Haiou Zhang ◽  
Jian Wang ◽  
Chendi Shi ◽  
Dongwen Hua ◽  
...  

Abstract Reliable and accurate streamflow forecasting plays a vital role in the optimal management of water resources. To improve the stability and accuracy of streamflow forecasting, a hybrid decomposition-ensemble model named VMD-LSTM-GBRT, which is sensitive to sampling, noise and long historical changes of streamflow, was established. The variational mode decomposition (VMD) algorithm was first applied to extract features, which were then learned by several long short-term memory (LSTM) networks. Simultaneously, an ensemble tree, a gradient boosting tree for regression (GBRT), was trained to model the relationships between the extracted features and the original streamflow. The outputs of these LSTMs were finally reconstructed by the GBRT model to obtain the streamflow forecasts. A historical daily streamflow series (from 1/1/1997 to 31/12/2014) for Yangxian station, Han River, China, was investigated with the proposed model. VMD-LSTM-GBRT was compared with alternatives in three respects: (1) the feature extraction algorithm, for which ensemble empirical mode decomposition (EEMD) was used; (2) the feature learning technique, for which deep neural networks (DNNs) and support vector machines for regression (SVRs) were exploited; and (3) the ensemble strategy, for which the summation strategy was used. The results indicate that the VMD-LSTM-GBRT model outperforms all other peer models in terms of the root mean square error (RMSE = 36.3692), determination coefficient (R2 = 0.9890), mean absolute error (MAE = 9.5246) and peak percentage threshold statistics (PPTS(5) = 0.0391%). The proposed approach, based on the memory of long historical changes and deep feature representations, had good stability and high prediction precision.
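The decomposition-ensemble structure can be sketched as follows, assuming vmdpy, PyTorch and scikit-learn; the window length, mode count, input file name and in-sample fitting are placeholders rather than the paper's configuration.

```python
# Illustrative sketch: VMD splits the streamflow series into modes, one small LSTM
# learns each mode, and a GBRT maps the per-mode LSTM outputs back to streamflow.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingRegressor
from vmdpy import VMD

LAG = 7  # days of history fed to each LSTM (assumed window length)

def windows(series, lag=LAG):
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

class ModeLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (B, LAG)
        _, (h, _) = self.lstm(x.unsqueeze(-1))  # each lag is one time step
        return self.out(h[-1]).squeeze(-1)

def fit_mode_lstm(mode, epochs=50):
    X, y = windows(mode)
    X = torch.tensor(X, dtype=torch.float32)
    y = torch.tensor(y, dtype=torch.float32)
    model = ModeLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()
    return model(X).detach().numpy()            # in-sample outputs, for brevity

streamflow = np.loadtxt("yangxian_daily_flow.txt")   # hypothetical input file
streamflow = streamflow[:len(streamflow) // 2 * 2]   # vmdpy trims odd-length inputs
modes, _, _ = VMD(streamflow, 2000, 0.0, 8, 0, 1, 1e-7)   # K = 8 modes (assumed)

# Stages 1-2: learn each VMD mode with its own LSTM.
mode_outputs = np.column_stack([fit_mode_lstm(m) for m in modes])

# Stage 3: GBRT reconstructs the original streamflow from the per-mode LSTM outputs.
gbrt = GradientBoostingRegressor().fit(mode_outputs, streamflow[LAG:])
forecast = gbrt.predict(mode_outputs)
```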


2021 ◽  
Vol 19 (2) ◽  
pp. 1633-1648
Author(s):  
Xin Jing ◽  
Jungang Luo ◽  
Shangyao Zhang ◽  
Na Wei

Abstract Accurate runoff forecasting plays a vital role in water resource management. Therefore, various forecasting models have been proposed in the literature. Among them, decomposition-based models have proved their superiority in runoff series forecasting. However, most models simulate each decomposed sub-signal separately without considering the potential correlation information. A new hybrid runoff forecasting model based on variational mode decomposition (VMD), convolutional neural networks (CNN) and long short-term memory (LSTM), called VMD-CNN-LSTM, is proposed to further improve runoff forecasting performance. A two-dimensional matrix containing both the time-delay and correlation information among the sub-signals decomposed by VMD is first fed to the CNN. The features of the input matrix are then extracted by the CNN and delivered to the LSTM with more potential information. Experiments performed on monthly runoff data from the Huaxian and Xianyang hydrological stations on the Wei River, China, demonstrate the superiority of VMD-CNN-LSTM over the baseline models, as well as the robustness and stability of the VMD-CNN-LSTM forecasts for different lead times.
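A minimal sketch of how such a (modes x lags) input matrix might feed a CNN whose feature maps are then passed to an LSTM is shown below, assuming vmdpy and PyTorch; the mode count, lag length, network widths and the input file name are illustrative assumptions.

```python
# Illustrative sketch: each sample is a (K modes x LAG lags) matrix built from the
# VMD sub-signals; a small CNN extracts cross-mode features, an LSTM forecasts runoff.
import numpy as np
import torch
import torch.nn as nn
from vmdpy import VMD

K, LAG = 6, 12   # number of VMD modes and months of history (assumed values)

class VMDCNNLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32 * K, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (B, 1, K, LAG)
        f = self.cnn(x)                         # (B, 32, K, LAG): cross-mode features
        f = f.permute(0, 3, 1, 2).flatten(2)    # (B, LAG, 32*K): one vector per lag step
        _, (h, _) = self.lstm(f)
        return self.fc(h[-1]).squeeze(-1)       # next-step runoff

runoff = np.loadtxt("huaxian_monthly_runoff.txt")    # hypothetical input file
runoff = runoff[:len(runoff) // 2 * 2]               # vmdpy trims odd-length inputs
modes, _, _ = VMD(runoff, 2000, 0.0, K, 0, 1, 1e-7)  # modes: (K, N)

X = np.stack([modes[:, i:i + LAG] for i in range(modes.shape[1] - LAG)])  # (N-LAG, K, LAG)
X = torch.tensor(X, dtype=torch.float32).unsqueeze(1)                     # add channel dim
y = torch.tensor(runoff[LAG:], dtype=torch.float32)

model = VMDCNNLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                                 # toy in-sample training loop
    opt.zero_grad()
    nn.functional.mse_loss(model(X), y).backward()
    opt.step()
```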


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1704 ◽  
Author(s):  
Alghannai Aghnaiya ◽  
Yaser Dalveren ◽  
Ali Kara

Radio frequency fingerprinting (RFF) is a communication network security technique based on the identification of unique features of RF transient signals. However, extracting these features can be burdensome due to the nonstationary nature of transient signals, which may adversely affect the accuracy of device identification. Recently, it has been shown that using variational mode decomposition (VMD) to extract features from Bluetooth (BT) transient signals offers an efficient way to improve classification accuracy. To do this, VMD has been used to decompose transient signals into a series of band-limited modes, and higher-order statistical (HOS) features are extracted from the reconstructed transient signals. In this study, the performance bounds of VMD in RFF implementation are scrutinized. Firstly, HOS features are extracted from the band-limited modes, and then from the reconstructed transient signals directly. A performance comparison of the two HOS feature sets is presented. Moreover, the lower SNR bound within which VMD can achieve acceptable accuracy in the classification of BT devices is determined. The approach has been tested experimentally with BT devices by employing a Linear Support Vector Machine (LSVM) classifier. According to the classification results, classification performance is ~4% higher at low SNR levels (−5 to 5 dB) when HOS features are extracted from the band-limited modes in the implementation of VMD-based RFF of BT devices.
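A rough sketch of the band-limited-modes variant of this chain (VMD, per-mode HOS features, linear SVM) might look like the following, assuming vmdpy, SciPy and scikit-learn; the mode count, the specific statistics (variance, skewness, kurtosis) and the helper names are assumptions rather than the paper's exact feature set.

```python
# Illustrative sketch: decompose each BT transient with VMD, compute higher-order
# statistics per band-limited mode, and classify devices with a linear SVM.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from vmdpy import VMD

K = 4  # number of band-limited modes per transient (assumed)

def hos_features(transient, K=K, alpha=2000):
    """Variance, skewness and kurtosis of each VMD mode of one transient."""
    u, _, _ = VMD(transient, alpha, 0.0, K, 0, 1, 1e-7)
    feats = []
    for mode in u:
        feats.extend([np.var(mode), skew(mode), kurtosis(mode)])
    return np.array(feats)                      # length 3 * K

def train_classifier(transients, device_labels):
    """transients: list of 1-D arrays (captured BT transients); labels: device IDs."""
    X = np.stack([hos_features(t) for t in transients])
    clf = make_pipeline(StandardScaler(), LinearSVC())
    return clf.fit(X, device_labels)
```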

