Depression Diagnosis Modeling With Advanced Computational Methods: Frequency-Domain eMVAR and Deep Learning

2021 ◽  
pp. 155005942110185
Author(s):  
Caglar Uyulan ◽  
Sara de la Salle ◽  
Turker T. Erguzel ◽  
Emma Lynn ◽  
Pierre Blier ◽  
...  

Electroencephalogram (EEG)-based automated depression diagnosis systems have been suggested for early and accurate detection of mood disorders. EEG signals are highly irregular, nonlinear, and nonstationary in nature, and have traditionally been studied from a linear viewpoint by means of statistical and frequency features. Linear metrics, however, present certain limitations, whereas nonlinear methods have proven to be an efficient tool for understanding the complexities of the brain and the underlying behavior of biological signals such as the electrocardiogram, EEG, and magnetoencephalogram, and can therefore be applied to all nonstationary signals. Various nonlinear algorithms can be used in the analysis of EEG signals. In this research paper, we aim to develop a novel methodology for EEG-based depression diagnosis utilizing two advanced computational techniques: frequency-domain extended multivariate autoregressive (eMVAR) modeling and deep learning (DL). We propose a hybrid method comprising a pretrained ResNet-50 and a long short-term memory (LSTM) network to capture depression-specific information, and compare it with a strong conventional machine learning (ML) framework based on eMVAR connectivity features. The following eight causality measures, which interpret the interaction mechanisms among spectrally decomposed oscillations, were used to extract features from the multivariate EEG time series: directed coherence (DC), directed transfer function (DTF), partial DC (PDC), generalized PDC (gPDC), extended DC (eDC), delayed DC (dDC), extended PDC (ePDC), and delayed PDC (dPDC). The classification accuracies of the eMVAR framework were 84% with DC, 85% with DTF, 95.3% with PDC, 95.1% with gPDC, 84.8% with eDC, 84.6% with dDC, 84.2% with ePDC, and 95.9% with dPDC. The DL framework (ResNet-50 + LSTM) achieved a classification accuracy of 90.22%. 
The results demonstrate that our DL methodology is a competitive alternative to the strong feature extraction-based ML methods in depression classification.
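To make the connectivity features concrete, the sketch below computes partial directed coherence (PDC), one of the eight eMVAR causality measures listed above, from the coefficients of a multivariate autoregressive model. The two-channel MVAR(1) system is a toy example, not fitted from the paper's EEG data.

```python
import numpy as np

def pdc(A, freqs, fs):
    """Partial directed coherence.
    A: (p, n, n) array of MVAR coefficient matrices A_1..A_p.
    Returns PDC of shape (len(freqs), n, n), entry [k, i, j] = influence j -> i.
    """
    p, n, _ = A.shape
    out = np.empty((len(freqs), n, n))
    for k, f in enumerate(freqs):
        # A(f) = I - sum_r A_r * exp(-2*pi*i*f*r / fs)
        Af = np.eye(n, dtype=complex)
        for r in range(p):
            Af -= A[r] * np.exp(-2j * np.pi * f * (r + 1) / fs)
        # PDC_ij(f) = |Af_ij| / sqrt(sum_m |Af_mj|^2)  (column-normalised)
        denom = np.sqrt((np.abs(Af) ** 2).sum(axis=0))
        out[k] = np.abs(Af) / denom
    return out

# Toy 2-channel MVAR(1) system in which channel 0 drives channel 1.
A = np.array([[[0.5, 0.0],
               [0.4, 0.3]]])
P = pdc(A, freqs=np.linspace(0, 50, 26), fs=100)
```

By construction each column of the PDC matrix has unit squared norm, so values lie in [0, 1]; here the 0→1 coupling shows up as nonzero PDC at every frequency while the absent 1→0 coupling stays at zero.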

2019 ◽  
Vol 9 (11) ◽  
pp. 326 ◽  
Author(s):  
Hong Zeng ◽  
Zhenhua Wu ◽  
Jiaming Zhang ◽  
Chen Yang ◽  
Hua Zhang ◽  
...  

Deep learning (DL) methods have been used increasingly widely, for example in speech and image recognition. However, designing an appropriate DL model to accurately and efficiently classify electroencephalogram (EEG) signals remains a challenge, mainly because EEG signals differ significantly between subjects, vary over time within a single subject, and are nonstationary and highly random with a low signal-to-noise ratio. SincNet is an efficient classifier for speaker recognition, but it has some drawbacks when applied to EEG signal classification. In this paper, we improve SincNet and propose a SincNet-based classifier, SincNet-R, which consists of three convolutional layers and three deep neural network (DNN) layers. We then use SincNet-R to test classification accuracy and robustness on emotional EEG signals. Comparisons with the original SincNet model and other traditional classifiers such as CNN, LSTM, and SVM show that our proposed SincNet-R model achieves higher classification accuracy and better algorithmic robustness.
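The distinctive element SincNet-R inherits from SincNet is a first convolutional layer whose kernels are parametrised band-pass filters: each kernel is the difference of two windowed sinc low-pass filters whose cutoff frequencies are the learnable parameters. A minimal numpy sketch of one such kernel, with illustrative (not the paper's) cutoffs:

```python
import numpy as np

def sinc_bandpass(f1, f2, length=129):
    """Band-pass FIR kernel with normalised cutoffs f1 < f2 (cycles/sample)."""
    n = np.arange(length) - (length - 1) / 2
    # np.sinc is the normalised sinc: sin(pi*x) / (pi*x)
    lp2 = 2 * f2 * np.sinc(2 * f2 * n)    # low-pass at f2
    lp1 = 2 * f1 * np.sinc(2 * f1 * n)    # low-pass at f1
    g = (lp2 - lp1) * np.hamming(length)  # windowed band-pass = difference of sincs
    return g / np.abs(g).max()

# Example: an 8-12 Hz (alpha-band) kernel at a 128 Hz sampling rate.
g = sinc_bandpass(8 / 128, 12 / 128)
H = np.abs(np.fft.rfft(g, 1024))          # magnitude response
```

During training only f1 and f2 would be updated, which is why this layer needs far fewer parameters than a free-form convolution of the same length; the kernel is symmetric and its frequency response peaks inside the chosen band.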


Author(s):  
Shaoqiang Wang ◽  
Shudong Wang ◽  
Song Zhang ◽  
Yifan Wang

Abstract: We aim to detect dynamic EEG signals automatically in order to reduce the time cost of epilepsy diagnosis. In the recognition of epileptic electroencephalogram (EEG) signals, traditional machine learning and statistical methods require manual feature engineering to perform well on a single data set, and the manually selected features may carry a bias and cannot guarantee validity and generalizability on real-world data. In practical applications, deep learning methods can, to a certain extent, free people from feature engineering: as long as the focus is on expanding data quality and quantity, the model can learn automatically and keep improving. In addition, deep learning can extract many features that are difficult for humans to perceive, making the algorithm more robust. Based on the design ideas of the ResNeXt deep neural network, this paper designs a Time-ResNeXt network structure suitable for time-series EEG epilepsy detection. Time-ResNeXt reaches an accuracy of 91.50% in EEG epilepsy detection, produces highly competitive performance on the benchmark Bern-Barcelona dataset, and has great potential for improving clinical practice.
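The ResNeXt idea the abstract builds on is to split channels into groups ("cardinality"), convolve each group independently, and add the input back through a residual shortcut. The toy numpy forward pass below illustrates that structure for 1-D time-series input; weights are random and the layer sizes are illustrative, not Time-ResNeXt's actual configuration.

```python
import numpy as np

def grouped_conv1d(x, w):
    """x: (channels, time); w: (groups, out_per_group, in_per_group, k).
    'same' padding, stride 1; each group sees only its own channel slice."""
    groups, cpg, _, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros_like(x)
    for g in range(groups):
        sl = slice(g * cpg, (g + 1) * cpg)
        for t in range(x.shape[1]):
            patch = xp[sl, t:t + k]                 # (in_per_group, k)
            out[sl, t] = np.einsum('oik,ik->o', w[g], patch)
    return out

def resnext_block(x, w):
    # grouped convolution plus identity shortcut, then ReLU
    return np.maximum(grouped_conv1d(x, w) + x, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))                    # 8 channels, 64 time samples
w = rng.standard_normal((4, 2, 2, 3)) * 0.1         # cardinality 4, kernel size 3
y = resnext_block(x, w)
```

The shortcut means the block computes ReLU(x + F(x)): with all-zero weights it degenerates to ReLU(x), which is what makes very deep stacks of such blocks trainable.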


2018 ◽  
Vol 99 ◽  
pp. 24-37 ◽  
Author(s):  
Kostas M. Tsiouris ◽  
Vasileios C. Pezoulas ◽  
Michalis Zervakis ◽  
Spiros Konitsiotis ◽  
Dimitrios D. Koutsouris ◽  
...  

Author(s):  
Kuldeep Singh ◽  
Sukhjeet Singh ◽  
Jyoteesh Malhotra

Schizophrenia is a severe mental disorder that affects millions of people globally by disturbing their thinking, feeling, and behaviour. In the age of the Internet of Things, assisted by cloud computing and machine learning techniques, computer-aided diagnosis of schizophrenia is essential to give patients an opportunity for a better quality of life. In this context, the present paper proposes a spectral-features-based convolutional neural network (CNN) model for accurate identification of schizophrenic patients using real-time spectral analysis of multichannel EEG signals. The model filters and segments the acquired EEG signals and converts them into the frequency domain. Each frequency-domain segment is then divided into six distinct spectral bands: delta, theta-1, theta-2, alpha, beta, and gamma. From each band, spectral features including mean spectral amplitude, spectral power, and the Hjorth descriptors (Activity, Mobility, and Complexity) are extracted. These features are fed independently to the proposed spectral-features-based CNN and to a long short-term memory (LSTM) model for classification. This work also uses raw time-domain and frequency-domain EEG segments for classification with temporal CNN and spectral CNN models of the same architecture, respectively. The overall analysis of the simulation results shows that the proposed spectral-features-based CNN model is an efficient technique for accurate and prompt identification of schizophrenic patients among healthy individuals, with average classification accuracies of 94.08% and 98.56% on two different datasets and an optimally small classification time.
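The hand-crafted features named above are all short computations on a band-limited segment. The sketch below extracts the three Hjorth descriptors plus mean spectral amplitude and band power for one segment; the 10 Hz test tone is a toy signal, not data from the study.

```python
import numpy as np

def hjorth(x):
    """Hjorth Activity, Mobility, Complexity of a 1-D segment."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)                                  # signal power
    mobility = np.sqrt(np.var(dx) / np.var(x))            # mean frequency proxy
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def band_features(x, fs, lo, hi):
    """Mean spectral amplitude and spectral power inside [lo, hi) Hz."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return spec[band].mean(), (spec[band] ** 2).sum()

fs = 250
t = np.arange(fs * 2) / fs                # 2 s segment
x = np.sin(2 * np.pi * 10 * t)            # pure 10 Hz ("alpha") tone
act, mob, comp = hjorth(x)
alpha_amp, alpha_pow = band_features(x, fs, 8, 13)
delta_amp, delta_pow = band_features(x, fs, 0.5, 4)
```

For a pure sinusoid the descriptors behave predictably: Activity equals the tone's variance (0.5), Complexity is ~1, and virtually all spectral power falls in the alpha band, which is what makes these features discriminative when band power distributions differ between groups.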


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2854 ◽  
Author(s):  
Kwon-Woo Ha ◽  
Jin-Woo Jeong

Various convolutional neural network (CNN)-based approaches have recently been proposed to improve the performance of motor imagery-based brain-computer interfaces (BCIs). However, the classification accuracy of CNNs is compromised when target data are distorted. Specifically for motor imagery electroencephalogram (EEG), the measured signals, even from the same person, are not consistent and can be significantly distorted. To overcome these limitations, we propose applying a capsule network (CapsNet) to learn various properties of EEG signals, thereby achieving better and more robust performance than previous CNN methods. The proposed CapsNet-based framework classifies two-class motor imagery, namely right-hand and left-hand movements. The motor imagery EEG signals are first transformed into 2D images using the short-time Fourier transform (STFT) and then used for training and testing the capsule network. The performance of the proposed framework was evaluated on the BCI competition IV 2b dataset, on which it outperformed state-of-the-art CNN-based methods and various conventional machine learning approaches. The experimental results demonstrate the feasibility of the proposed approach for classification of motor imagery EEG signals.
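The preprocessing step described above, turning a 1-D EEG trace into a 2-D time-frequency image, can be sketched with a windowed short-time Fourier transform. Window and hop sizes here are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def stft_image(x, fs, win=64, hop=16):
    """Magnitude spectrogram of a 1-D signal: (freq_bins, time_frames)."""
    window = np.hanning(win)
    frames = [x[s:s + win] * window
              for s in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

fs = 128
t = np.arange(fs * 3) / fs                 # 3 s trial
x = np.sin(2 * np.pi * 12 * t)             # 12 Hz test tone in the mu/beta range
img = stft_image(x, fs)                    # 2-D "image" for the CapsNet/CNN
freqs = np.fft.rfftfreq(64, 1 / fs)        # frequency axis of the image rows
```

Each column of `img` is one time frame, each row one frequency bin (2 Hz resolution here); a sustained 12 Hz oscillation shows up as a bright horizontal band, which is the spatial structure a 2-D network can then exploit.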


2021 ◽  
Vol 15 ◽  
Author(s):  
Alexander Malafeev ◽  
Anneke Hertig-Godeschalk ◽  
David R. Schreier ◽  
Jelena Skorucak ◽  
Johannes Mathis ◽  
...  

Brief fragments of sleep shorter than 15 s are defined as microsleep episodes (MSEs), often subjectively perceived as sleepiness. Their main characteristic is a slowing of frequency in the electroencephalogram (EEG), similar to stage N1 sleep according to standard criteria. The maintenance of wakefulness test (MWT) is often used in a clinical setting to assess vigilance. In most sleep-wake centers, scoring of the MWT is limited to the classical definition of sleep (30 s epochs); MSEs are mostly not considered, both because established scoring criteria for MSEs are lacking and because scoring them is laborious. We aimed to detect MSEs automatically with machine learning, i.e., with deep learning based on raw EEG and EOG data as input. We analyzed MWT data of 76 patients. Experts visually scored wakefulness and, according to recently developed scoring criteria, MSEs, microsleep episode candidates (MSEc), and episodes of drowsiness (ED). We implemented segmentation algorithms based on convolutional neural networks (CNNs) and on a combination of a CNN with a long short-term memory (LSTM) network; an LSTM is a type of recurrent neural network that keeps a memory of past events and takes them into account. Data of 53 patients were used for training the classifiers, 12 for validation, and 11 for testing. Our algorithms performed close to human experts. Detection was very good for wakefulness and MSEs and poor for MSEc and ED, mirroring the low inter-expert reliability for these borderline segments. We visualized the internal representation of the data in the best-performing network using t-distributed stochastic neighbor embedding (t-SNE): MSEs and wakefulness were mostly, though not entirely, separable, while MSEc and ED largely intersected with the two main classes. 
We provide a proof of principle that MSEs can be reliably detected with deep neural networks from raw EEG and EOG data with a performance close to that of human experts. The code of the algorithms (https://github.com/alexander-malafeev/microsleep-detection) and the data (https://zenodo.org/record/3251716) are available.
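Feeding raw EEG and EOG to a segmentation network implies cutting the continuous recording into short overlapping windows so that every point in time receives a label (wake / MSE / MSEc / ED). A minimal epoching sketch; the window length and step are illustrative, not the authors' settings.

```python
import numpy as np

def epoch(signals, fs, win_s=1.0, step_s=0.2):
    """signals: (channels, samples) -> (n_windows, channels, win_samples).
    Overlapping sliding windows over a continuous multichannel recording."""
    win = int(win_s * fs)
    step = int(step_s * fs)
    starts = range(0, signals.shape[1] - win + 1, step)
    return np.stack([signals[:, s:s + win] for s in starts])

fs = 200
# Toy stand-in for 10 s of raw data: 2 EEG channels + 1 EOG channel.
raw = np.random.default_rng(1).standard_normal((3, fs * 10))
windows = epoch(raw, fs)                  # one network input per window
```

With a 1 s window and 0.2 s step, each second of recording yields five overlapping windows, so the network's per-window predictions give a label time series at 5 Hz resolution, fine enough to catch episodes shorter than 15 s.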


Author(s):  
Muhammad Fawaz Saputra ◽  
Noor Akhmad Setiawan ◽  
Igi Ardiyanto

EEG signals are recorded from the user's brain with an EEG device and can be generated by the user performing motor movements or imagery tasks. Motor imagery (MI) is the task of imagining motor movements that resemble the original movements. A brain-computer interface (BCI) bridges the interaction between users and applications in performing such tasks. The BCI Competition IV 2a dataset was used in this study. A fully automated correction method for EOG artifacts in EEG recordings was applied to remove artifacts, and Common Spatial Patterns (CSP) was used to extract features that can distinguish motor imagery tasks. A comparative study of two deep learning methods, Deep Belief Networks (DBN) and Long Short-Term Memory (LSTM), was then carried out, with both methods evaluated on the BCI Competition IV 2a dataset. The experimental results show average accuracies of 50.35% for DBN and 49.65% for LSTM.
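The CSP step mentioned above finds spatial filters that maximise the variance of one motor imagery class while minimising the other. The sketch below uses the whitening formulation (to stay in plain numpy, avoiding a generalised eigensolver) on toy two-class data; it is not the paper's pipeline, just the standard technique.

```python
import numpy as np

def csp(trials_a, trials_b):
    """Common Spatial Patterns. trials_*: (n_trials, channels, samples).
    Returns filter matrix W (channels x channels), rows ordered from most
    class-a-discriminative to most class-b-discriminative."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # symmetric whitening of the composite covariance Ca + Cb
    d, U = np.linalg.eigh(Ca + Cb)
    P = U @ np.diag(d ** -0.5) @ U.T
    # eigenvectors of the whitened class-a covariance give the filters
    _, V = np.linalg.eigh(P @ Ca @ P.T)
    return (V.T @ P)[::-1]                # largest class-a eigenvalue first

rng = np.random.default_rng(2)
# Toy data: class a is strong on channel 0, class b on channel 1.
trials_a = rng.standard_normal((20, 4, 256)) * np.array([3, 1, 1, 1])[None, :, None]
trials_b = rng.standard_normal((20, 4, 256)) * np.array([1, 3, 1, 1])[None, :, None]
W = csp(trials_a, trials_b)
```

In a full pipeline the log-variances of the first and last few filtered signals would serve as the feature vector handed to the DBN or LSTM classifier.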


2019 ◽  
Vol 19 (01) ◽  
pp. 1940005 ◽  
Author(s):  
Ulas Baran Baloglu ◽  
Özal Yildirim

Background and objective: Deep learning structures have recently achieved remarkable success in the field of machine learning. Convolutional neural networks (CNNs) in image processing and long short-term memory (LSTM) networks in time-series analysis are commonly used deep learning algorithms. Healthcare applications of deep learning algorithms provide important contributions to computer-aided diagnosis research. In this study, a convolutional long short-term memory (CLSTM) network was used for automatic classification of EEG signals and automatic seizure detection. Methods: A new nine-layer deep network model consisting of convolutional and LSTM layers was designed. The signals processed in the convolutional layers were given as input to the LSTM network, whose outputs were processed in densely connected neural network layers. The EEG data are appropriate for a model with 1-D convolution layers. A bidirectional model was employed in the LSTM layer. Results: The Bonn University EEG database with five different datasets was used for the experimental studies. In this database, each dataset contains 100 single-channel EEG segments of 23.6 s duration, each consisting of 4097 samples (173.61 Hz). Eight two-class and three three-class clinical scenarios were examined. The experimental results show that the proposed model achieves high accuracy on both binary and ternary classification tasks. Conclusions: The proposed end-to-end learning structure performed well without any hand-crafted feature extraction or shallow classifiers for seizure detection. The model does not require filtering and automatically learns to filter the input itself. As a result, the proposed model can process long-duration EEG signals without segmentation and can detect epileptic seizures automatically by exploiting the correlation of ictal and interictal signals in the raw data.
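The CLSTM idea, convolutional features feeding a (bidirectional) LSTM, can be sketched as a minimal numpy forward pass. Weights are random and layer sizes are illustrative; this is the data flow, not the paper's nine-layer configuration.

```python
import numpy as np

def conv1d(x, w):
    """x: (time, in_ch); w: (k, in_ch, out_ch). 'valid' 1-D convolution."""
    k = w.shape[0]
    return np.stack([np.einsum('ki,kio->o', x[t:t + k], w)
                     for t in range(x.shape[0] - k + 1)])

def lstm(x, Wx, Wh, b):
    """Single-layer LSTM forward pass, gate order (i, f, g, o).
    Returns hidden states of shape (time, hidden)."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1 / (1 + np.exp(-z))
    out = []
    for xt in x:
        z = xt @ Wx + h @ Wh + b
        i, f = sig(z[:H]), sig(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sig(z[3 * H:])
        c = f * c + i * g                  # cell state update
        h = o * np.tanh(c)                 # hidden state
        out.append(h)
    return np.array(out)

rng = np.random.default_rng(3)
x = rng.standard_normal((173, 1))          # ~1 s of single-channel EEG at ~173 Hz
w = rng.standard_normal((7, 1, 8)) * 0.3   # conv layer: kernel 7, 8 filters
feats = conv1d(x, w)                       # (167, 8) feature sequence
H = 16
Wx = rng.standard_normal((8, 4 * H)) * 0.2
Wh = rng.standard_normal((H, 4 * H)) * 0.2
b = np.zeros(4 * H)
fwd = lstm(feats, Wx, Wh, b)               # forward direction
bwd = lstm(feats[::-1], Wx, Wh, b)[::-1]   # backward direction (shared toy weights)
bi = np.concatenate([fwd, bwd], axis=1)    # bidirectional features per time step
```

Because the hidden state is an output gate times a tanh of the cell state, every entry of `bi` is strictly inside (-1, 1); the dense classification layers of the full model would sit on top of these per-step features.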


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7103
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions, and recent progress in deep learning-based classification models has improved the accuracy of emotion recognition from EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to show that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), are an obstacle when explaining the images to patients or communicating about them with non-professionals. We claim that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical images. To demonstrate the difference, we produce several illustrated surgical images from photographs and measure the emotional reactions they engender using EEG biosignals; a deep learning-based emotion recognition model extracts the emotional reactions. The experiment shows that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of the identical images. We further conduct a self-assessed user survey to show that the emotions recognized from EEG signals effectively represent user-annotated emotions.

