Deep Learning Classification for Epilepsy Detection Using a Single Channel Electroencephalography (EEG)

Author(s): Jianguo Liu, Blake Woodson
2021, Vol 11 (4), pp. 456
Author(s): Wenpeng Neng, Jun Lu, Lei Xu

In the inference process of existing deep learning models, the input data are usually processed level-wise, with a corresponding relational inductive bias imposed at each level. This relational inductive bias determines the theoretical upper limit on the performance of a deep learning method. In the field of sleep stage classification, mainstream deep learning methods adopt only a single relational inductive bias at each level, which leaves feature extraction incomplete and limits performance. To address this problem, a novel deep learning model based on hybrid relational inductive biases, called CCRRSleepNet, is proposed in this paper. The model divides single-channel electroencephalogram (EEG) data into three levels: frame, epoch, and sequence, and applies hybrid relational inductive biases across all three levels. Meanwhile, a multiscale atrous convolution block (MSACB) is adopted in CCRRSleepNet to learn features of different attributes. In practice, however, the actual performance of a deep learning model also depends on its nonrelational inductive biases, so a variety of matching nonrelational inductive biases are adopted in this paper to optimize CCRRSleepNet. CCRRSleepNet is tested on the Fpz-Cz and Pz-Oz channel data of the Sleep-EDF dataset. The experimental results show that the proposed method is superior to many existing methods.
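The atrous (dilated) convolutions underlying a multiscale block like the MSACB can be sketched in a few lines of NumPy. The signal, 3-tap kernel, and dilation rates 1/2/4 below are invented for illustration; the paper's actual block is more elaborate, but the core idea is the same kernel applied at increasing dilation rates to capture features at different time scales.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid 1-D convolution of signal x with kernel w at the given dilation rate."""
    k = len(w)
    span = (k - 1) * dilation + 1                  # receptive field of the dilated kernel
    out_len = len(x) - span + 1
    # gather input taps spaced `dilation` apart for every output position
    idx = np.arange(out_len)[:, None] + np.arange(k)[None, :] * dilation
    return x[idx] @ w

# one EEG-like frame filtered by the same 3-tap kernel at three dilation rates,
# mimicking the parallel branches of a multiscale atrous block
x = np.sin(np.linspace(0, 8 * np.pi, 64))
w = np.array([0.25, 0.5, 0.25])
branches = [dilated_conv1d(x, w, d) for d in (1, 2, 4)]
print([len(b) for b in branches])   # [62, 60, 56]
```

Each branch sees a wider temporal context than the last without adding parameters; a real block would concatenate the branch outputs (after padding to a common length) before the next layer.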


Author(s): Yantao Chen, Binhong Dong, Xiaoxue Zhang, Pengyu Gao, Shaoqian Li

Author(s): Asma Salamatian, Ali Khadem

Purpose: Sleep is one of the necessities of the body, like eating and drinking, and it affects many aspects of human life. Sleep monitoring and sleep stage classification play an important role in the diagnosis of sleep-related diseases and neurological disorders. Done manually, classification of sleep stages is a time-consuming, tedious, and complex task that depends heavily on the experience of experts. As a result, there is a crucial need for an efficient automatic sleep staging system. Materials and Methods: This study develops a 13-layer 1D Convolutional Neural Network (CNN) that uses a single-channel Electroencephalogram (EEG) signal to extract features automatically and classify the sleep stages. To overcome the negative effect of an imbalanced dataset, we used the Synthetic Minority Oversampling Technique (SMOTE). In our study, the single-channel EEG signal is given directly to the 1D CNN, without any separate feature extraction/selection processes; the deep network self-learns the discriminative features from the EEG signal. Results: Applying the proposed method to the Sleep-EDF dataset resulted in an overall accuracy, sensitivity, specificity, and precision of 94.09%, 74.73%, 96.43%, and 71.02%, respectively, for classifying five sleep stages. Using a single-channel EEG and providing a network with fewer trainable parameters than most of the available deep learning-based methods are the main advantages of the proposed method. Conclusion: In this study, a 13-layer 1D CNN model was proposed for sleep stage classification. The model has a complete end-to-end architecture and does not require any separate feature extraction/selection and classification stages. Having a low number of network parameters and layers while still achieving high classification accuracy is the main advantage of the proposed method over most previous deep learning-based approaches.
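The SMOTE step can be illustrated with a minimal NumPy sketch: each synthetic minority sample is an interpolation between a minority point and one of its k nearest minority neighbours. The toy 2-D points and parameters below are invented for the example; a real pipeline would typically use a library implementation such as imbalanced-learn's `SMOTE`.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize minority samples by interpolating
    between a minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                     # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]               # indices of k nearest neighbours
    base = rng.integers(0, n, size=n_new)           # random anchor points
    nbr = nn[base, rng.integers(0, nn.shape[1], size=n_new)]
    gap = rng.random((n_new, 1))                    # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nbr] - X_min[base])

# toy 2-D minority class of six points
X_min = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [2., 1.], [1., 2.]])
X_syn = smote_oversample(X_min, n_new=10, k=3, rng=0)
print(X_syn.shape)    # (10, 2)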


2021 ◽  
Author(s):  
Joseph Caffarini ◽  
Klevest Gjini ◽  
Brinda Sevak ◽  
Roger Waleffe ◽  
Mariel Kalkach-Aparicio ◽  
...  

Abstract In this study we designed two deep neural networks to encode 16 feature latent spaces for early seizure detection in intracranial EEG and compared them to 16 widely used engineered metrics: Epileptogenicity Index (EI), Phase Locked High Gamma (PLHG), Time and Frequency Domain Cho Gaines Distance (TDCG, FDCG), relative band powers, and log absolute band powers (from alpha, beta, theta, delta, gamma, and high gamma bands. The deep learning models were pretrained for seizure identification on the time and frequency domains of one second single channel clips of 127 seizures (from 25 different subjects) using “leave-one-out” (LOO) cross validation. Each neural network extracted unique feature spaces that were used to train a Random Forest Classifier (RFC) for seizure identification and latency tasks. The Gini Importance of each feature was calculated from the pretrained RFC, enabling the most significant features (MSFs) for each task to be identified. The MSFs were extracted from the UPenn and Mayo Clinic's Seizure Detection Challenge to train another RFC for the contest. They obtained an AUC score of 0.93, demonstrating a transferable method to identify interpretable biomarkers for seizure detection.
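The Gini-importance ranking step might look like the following scikit-learn sketch. The synthetic data, in which only one of 16 features is informative, stands in for the study's learned feature spaces; it is not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# synthetic stand-in for 16 extracted features: only feature 0 is informative
X = rng.normal(size=(400, 16))
y = (X[:, 0] > 0).astype(int)                    # label driven by feature 0 alone

rfc = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
gini = rfc.feature_importances_                  # mean impurity decrease per feature
ranking = np.argsort(-gini)                      # features ordered by importance
print(ranking[0])                                # most significant feature: 0
```

`feature_importances_` is scikit-learn's built-in Gini (mean-impurity-decrease) importance; the top-ranked features would then be carried over to train the second classifier.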


2021, Vol 12
Author(s): Alexander C. Constantino, Nathaniel D. Sisterson, Naoir Zaher, Alexandra Urban, R. Mark Richardson, ...

Background: Decision-making in epilepsy surgery is strongly connected to the interpretation of the intracranial EEG (iEEG). Although deep learning approaches have demonstrated efficiency in processing extracranial EEG, few studies have addressed iEEG seizure detection, in part due to the small number of seizures per patient typically available from intracranial investigations. This study aims to evaluate the efficiency of deep learning methodology in detecting iEEG seizures using a large dataset of ictal patterns collected from epilepsy patients implanted with a responsive neurostimulation system (RNS). Methods: Five thousand two hundred and twenty-six ictal events were collected from 22 patients implanted with RNS. A convolutional neural network (CNN) architecture was created to provide personalized seizure annotations for each patient. Accuracy of seizure identification was tested in two scenarios: patients with seizures occurring following a period of chronic recording (scenario 1) and patients with seizures occurring immediately following implantation (scenario 2). The accuracy of the CNN in identifying RNS-recorded iEEG ictal patterns was evaluated against human neurophysiology expertise. Statistical performance was assessed via the area under the precision-recall curve (AUPRC). Results: In scenario 1, the CNN achieved a maximum mean binary classification AUPRC of 0.84 ± 0.19 (95% CI, 0.72–0.93) and a mean regression accuracy of 6.3 ± 1.0 s (95% CI, 4.3–8.5 s) at 30 seed samples. In scenario 2, the maximum mean AUPRC was 0.80 ± 0.19 (95% CI, 0.68–0.91) and the mean regression accuracy was 6.3 ± 0.9 s (95% CI, 4.8–8.3 s) at 20 seed samples. We obtained near-maximum accuracies at a seed size of 10 in both scenarios. CNN classification failures can be explained by ictal electro-decrements, brief seizures, single-channel ictal patterns, highly concentrated interictal activity, changes in the sleep-wake cycle, and progressive modulation of electrographic ictal features. Conclusions: We developed a deep learning neural network that performs personalized detection of RNS-derived ictal patterns with expert-level accuracy. These results suggest the potential for automated techniques to significantly improve the management of closed-loop brain stimulation, including during the initial period of recording when the device is otherwise naïve to a given patient's seizures.
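The AUPRC metric used in these evaluations can be sketched as a small average-precision computation over ranked detector scores. The toy labels and scores below are invented for the example and are not the study's results.

```python
import numpy as np

def average_precision(y_true, scores):
    """Area under the precision-recall curve via the average-precision sum."""
    y = np.asarray(y_true)[np.argsort(-np.asarray(scores))]   # sort by score, descending
    precision = np.cumsum(y) / np.arange(1, len(y) + 1)       # precision at each cutoff
    # AP: mean of the precision values measured at each newly recalled positive
    return float(precision[y == 1].sum() / y.sum())

# toy detector output: three true ictal clips among six candidates
print(round(average_precision([1, 0, 1, 1, 0, 0], [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]), 3))   # 0.806
```

Unlike ROC-AUC, this metric ignores the abundant true negatives, which is why it suits rare-event problems like seizure detection.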


2021
Author(s): Sujan Kumar Roy, Aaron Nicolson, Kuldip K. Paliwal

Current deep learning approaches to linear prediction coefficient (LPC) estimation for the augmented Kalman filter (AKF) produce biased estimates, due to the use of a whitening filter. This severely degrades the perceived quality and intelligibility of the enhanced speech produced by the AKF. In this paper, we propose a deep learning framework that produces clean speech and noise LPC estimates with significantly less bias than previous methods, by avoiding the use of a whitening filter. The proposed framework, called DeepLPC, jointly estimates the clean speech and noise LPC power spectra. The estimated clean speech and noise LPC power spectra are passed through the inverse Fourier transform to form autocorrelation matrices, which are then solved by the Levinson-Durbin recursion to form the LPCs and prediction error variances of the speech and noise for the AKF. The performance of DeepLPC is evaluated on the NOIZEUS and DEMAND Voice Bank datasets using subjective AB listening tests, as well as seven different objective measures (CSIG, CBAK, COVL, PESQ, STOI, SegSNR, and SI-SDR). DeepLPC is compared to six existing deep learning-based methods. Compared to other deep learning approaches to clean speech LPC estimation, DeepLPC produces a lower spectral distortion (SD) level than existing methods, confirming that it exhibits less bias. DeepLPC also produced higher objective scores than any of the competing methods (with an improvement of 0.11 for CSIG, 0.15 for CBAK, 0.14 for COVL, 0.13 for PESQ, 2.66% for STOI, 1.11 dB for SegSNR, and 1.05 dB for SI-SDR over the next best method). The enhanced speech produced by DeepLPC was also the most preferred by listeners. By producing less biased clean speech and noise LPC estimates, DeepLPC enables the AKF to produce enhanced speech of higher quality and intelligibility.
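The autocorrelation-to-LPC step described above, the Levinson-Durbin recursion, can be sketched as follows. The example autocorrelation sequence corresponds to a toy AR(1) process and is not taken from the paper.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for LPCs via Levinson-Durbin.

    r : autocorrelation sequence r[0..order]
    Returns (a, e): coefficients [1, a1, ..., a_order] and the final
    prediction error variance e (the quantities an AKF needs).
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = float(r[0])
    for i in range(1, order + 1):
        # reflection coefficient from the current residual correlation
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / e
        a[1:i + 1] = a[1:i + 1] + k * np.concatenate((a[1:i][::-1], [1.0]))
        e *= 1.0 - k * k
    return a, e

# toy autocorrelation of an AR(1) process x[n] = 0.5 x[n-1] + e[n]
a, e = levinson_durbin(np.array([1.0, 0.5, 0.25]), order=2)
print(a, e)   # a is [1, -0.5, 0], e is 0.75
```

The recursion costs O(order²) rather than the O(order³) of a general linear solve, which is why it is the standard route from autocorrelations to LPCs.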

