Engineering Nonlinear Epileptic Biomarkers Using Deep Learning and Benford’s Law

Author(s):  
Joseph Caffarini ◽  
Klevest Gjini ◽  
Brinda Sevak ◽  
Roger Waleffe ◽  
Mariel Kalkach-Aparicio ◽  
...  

Abstract In this study, we designed two deep neural networks to encode 16-feature latent spaces for early seizure detection in intracranial EEG and compared them to 16 widely used engineered metrics: Epileptogenicity Index (EI), Phase Locked High Gamma (PLHG), Time and Frequency Domain Cho-Gaines Distance (TDCG, FDCG), relative band powers, and log absolute band powers (from the alpha, beta, theta, delta, gamma, and high gamma bands). The deep learning models were pretrained for seizure identification on the time and frequency domains of one-second, single-channel clips of 127 seizures (from 25 different subjects) using “leave-one-out” (LOO) cross-validation. Each neural network extracted a unique feature space that was used to train a Random Forest Classifier (RFC) for seizure identification and latency tasks. The Gini importance of each feature was calculated from the pretrained RFC, enabling the most significant features (MSFs) for each task to be identified. The MSFs were then extracted from the UPenn and Mayo Clinic's Seizure Detection Challenge data to train another RFC for the contest; it obtained an AUC score of 0.93, demonstrating a transferable method for identifying interpretable biomarkers for seizure detection.
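The feature-ranking step described above can be illustrated with a short sketch: a random forest is fit on per-clip feature vectors and the features are ranked by Gini importance (mean decrease in impurity). This is a minimal sketch assuming scikit-learn; the synthetic data, array shapes, and top-5 cutoff are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_clips, n_features = 500, 16                  # e.g., 16 latent or engineered features per 1 s clip
X = rng.normal(size=(n_clips, n_features))     # placeholder feature matrix (synthetic)
y = rng.integers(0, 2, size=n_clips)           # 1 = ictal clip, 0 = interictal clip

rfc = RandomForestClassifier(n_estimators=500, random_state=0)
rfc.fit(X, y)

# Gini importance (mean decrease in impurity) of each feature; values sum to 1
importances = rfc.feature_importances_
ranking = np.argsort(importances)[::-1]
most_significant = ranking[:5]                 # top-k cutoff is an illustrative assumption
print(most_significant, importances[most_significant])
```

The same ranked indices can then be used to subset the feature matrix of a new dataset before retraining, which mirrors the transfer step to the Seizure Detection Challenge data.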

2021 ◽  
Vol 12 ◽  
Author(s):  
Alexander C. Constantino ◽  
Nathaniel D. Sisterson ◽  
Naoir Zaher ◽  
Alexandra Urban ◽  
R. Mark Richardson ◽  
...  

Background: Decision-making in epilepsy surgery is strongly connected to the interpretation of the intracranial EEG (iEEG). Although deep learning approaches have demonstrated efficiency in processing extracranial EEG, few studies have addressed iEEG seizure detection, in part due to the small number of seizures per patient typically available from intracranial investigations. This study aims to evaluate the efficiency of deep learning methodology in detecting iEEG seizures using a large dataset of ictal patterns collected from epilepsy patients implanted with a responsive neurostimulation system (RNS).

Methods: Five thousand two hundred and twenty-six ictal events were collected from 22 patients implanted with RNS. A convolutional neural network (CNN) architecture was created to provide personalized seizure annotations for each patient. Accuracy of seizure identification was tested in two scenarios: patients with seizures occurring following a period of chronic recording (scenario 1) and patients with seizures occurring immediately following implantation (scenario 2). The accuracy of the CNN in identifying RNS-recorded iEEG ictal patterns was evaluated against human neurophysiology expertise. Statistical performance was assessed via the area under the precision-recall curve (AUPRC).

Results: In scenario 1, the CNN achieved a maximum mean binary classification AUPRC of 0.84 ± 0.19 (95% CI, 0.72–0.93) and a mean regression accuracy of 6.3 ± 1.0 s (95% CI, 4.3–8.5 s) at 30 seed samples. In scenario 2, the maximum mean AUPRC was 0.80 ± 0.19 (95% CI, 0.68–0.91) and the mean regression accuracy was 6.3 ± 0.9 s (95% CI, 4.8–8.3 s) at 20 seed samples. Near-maximum accuracies were obtained at a seed size of 10 in both scenarios. CNN classification failures can be explained by ictal electro-decrements, brief seizures, single-channel ictal patterns, highly concentrated interictal activity, changes in the sleep-wake cycle, and progressive modulation of electrographic ictal features.

Conclusions: We developed a deep learning neural network that performs personalized detection of RNS-derived ictal patterns with expert-level accuracy. These results suggest the potential for automated techniques to significantly improve the management of closed-loop brain stimulation, including during the initial period of recording when the device is otherwise naïve to a given patient's seizures.
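For reference, the AUPRC metric reported above can be computed as shown in the minimal sketch below, assuming scikit-learn; the toy labels and scores stand in for the expert annotations and CNN outputs, and the authors' seed-sampling protocol is not reproduced.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Toy expert labels (1 = ictal window) and CNN seizure probabilities
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.3, 0.6])

precision, recall, _ = precision_recall_curve(y_true, y_score)
auprc = auc(recall, precision)   # area under the precision-recall curve
print(f"AUPRC = {auprc:.3f}")
```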


2019 ◽  
Vol 20 (S16) ◽  
Author(s):  
Ye Yuan ◽  
Kebin Jia ◽  
Fenglong Ma ◽  
Guangxu Xun ◽  
Yaqing Wang ◽  
...  

Abstract

Background: Sleep is a complex and dynamic biological process characterized by different sleep patterns. Comprehensive sleep monitoring and analysis using multivariate polysomnography (PSG) records has attracted significant effort toward preventing sleep-related disorders. To alleviate the time consumption caused by manual visual inspection of PSG, automatic multivariate sleep stage classification has become an important research topic in medical informatics and bioinformatics.

Results: We present a unified hybrid self-attention deep learning framework, namely HybridAtt, to automatically classify sleep stages by capturing channel and temporal correlations from multivariate PSG records. We construct a new multi-view convolutional representation module to learn channel-specific and global-view features from the heterogeneous PSG inputs. The hybrid attention mechanism is designed to further fuse the multi-view features by inferring their dependencies without any additional supervision. The learned attentional representation is subsequently fed through a softmax layer to train an end-to-end deep learning model.

Conclusions: We empirically evaluate our proposed HybridAtt model on a benchmark PSG dataset in two feature domains, referred to as the time and frequency domains. Experimental results show that HybridAtt consistently outperforms ten baseline methods in both feature spaces, demonstrating its effectiveness in the task of sleep stage classification.
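As a rough illustration of attention-based fusion over multi-view features (in the spirit of, but not identical to, HybridAtt's fusion step), the PyTorch sketch below weights channel-specific and global view embeddings with learned attention scores before classification; the view count, feature dimension, and scoring network are assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse multi-view features with learned attention weights (toy sketch)."""
    def __init__(self, feat_dim: int, n_classes: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)         # learns a relevance score per view
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, feat_dim) channel-specific + global view features
        weights = torch.softmax(self.score(views), dim=1)  # (batch, n_views, 1)
        fused = (weights * views).sum(dim=1)               # attention-weighted sum over views
        return self.classifier(fused)                      # sleep-stage logits

views = torch.randn(8, 5, 64)                   # 8 records, 5 views, 64-dim features (toy sizes)
model = AttentionFusion(feat_dim=64, n_classes=5)
logits = model(views)                           # shape (8, 5)
```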


2015 ◽  
Vol 25 (05) ◽  
pp. 1550023 ◽  
Author(s):  
Cristian Donos ◽  
Matthias Dümpelmann ◽  
Andreas Schulze-Bonhage

The goal of this study is to provide a seizure detection algorithm that is relatively simple to implement on a microcontroller, so that it can be used in an implantable closed-loop stimulation device. We propose a set of 11 simple time-domain and power-band features, computed from one intracranial EEG contact located in the seizure onset zone. Classification of the features is performed with a random forest classifier. Depending on the training datasets and the optimization preferences, the performance of the algorithm was: 93.84% mean sensitivity (100% median), 3.03 s mean (1.75 s median) detection delay, and 0.33 mean (0.07 median) false detections per hour.
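A minimal sketch of this kind of single-channel pipeline is given below: a few time-domain statistics and band powers (via Welch's PSD) are computed per window and fed to a random forest. The sampling rate, band edges, chosen statistics, and synthetic data are illustrative assumptions; the paper's exact 11 features are not reproduced.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 256                                   # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 70)}

def window_features(x: np.ndarray) -> np.ndarray:
    """A few time-domain statistics plus band powers for one iEEG window."""
    f, psd = welch(x, fs=FS, nperseg=FS)
    band_powers = [psd[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS.values()]
    line_length = np.abs(np.diff(x)).sum()   # a common simple time-domain feature
    return np.array([x.std(), line_length, *band_powers])

rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(size=2 * FS)) for _ in range(200)])  # 2 s toy windows
y = rng.integers(0, 2, size=200)                                              # 1 = seizure window
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```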


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 742
Author(s):  
Canh Nguyen ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Maitiniyazi Maimaitijiang ◽  
Sourav Bhadra ◽  
...  

Early detection of grapevine viral diseases is critical for early interventions in order to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92 N, −92.28 W), with two grapevine groups, namely healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate two reflectance spectra patterns between healthy and GVCV vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to the classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning. The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visible (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
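As a rough illustration of the VI-based step, the sketch below pulls the nearest bands from a calibrated reflectance cube and computes a normalized-difference index such as NPQI; the cube shape, wavelength grid, band choices (415/435 nm), and vine mask are assumptions rather than the paper's exact processing.

```python
import numpy as np

wavelengths = np.linspace(400, 1000, 204)      # SPECIM IQ-like wavelength grid in nm (assumed)
cube = np.random.rand(204, 100, 100)           # (bands, rows, cols) calibrated reflectance, toy data
vine_mask = np.ones((100, 100), dtype=bool)    # grapevine-pixel mask from preprocessing (assumed)

def band(nm: float) -> np.ndarray:
    """Reflectance image at the band closest to the requested wavelength."""
    return cube[np.argmin(np.abs(wavelengths - nm))]

# Normalized pheophytization index: (R415 - R435) / (R415 + R435)
npqi = (band(415) - band(435)) / (band(415) + band(435) + 1e-9)
print(npqi[vine_mask].mean())                  # per-vine index value used as a classification feature
```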


2021 ◽  
Vol 11 (5) ◽  
pp. 668
Author(s):  
Sani Saminu ◽  
Guizhi Xu ◽  
Zhang Shuai ◽  
Isselmou Abd El Kader ◽  
Adamu Halilu Jabire ◽  
...  

The benefits of early detection and classification of epileptic seizures for analysis, monitoring, and diagnosis are central to realizing computer-aided devices and recent Internet of Medical Things (IoMT) devices. The success of these applications largely depends on the accuracy of the detection and classification techniques employed. Several methods have been investigated, proposed, and developed over the years. This paper reviews seizure detection and classification algorithms from the last decade, including conventional techniques and recent deep learning algorithms. It also discusses epileptiform detection as one of the steps towards the advanced diagnosis and understanding of disorders of consciousness (DOCs). A performance comparison was carried out on the different algorithms investigated, and their advantages and disadvantages were explored. From our survey, much attention has recently been paid to exploring the efficacy of deep learning algorithms in seizure detection and classification, which have also been employed in other areas such as image processing and classification. Hybrid deep learning approaches have also been explored, with CNN-RNN combinations being the most popular.


Author(s):  
Seungjun Ryu ◽  
Seunghyeok Back ◽  
Seongju Lee ◽  
Hyeon Seo ◽  
Chanki Park ◽  
...  

2021 ◽  
Vol 11 (4) ◽  
pp. 456
Author(s):  
Wenpeng Neng ◽  
Jun Lu ◽  
Lei Xu

In the inference process of existing deep learning models, it is usually necessary to process the input data level-wise and to impose a corresponding relational inductive bias at each level. This relational inductive bias determines the theoretical upper limit on the performance of the deep learning method. In the field of sleep stage classification, mainstream deep learning methods adopt only a single relational inductive bias at each level. This makes the feature extraction incomplete and limits the performance of the method. In view of these problems, a novel deep learning model based on hybrid relational inductive biases, called CCRRSleepNet, is proposed in this paper. The model divides single-channel electroencephalogram (EEG) data into three levels: frame, epoch, and sequence, and applies hybrid relational inductive biases at each of these levels. Meanwhile, a multiscale atrous convolution block (MSACB) is adopted in CCRRSleepNet to learn features of different attributes. In practice, the actual performance of a deep learning model also depends on its nonrelational inductive biases, so a variety of matching nonrelational inductive biases are adopted to optimize CCRRSleepNet. CCRRSleepNet is tested on the Fpz-Cz and Pz-Oz channel data of the Sleep-EDF dataset. The experimental results show that the proposed method is superior to many existing methods.
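A hedged sketch of a multiscale atrous (dilated) convolution block over single-channel EEG, in the spirit of the MSACB described above, is given below; the kernel size, dilation rates, channel counts, and toy input shape are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultiScaleAtrousBlock(nn.Module):
    """Parallel dilated 1-D convolutions with growing receptive fields (toy MSACB-style block)."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples); concatenate the multiscale branch outputs
        return self.relu(torch.cat([branch(x) for branch in self.branches], dim=1))

eeg = torch.randn(4, 1, 3000)                   # four 30 s epochs at 100 Hz (toy input)
block = MultiScaleAtrousBlock(in_ch=1, out_ch=16)
features = block(eeg)                           # shape (4, 64, 3000)
```

With padding equal to the dilation rate and a kernel size of 3, each branch preserves the input length, so the branch outputs can be concatenated along the channel dimension.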


Epilepsy ◽  
2010 ◽  
pp. 573-588
Author(s):  
Christophe Jouny ◽  
Piotr Franaszczuk ◽  
Gregory Bergey

2019 ◽  
Vol 23 (1) ◽  
pp. 83-94 ◽  
Author(s):  
Ye Yuan ◽  
Guangxu Xun ◽  
Kebin Jia ◽  
Aidong Zhang
