Deep learning for robust detection of interictal epileptiform discharges

Author(s):  
David Geng ◽  
Ayham Alkhachroum ◽  
Manuel Melo Bicchi ◽  
Jonathan Jagid ◽  
Iahn Cajigas ◽  
...  
2015 ◽  
Vol 55 (2) ◽  
pp. 122-132
Author(s):  
Adetayo Adeleye ◽  
Alice W. Ho ◽  
Alberto Nettel-Aguirre ◽  
Valerie Kirk ◽  
Jeffrey Buchhalter

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jan Pyrzowski ◽  
Jean-Eudes Le Douget ◽  
Amal Fouad ◽  
Mariusz Siemiński ◽  
Joanna Jędrzejczak ◽  
...  

Clinical diagnosis of epilepsy depends heavily on the detection of interictal epileptiform discharges (IEDs) in scalp electroencephalographic (EEG) signals, which is far from straightforward by purely visual means. Here, we introduce a simple signal analysis procedure based on scalp EEG zero-crossing patterns that can extract the spatiotemporal structure of scalp voltage fluctuations. We analyzed simultaneous scalp and intracranial EEG recordings from patients with pharmacoresistant temporal lobe epilepsy. Our data show that a large proportion of intracranial IEDs manifest on the scalp only as subtle, low-amplitude waveforms below the EEG background and therefore cannot be detected visually. We found that scalp zero-crossing patterns allow detection of these intracranial IEDs at the single-trial level with millisecond temporal precision, including some mesial temporal discharges that do not propagate to the neocortex. Applied to an independent dataset, our method discriminated accurately between patients with epilepsy and normal subjects, confirming its practical applicability.
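
The abstract does not spell out the zero-crossing procedure, but the core idea of encoding scalp voltage fluctuations by their zero-crossing structure can be illustrated with a minimal sketch. The function names, the 100 ms window, the 19-channel layout, and the random placeholder data below are assumptions for illustration, not the authors' implementation:

    import numpy as np

    def zero_crossing_times(channel, fs):
        # Samples where consecutive values change sign, converted to seconds
        idx = np.where(np.diff(np.sign(channel)) != 0)[0]
        return idx / fs

    def crossing_pattern(eeg, fs, t0, window=0.1):
        # Binary spatiotemporal pattern: which channels cross zero
        # within [t0, t0 + window); eeg has shape (channels, samples)
        pattern = np.zeros(eeg.shape[0], dtype=bool)
        for ch in range(eeg.shape[0]):
            zc = zero_crossing_times(eeg[ch], fs)
            pattern[ch] = np.any((zc >= t0) & (zc < t0 + window))
        return pattern

    fs = 256                              # assumed sampling rate (Hz)
    eeg = np.random.randn(19, fs * 10)    # placeholder 19-channel scalp EEG
    print(crossing_pattern(eeg, fs, t0=1.0))

Recurring patterns of this kind, time-locked to intracranial IEDs, are what the method exploits for single-trial detection.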


2016 ◽  
Vol 26 (04) ◽  
pp. 1650016 ◽  
Author(s):  
Loukianos Spyrou ◽  
David Martín-Lopez ◽  
Antonio Valentín ◽  
Gonzalo Alarcón ◽  
Saeid Sanei

Interictal epileptiform discharges (IEDs) are transient neural electrical activities that occur in the brain of patients with epilepsy. A problem with the inspection of IEDs from the scalp electroencephalogram (sEEG) is that, for a subset of epileptic patients, there are no visually discernible IEDs on the scalp, rendering visual marking ineffective both for detection and for algorithm evaluation. Intracranially placed electrodes, on the other hand, yield a much higher incidence of visible IEDs than concurrent scalp electrodes. In this work, we utilize concurrent scalp and intracranial EEG (iEEG) from a group of temporal lobe epilepsy (TLE) patients with a low number of scalp-visible IEDs. The aim is to determine whether, given the timing of the IEDs from iEEG, the concurrent sEEG contains enough information for those IEDs to be reliably distinguished from non-IED segments. We develop an automatic detection algorithm that is tested in a leave-subject-out fashion, where each test subject's detector is trained on the other patients' data. The algorithm achieved [Formula: see text] accuracy in recognizing scalp IED from non-IED segments, and [Formula: see text] accuracy when trained and tested on the same subject. It was also able to identify non-scalp-visible IED events for most patients with a low number of false positive detections. Our results are a proof of concept that, for TLE patients, IED information is present in scalp EEG even when it is not visually identifiable, and that between-subject differences in IED topography and shape are small enough for a generic algorithm to be used.
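
The leave-subject-out protocol described above maps directly onto scikit-learn's LeaveOneGroupOut splitter. The sketch below is an assumption about how such an evaluation could be wired up; the random placeholder features and the simple logistic-regression classifier stand in for the authors' actual detector:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import LeaveOneGroupOut

    # X: scalp EEG feature vectors; y: 1 = IED (timed from iEEG), 0 = non-IED;
    # groups: patient ID per segment (all placeholders for illustration)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((600, 32))
    y = rng.integers(0, 2, 600)
    groups = np.repeat(np.arange(6), 100)

    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        # Train on all patients except the held-out one, then test on them
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    print(f"leave-subject-out accuracy: {np.mean(scores):.3f}")

Holding out whole patients, rather than random segments, is what makes the reported accuracy a test of the between-subject generalization the abstract claims.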


Epilepsia ◽  
2021 ◽  
Author(s):  
Robert J. Quon ◽  
Edward J. Camp ◽  
Stephen Meisenhelter ◽  
Yinchen Song ◽  
Sarah A. Steimel ◽  
...  

Author(s):  
Mohammad Shorfuzzaman ◽  
M. Shamim Hossain ◽  
Abdulmotaleb El Saddik

Diabetic retinopathy (DR) is one of the most common causes of vision loss in people who have had diabetes for a prolonged period. Convolutional neural networks (CNNs) have become increasingly popular for computer-aided DR diagnosis using retinal fundus images. While these CNNs are highly reliable, their lack of sufficient explainability prevents them from being widely used in medical practice. In this article, we propose a novel explainable deep learning ensemble model in which weights from different models are fused into a single model to extract salient features from the various retinal lesions found on fundus images. The extracted features are then fed to a custom classifier for the final diagnosis of DR severity level. The model is trained on the APTOS dataset, which contains retinal fundus images of various DR grades, using a cyclical learning rate strategy with an automatic learning rate finder to decay the learning rate and improve model accuracy. We develop an explainability approach by leveraging gradient-weighted class activation mapping (Grad-CAM) and Shapley additive explanations (SHAP) to highlight the areas of fundus images that are most indicative of different DR stages, allowing ophthalmologists to view the model's decisions in a form they can understand. Evaluation results on three different datasets (APTOS, MESSIDOR, IDRiD) show the effectiveness of our model, which achieves superior classification rates with high precision (0.970), sensitivity (0.980), and AUC (0.978). We believe that the proposed model, which jointly offers state-of-the-art diagnostic performance and explainability, addresses the black-box nature of deep CNN models in robust DR grading.
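
Of the two explainability techniques named above, Grad-CAM is the one most easily sketched. The following is a generic Grad-CAM heatmap for a Keras CNN, not the paper's code; the model and the name of its last convolutional layer are assumptions supplied by the caller:

    import tensorflow as tf

    def grad_cam(model, image, conv_layer_name):
        # Heatmap of the image regions that most influence the
        # predicted class (e.g., a DR severity grade)
        grad_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(conv_layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[None, ...])
            class_idx = tf.argmax(preds[0])
            loss = preds[:, class_idx]
        grads = tape.gradient(loss, conv_out)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # channel importances
        cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
        return cam / (tf.reduce_max(cam) + 1e-8)          # normalized heatmap

Upsampled to the input resolution and overlaid on the fundus image, such a heatmap is the kind of lesion-level evidence the abstract argues ophthalmologists need in order to trust the model's grade.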

