A Novel Activation Maximization-based Approach for Insight into Electrophysiology Classifiers

2021
Author(s):  
Charles A Ellis ◽  
Mohammad S.E. Sendi ◽  
Robyn L Miller ◽  
Vince D Calhoun

Spectral analysis remains a hallmark approach for gaining insight into electrophysiology modalities like electroencephalography (EEG). As the field of deep learning has progressed, more studies have begun to train deep learning classifiers on raw EEG data, which presents unique problems for explainability. A growing number of studies have presented explainability approaches that provide insight into the spectral features learned by deep learning classifiers. However, existing approaches only attribute importance to different frequency bands; most cannot provide insight into the actual spectral values or the relationships between spectral features that models have learned. Here, we present a novel adaptation of activation maximization for electrophysiology time series that generates samples whose spectral content is optimized to reveal the features learned by a classifier. We evaluate our approach within the context of EEG sleep stage classification with a convolutional neural network, and we find that it identifies spectral patterns known to be associated with each sleep stage. We also find surprising results suggesting that our classifier may have prioritized eye and motion artifacts when identifying Awake samples. Our approach is the first adaptation of activation maximization to the domain of raw electrophysiology classification, and it has implications for explaining any classifier trained on highly dynamic, long time series.
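To make the idea concrete, here is a minimal sketch of how activation maximization might be adapted to raw electrophysiology time series. It assumes a hypothetical trained PyTorch classifier `model` that maps a single-channel epoch of shape (1, 1, n_samples) to sleep-stage logits; the synthesized sample is optimized by gradient ascent on the target logit with a small energy penalty, illustrating the general technique rather than the authors' exact spectral optimization.

```python
import torch

def activation_maximization(model, target_class, n_samples=3000,
                            steps=500, lr=0.01, l2_weight=1e-3):
    """Synthesize an input that maximizes one class logit via gradient ascent."""
    model.eval()
    x = torch.randn(1, 1, n_samples, requires_grad=True)  # random initial epoch
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target logit while keeping the sample's energy bounded.
        loss = -logits[0, target_class] + l2_weight * x.pow(2).mean()
        loss.backward()
        optimizer.step()
    return x.detach()
```

The spectrum of the optimized sample (for example, via scipy.signal.welch) can then be inspected to see which frequencies the classifier associates with each sleep stage.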

2021
Author(s):  
Charles A Ellis ◽  
Robyn L Miller ◽  
Vince Calhoun

The frequency domain of electroencephalography (EEG) data has become a particularly important area of EEG analysis, and EEG spectra have been analyzed with explainable machine learning and deep learning methods. As deep learning has matured, however, most studies have shifted to raw EEG data, which is not well suited to traditional explainability methods. Several studies have introduced methods for spectral insight into classifiers trained on raw EEG data. These studies provide global insight into the frequency bands that are generally important to a classifier but not local insight into the frequency bands important for classifying individual samples. Local explainability could be particularly helpful for EEG analysis domains like sleep stage classification that feature multiple evolving states. We present a novel local spectral explainability approach and use it to explain a convolutional neural network trained for automated sleep stage classification. We show how the relative importance of different frequency bands varies over time and even within the same sleep stage. Furthermore, to better understand how our approach compares to existing methods, we compare a global estimate of spectral importance generated from our local results with an existing global spectral importance approach. We find that the δ band is most important for most sleep stages, though β is most important for the non-rapid eye movement 2 (NREM2) stage. Additionally, θ is particularly important for identifying Awake and NREM1 samples. Our study represents the first approach developed for local spectral insight into deep learning classifiers trained on raw EEG time series.
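As an illustration of what a local (per-sample) spectral importance estimate can look like, the sketch below scores each canonical frequency band of a single epoch by the drop in the predicted class probability when that band is removed with a band-stop filter. This is a generic illustration under assumed band boundaries and an assumed `predict_proba` callable, not necessarily the approach introduced in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Canonical EEG bands in Hz; exact boundaries vary across studies.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 45)}

def local_band_importance(predict_proba, epoch, fs, target_class):
    """Score each band by the drop in target-class probability when that
    band is removed from one epoch (illustrative local explanation)."""
    baseline = predict_proba(epoch)[target_class]
    importance = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo, hi], btype="bandstop", fs=fs)
        filtered = filtfilt(b, a, epoch)
        importance[name] = baseline - predict_proba(filtered)[target_class]
    return importance
```

Averaging such per-epoch scores over a recording would yield a global estimate comparable to existing global spectral importance methods.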


2021
Author(s):  
Charles A. Ellis ◽  
Mohammad S.E. Sendi ◽  
Robyn L Miller ◽  
Vince D. Calhoun

The automated feature extraction capabilities of deep learning classifiers have promoted their broader application to EEG analysis. Whereas earlier machine learning studies used extracted features and could rely on traditional explainability approaches, explainability for classifiers trained on raw data is particularly challenging. As such, studies have begun to present methods that provide insight into the spectral features learned by deep learning classifiers trained on raw EEG. These approaches have two key shortcomings: (1) they involve perturbation, which can create out-of-distribution samples that cause inaccurate explanations, and (2) they are global rather than local. Local explainability approaches can be used to examine how demographic and clinical variables affected the patterns learned by the classifier. In our study, we present a novel local spectral explainability approach and apply it to a convolutional neural network trained for automated sleep stage classification. We apply layer-wise relevance propagation to identify the relative importance of the features in the raw EEG and then examine the frequency domain of the explanations to determine the importance of each canonical frequency band, both locally and globally. We then perform a statistical analysis to determine whether age and sex affected the patterns learned by the classifier for each frequency band and sleep stage. Results showed that δ, β, and γ were the overall most important frequency bands. In addition, age and sex significantly affected the patterns learned by the classifier for most sleep stages and frequency bands. Our study presents a novel spectral explainability approach that could substantially increase the level of insight into classifiers trained on raw EEG.
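A minimal sketch of the layer-wise relevance propagation (LRP) plus frequency-domain analysis described above, assuming a trained 1D convolutional PyTorch `model` and using Captum's LRP implementation as one possible attribution backend; the relevance signal is Fourier-transformed and summed within canonical bands.

```python
import numpy as np
import torch
from captum.attr import LRP

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 45)}  # Hz, assumed boundaries

def band_relevance(model, epoch, target_class, fs):
    """Attribute one epoch with LRP, then summarize relevance per frequency band."""
    model.eval()
    x = torch.as_tensor(epoch, dtype=torch.float32).reshape(1, 1, -1)
    relevance = LRP(model).attribute(x, target=target_class)
    r = relevance.squeeze().detach().numpy()
    spectrum = np.abs(np.fft.rfft(r))
    freqs = np.fft.rfftfreq(r.size, d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}
```

Per-epoch band scores obtained this way can be aggregated across samples for global estimates, or grouped by age and sex for the kind of statistical analysis the study describes.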


Author(s):  
Stanislas Chambon ◽  
Mathieu N. Galtier ◽  
Pierrick J. Arnal ◽  
Gilles Wainrib ◽  
Alexandre Gramfort

2021
Author(s):  
Charles A Ellis ◽  
Robyn L Miller ◽  
Vince D Calhoun

Recent years have seen growing application of deep learning architectures, such as convolutional neural networks (CNNs), to electrophysiology analysis. However, using neural networks with raw time-series data makes explainability a significant challenge. Multiple explainability approaches have been developed to provide insight into the spectral features learned by CNNs from EEG. However, across electrophysiology modalities, and even within EEG, there are many unique waveforms of clinical relevance, and existing methods that provide insight into the waveforms learned by CNNs are of questionable utility. In this study, we present a novel model visualization-based approach that analyzes the filters in the first convolutional layer of the network. To our knowledge, this is the first method focused on extracting explainable information from EEG waveforms learned by CNNs while also providing insight into the learned spectral features. We demonstrate the viability of our approach within the context of automated sleep stage classification, a well-characterized domain that can help validate it. We identify three subgroups of filters with distinct spectral properties, determine the relative importance of each group, and identify several unique waveforms learned by the classifier that were vital to its performance. Our approach represents a significant step forward in explainability for electrophysiology classifiers, and we hope it will also prove useful for providing insights in future studies.
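A rough sketch of how first-layer filters might be grouped by their spectral properties, assuming the model's first parameter tensor holds the first convolutional layer's kernels; the filters' Welch spectra are clustered with k-means. This illustrates the general idea rather than the authors' exact procedure.

```python
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

def group_first_layer_filters(model, fs, n_groups=3):
    """Cluster first-layer 1D convolution kernels by their power spectra."""
    # Assumes the first parameter tensor is shaped (n_filters, in_channels, kernel_len).
    weights = next(model.parameters()).detach().cpu().numpy()
    kernels = weights.reshape(weights.shape[0], -1)
    spectra = np.stack([welch(k, fs=fs, nperseg=min(64, k.size))[1]
                        for k in kernels])
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(spectra)
    return labels, spectra
```

The kernels within each cluster can then be plotted directly as waveforms, and the relative importance of each group estimated, for example, by ablating one group at a time.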


2021
Vol 11 (4)
pp. 456
Author(s):  
Wenpeng Neng ◽  
Jun Lu ◽  
Lei Xu

In the inference process of existing deep learning models, it is usually necessary to process the input data level by level and to impose a corresponding relational inductive bias at each level. This relational inductive bias determines the theoretical upper limit on the performance of a deep learning method. In the field of sleep stage classification, mainstream deep learning methods adopt only a single relational inductive bias at each level, which makes feature extraction incomplete and limits performance. To address these problems, this paper proposes a novel deep learning model based on hybrid relational inductive biases, called CCRRSleepNet. The model divides single-channel electroencephalogram (EEG) data into three levels: frame, epoch, and sequence, and applies hybrid relational inductive biases at each of these levels. A multiscale atrous convolution block (MSACB) is adopted in CCRRSleepNet to learn features with different attributes. In practice, however, the actual performance of a deep learning model also depends on its nonrelational inductive biases, so a variety of matching nonrelational inductive biases are adopted to optimize CCRRSleepNet. CCRRSleepNet is evaluated on the Fpz-Cz and Pz-Oz channel data of the Sleep-EDF dataset, and the experimental results show that the proposed method is superior to many existing methods.
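The abstract does not give the exact architecture, but one plausible reading of a multiscale atrous convolution block (MSACB) is a set of parallel 1D convolutions with different dilation rates whose outputs are concatenated, as in the hedged PyTorch sketch below.

```python
import torch
import torch.nn as nn

class MSACB(nn.Module):
    """A plausible multiscale atrous convolution block: parallel dilated 1D
    convolutions over the same input, concatenated along the channel axis."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, kernel_size,
                          padding=d * (kernel_size - 1) // 2, dilation=d),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Each branch covers a different temporal receptive field (scale).
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```

Because the padding is matched to each dilation rate, all branches keep the input length and their outputs can be concatenated directly.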


Author(s):  
Asma Salamatian ◽  
Ali Khadem

Purpose: Sleep is a basic bodily need, like eating and drinking, that affects many aspects of human life. Sleep monitoring and sleep stage classification play an important role in the diagnosis of sleep-related diseases and neurological disorders. Manual classification of sleep stages is a time-consuming, tedious, and complex task that heavily depends on the experience of experts. As a result, there is a crucial need for an efficient automatic sleep staging system. Materials and Methods: This study develops a 13-layer 1D Convolutional Neural Network (CNN) that uses a single-channel Electroencephalogram (EEG) signal to extract features automatically and classify the sleep stages. To overcome the negative effect of an imbalanced dataset, we used the Synthetic Minority Oversampling Technique (SMOTE). In our study, the single-channel EEG signal is given to the 1D CNN without any separate feature extraction/selection processes; the deep network learns the discriminative features from the EEG signal on its own. Results: Applying the proposed method to the Sleep-EDF dataset resulted in overall accuracy, sensitivity, specificity, and precision of 94.09%, 74.73%, 96.43%, and 71.02%, respectively, for classifying five sleep stages. Using single-channel EEG and providing a network with fewer trainable parameters than most available deep learning-based methods are the main advantages of the proposed method. Conclusion: In this study, a 13-layer 1D CNN model was proposed for sleep stage classification. This model has a complete end-to-end architecture and does not require any separate feature extraction/selection and classification stages. Its low number of parameters and layers, combined with high classification accuracy, is its main advantage over most previous deep learning-based approaches.
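A compact sketch of the two ingredients named in the abstract, SMOTE-based rebalancing and an end-to-end 1D CNN, assuming epochs arrive as an (n_epochs, n_timepoints) array; the layer counts and kernel sizes here are placeholders, not the paper's 13-layer configuration.

```python
from imblearn.over_sampling import SMOTE
import torch.nn as nn

def balance_epochs(X, y):
    """Oversample minority sleep stages with SMOTE on flattened epochs."""
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    return X_res[:, None, :], y_res  # add a channel axis for the 1D CNN

def make_cnn(n_classes=5):
    """A small end-to-end 1D CNN; the published model uses 13 layers."""
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes),
    )
```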


2021
Author(s):  
Tian Xiang Gao ◽  
Jia Yi Li ◽  
Yuji Watanabe ◽  
Chi Jung Hung ◽  
Akihiro Yamanaka ◽  
...  

Sleep-stage classification is essential for sleep research. Various automatic scoring programs, including deep learning algorithms using artificial intelligence (AI), have been developed, but they have limitations in data format compatibility, human interpretability, cost, and technical requirements. We developed a novel program called GI-SleepNet, a generative adversarial network (GAN)-assisted image-based sleep staging method for mice that is accurate, versatile, compact, and easy to use. In this program, electroencephalogram and electromyography data are first visualized as images and then classified into three stages (wake, NREM, and REM) by a supervised image learning algorithm. To increase accuracy, we adopted a GAN and artificially generated fake REM sleep data to equalize the number of stages. This improved accuracy, and data from as few as one mouse yielded significant accuracy. Because of its image-based nature, the program is easy to apply to data in different formats, from different animal species, and even outside sleep research. Image data are easily understood by humans, so confirmation by experts is straightforward when predictions appear anomalous. And because image-based deep learning is one of the leading fields in AI, numerous algorithms are available.
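As a sense of what visualizing EEG and EMG as images can mean in practice, here is a hedged sketch that renders one epoch as a two-channel image (an EEG spectrogram plus a matching EMG amplitude strip) that could be fed to any supervised image classifier; the published GI-SleepNet pipeline may differ in its exact rendering and GAN-based augmentation.

```python
import numpy as np
from scipy.signal import spectrogram

def epoch_to_image(eeg, emg, fs, n_freq_rows=64):
    """Render one epoch as a 2-channel image: an EEG spectrogram and an EMG
    amplitude strip with matching time bins (illustrative only)."""
    f, t, sxx = spectrogram(eeg, fs=fs, nperseg=int(fs))
    eeg_img = np.log1p(sxx[:n_freq_rows])  # keep the low-frequency rows
    eeg_img = (eeg_img - eeg_img.min()) / (eeg_img.max() - eeg_img.min() + 1e-8)
    # Root-mean-square EMG amplitude in as many time bins as the spectrogram has columns.
    bins = np.array_split(np.abs(emg), eeg_img.shape[1])
    emg_rms = np.array([np.sqrt(np.mean(b ** 2)) for b in bins])
    emg_img = np.tile(emg_rms / (emg_rms.max() + 1e-8), (eeg_img.shape[0], 1))
    return np.stack([eeg_img, emg_img])  # shape: (2, freq_rows_used, time_bins)
```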


2022
Author(s):  
Charles A Ellis ◽  
Mohammad SE Sendi ◽  
Rongen Zhang ◽  
Darwin A Carbajal ◽  
May D Wang ◽  
...  

Multimodal classification is increasingly common in biomedical informatics studies. Many such studies use deep learning classifiers with raw data, which makes explainability difficult; as such, only a few studies have applied explainability methods, and new methods are needed. In this study, we propose sleep stage classification as a testbed for method development and train a convolutional neural network with electroencephalogram (EEG), electrooculogram, and electromyogram data. We then present a global approach that is uniquely adapted for electrophysiology analysis. We further present two local approaches that can identify subject-level differences in explanations that would be obscured by global methods and that can provide insight into the effects of clinical and demographic variables upon the patterns learned by the classifier. We find that EEG is globally the most important modality for all sleep stages except non-rapid eye movement stage 1, and that local subject-level differences in importance arise. We further show that sex, followed by medication and age, had significant effects upon the patterns learned by the classifier. Our novel methods enhance explainability for the growing field of multimodal classification, provide avenues for the advancement of personalized medicine, and yield novel insights into the effects of demographic and clinical variables upon classifiers.
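One simple way to obtain both global and subject-level (local) modality importance, in the spirit of the comparison described above though not necessarily the study's own methods, is to measure how much the correct-class probability drops when each modality's channel is zeroed out, then aggregate overall and per subject; the channel ordering below is an assumption.

```python
import numpy as np

MODALITIES = {"EEG": 0, "EOG": 1, "EMG": 2}  # assumed channel order

def modality_importance(predict_proba, X, y, subjects):
    """Drop in correct-class probability when one modality is zeroed,
    reported globally and per subject (illustrative ablation approach)."""
    baseline = predict_proba(X)[np.arange(len(y)), y]
    results = {}
    for name, ch in MODALITIES.items():
        X_abl = X.copy()
        X_abl[:, ch, :] = 0.0  # ablate one modality
        drop = baseline - predict_proba(X_abl)[np.arange(len(y)), y]
        results[name] = {
            "global": drop.mean(),
            "per_subject": {s: drop[subjects == s].mean()
                            for s in np.unique(subjects)},
        }
    return results
```

Per-subject scores can then be related to demographic or clinical variables (e.g., sex, age, medication) with standard statistical tests.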

