Novel Methods for Elucidating Modality Importance in Multimodal Electrophysiology Classifiers

2022 ◽  
Author(s):  
Charles A Ellis ◽  
Mohammad SE Sendi ◽  
Rongen Zhang ◽  
Darwin A Carbajal ◽  
May D Wang ◽  
...  

Multimodal classification is increasingly common in biomedical informatics studies. Many such studies use deep learning classifiers with raw data, which makes explainability difficult. As such, only a few studies have applied explainability methods, and new methods are needed. In this study, we propose sleep stage classification as a testbed for method development and train a convolutional neural network with electroencephalogram (EEG), electrooculogram, and electromyogram data. We then present a global approach that is uniquely adapted for electrophysiology analysis. We further present two local approaches that can identify subject-level differences in explanations that would be obscured by global methods and that can provide insight into the effects of clinical and demographic variables upon the patterns learned by the classifier. We find that EEG is globally the most important modality for all sleep stages, except non-rapid eye movement stage 1, and that local subject-level differences in importance arise. We further show that sex, followed by medication and age, had significant effects upon the patterns learned by the classifier. Our novel methods enhance explainability for the growing field of multimodal classification, provide avenues for the advancement of personalized medicine, and yield novel insights into the effects of demographic and clinical variables upon classifiers.

2021 ◽  
Author(s):  
Charles A Ellis ◽  
Robyn L Miller ◽  
Vince Calhoun

The frequency domain of electroencephalography (EEG) data has become a particularly important area of EEG analysis. EEG spectra have been analyzed with explainable machine learning and deep learning methods. However, as deep learning has matured, most studies now use raw EEG data, which is not well suited to traditional explainability methods. Several studies have introduced methods for spectral insight into classifiers trained on raw EEG data. These studies have provided global insight into the frequency bands that are generally important to a classifier but do not provide local insight into the frequency bands important for the classification of individual samples. This local explainability could be particularly helpful for EEG analysis domains like sleep stage classification that feature multiple evolving states. We present a novel local spectral explainability approach and use it to explain a convolutional neural network trained for automated sleep stage classification. We use our approach to show how the relative importance of different frequency bands varies over time and even within the same sleep stages. Furthermore, to better understand how our approach compares to existing methods, we compare a global estimate of spectral importance generated from our local results with an existing global spectral importance approach. We find that the δ band is most important for most sleep stages, though β is most important for the non-rapid eye movement 2 (NREM2) sleep stage. Additionally, θ is particularly important for identifying Awake and NREM1 samples. Our study represents the first approach developed for local spectral insight into deep learning classifiers trained on raw EEG time series.


2021 ◽  
Author(s):  
Charles A Ellis ◽  
Darwin A Carbajal ◽  
Rongen Zhang ◽  
Mohammad S. E. Sendi ◽  
Robyn L Miller ◽  
...  

With the growing use of multimodal data for deep learning classification in healthcare research, more studies have begun to present explainability methods for insight into multimodal classifiers. Among these studies, few have utilized local explainability methods, which could provide (1) insight into the classification of each sample and (2) an opportunity to better understand the effects of latent variables within datasets (e.g., medication of subjects in electrophysiology data). To the best of our knowledge, this opportunity has not yet been explored within multimodal classification. We present a novel local ablation approach that shows the importance of each modality to the correct classification of each class and explore the effects of latent variables upon the classifier. As a use-case, we train a convolutional neural network for automated sleep staging with electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) data. We find that EEG is the most important modality across most stages, though EOG is particularly important for non-rapid eye movement stage 1. Further, we identify significant relationships between the local explanations and subject age, sex, and medication status, which suggests that the classifier learned specific features associated with these variables across multiple modalities and correctly classified samples. Our novel explainability approach has implications for many fields involving multimodal classification. Moreover, our examination of the degree to which demographic and clinical variables may affect classifiers could provide direction for future studies in automated biomarker discovery.
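The local ablation idea described above can be sketched in a few lines: replace one modality's channels with zeros and record the drop in the predicted probability of the true class for a single sample. A minimal sketch, assuming a `predict_proba` callable over arrays shaped (batch, channels, time) and an illustrative channel-to-modality mapping; the paper's ablation scheme may differ in details such as the replacement signal:

```python
# Minimal sketch of local (per-sample) modality ablation. The channel
# layout and predict_proba interface are illustrative assumptions.
import numpy as np

MODALITY_CHANNELS = {"EEG": [0], "EOG": [1], "EMG": [2]}  # assumed layout

def local_ablation_importance(predict_proba, x, true_class):
    """Importance of each modality for one sample x of shape
    (channels, time): the drop in true-class probability when that
    modality's channels are replaced with zeros."""
    baseline = predict_proba(x[None])[0][true_class]
    importance = {}
    for name, chans in MODALITY_CHANNELS.items():
        x_abl = x.copy()
        x_abl[chans, :] = 0.0  # ablate this modality only
        importance[name] = baseline - predict_proba(x_abl[None])[0][true_class]
    return importance
```

A modality with near-zero importance for a given sample suggests the classifier's decision for that sample did not rely on it, which is what makes the per-sample (local) comparisons across subjects possible.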


2013 ◽  
Vol 23 (03) ◽  
pp. 1350012 ◽  
Author(s):  
L. J. HERRERA ◽  
C. M. FERNANDES ◽  
A. M. MORA ◽  
D. MIGOTINA ◽  
R. LARGO ◽  
...  

This work proposes a methodology for sleep stage classification based on two main approaches: the combination of features extracted from electroencephalogram (EEG) signal by different extraction methods, and the use of stacked sequential learning to incorporate predicted information from nearby sleep stages in the final classifier. The feature extraction methods used in this work include three representative ways of extracting information from EEG signals: Hjorth features, wavelet transformation and symbolic representation. Feature selection was then used to evaluate the relevance of individual features from this set of methods. Stacked sequential learning uses a second-layer classifier to improve the classification by using previous and posterior first-layer predicted stages as additional features providing information to the model. Results show that both approaches enhance the sleep stage classification accuracy rate, thus leading to a closer approximation to the experts' opinion.
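Of the three feature families combined above, the Hjorth parameters are the most compact to state: activity is the signal variance, mobility relates the variance of the first difference to that of the signal, and complexity relates the mobility of the first difference to that of the signal. A minimal sketch for one EEG epoch:

```python
# Minimal sketch of the Hjorth parameters for one epoch. A pure
# sinusoid has complexity 1, so more irregular signals score higher.
import numpy as np

def hjorth(x):
    dx = np.diff(x)    # first difference approximates the derivative
    ddx = np.diff(dx)  # second difference
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```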


2021 ◽  
Author(s):  
Charles A. Ellis ◽  
Mohammad S.E. Sendi ◽  
Robyn L Miller ◽  
Vince D. Calhoun

The automated feature extraction capabilities of deep learning classifiers have promoted their broader application to EEG analysis. In contrast to earlier machine learning studies, which used extracted features amenable to traditional explainability approaches, explainability for classifiers trained on raw data is particularly challenging. As such, studies have begun to present methods that provide insight into the spectral features learned by deep learning classifiers trained on raw EEG. These approaches have two key shortcomings. (1) They involve perturbation, which can create out-of-distribution samples that cause inaccurate explanations. (2) They are global, not local. Local explainability approaches can be used to examine how demographic and clinical variables affected the patterns learned by the classifier. In our study, we present a novel local spectral explainability approach. We apply it to a convolutional neural network trained for automated sleep stage classification. We apply layer-wise relevance propagation to identify the relative importance of the features in the raw EEG and subsequently examine the frequency domain of the explanations to determine the importance of each canonical frequency band locally and globally. We then perform a statistical analysis to determine whether age and sex affected the patterns learned by the classifier for each frequency band and sleep stage. Results showed that δ, β, and γ were the overall most important frequency bands. In addition, age and sex significantly affected the patterns learned by the classifier for most sleep stages and frequency bands. Our study presents a novel spectral explainability approach that could substantially increase the level of insight into classifiers trained on raw EEG.
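The spectral step described above, examining the frequency domain of the explanations, can be illustrated by taking the FFT magnitude of a per-timepoint relevance vector (such as layer-wise relevance propagation produces) and summing it over canonical bands. The band edges and aggregation rule below are common conventions, not necessarily the authors' exact choices:

```python
# Sketch of aggregating a per-timepoint relevance vector into canonical
# EEG frequency bands; band edges are common conventions (assumed).
import numpy as np

BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 50)}  # Hz, half-open intervals

def band_importance(relevance, fs):
    """Sum of FFT magnitudes of the relevance signal within each band."""
    spec = np.abs(np.fft.rfft(relevance))
    freqs = np.fft.rfftfreq(len(relevance), d=1.0 / fs)
    return {band: spec[(freqs >= lo) & (freqs < hi)].sum()
            for band, (lo, hi) in BANDS.items()}
```

Applied per sample, this yields the local band importances; averaging over all samples of a class recovers a global estimate.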


2020 ◽  
Vol 10 (5) ◽  
pp. 1797 ◽  
Author(s):  
Mera Kartika Delimayanti ◽  
Bedy Purnama ◽  
Ngoc Giang Nguyen ◽  
Mohammad Reza Faisal ◽  
Kunti Robiatul Mahmudah ◽  
...  

Manual classification of sleep stages is a time-consuming but necessary step in the diagnosis and treatment of sleep disorders, and its automation has been an area of active study. Previous works have applied low-dimensional fast Fourier transform (FFT) features with many machine learning algorithms. In this paper, we demonstrate the use of features extracted from EEG signals via FFT to improve the performance of automated sleep stage classification through machine learning methods. Unlike previous works using FFT, we incorporated thousands of FFT features in order to classify the sleep stages into 2–6 classes. Using the expanded version of the Sleep-EDF dataset with 61 recordings, our method outperformed other state-of-the-art methods. This result indicates that high-dimensional FFT features in combination with simple feature selection are effective for the improvement of automated sleep stage classification.
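The feature extraction step can be sketched directly: take the FFT magnitude of each 30-second epoch and keep every bin up to a cutoff, yielding on the order of a thousand features per epoch. The sampling rate and cutoff below are illustrative assumptions, not the paper's exact settings:

```python
# Sketch of high-dimensional FFT magnitude features from EEG epochs.
# fs and fmax are illustrative; a 30 s epoch at 100 Hz gives ~1000 bins.
import numpy as np

def fft_features(epochs, fs=100.0, fmax=35.0):
    """epochs: array (n_epochs, n_samples). Returns one magnitude per
    frequency bin up to fmax for each epoch."""
    spec = np.abs(np.fft.rfft(epochs, axis=1))
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1.0 / fs)
    return spec[:, freqs <= fmax]
```

Feature selection can then rank these bins, which is how a "simple feature selection" step pairs with the high-dimensional representation.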


2021 ◽  
Vol 2 (4) ◽  
Author(s):  
Sarun Paisarnsrisomsuk ◽  
Carolina Ruiz ◽  
Sergio A. Alvarez

Deep neural networks can provide accurate automated classification of human sleep signals into sleep stages that enables more effective diagnosis and treatment of sleep disorders. We develop a deep convolutional neural network (CNN) that attains state-of-the-art sleep stage classification performance on input data consisting of human sleep EEG and EOG signals. Nested cross-validation is used for optimal model selection and reliable estimation of out-of-sample classification performance. The resulting network attains a classification accuracy of 84.50 ± 0.13%; its performance exceeds human expert inter-scorer agreement, even on single-channel EEG input data, therefore providing more objective and consistent labeling than human experts demonstrate as a group. We focus on analyzing the learned internal data representations of our network, with the aim of understanding the development of class differentiation ability across the layers of processing units, as a function of layer depth. We approach this problem visually, using t-distributed Stochastic Neighbor Embedding (t-SNE), and propose a pooling variant of Centered Kernel Alignment (CKA) that provides an objective quantitative measure of the development of sleep stage specialization and differentiation with layer depth. The results reveal a monotonic progression of both of these sleep stage modeling abilities as layer depth increases.
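The pooled CKA measure builds on standard linear CKA, which compares two layer representations evaluated on the same samples. The base measure is easy to state; the pooling variant itself is the paper's contribution and is not reproduced here:

```python
# Sketch of standard linear Centered Kernel Alignment (CKA) between two
# representations X, Y of shape (n_samples, n_features); the paper's
# pooling variant builds on this base measure.
import numpy as np

def linear_cka(X, Y):
    X = X - X.mean(axis=0)  # center features over samples
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

Linear CKA is invariant to isotropic scaling and orthogonal transforms of either representation, which is why it is a reasonable measure of how similarly two layers organize the same samples.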


2008 ◽  
Vol 20 (2) ◽  
pp. 296-311 ◽  
Author(s):  
Perrine Ruby ◽  
Anne Caclin ◽  
Sabrina Boulet ◽  
Claude Delpuech ◽  
Dominique Morlet

How does the sleeping brain process external stimuli, and in particular, to what extent does the sleeping brain detect and process modifications in its sensory environment? To address this issue, we investigated brain reactivity to simple auditory stimulations during sleep in young healthy subjects. The electroencephalogram signal was acquired continuously during a whole night of sleep while a classical oddball paradigm with duration deviance was applied. In all sleep stages except Sleep Stage 4, a mismatch negativity (MMN) was unquestionably found in response to deviant tones, revealing for the first time preserved sensory memory processing during almost the whole night. Surprisingly, during Sleep Stage 2 and paradoxical sleep, both P3a-like and P3b-like components were identified after the MMN, whereas a P3a alone followed the MMN in wakefulness and in Sleep Stage 1. This entirely new result suggests elaborated processing of external stimulation during sleep. We propose that the P3b-like response could be associated with active processing of the deviant tone in the dream's consciousness.


2018 ◽  
Vol 63 (2) ◽  
pp. 177-190 ◽  
Author(s):  
Junming Zhang ◽  
Yan Wu

Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task because it requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, based on the complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performances of handcrafted features are compared with those of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, and that the CCNN obtains better classification performance and considerably faster convergence than a conventional convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
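The CCNN's basic building block, a complex-valued convolution, can be written entirely with real arithmetic via (a+ib)(c+id) = (ac-bd) + i(ad+bc). A minimal 1D sketch of that building block (illustrative only; the paper's full architecture is of course much larger):

```python
# Sketch of one complex-valued 1D convolution implemented with four real
# convolutions, following (a+ib)(c+id) = (ac-bd) + i(ad+bc).
import numpy as np

def complex_conv1d(x_re, x_im, w_re, w_im):
    conv = lambda x, w: np.convolve(x, w, mode="valid")
    y_re = conv(x_re, w_re) - conv(x_im, w_im)  # real part
    y_im = conv(x_re, w_im) + conv(x_im, w_re)  # imaginary part
    return y_re, y_im
```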


Author(s):  
Asma Salamatian ◽  
Ali Khadem

Purpose: Sleep is one of the body's necessities, like eating and drinking, that affects many aspects of human life. Sleep monitoring and sleep stage classification play an important role in the diagnosis of sleep-related diseases and neurological disorders. Empirically, classification of sleep stages is a time-consuming, tedious, and complex task that heavily depends on the experience of the experts. As a result, there is a crucial need for an efficient automatic sleep staging system. Materials and Methods: This study develops a 13-layer 1D Convolutional Neural Network (CNN) using a single-channel Electroencephalogram (EEG) signal for extracting features automatically and classifying the sleep stages. To overcome the negative effect of an imbalanced dataset, we have used the Synthetic Minority Oversampling Technique (SMOTE). In our study, the single-channel EEG signal is given to the 1D CNN without any feature extraction/selection processes. This deep network can self-learn the discriminative features from the EEG signal. Results: Applying the proposed method to the Sleep-EDF dataset resulted in an overall accuracy, sensitivity, specificity, and precision of 94.09%, 74.73%, 96.43%, and 71.02%, respectively, for classifying five sleep stages. Using single-channel EEG and providing a network with fewer trainable parameters than most of the available deep learning-based methods are the main advantages of the proposed method. Conclusion: In this study, a 13-layer 1D CNN model was proposed for sleep stage classification. This model has an end-to-end architecture and does not require separate feature extraction/selection and classification stages. Having a low number of network parameters and layers while still achieving high classification accuracy is the main advantage of the proposed method over most previous deep learning-based approaches.
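SMOTE, used above to counter class imbalance, synthesizes minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbors. A minimal sketch of that interpolation step (illustrative; the study likely used an off-the-shelf implementation such as imbalanced-learn):

```python
# Minimal SMOTE sketch: each synthetic sample lies on the line segment
# between a minority sample and one of its k nearest minority neighbors.
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """X_min: minority-class samples (n, d). Returns n_new synthetic rows."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]  # k nearest neighbors, excluding self
        j = rng.choice(nn)
        lam = rng.random()           # interpolation coefficient in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because each synthetic point is a convex combination of two real minority samples, the oversampled set stays within the minority class's region of feature space rather than duplicating existing samples.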


2021 ◽  
Author(s):  
Charles A Ellis ◽  
Robyn L Miller ◽  
Vince D Calhoun

Recent years have shown a growth in the application of deep learning architectures such as convolutional neural networks (CNNs), to electrophysiology analysis. However, using neural networks with raw time-series data makes explainability a significant challenge. Multiple explainability approaches have been developed for insight into the spectral features learned by CNNs from EEG. However, across electrophysiology modalities, and even within EEG, there are many unique waveforms of clinical relevance. Existing methods that provide insight into waveforms learned by CNNs are of questionable utility. In this study, we present a novel model visualization-based approach that analyzes the filters in the first convolutional layer of the network. To our knowledge, this is the first method focused on extracting explainable information from EEG waveforms learned by CNNs while also providing insight into the learned spectral features. We demonstrate the viability of our approach within the context of automated sleep stage classification, a well-characterized domain that can help validate our approach. We identify 3 subgroups of filters with distinct spectral properties, determine the relative importance of each group of filters, and identify several unique waveforms learned by the classifier that were vital to the classifier performance. Our approach represents a significant step forward in explainability for electrophysiology classifiers, which we also hope will be useful for providing insights in future studies.
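A first step in analyzing first-layer filters spectrally is to characterize each convolutional kernel by its dominant frequency, after which kernels can be grouped by where that frequency falls. The sketch below shows the dominant-frequency step; the grouping into subgroups in the paper may use a different criterion:

```python
# Sketch of characterizing first-layer 1D conv filters by dominant
# frequency; an illustrative version of the spectral grouping step.
import numpy as np

def dominant_frequencies(filters, fs):
    """filters: array (n_filters, kernel_len) of first-layer kernels.
    Returns the frequency (Hz) with the largest FFT magnitude per filter."""
    spec = np.abs(np.fft.rfft(filters, axis=1))
    freqs = np.fft.rfftfreq(filters.shape[1], d=1.0 / fs)
    return freqs[np.argmax(spec, axis=1)]
```

Binning these dominant frequencies (e.g., into low, mid, and high ranges) gives filter subgroups whose importance can then be compared, for instance by ablating each group and measuring the performance drop.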

