A Novel Local Explainability Approach for Spectral Insight into Raw EEG-Based Deep Learning Classifiers
The frequency domain of electroencephalography (EEG) data has emerged as a particularly important area of EEG analysis, and EEG spectra have been analyzed with explainable machine learning and deep learning methods. However, as deep learning has matured, most studies now train models on raw EEG data, to which traditional explainability methods are not well suited. Several studies have introduced methods for spectral insight into classifiers trained on raw EEG data. These studies provide global insight into the frequency bands that are generally important to a classifier but not local insight into the frequency bands that drive the classification of individual samples. Such local explainability could be particularly helpful for EEG analysis domains, like sleep stage classification, that feature multiple evolving states. We present a novel local spectral explainability approach and use it to explain a convolutional neural network trained for automated sleep stage classification. We use our approach to show how the relative importance of different frequency bands varies over time and even within the same sleep stage. Furthermore, to relate our approach to existing methods, we aggregate our local results into a global estimate of spectral importance and compare it with an existing global spectral importance approach. We find that the δ band is most important for most sleep stages, though β is most important for the non-rapid eye movement 2 (NREM2) sleep stage. Additionally, θ is particularly important for identifying Awake and NREM1 samples. Our study presents the first approach for local spectral insight into deep learning classifiers trained on raw EEG time series.
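The abstract does not spell out the mechanism behind the local spectral attributions, but one common way to obtain per-sample band importance for a model trained on raw EEG is band ablation: remove one canonical frequency band from a single epoch and measure how much the classifier's output changes. The sketch below is a minimal illustration of that general idea, not the paper's exact method; the band edges are conventional defaults, and `predict_proba` is a hypothetical wrapper mapping a 1-D epoch to class probabilities from the trained network.

```python
import numpy as np

# Canonical EEG bands (Hz); these edges are a common convention, assumed here.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 50)}

def band_ablate(epoch, fs, low, high):
    """Zero out one frequency band of a single-channel epoch via the FFT."""
    spectrum = np.fft.rfft(epoch)
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    spectrum[(freqs >= low) & (freqs < high)] = 0.0
    return np.fft.irfft(spectrum, n=epoch.size)

def local_band_importance(predict_proba, epoch, fs, target_class):
    """Per-band importance for ONE epoch: drop in the predicted probability
    of target_class after the band is removed from the input."""
    baseline = predict_proba(epoch)[target_class]
    importance = {}
    for name, (low, high) in BANDS.items():
        perturbed = band_ablate(epoch, fs, low, high)
        importance[name] = baseline - predict_proba(perturbed)[target_class]
    return importance
```

Averaging such per-epoch importances within each sleep stage would give a global estimate of spectral importance of the kind the abstract compares against existing global approaches.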