ConvDip: A Convolutional Neural Network for Better EEG Source Imaging

2021 ◽  
Vol 15 ◽  
Author(s):  
Lukas Hecker ◽  
Rebekka Rupprecht ◽  
Ludger Tebartz Van Elst ◽  
Jürgen Kornmeier

Electroencephalography (EEG) is a well-established non-invasive method in neuroscientific research and clinical diagnostics. It provides high temporal but low spatial resolution of brain activity. To gain insight into the spatial dynamics underlying the EEG, one has to solve the inverse problem, i.e., find the neural sources that give rise to the recorded EEG activity. The inverse problem is ill-posed, which means that more than one configuration of neural sources can evoke one and the same distribution of EEG activity on the scalp. Artificial neural networks have previously been used successfully to find one or two dipole sources; these approaches, however, have never solved the inverse problem in a distributed dipole model with more than two dipole sources. We present ConvDip, a novel convolutional neural network (CNN) architecture that solves the EEG inverse problem in a distributed dipole model based on simulated EEG data. We show that (1) ConvDip learned to produce inverse solutions from a single time point of EEG data and (2) outperforms state-of-the-art methods (eLORETA and LCMV beamforming) on all examined performance measures. (3) It is more flexible when dealing with a varying number of sources, produces fewer ghost sources, and misses fewer real sources than the comparison methods, and it produces plausible inverse solutions for real EEG recordings from human participants. (4) The trained network needs less than 40 ms for a single prediction. Our results qualify ConvDip as an efficient and easy-to-apply novel method for source localization in EEG data, with high relevance for clinical applications, e.g., in epileptology, and for real-time applications.
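For illustration, a minimal PyTorch sketch of the kind of mapping described above: a small CNN that takes a single-time-point scalp topography (interpolated onto a 2-D sensor grid) and regresses one amplitude per dipole of a fixed distributed source model. The grid size, layer widths, and dipole count are assumptions for illustration, not the published ConvDip architecture.

```python
# Illustrative sketch only: a small CNN mapping a single-time-point EEG
# topography (interpolated to a 2-D sensor grid) to amplitudes of a fixed
# distributed dipole model. Shapes and layer sizes are assumptions, not
# the published ConvDip architecture.
import torch
import torch.nn as nn

class ConvDipSketch(nn.Module):
    def __init__(self, grid_size=9, n_dipoles=5124):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * grid_size * grid_size, 512),
            nn.ReLU(),
            nn.Linear(512, n_dipoles),   # one amplitude per source dipole
        )

    def forward(self, x):                # x: (batch, 1, grid, grid)
        return self.head(self.features(x))

# Usage: one simulated scalp map -> predicted source distribution.
model = ConvDipSketch()
scalp_map = torch.randn(1, 1, 9, 9)      # interpolated single time point
sources = model(scalp_map)               # (1, n_dipoles)
print(sources.shape)
```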


2021 ◽  
Vol 11 (21) ◽  
pp. 9948
Author(s):  
Amira Echtioui ◽  
Ayoub Mlaouah ◽  
Wassim Zouch ◽  
Mohamed Ghorbel ◽  
Chokri Mhiri ◽  
...  

Recently, electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because they can encode a person’s intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even to drive independently. Classifying the motor imagery tasks in these signals is therefore important for a Brain-Computer Interface (BCI) system. Building a good decoder for MI tasks from EEG signals is difficult due to the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an Artificial Neural Network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified CNN1 merged with CNN2. These methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, which uses spatial and frequency characteristics, outperforms state-of-the-art machine/deep learning techniques for EEG classification with an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left hand, right hand, feet, and tongue). The experimental results demonstrate the feasibility of the proposed method for the classification of MI-EEG signals, and it can be applied successfully to BCI systems where the amount of data is large due to daily recording.
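As a rough illustration of a CNN-based MI classifier of this kind (not the paper’s CNN1), the sketch below classifies four MI classes from raw multichannel epochs; the channel count follows the BCI Competition IV-2a montage, while the epoch length and layer sizes are assumptions.

```python
# Illustrative sketch: a small 1-D CNN classifying 4 motor-imagery classes
# from raw multichannel EEG epochs (22 channels, as in BCI Competition IV-2a).
# Epoch length and layer sizes are assumptions, not the paper's CNN1.
import torch
import torch.nn as nn

class MICNNSketch(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=25, padding=12),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=11, padding=5),
            nn.BatchNorm1d(64),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, channels, samples)
        return self.fc(self.conv(x).squeeze(-1))

model = MICNNSketch()
epochs = torch.randn(8, 22, 1000)          # 8 epochs of raw EEG
logits = model(epochs)                     # (8, 4) class scores
```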


2020 ◽  
Vol 32 (4) ◽  
pp. 731-737
Author(s):  
Akinari Onishi

A brain-computer interface (BCI) enables us to interact with the external world via electroencephalography (EEG) signals. Recently, deep learning methods have been applied to BCIs to reduce the time required for recording training data; however, more evidence is needed because few comparisons have been reported. To provide such evidence, this study proposes a deep learning method named time-wise convolutional neural network (TWCNN) and applies it to a BCI dataset. In the evaluation, EEG data from one subject were classified using previously recorded EEG data from other subjects. TWCNN showed the highest accuracy, significantly higher than that of a typically used classifier. The results suggest that deep learning may be useful for reducing the recording time of training data.
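One way to read “time-wise” convolution is a kernel that spans all EEG channels at once and slides only along the time axis; the sketch below illustrates that idea under this assumption and is not the TWCNN described in the paper.

```python
# Illustrative sketch: a "time-wise" convolution, i.e. a kernel spanning all
# EEG channels while sliding along time only, followed by a linear readout.
# Not the paper's TWCNN; shapes and sizes are assumptions.
import torch
import torch.nn as nn

n_channels, n_samples, n_classes = 64, 256, 2

model = nn.Sequential(
    # treat the epoch as a 1 x channels x samples "image";
    # the kernel covers all channels and slides over time
    nn.Conv2d(1, 16, kernel_size=(n_channels, 15), padding=(0, 7)),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((1, 1)),
    nn.Flatten(),
    nn.Linear(16, n_classes),
)

epochs = torch.randn(4, 1, n_channels, n_samples)   # 4 epochs
scores = model(epochs)                               # (4, n_classes)
```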


Author(s):  
Zeng Hui ◽  
Li Ying ◽  
Wang Lingyue ◽  
Yin Ning ◽  
Yang Shuo

The electroencephalography (EEG) inverse problem is a typical inverse problem, in which the electrical activity within the brain is reconstructed from EEG data collected at scalp electrodes. In this paper, a four-layer concentric head model is first used for simulation, and four deep neural network models, a multilayer perceptron (MLP) and three convolutional neural networks (CNNs), are adopted to solve the EEG inverse problem based on an equivalent current dipole (ECD) model. In the simulations, 100,000 samples are generated randomly, of which 60% are used for network training and 20% for cross-validation. The generalization performance of the model using the optimal function is then measured by the errors on the remaining 20% test set. The experimental results show that the absolute error, relative error, mean positioning error, and standard deviation of the four models are all extremely low. The CNN with 6 convolutional layers and 3 pooling layers (CNN-3) is the best model: its absolute error is about 0.015, its relative error is about 0.005, and its dipole position error is 0.040±0.029 cm. Furthermore, we use CNN-3 for source localization of real EEG data from a working-memory experiment, and the results are consistent with physiological findings. The deep neural network method in our study requires fewer parameters, takes less time, and achieves better positioning results.
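As a sketch of the regression setup (not the paper’s networks), a small MLP can map a vector of scalp potentials to the six ECD parameters, i.e., 3-D position and 3-D moment; the electrode count and layer widths below are assumptions.

```python
# Illustrative sketch: an MLP regressing equivalent-current-dipole (ECD)
# parameters (3-D position + 3-D moment) from scalp potentials. Electrode
# count and layer widths are assumptions, not the paper's models.
import torch
import torch.nn as nn

n_electrodes = 64

mlp = nn.Sequential(
    nn.Linear(n_electrodes, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 6),        # (x, y, z, mx, my, mz)
)

potentials = torch.randn(32, n_electrodes)    # batch of simulated samples
dipole_params = mlp(potentials)               # (32, 6)

# Training would minimize the error against simulated ground-truth dipoles,
# e.g. with mean squared error (target here is a placeholder):
target = torch.randn(32, 6)
loss = ((dipole_params - target) ** 2).mean()
```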


Author(s):  
Zhijie Fang ◽  
Weiqun Wang ◽  
Shixin Ren ◽  
Jiaxing Wang ◽  
Weiguo Shi ◽  
...  

Recent deep learning-based Brain-Computer Interface (BCI) decoding algorithms mainly focus on spatial-temporal features while failing to explicitly exploit spectral information, one of the most important cues for BCI. In this paper, we propose a novel regional attention convolutional neural network (RACNN) to take full advantage of spectral-spatial-temporal features for EEG motion intention recognition. Time-frequency analysis is adopted to reveal spectral-temporal features in terms of neural oscillations of the primary sensorimotor cortex. The basic idea of RACNN is to identify the activated area of the primary sensorimotor cortex adaptively. RACNN aggregates a varying number of spectral-temporal features produced by a backbone convolutional neural network into a compact fixed-length representation. Inspired by the neuroscience finding of functional asymmetry between the cerebral hemispheres, we propose a region-biased loss to encourage high attention weights for the most critical regions. Extensive evaluations on two benchmark datasets and a real-world BCI dataset show that our approach significantly outperforms previous methods.
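A minimal sketch of the attention-based aggregation idea (not the published RACNN): regional feature vectors from a backbone are weighted by learned attention scores and summed into a fixed-length representation, and a region-biased term can then favour attention on selected regions. The feature dimensions and the chosen “critical” region indices below are hypothetical.

```python
# Illustrative sketch: attention-weighted aggregation of a variable number of
# regional feature vectors into one fixed-length representation. Not the
# published RACNN; dimensions and the bias term are assumptions.
import torch
import torch.nn as nn

class RegionAttentionPool(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)    # one attention score per region

    def forward(self, region_feats):           # (batch, n_regions, feat_dim)
        w = torch.softmax(self.score(region_feats), dim=1)   # (batch, n_regions, 1)
        pooled = (w * region_feats).sum(dim=1)                # (batch, feat_dim)
        return pooled, w.squeeze(-1)

pool = RegionAttentionPool()
feats = torch.randn(4, 10, 128)                # 10 spectral-temporal region features
pooled, weights = pool(feats)

# A "region-biased" term could encourage high attention on chosen regions
# (indices here are hypothetical, e.g. contralateral sensorimotor areas):
critical = torch.tensor([2, 3])
bias_loss = -torch.log(weights[:, critical] + 1e-8).mean()
```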


2020 ◽  
Vol 2 (3) ◽  
pp. 121-127
Author(s):  
Dr. Vijayakumar T.

This paper addresses inverse problems using a novel deep convolutional neural network (CNN). Over the years, regularized iterative algorithms have been the standard approach to this problem. Although these methodologies give excellent results, they still pose challenges such as difficult hyperparameter selection and the increasing computational cost of the forward and adjoint operators. It has been observed that when the normal operator of the forward model is a convolution, unrolled iterative methods take the form of a CNN. In view of this observation, we propose a methodology that applies a CNN after direct inversion to solve convolutional inverse problems. In the first step, the physical model of the system is inverted directly. This, however, introduces artifacts, which are then removed by a CNN that combines residual learning with multi-resolution decomposition. The results show that the proposed method outperforms other algorithms and requires at most one second to reconstruct a high-definition image.
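A minimal sketch of the two-step idea, direct inversion followed by a CNN that removes the remaining artifacts via residual learning; the direct-inversion step is stubbed out and the network is far smaller than a practical multi-resolution model, so all names and sizes here are illustrative.

```python
# Illustrative sketch: direct inversion followed by a residual CNN that
# predicts (and subtracts) the artifacts left by the direct step. The
# direct_inversion stub and the tiny network are assumptions for
# illustration, not the paper's method.
import torch
import torch.nn as nn

def direct_inversion(measurements):
    # Placeholder for a physics-based inverse of the forward model
    # (e.g. filtered back-projection); here it just returns its input.
    return measurements

class ResidualArtifactRemover(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.body(x)        # residual learning: predict artifacts only

measurements = torch.randn(1, 1, 128, 128)
initial = direct_inversion(measurements)       # artifact-laden first estimate
restored = ResidualArtifactRemover()(initial)  # artifact-corrected image
```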


2021 ◽  
Vol 10 (15) ◽  
pp. e335101522712
Author(s):  
Amanda Ferrari Iaquinta ◽  
Ana Carolina de Sousa Silva ◽  
Aldrumont Ferraz Júnior ◽  
Jessica Monique de Toledo ◽  
Gustavo Voltani von Atzingen

The electrical signal produced by eye movements creates a very strong artifact in the EEG signal because of the eyes’ proximity to the sensors and the frequency with which blinks occur. In the context of detecting eye-blink artifacts in EEG waveforms for subsequent removal and signal purification, multiple strategies have been proposed in the literature. The most commonly applied methods require a large number of electrodes and complex equipment for sampling and processing data. The goal of this work is to create a reliable, user-independent algorithm for detecting and removing eye blinks in EEG signals using a convolutional neural network (CNN). For training and validation, three public EEG data sets were used. All three sets contain samples obtained while the recruited subjects performed assigned tasks that included blinking voluntarily at specific moments, watching a video, and reading an article. The model used in this study was able to learn the features that distinguish a clean EEG signal from a signal contaminated with eye-blink artifacts, without overfitting to features that occurred only in the specific situations in which the signals were recorded.
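For illustration, a minimal sketch (not the study’s model) of a CNN that labels short EEG windows as clean versus blink-contaminated; window length, channel count, and layer sizes are assumptions.

```python
# Illustrative sketch: a 1-D CNN labelling short EEG windows as "clean" vs.
# "blink-contaminated". Window length, channel count, and layer sizes are
# assumptions, not the study's model.
import torch
import torch.nn as nn

class BlinkDetectorSketch(nn.Module):
    def __init__(self, n_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, 2),          # clean vs. blink
        )

    def forward(self, x):              # x: (batch, channels, samples)
        return self.net(x)

windows = torch.randn(16, 4, 256)      # 16 short EEG windows
logits = BlinkDetectorSketch()(windows)
```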

