A Novel Time-Incremental End-to-End Shared Neural Network with Attention-Based Feature Fusion for Multiclass Motor Imagery Recognition

2021 · Vol 2021 · pp. 1-16
Author(s): Shidong Lian, Jialin Xu, Guokun Zuo, Xia Wei, Huilin Zhou

In research on motor imagery brain-computer interfaces (MI-BCI), traditional electroencephalogram (EEG) recognition algorithms are inefficient at extracting EEG signal features and limited in classification accuracy. In this paper, we propose a solution based on a novel step-by-step method of feature extraction and pattern classification for multiclass MI-EEG signals. First, the training data from all subjects are merged and augmented through an autoencoder to meet the need for large amounts of data while mitigating the adverse effects that the randomness, instability, and individual variability of EEG data have on signal recognition. Second, an end-to-end shared structure with an attention-based time-incremental shallow convolutional neural network is proposed. A shallow convolutional neural network (SCNN) and a bidirectional long short-term memory (BiLSTM) network extract the frequency-spatial domain features and the time-series features of the EEG signals, respectively. An attention model is then introduced into the feature fusion layer to dynamically weight these extracted temporal-frequency-spatial domain features, which greatly reduces feature redundancy and improves classification accuracy. Finally, validation tests on the BCI Competition IV 2a dataset show that classification accuracy and kappa coefficient reach 82.7 ± 5.57% and 0.78 ± 0.074, respectively, demonstrating the method's advantages in improving classification accuracy and reducing individual differences among subjects within the same network.
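The core of the fusion step described above is an attention layer that softmax-weights the feature vectors produced by the two branches. A minimal numpy sketch, assuming a simple dot-product scoring vector (`attention_fuse` and the random stand-in weights are illustrative, not the paper's trained parameters):

```python
import numpy as np

def attention_fuse(features, w):
    """Dynamically weight feature vectors from different branches.

    features: (n_branches, d) array, e.g. SCNN and BiLSTM outputs.
    w: (d,) scoring vector (a stand-in for learned attention weights).
    Returns the attention-weighted sum of the branch features, shape (d,).
    """
    scores = features @ w                 # one scalar score per branch
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # softmax over branches
    return alpha @ features               # convex combination of branches

rng = np.random.default_rng(0)
scnn_feat = rng.standard_normal(64)      # frequency-spatial domain features
bilstm_feat = rng.standard_normal(64)    # time-series features
fused = attention_fuse(np.stack([scnn_feat, bilstm_feat]),
                       rng.standard_normal(64))
```

Because the attention weights sum to one, the fused vector stays on the same scale as the branch features, which is what lets the network down-weight redundant features rather than simply concatenating them.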

2021 · Vol 15
Author(s): Xiongliang Xiao, Yuee Fang

Brain-computer interaction (BCI) based on EEG can help patients with limb dyskinesia carry out daily life and rehabilitation training. However, because of the low signal-to-noise ratio and large individual differences of EEG, feature extraction and classification suffer from low accuracy and efficiency. To solve this problem, this paper proposes a motor imagery EEG recognition method based on a deep convolutional network. To address the low quality of EEG feature data, the method first preprocesses the collected experimental datasets with the short-time Fourier transform (STFT) and the continuous Morlet wavelet transform (CMWT), exploiting their time-series characteristics to obtain distinct EEG representations with time-frequency structure. An improved CNN model then recognizes these signals efficiently, achieving high-quality EEG feature extraction and classification with high accuracy and precision. Finally, the proposed method is validated on a BCI competition dataset and on laboratory-measured data. Experimental results show that the method achieves an accuracy of 0.9324, a precision of 0.9653, and an AUC of 0.9464 for EEG signal recognition, demonstrating good practicality and applicability.
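The preprocessing stage described above turns a raw EEG channel into time-frequency images via the STFT and a Morlet transform. A sketch under stated assumptions: the sampling rate, window length, and wavelet parameter `w` below are illustrative choices, and `morlet_cwt` is a hand-rolled Morlet transform since the paper's exact wavelet settings are not given:

```python
import numpy as np
from scipy.signal import stft

fs = 250.0                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
# Mock single EEG channel: a 10 Hz (mu-band) rhythm plus noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Short-time Fourier transform: a time-frequency image usable as CNN input
f, tt, Z = stft(x, fs=fs, nperseg=128)

def morlet_cwt(sig, fs, freqs, w=6.0):
    """Complex Morlet transform at the given centre frequencies (Hz)."""
    out = np.empty((len(freqs), sig.size), dtype=complex)
    for i, f0 in enumerate(freqs):
        s = w * fs / (2 * np.pi * f0)        # Gaussian width for frequency f0
        n = np.arange(-int(4 * s), int(4 * s) + 1)
        wavelet = np.exp(1j * w * n / s) * np.exp(-n**2 / (2 * s**2))
        out[i] = np.convolve(sig, wavelet, mode="same") / s
    return out

coeffs = morlet_cwt(x, fs, [10.0, 25.0])     # strong response expected at 10 Hz
```

The wavelet magnitudes at the matched frequency dominate those at a mismatched one, which is what makes the resulting time-frequency maps "distinct" inputs for the CNN.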


Complexity · 2018 · Vol 2018 · pp. 1-10
Author(s): Vladimir A. Maksimenko, Semen A. Kurkin, Elena N. Pitsik, Vyacheslav Yu. Musatov, Anastasia E. Runnova, et al.

We apply an artificial neural network (ANN) for the recognition and classification of electroencephalographic (EEG) patterns associated with motor imagery in untrained subjects. Classification accuracy is optimized by reducing the complexity of the input experimental data. From multichannel EEG recorded by a set of 31 electrodes arranged according to the extended international 10-10 system, we select an appropriate type of ANN that reaches 80 ± 10% accuracy for single-trial classification. Then, we reduce the number of EEG channels and obtain comparable recognition quality (up to 73 ± 15%) using only 8 electrodes located over the frontal lobe. Finally, we analyze the time-frequency structure of the EEG signals and find that motor-related features associated with left- and right-leg motor imagery are more pronounced in the mu (8–13 Hz) and delta (1–5 Hz) bands than in the high-frequency beta band (15–30 Hz). Based on these results, we propose further ANN optimization by preprocessing the EEG signals with low-pass filters of different cutoffs. We demonstrate that filtering out high-frequency spectral components significantly enhances classification performance (up to 90 ± 5% accuracy using only 8 electrodes). These results are of particular interest for the development of brain-computer interfaces for untrained subjects.
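The preprocessing idea above, suppressing beta-band components before classification, amounts to a low-pass filter stage. A minimal sketch using a zero-phase Butterworth filter; the 15 Hz cutoff, filter order, and sampling rate are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                  # assumed sampling rate (Hz)

def lowpass(eeg, cutoff, fs, order=4):
    """Zero-phase Butterworth low-pass over the last axis, suppressing
    high-frequency (e.g. beta, 15-30 Hz) components before the ANN."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, eeg, axis=-1)     # filtfilt avoids phase distortion

t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 25 * t)  # mu + beta mix
y = lowpass(x, cutoff=15.0, fs=fs)          # mu rhythm survives, beta is removed
```

Sweeping `cutoff` over a range and re-training the classifier at each value would reproduce the kind of cutoff comparison the study describes.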


2021 · Vol 11 (2) · pp. 197
Author(s): Tianjun Liu, Deling Yang

Motor imagery (MI) is a classical paradigm of brain-computer interaction (BCI), in which electroencephalogram (EEG) features evoked by imagined body movements are recognized and relevant information is extracted. Recently, various deep-learning methods have focused on finding an easy-to-use EEG representation that preserves both temporal and spatial information. To further exploit the spatial and temporal features of EEG signals, this paper introduces an improved 3D representation of the EEG and a densely connected multi-branch 3D convolutional neural network (dense M3D CNN) for MI classification. Specifically, compared with the original 3D representation, a new padding method is proposed that pads the grid points without electrodes with the mean of all EEG channels. Based on this new 3D representation, a densely connected multi-branch 3D CNN with a novel dense connectivity is proposed for extracting EEG signal features. Experiments on the WAY-EEG-GAL and BCI Competition IV 2a datasets verify the performance of the proposed method. The results show that the framework achieves state-of-the-art performance, significantly outperforming the multi-branch 3D CNN framework with a 6.208% improvement in average accuracy on the BCI Competition IV 2a datasets and a 6.281% improvement on the WAY-EEG-GAL datasets, with a smaller standard deviation. The results also demonstrate the effectiveness and robustness of the method for MI-classification tasks.
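The padding scheme above is easy to sketch: map each electrode into a 2D scalp grid per time sample, and fill grid points that have no electrode with the mean over all channels. The 3×3 layout and 5-channel montage below are hypothetical, chosen only to keep the example small:

```python
import numpy as np

# Hypothetical 3x3 scalp grid for a 5-channel montage; -1 marks grid
# points without an electrode (the real layout follows the 10-20 system).
GRID = np.array([[ 0, -1,  1],
                 [-1,  2, -1],
                 [ 3, -1,  4]])

def to_3d(eeg):
    """eeg: (channels, samples) -> (samples, rows, cols) 3D representation.

    Grid points without electrodes are padded with the per-sample mean of
    all channels, as in the improved padding scheme (rather than zeros).
    """
    n_ch, n_s = eeg.shape
    vol = np.empty((n_s, *GRID.shape))
    mean = eeg.mean(axis=0)                 # mean over all channels, per sample
    for r in range(GRID.shape[0]):
        for c in range(GRID.shape[1]):
            ch = GRID[r, c]
            vol[:, r, c] = eeg[ch] if ch >= 0 else mean
    return vol
```

Mean-padding keeps the empty positions on the same amplitude scale as real electrodes, so early 3D convolutions see no artificial zero-valued edges between channels.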


Sensors · 2019 · Vol 19 (1) · pp. 210
Author(s): Zied Tayeb, Juri Fedjaev, Nejla Ghaboosi, Christoph Richter, Lukas Everding, et al.

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns evoked by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. Extracting such features is difficult because of the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN). Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on the existing 2b EEG dataset from "BCI Competition IV". Overall, the deep learning models achieved better classification performance than state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
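The first of the three models, an LSTM consuming raw EEG, can be sketched at the level of a single recurrent cell: each multichannel sample is fed in turn, and the final hidden state is classified. The dimensions and random weights below are stand-ins for a trained model, purely to show the data flow:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step on a raw EEG sample x (channels,); gate order i, f, g, o.
    W: (4*d_h, d_in), U: (4*d_h, d_h), b: (4*d_h,)."""
    z = W @ x + U @ h + b
    d = h.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:d]), sig(z[d:2 * d]), sig(z[3 * d:])
    g = np.tanh(z[2 * d:3 * d])
    c = f * c + i * g                      # cell state carries long-range context
    return o * np.tanh(c), c

def decode(eeg, W, U, b, W_out):
    """Run the LSTM over raw EEG (samples, channels), no feature engineering,
    and classify the final hidden state."""
    h = np.zeros(U.shape[1])
    c = np.zeros(U.shape[1])
    for x in eeg:
        h, c = lstm_step(x, h, c, W, U, b)
    return int(np.argmax(W_out @ h))       # predicted MI class index

rng = np.random.default_rng(0)
d_in, d_h, n_cls = 3, 8, 2                 # 3 EEG channels, 2 MI classes
W = 0.1 * rng.standard_normal((4 * d_h, d_in))
U = 0.1 * rng.standard_normal((4 * d_h, d_h))
b = np.zeros(4 * d_h)
W_out = rng.standard_normal((n_cls, d_h))
pred = decode(rng.standard_normal((250, d_in)), W, U, b, W_out)
```

The point of the end-to-end setup is visible here: the only input is the raw `(samples, channels)` array, with no band-power or CSP features computed beforehand.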


Sensors · 2018 · Vol 18 (10) · pp. 3451
Author(s): Sławomir Opałka, Bartłomiej Stasiak, Dominik Szajerman, Adam Wojciechowski

Mental task classification is increasingly recognized as a major challenge in EEG signal processing and analysis. State-of-the-art approaches struggle with the spatially unstable structure of highly noised EEG signals. To address this problem, this paper presents a multi-channel convolutional neural network architecture with adaptively optimized parameters. Our solution outperforms alternative methods in the classification accuracy of mental tasks (imagination of hand movements and generation of speech sounds) while providing high generalization capability (∼5%). Classification efficiency was achieved with a frequency-domain multi-channel feeding scheme based on EEG frequency sub-band analysis, and an architecture supporting feature mapping with two subsequent convolutional layers terminated by a fully connected layer. On dataset V from BCI Competition III, the method achieved an average classification accuracy of nearly 70%, outperforming alternative methods. The presented solution applies a frequency-domain input representation processed by a multi-channel architecture that isolates frequency sub-bands in time windows, enabling multi-class signal classification that is more generalizable and more accurate (∼1.2%) than existing solutions. Such an approach, combined with an appropriate learning strategy and parameter optimization adapted to the signal characteristics, outperforms reference single- and multi-channel networks such as AlexNet, VGG-16, and Cecotti's multi-channel NN. With a classification accuracy improvement of 1.2%, our solution is a clear advance over the top three state-of-the-art methods, which improved on one another by no more than 0.3%.
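The sub-band feeding scheme above can be sketched as a bank of band-pass filters, one per sub-band, whose outputs become the parallel input channels of the network. The band edges and filter order below are illustrative assumptions; the paper's exact sub-band boundaries are not stated here:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative EEG sub-bands (Hz): theta, alpha/mu, beta
BANDS = [(4, 8), (8, 13), (13, 30)]

def subband_inputs(eeg, fs):
    """Split each EEG channel into frequency sub-bands.

    eeg: (channels, samples). Returns (bands, channels, samples), where each
    band slice feeds one input channel of the multi-channel CNN.
    """
    out = []
    for lo, hi in BANDS:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out.append(filtfilt(b, a, eeg, axis=-1))   # zero-phase band-pass
    return np.stack(out)
```

Isolating the sub-bands before the network, rather than leaving the separation to the first convolutional layer, is what gives each CNN input channel a physiologically interpretable frequency range.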

