Classify Motor Imagery by a Novel CNN with Data Augmentation

Author(s):  
Weijian Huang ◽  
Li Wang ◽  
Zhenxiong Yan ◽  
Yanjun Liu
2020 ◽  
Vol 17 (1) ◽  
pp. 016041 ◽  
Author(s):  
Daniel Freer ◽  
Guang-Zhong Yang

2021 ◽  
Author(s):  
Binghua Li ◽  
Zhiwen Zhang ◽  
Feng Duan ◽  
Zhenglu Yang ◽  
Qibin Zhao ◽  
...  

2020 ◽  
Author(s):  
Elnaz Lashgari ◽  
Dehua Liang ◽  
Uri Maoz

Background: Data augmentation (DA) has recently been shown to yield considerable performance gains for deep learning (DL): increased accuracy and stability and reduced overfitting. Some electroencephalography (EEG) tasks suffer from a low samples-to-features ratio, which severely reduces DL effectiveness. DA combined with DL thus holds transformative promise for EEG processing, much as DL revolutionized computer vision.
New method: We review trends and approaches to DA for DL in EEG to address three questions: Which DA approaches exist, and which are common for which EEG tasks? What input features are used? And what kind of accuracy gain can be expected?
Results: DA for DL on EEG began five years ago and its use is steadily growing. We grouped the DA techniques (noise addition, generative adversarial networks, sliding windows, sampling, Fourier transform, recombination of segmentation, and others) and the EEG tasks (seizure detection, sleep stages, motor imagery, mental workload, emotion recognition, motor tasks, and visual tasks). DA efficacy varied considerably across techniques. Noise addition and sliding windows provided the highest accuracy boost; mental workload benefited most from DA. Sliding-window, noise-addition, and sampling methods were most common for seizure detection, mental workload, and sleep stages, respectively.
Comparison with existing methods: The percentage of decoding accuracy explained by DA beyond unaugmented accuracy varied from 8% for recombination of segmentation to 36% for noise addition, and from 14% for motor imagery to 56% for mental workload, with an average of 29%.
Conclusions: DA is increasingly used and considerably improves DL decoding accuracy on EEG. Additional publications, if they adhere to our reporting guidelines, will facilitate more detailed analysis.
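The two techniques the review credits with the largest accuracy boost, noise addition and sliding windows, can be sketched minimally in NumPy. All shapes, the noise scale, and the window parameters below are illustrative choices, not values taken from the review:

```python
import numpy as np

def add_gaussian_noise(epochs, noise_scale=0.1, seed=None):
    """Noise addition: perturb EEG epochs (trials x channels x samples)
    with Gaussian noise scaled to a fraction of each channel's std."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(epochs.shape)
    return epochs + noise_scale * epochs.std(axis=-1, keepdims=True) * noise

def sliding_windows(epoch, win_len, step):
    """Sliding windows: cut one epoch (channels x samples) into
    overlapping crops, multiplying the examples per trial."""
    n = epoch.shape[-1]
    starts = range(0, n - win_len + 1, step)
    return np.stack([epoch[..., s:s + win_len] for s in starts])

# toy data: 2 trials, 3 channels, 500 samples
x = np.random.default_rng(0).standard_normal((2, 3, 500))
aug = add_gaussian_noise(x, noise_scale=0.1, seed=1)
wins = sliding_windows(x[0], win_len=250, step=125)
print(aug.shape, wins.shape)  # (2, 3, 500) (3, 3, 250)
```

Both methods preserve class labels, which is why they transfer so directly across the EEG tasks surveyed.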


2020 ◽  
Vol 62 ◽  
pp. 102152
Author(s):  
Paulo Henrique Gubert ◽  
Márcio Holsbach Costa ◽  
Cleison Daniel Silva ◽  
Alexandre Trofino-Neto

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4485 ◽  
Author(s):  
Kai Zhang ◽  
Guanghua Xu ◽  
Zezhen Han ◽  
Kaiquan Ma ◽  
Xiaowei Zheng ◽  
...  

As an important paradigm of spontaneous brain-computer interfaces (BCIs), motor imagery (MI) has been widely used in the fields of neurological rehabilitation and robot control. Recently, researchers have proposed various methods for feature extraction and classification based on MI signals. The decoding model based on deep neural networks (DNNs) has attracted significant attention in the field of MI signal processing. Due to the strict requirements for subjects and experimental environments, it is difficult to collect large-scale and high-quality electroencephalogram (EEG) data. However, the performance of a deep learning model depends directly on the size of the dataset. Therefore, the decoding of MI-EEG signals based on a DNN has proven highly challenging in practice. Based on this, we investigated the performance of different data augmentation (DA) methods for the classification of MI data using a DNN. First, we transformed the time series signals into spectrogram images using a short-time Fourier transform (STFT). Then, we evaluated and compared the performance of different DA methods on this spectrogram data. Next, we developed a convolutional neural network (CNN) to classify the MI signals and compared the classification performance before and after DA. The Fréchet inception distance (FID) was used to evaluate the quality of the generated data (GD), and the classification accuracy and mean kappa values were used to identify the best CNN-DA method. In addition, analysis of variance (ANOVA) and paired t-tests were used to assess the significance of the results. The results showed that the deep convolutional generative adversarial network (DCGAN) provided better augmentation performance than the traditional DA methods: geometric transformation (GT), autoencoder (AE), and variational autoencoder (VAE) (p < 0.01). Public datasets of the BCI competition IV (datasets 1 and 2b) were used to verify the classification performance.
Improvements in the classification accuracies of 17% and 21% (p < 0.01) were observed after DA for the two datasets. In addition, the hybrid network CNN-DCGAN outperformed the other classification methods, with average kappa values of 0.564 and 0.677 for the two datasets.
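The first step of this pipeline, turning a raw EEG time series into the time-frequency image the CNN consumes, can be sketched with SciPy's STFT. The sampling rate, trial length, and window parameters here are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.signal import stft

fs = 250                       # assumed EEG sampling rate (Hz)
t = np.arange(fs * 4) / fs     # one 4-second trial
# synthetic single-channel trial: a 10 Hz mu-band rhythm in noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# short-time Fourier transform -> complex time-frequency matrix
f, frames, Z = stft(x, fs=fs, nperseg=64, noverlap=48)
spectrogram = np.abs(Z)        # magnitude image fed to the CNN
print(spectrogram.shape)       # (frequency bins, time frames)
```

With `nperseg=64` the frequency axis has 33 bins up to the Nyquist frequency (125 Hz); in practice only the bands relevant to MI (roughly 8-30 Hz) carry the discriminative energy.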


2022 ◽  
Author(s):  
Arunabha Mohan Roy

Electroencephalogram (EEG) based motor imagery (MI) classification is an important aspect in brain-machine interfaces (BMIs) which bridges between neural system and computer devices decoding brain signals into recognizable machine commands. However, the MI classification task is challenging due to inherent complex properties, inter-subject variability, and low signal-to-noise ratio (SNR) of EEG signals. To overcome the above-mentioned issues, the current work proposes an efficient multi-scale convolutional neural network (MS-CNN) which can extract the distinguishable features of several non-overlapping canonical frequency bands of EEG signals from multiple scales for MI-BCI classification. In the framework, discriminant user-specific features have been extracted and integrated to improve the accuracy and performance of the CNN classifier. Additionally, different data augmentation methods have been implemented to further improve the accuracy and robustness of the model. The model achieves an average classification accuracy of 93.74% and Cohen's kappa-coefficient of 0.92 on the BCI competition IV2b dataset outperforming several baseline and current state-of-the-art EEG-based MI classification models. The proposed algorithm effectively addresses the shortcoming of existing CNN-based EEG-MI classification models and significantly improves the classification accuracy. The current framework can provide a stimulus for designing efficient and robust real-time human-robot interaction.
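The multi-scale idea above, feeding the classifier one copy of the signal per canonical frequency band, can be sketched with zero-phase band-pass filtering. The band edges, filter order, and shapes below are illustrative assumptions, not the MS-CNN's actual configuration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# illustrative non-overlapping canonical EEG bands (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_scales(trial, fs=250):
    """Split a (channels x samples) trial into per-band copies,
    one input 'scale' per frequency band for a multi-branch CNN."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        # 4th-order Butterworth band-pass, applied forward and
        # backward (filtfilt) so the filter adds no phase shift
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out[name] = filtfilt(b, a, trial, axis=-1)
    return out

trial = np.random.default_rng(0).standard_normal((3, 1000))
scales = band_scales(trial)
print(sorted(scales), scales["alpha"].shape)
```

Each branch of the network then sees only one band, so band-specific features (e.g. mu-rhythm desynchronization in the alpha band) are learned separately before being merged.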


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 15945-15954 ◽  
Author(s):  
Zhiwen Zhang ◽  
Feng Duan ◽  
Jordi Sole-Casals ◽  
Josep Dinares-Ferran ◽  
Andrzej Cichocki ◽  
...  

Author(s):  
Francesco Mattioli ◽  
Camillo Porcaro ◽  
Gianluca Baldassarre

Abstract Objective: Brain-computer interface (BCI) aims to establish communication paths between the brain processes and external devices. Different methods have been used to extract human intentions from electroencephalography (EEG) recordings. Those based on motor imagery (MI) seem to have a great potential for future applications. These approaches rely on the extraction of EEG distinctive patterns during imagined movements. Techniques able to extract patterns from raw signals represent an important target for BCI as they do not need labor-intensive data pre-processing. Approach: We propose a new approach based on a 10-layer one-dimensional convolution neural network (1D-CNN) to classify five brain states (four MI classes plus a ‘baseline’ class) using a data augmentation algorithm and a limited number of EEG channels. In addition, we present a transfer learning method used to extract critical features from the EEG group dataset and then to customize the model to the single individual by training its outer layers with only 12-minute individual-related data. Main results: The model tested with the ‘EEG Motor Movement/Imagery Dataset’ outperforms the current state-of-the-art models by achieving a 99.38% accuracy at the group level. In addition, the transfer learning approach we present achieves an average accuracy of 99.46%. Significance: The proposed methods could foster future BCI applications relying on few-channel portable recording devices and individual-based training.
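The transfer-learning scheme described here, keeping the group-trained body of the network frozen and retraining only the outer layers on a few minutes of individual data, can be illustrated with a minimal NumPy stand-in. The frozen feature map, the softmax head, and all sizes are hypothetical; the paper's actual model is a 10-layer 1D-CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_body(x, w_frozen):
    """Stand-in for the pretrained network body: a fixed nonlinear
    map learned at the group level, kept frozen per subject."""
    return np.tanh(x @ w_frozen)

w_frozen = rng.standard_normal((64, 16))   # "pretrained" weights

# short individual recording -> only the outer layer is retrained
X = rng.standard_normal((200, 64))
y = rng.integers(0, 5, size=200)           # 4 MI classes + baseline
H = frozen_body(X, w_frozen)               # features, no gradient here

# softmax head fit by plain gradient descent on cross-entropy
w_head = np.zeros((16, 5))
onehot = np.eye(5)[y]
for _ in range(300):
    logits = H @ w_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    w_head -= 0.1 * H.T @ (p - onehot) / len(y)

print(w_head.shape)  # (16, 5)
```

Because only the small head is updated, a short individual session suffices, which is exactly what makes few-channel portable BCI devices practical.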

