Fusion Convolutional Neural Network for Multi-Class Motor Imagery of EEG Signals Classification

Author(s):  
Amira Echtioui ◽  
Wassim Zouch ◽  
Mohamed Ghorbel ◽  
Chokri Mhiri ◽  
Habib Hamam

Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) on motor imagery movements translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory (LSTM); (2) a spectrogram-based convolutional neural network model (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
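A minimal sketch (not the authors' released code) of the spectrogram-based CNN idea described above: raw multi-channel EEG trials are converted to STFT magnitude spectrograms and fed to a small 2D CNN. The sampling rate, STFT settings, layer sizes, and class count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft
import torch
import torch.nn as nn

FS = 250  # assumed sampling rate (Hz)

def eeg_to_spectrograms(trials):
    """trials: (n_trials, n_channels, n_samples) raw EEG.
    Returns (n_trials, n_channels, n_freqs, n_times) magnitude spectrograms."""
    _, _, Z = stft(trials, fs=FS, nperseg=64, noverlap=48, axis=-1)
    return np.abs(Z).astype(np.float32)

class SpectrogramCNN(nn.Module):
    """Small 2D CNN over stacks of per-channel spectrograms (freq x time)."""
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# toy usage: 8 fake trials, 3 channels, 4 s at 250 Hz
x = eeg_to_spectrograms(np.random.randn(8, 3, 4 * FS))
logits = SpectrogramCNN(n_channels=3, n_classes=2)(torch.from_numpy(x))
print(logits.shape)  # torch.Size([8, 2])
```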


2021 ◽  
Vol 11 (21) ◽  
pp. 9948
Author(s):  
Amira Echtioui ◽  
Ayoub Mlaouah ◽  
Wassim Zouch ◽  
Mohamed Ghorbel ◽  
Chokri Mhiri ◽  
...  

Recently, Electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because it has become possible to use these signals to encode a person’s intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even to drive independently. Classifying the motor imagery tasks of these signals is therefore important for a Brain-Computer Interface (BCI) system. Offering a good decoder for MI tasks from EEG signals is difficult due to the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an Artificial Neural Network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified CNN1 merged with CNN2. These proposed methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, which uses spatial and frequency characteristics, outperforms state-of-the-art machine/deep learning techniques for EEG classification with an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left/right hand, feet, and tongue). The experimental results demonstrate the feasibility of the proposed method for classifying MI-EEG signals and show that it can be applied successfully to BCI systems where the amount of data is large due to daily recording.
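A minimal sketch, under assumed layer sizes and input shapes, of the general "two CNN branches merged" idea mentioned above: two parallel convolutional branches with different temporal kernel lengths are applied to the raw multi-channel EEG, and their features are concatenated before a shared classifier. This is not the paper's exact CNN1/CNN2 architecture.

```python
import torch
import torch.nn as nn

class MergedCNN(nn.Module):
    """Two parallel 1D-CNN branches over multi-channel EEG whose features are
    concatenated before a shared dense classifier (layer sizes illustrative)."""
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        def branch(kernel):
            return nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=kernel, padding=kernel // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(8),
            )
        self.cnn1 = branch(kernel=11)   # shorter kernel: finer temporal detail
        self.cnn2 = branch(kernel=51)   # longer kernel: slower dynamics
        self.classifier = nn.Linear(2 * 32 * 8, n_classes)

    def forward(self, x):               # x: (batch, n_channels, n_samples)
        f = torch.cat([self.cnn1(x).flatten(1), self.cnn2(x).flatten(1)], dim=1)
        return self.classifier(f)

# toy usage: BCI IV-2a-like shape, 22 channels, four MI classes
logits = MergedCNN()(torch.randn(4, 22, 1000))
print(logits.shape)  # torch.Size([4, 4])
```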


Author(s):  
Geliang Tian ◽  
Yue Liu

This article proposes a classification method for two-class motor imagery electroencephalogram (EEG) signals based on a convolutional neural network (CNN), in which EEG signals from the C3, C4, and Cz electrodes of the publicly available BCI Competition IV dataset 2b were used to test the performance of the CNN. The authors investigate two similar CNNs: a single-input CNN with a 2-dimensional input derived from the short-time Fourier transform (STFT), combining time, frequency, and location information, and a multiple-input CNN with a 3-dimensional input that treats the electrodes as an independent dimension. Fisher discriminant analysis-type F-scores based on the band pass (BP) feature and the power spectral density (PSD) feature, respectively, are employed to select the subject-optimal frequency bands. In the experiments, typical frequency bands related to motor imagery EEG signals, subject-optimal frequency bands, and extended frequency bands are each used as the frequency range of the CNN input image. The better classification performance of the extended frequency bands shows that the CNN can automatically extract optimal features from frequency information. The classification results also demonstrate that the proposed approach is more competitive in predicting the left/right hand motor imagery task than other state-of-the-art approaches.
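A minimal sketch of how the two input forms described above can be built from the C3, Cz, and C4 channels: the single-input variant stacks the per-electrode STFT spectrograms into one 2D image, while the multiple-input variant keeps the electrodes as a separate dimension. The sampling rate, STFT settings, and the frequency range used here to illustrate an extended band are assumptions.

```python
import numpy as np
from scipy.signal import stft

FS = 250            # assumed sampling rate (Hz)
FMIN, FMAX = 4, 38  # illustrative "extended" frequency range (Hz)

def channel_spectrogram(sig):
    """Magnitude STFT of one channel, cropped to [FMIN, FMAX] Hz."""
    f, _, Z = stft(sig, fs=FS, nperseg=64, noverlap=48)
    keep = (f >= FMIN) & (f <= FMAX)
    return np.abs(Z)[keep]                       # (n_freqs, n_times)

def single_input_image(trial):
    """2D input: spectrograms of C3, Cz, C4 stacked along the frequency axis,
    so electrode location becomes part of the image layout."""
    return np.vstack([channel_spectrogram(ch) for ch in trial])

def multi_input_image(trial):
    """3D input: electrodes kept as an independent dimension."""
    return np.stack([channel_spectrogram(ch) for ch in trial])

# toy usage: one trial with C3, Cz, C4 and 4 s of data
trial = np.random.randn(3, 4 * FS)
print(single_input_image(trial).shape)   # (3 * n_freqs, n_times)
print(multi_input_image(trial).shape)    # (3, n_freqs, n_times)
```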


2020 ◽  
Vol 40 (5) ◽  
pp. 663-672
Author(s):  
Nijisha Shajil ◽  
Sasikala Mohan ◽  
Poonguzhali Srinivasan ◽  
Janani Arivudaiyanambi ◽  
Arunnagiri Arasappan Murrugesan

2021 ◽  
Vol 11 (2) ◽  
pp. 197
Author(s):  
Tianjun Liu ◽  
Deling Yang

Motor imagery (MI) is a classical method of brain–computer interaction (BCI), in which electroencephalogram (EEG) signal features evoked by imagined body movements are recognized and relevant information is extracted. Recently, various deep-learning methods have focused on finding an easy-to-use EEG representation that preserves both temporal and spatial information. To further exploit the spatial and temporal features of EEG signals, this paper introduces an improved 3D representation of the EEG and a densely connected multi-branch 3D convolutional neural network (dense M3D CNN) for MI classification. Specifically, compared to the original 3D representation, a new padding method is proposed that fills the points without electrodes with the mean of all the EEG signals. Based on this new 3D representation, a densely connected multi-branch 3D CNN with a novel dense connectivity is proposed for extracting EEG signal features. Experiments were carried out on the WAY-EEG-GAL and BCI Competition IV 2a datasets to verify the performance of the proposed method. The experimental results show that the proposed framework achieves state-of-the-art performance and significantly outperforms the multi-branch 3D CNN framework, with a 6.208% improvement in average accuracy on the BCI Competition IV 2a dataset and a 6.281% improvement in average accuracy on the WAY-EEG-GAL dataset, with a smaller standard deviation. The results also confirm the effectiveness and robustness of the method and validate its use for MI-classification tasks.
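A minimal sketch of the mean-padding idea for the 3D EEG representation described above: each time step is mapped onto a 2D scalp grid, electrode positions receive their measured signals, and grid points without an electrode are filled with the mean of all channels at that time step. The grid size and electrode-to-grid mapping below are illustrative, not the paper's exact layout.

```python
import numpy as np

# Illustrative 2D scalp-grid positions (row, col) for a few 10-20 electrodes;
# the real mapping and grid size are assumptions, not the paper's layout.
GRID_SHAPE = (5, 5)
ELECTRODE_POS = {"C3": (2, 1), "Cz": (2, 2), "C4": (2, 3),
                 "FCz": (1, 2), "CPz": (3, 2)}

def to_3d_representation(trial, channel_names):
    """trial: (n_channels, n_samples) EEG. Returns (n_samples, rows, cols),
    where grid points without an electrode hold the mean of all channels
    at that time step (the mean-padding idea from the abstract)."""
    n_samples = trial.shape[1]
    mean_signal = trial.mean(axis=0)                       # (n_samples,)
    grid = np.tile(mean_signal[:, None, None], (1, *GRID_SHAPE))
    for name, sig in zip(channel_names, trial):
        r, c = ELECTRODE_POS[name]
        grid[:, r, c] = sig                                # place real channels
    return grid

# toy usage
names = ["C3", "Cz", "C4", "FCz", "CPz"]
video = to_3d_representation(np.random.randn(len(names), 1000), names)
print(video.shape)  # (1000, 5, 5)
```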


2020 ◽  
Vol 10 (5) ◽  
pp. 1605 ◽  
Author(s):  
Feng Li ◽  
Fan He ◽  
Fei Wang ◽  
Dengyong Zhang ◽  
Yi Xia ◽  
...  

Left and right hand motor imagery electroencephalogram (MI-EEG) signals are widely used in brain-computer interface (BCI) systems to identify a participant’s intent to control external devices. However, for a number of reasons, including low signal-to-noise ratios, efficient motor imagery classification remains a great challenge. The recognition of left and right hand MI-EEG signals is vital for the application of BCI systems. Recently, deep learning has been successfully applied in pattern recognition and other fields. However, few effective deep learning algorithms have been applied to BCI systems, particularly MI-based BCIs. In this paper, we propose an algorithm that combines the continuous wavelet transform (CWT) and a simplified convolutional neural network (SCNN) to improve the recognition rate of MI-EEG signals. Using the CWT, the MI-EEG signals are mapped to time-frequency image signals. These images are then input into the SCNN to extract features and classify them. Tested on the BCI Competition IV Dataset 2b, the experimental results show that the average classification accuracy over the nine subjects is 83.2% and the mean kappa value is 0.651, which is 11.9% higher than that of the champion of BCI Competition IV. Compared with other algorithms, the proposed CWT-SCNN algorithm achieves better classification performance and a shorter training time. Therefore, this algorithm could enhance the classification performance of MI-based BCIs and be applied in real-time BCI systems for use by disabled people.
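A minimal sketch of the CWT-plus-CNN pipeline described above, using PyWavelets for a complex-Morlet CWT and a small CNN in place of the paper's exact SCNN; the wavelet, scales, sampling rate, and layer sizes are assumptions.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

FS = 250  # assumed sampling rate (Hz)

def cwt_image(trial, scales=np.arange(2, 32)):
    """trial: (n_channels, n_samples). Returns (n_channels, n_scales, n_samples)
    complex-Morlet CWT magnitudes: one time-frequency image per channel."""
    images = [np.abs(pywt.cwt(ch, scales, "cmor1.5-1.0", sampling_period=1 / FS)[0])
              for ch in trial]
    return np.stack(images).astype(np.float32)

class SimplifiedCNN(nn.Module):
    """A small CNN over per-channel CWT images (layer sizes are illustrative,
    not the paper's exact SCNN)."""
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# toy usage: 3-channel trial (e.g., C3, Cz, C4), 4 s at 250 Hz
img = cwt_image(np.random.randn(3, 4 * FS))
logits = SimplifiedCNN()(torch.from_numpy(img)[None])
print(logits.shape)  # torch.Size([1, 2])
```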


2019 ◽  
Vol 19 (12) ◽  
pp. 4494-4500 ◽  
Author(s):  
Shalu Chaudhary ◽  
Sachin Taran ◽  
Varun Bajaj ◽  
Abdulkadir Sengur
