A Novel Convolutional Neural Network Classification Approach of Motor-Imagery EEG Recording Based on Deep Learning

2021 ◽  
Vol 11 (21) ◽  
pp. 9948
Author(s):  
Amira Echtioui ◽  
Ayoub Mlaouah ◽  
Wassim Zouch ◽  
Mohamed Ghorbel ◽  
Chokri Mhiri ◽  
...  

Recently, electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because they can encode a person’s intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even to drive independently. Classifying the motor imagery tasks in these signals is therefore important for a Brain-Computer Interface (BCI) system. Building a good decoder for MI tasks from EEG signals is difficult because of the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an Artificial Neural Network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified CNN1 merged with CNN2. These proposed methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, which uses spatial and frequency characteristics, outperforms state-of-the-art machine/deep learning techniques for EEG classification with an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left/right hand, feet, and tongue). The experimental results demonstrate the feasibility of the proposed method for classifying MI-EEG signals, and it can be applied successfully to BCI systems in which the amount of data is large due to daily recording.
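As a rough illustration of this kind of pipeline (and not the authors' exact CNN1 architecture), the sketch below builds a small Keras CNN for four-class MI classification on BCI Competition IV-2a-style input (22 EEG channels, 4 s at 250 Hz); the layer widths, kernel sizes, and other hyperparameters are assumptions made for the example.

```python
# Minimal sketch: a small CNN for 4-class motor imagery classification.
# Input shape follows BCI Competition IV-2a (22 channels x 1000 samples);
# all layer widths and kernel sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mi_cnn(n_channels=22, n_samples=1000, n_classes=4):
    model = models.Sequential([
        layers.Input(shape=(n_channels, n_samples, 1)),
        # temporal convolution: learns frequency-selective filters along time
        layers.Conv2D(16, kernel_size=(1, 64), padding="same", activation="elu"),
        # spatial convolution: mixes information across all electrodes
        layers.Conv2D(32, kernel_size=(n_channels, 1), activation="elu"),
        layers.AveragePooling2D(pool_size=(1, 8)),
        layers.Dropout(0.5),
        layers.Flatten(),
        layers.Dense(n_classes, activation="softmax"),  # left/right hand, feet, tongue
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mi_cnn()
model.summary()
```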

2021 ◽  
Author(s):  
Navneet Tibrewal ◽  
Nikki Leeuwis ◽  
Maryam Alimardani

Motor Imagery (MI) is a mental process by which an individual rehearses body movements without actually performing physical actions. Motor Imagery Brain-Computer Interfaces (MI-BCIs) are AI-driven systems that capture brain activity patterns associated with this mental process and convert them into commands for external devices. Traditionally, MI-BCIs operate on Machine Learning (ML) algorithms, which require extensive signal processing and feature engineering to extract changes in sensorimotor rhythms (SMR). In recent years, however, Deep Learning (DL) models have gained popularity for EEG classification because they automatically extract spatio-temporal features from the signals. In this study, EEG signals from 54 subjects who performed an MI task of left- or right-hand grasp were used to compare the performance of two MI-BCI classifiers: an ML approach versus a DL approach. In the ML approach, Common Spatial Patterns (CSP) was used for feature extraction and a Linear Discriminant Analysis (LDA) model was employed for binary classification of the MI task. In the DL approach, a Convolutional Neural Network (CNN) model was constructed on the raw EEG signals. The mean classification accuracies achieved by the CNN and CSP+LDA models were 69.42% and 52.56%, respectively. Further analysis showed that the DL approach improved classification accuracy for all subjects, within the range of 2.37% to 28.28%, and that the improvement was significantly stronger for low performers. Our findings show promise for employing DL models in future MI-BCI systems, particularly for BCI-inefficient users who are unable to produce the desired sensorimotor patterns for conventional ML approaches.
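A minimal sketch of the classical ML baseline described above (CSP feature extraction followed by LDA), using MNE-Python and scikit-learn; the channel count, trial count, and number of CSP components are assumptions, and random placeholder data stands in for the real band-pass-filtered epochs.

```python
# Minimal sketch: CSP spatial filtering + LDA for binary MI classification.
# X and y below are random placeholders; in practice X would hold band-pass
# filtered MI epochs of shape (n_trials, n_channels, n_samples).
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 62, 1000))   # placeholder epochs (assumed 62 channels)
y = rng.integers(0, 2, size=100)           # 0 = left-hand grasp, 1 = right-hand grasp

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),   # spatial filters -> log-variance features
    ("lda", LinearDiscriminantAnalysis()),    # linear binary classifier
])

scores = cross_val_score(clf, X, y, cv=5)
print("Mean cross-validated accuracy:", scores.mean())
```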


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery movements translate the subject’s motor intention into control signals by classifying the EEG patterns evoked by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network model (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own, publicly available EEG data collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new, robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
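As a hedged illustration of the first of these models, the sketch below defines a small Keras LSTM that classifies raw EEG trials directly; the input shape (three EEG channels, as in BCI Competition IV-2b) and the hidden-layer sizes are assumptions, not the authors' published architecture.

```python
# Minimal sketch: an LSTM classifier operating on raw EEG trials.
# Input is assumed to be (time steps, channels); sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_eeg_lstm(n_samples=1000, n_channels=3, n_classes=2):
    model = models.Sequential([
        layers.Input(shape=(n_samples, n_channels)),  # time-major raw EEG
        layers.LSTM(64, return_sequences=True),       # model temporal dynamics
        layers.LSTM(32),                              # summarize the whole trial
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_eeg_lstm()
model.summary()
```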


2020 ◽  
Vol 32 (4) ◽  
pp. 731-737
Author(s):  
Akinari Onishi ◽  

A brain-computer interface (BCI) enables us to interact with the external world via electroencephalography (EEG) signals. Recently, deep learning methods have been applied to BCIs to reduce the time required for recording training data. However, more evidence is needed because comparisons are lacking. To provide such evidence, this study proposed a deep learning method named the time-wise convolutional neural network (TWCNN), which was applied to a BCI dataset. In the evaluation, EEG data from one subject were classified using previously recorded EEG data from other subjects. As a result, TWCNN showed the highest accuracy, which was significantly higher than that of the typically used classifier. The results suggest that the deep learning method may be useful for reducing the recording time of training data.
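The abstract does not spell out the TWCNN layers, but a plausible reading of "time-wise convolution" is 1-D convolution applied along the time axis of multi-channel EEG; the sketch below shows such an architecture with entirely assumed hyperparameters and should not be taken as the published TWCNN.

```python
# Minimal sketch: 1-D convolutions along the time axis of multi-channel EEG,
# one possible interpretation of a "time-wise" CNN. All sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_timewise_cnn(n_samples=1000, n_channels=8, n_classes=2):
    model = models.Sequential([
        layers.Input(shape=(n_samples, n_channels)),
        layers.Conv1D(32, kernel_size=25, strides=2, activation="relu"),  # time-wise filters
        layers.Conv1D(64, kernel_size=11, strides=2, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```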


2020 ◽  
Vol 40 (5) ◽  
pp. 663-672
Author(s):  
Nijisha Shajil ◽  
Sasikala Mohan ◽  
Poonguzhali Srinivasan ◽  
Janani Arivudaiyanambi ◽  
Arunnagiri Arasappan Murrugesan


2020 ◽  
Vol 10 (5) ◽  
pp. 1605 ◽  
Author(s):  
Feng Li ◽  
Fan He ◽  
Fei Wang ◽  
Dengyong Zhang ◽  
Yi Xia ◽  
...  

Left- and right-hand motor imagery electroencephalogram (MI-EEG) signals are widely used in brain-computer interface (BCI) systems to identify a participant’s intent to control external devices. However, for a number of reasons, including low signal-to-noise ratios, efficient motor imagery classification remains a great challenge, and recognizing left- and right-hand MI-EEG signals is vital for the application of BCI systems. Recently, deep learning has been successfully applied to pattern recognition and other fields, yet few effective deep learning algorithms have been applied to BCI systems, particularly MI-based BCIs. In this paper, we propose an algorithm that combines the continuous wavelet transform (CWT) and a simplified convolutional neural network (SCNN) to improve the recognition rate of MI-EEG signals. Using the CWT, the MI-EEG signals are mapped to time-frequency image signals. These images are then input into the SCNN, which extracts features and classifies them. Tested on BCI Competition IV Dataset 2b, the experimental results show that the average classification accuracy over the nine subjects is 83.2% and the mean kappa value is 0.651, which is 11.9% higher than that of the champion of BCI Competition IV. Compared with other algorithms, the proposed CWT-SCNN algorithm has better classification performance and a shorter training time. Therefore, this algorithm could enhance the classification performance of MI-based BCIs and be applied in real-time BCI systems for use by disabled people.
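A minimal sketch of the CWT front end described above, mapping a single EEG channel to a time-frequency image with PyWavelets; the wavelet, scale range, and sampling rate are assumptions, and in the full pipeline the resulting images would be stacked per channel and fed to a small CNN such as the SCNN.

```python
# Minimal sketch: continuous wavelet transform of one EEG channel into a
# time-frequency image. Wavelet, scales, and sampling rate are assumptions.
import numpy as np
import pywt

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10.0 * t)        # placeholder 10 Hz "mu-band" oscillation

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

# |coeffs| is a (len(scales), len(signal)) time-frequency image; stacking such
# images for each channel gives the multi-channel input to the classifier.
tf_image = np.abs(coeffs)
print(tf_image.shape, freqs[:5])
```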


Author(s):  
Hamdi Altaheri ◽  
Ghulam Muhammad ◽  
Mansour Alsulaiman ◽  
Syed Umar Amin ◽  
Ghadir Ali Altuwaijri ◽  
...  
