Tangent Space Features-Based Transfer Learning Classification Model for Two-Class Motor Imagery Brain–Computer Interface

2019 ◽  
Vol 29 (10) ◽  
pp. 1950025 ◽  
Author(s):  
Pramod Gaur ◽  
Karl McCreadie ◽  
Ram Bilas Pachori ◽  
Hui Wang ◽  
Girijesh Prasad

The performance of a brain–computer interface (BCI) will generally improve by increasing the volume of training data on which it is trained. However, a classifier’s generalization ability is often negatively affected when highly non-stationary data are collected across both sessions and subjects. The aim of this work is to reduce the long calibration time in BCI systems by proposing a transfer learning model which can be used to evaluate unseen single trials for a subject without the need for training session data. A method is proposed which combines a generalization of the previously proposed subject-specific “multivariate empirical-mode decomposition” preprocessing technique, taking a fixed band of 8–30 Hz for all four motor imagery tasks, with a novel classification model that exploits the structure of tangent space features drawn from the Riemannian geometry framework and shared among the training data of multiple sessions and subjects. Results demonstrate comparable performance improvement across multiple subjects without subject-specific calibration, when compared with other state-of-the-art techniques.
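To make the tangent-space idea concrete, the following is a minimal sketch of how covariance matrices from band-pass-filtered trials can be projected into a common tangent space; it assumes trials shaped (n_trials, n_channels, n_samples), and the function names and the use of the arithmetic mean as a reference are illustrative choices, not the authors' implementation.

```python
# Minimal sketch: tangent-space feature extraction from EEG trial covariances.
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def trial_covariances(trials):
    """Sample covariance matrix of each trial (channels x channels)."""
    return np.array([np.cov(x) for x in trials])

def tangent_space_features(covs, ref):
    """Project SPD covariance matrices onto the tangent space at `ref`.

    Each matrix C is whitened by the reference, mapped through the matrix
    logarithm, and vectorised (upper triangle, off-diagonals scaled by sqrt(2)).
    """
    ref_inv_sqrt = inv(sqrtm(ref))
    n = ref.shape[0]
    iu = np.triu_indices(n)
    weights = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    feats = []
    for c in covs:
        s = logm(ref_inv_sqrt @ c @ ref_inv_sqrt)   # symmetric tangent-space matrix
        feats.append(np.real(s[iu]) * weights)      # vectorise upper triangle
    return np.array(feats)

# Usage sketch: pool covariances from several sessions/subjects, take their
# arithmetic mean as a simple reference point (the Riemannian mean is the usual
# choice), and feed the resulting vectors to any Euclidean classifier.
# covs = trial_covariances(filtered_trials)            # (n_trials, C, C)
# feats = tangent_space_features(covs, covs.mean(0))   # (n_trials, C*(C+1)/2)
```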

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Ibrahim Hossain ◽  
Abbas Khosravi ◽  
Imali Hettiarachchi ◽  
Saeid Nahavandi

A widely discussed paradigm for brain-computer interface (BCI) is the motor imagery task using the noninvasive electroencephalography (EEG) modality. It often requires a long training session to collect a large amount of EEG data, which exhausts the user. One approach to shortening this session is to utilize instances from past users to train the learner for the novel user. In this work, direct instance transfer from past users is investigated and applied to multiclass motor imagery BCI. Then, active learning (AL)-driven informative instance transfer is attempted for multiclass BCI. Informative instance transfer performs better than direct instance transfer, reaching the benchmark with 49% less training data for 6 out of 9 subjects. However, neither method performs best for all subjects. To obtain a generic transfer learning framework for BCI, an optimal ensemble of the informative and direct transfer methods is designed and applied. The optimized ensemble outperforms both the direct and informative transfer methods for all subjects except one on the BCI Competition IV multiclass motor imagery dataset. It achieves the benchmark performance for 8 out of 9 subjects using, on average, 75% less training data. Thus, the amount of training data required from a new user is significantly reduced.
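The following is a rough sketch of the two transfer strategies and a simple ensemble over them, assuming fixed-length feature vectors (e.g., CSP or tangent-space features). The uncertainty-sampling criterion, the classifier, and the equal ensemble weight are assumptions for illustration; the paper's exact selection rule and weighting may differ.

```python
# Sketch: direct vs. informative (AL-selected) instance transfer, plus an ensemble.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_informative(Xs, ys, Xt, yt, keep=0.5):
    """Keep the source instances a target-trained model is least certain about
    (uncertainty sampling, a common active-learning criterion)."""
    probe = LogisticRegression(max_iter=1000).fit(Xt, yt)
    uncertainty = 1.0 - probe.predict_proba(Xs).max(axis=1)
    idx = np.argsort(uncertainty)[::-1][: int(keep * len(Xs))]
    return Xs[idx], ys[idx]

def ensemble_predict(Xs, ys, Xt, yt, X_test, w=0.5):
    """Average the probabilities of a direct-transfer model (all source + target
    data) and an informative-transfer model (selected source + target data)."""
    direct = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xs, Xt]), np.concatenate([ys, yt]))
    Xi, yi = select_informative(Xs, ys, Xt, yt)
    informative = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xi, Xt]), np.concatenate([yi, yt]))
    proba = w * direct.predict_proba(X_test) + (1 - w) * informative.predict_proba(X_test)
    return direct.classes_[proba.argmax(axis=1)]
```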


2021 ◽  
Vol 15 ◽  
Author(s):  
Yilu Xu ◽  
Xin Huang ◽  
Quan Lan

A motor imagery (MI) brain-computer interface (BCI) plays an important role in neurological rehabilitation training for stroke patients. Electroencephalogram (EEG)-based MI BCI has high temporal resolution, which is convenient for real-time BCI control; we therefore focus on EEG-based MI BCI in this paper. The identification of MI EEG signals remains quite challenging. Due to high inter-session/subject variability, each subject must spend a long and tedious calibration session collecting large amounts of labeled samples for a subject-specific model. To cope with this problem, we present a supervised selective cross-subject transfer learning (sSCSTL) approach which simultaneously makes use of the labeled samples from the target and source subjects based on the Riemannian tangent space. Since the covariance matrices representing the multi-channel EEG signals lie on a smooth Riemannian manifold, we perform Riemannian alignment to bring the covariance matrices from different subjects close to each other. Then, all aligned covariance matrices are converted into Riemannian tangent space features to train a classifier in the Euclidean space. To investigate the role of unlabeled samples, we further propose semi-supervised and unsupervised versions which utilize all samples and only the unlabeled samples from the target subject, respectively. The sequential forward floating search (SFFS) method is used for source selection. All of our proposed algorithms transfer the labeled samples from the most suitable source subjects into the feature space of the target subject. Experimental results on two publicly available MI datasets demonstrate that our algorithms outperform several state-of-the-art algorithms using a small number of labeled samples from the target subject, especially for good target subjects.
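A minimal sketch of the Riemannian alignment step is shown below: each subject's trial covariances are re-centred around the identity using that subject's own reference matrix, so matrices from different subjects become comparable before tangent-space mapping. The Euclidean mean stands in for the Riemannian mean purely to keep the sketch short.

```python
# Sketch: per-subject Riemannian alignment of covariance matrices.
import numpy as np
from scipy.linalg import sqrtm, inv

def align_covariances(covs):
    """Whiten a subject's covariance matrices by their mean (reference) matrix."""
    ref = covs.mean(axis=0)                     # reference matrix for this subject
    w = inv(sqrtm(ref))                         # ref^{-1/2}
    return np.array([w @ c @ w for c in covs])  # aligned matrices, centred at identity

# After alignment, source and target covariances can be pooled and projected
# into a common tangent space (see the tangent-space sketch above), so a single
# Euclidean classifier can be trained on samples from multiple subjects.
```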


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Mingwei Zhang ◽  
Yao Hou ◽  
Rongnian Tang ◽  
Youjun Li

In a motor imagery brain-computer interface system, the spatial covariance matrices of EEG signals, which carry important discriminative information, have been widely used to improve the decoding performance of motor imagery. However, the covariance matrices often suffer from high dimensionality, which leads to high computational cost and overfitting. These problems directly limit the applicability and efficiency of the BCI system. To address these problems and enhance the performance of the BCI system, in this study we propose a novel semisupervised locality-preserving graph embedding model to learn a low-dimensional embedding. This approach enables the low-dimensional embedding to capture more discriminant information for classification by efficiently incorporating information from testing and training data into a Riemannian graph. Furthermore, we obtain an efficient classification algorithm using an extreme learning machine (ELM) classifier developed on the tangent space of the learned embedding. Experimental results show that our proposed approach achieves higher classification performance than benchmark methods on various datasets, including the BCI Competition IIa dataset and in-house BCI datasets.
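For reference, a bare-bones extreme learning machine of the kind used on tangent-space features looks like the sketch below: a random hidden layer followed by output weights solved in closed form by least squares. The hidden-layer size and sigmoid activation are illustrative assumptions, not the paper's configuration.

```python
# Sketch: extreme learning machine (ELM) for tangent-space feature vectors.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden layer

    def fit(self, X, y):
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[y_idx]                 # one-hot targets
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T                     # output weights by least squares
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]
```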


2019 ◽  
Vol 16 (2) ◽  
pp. 172988141984086 ◽  
Author(s):  
Chuanqi Tan ◽  
Fuchun Sun ◽  
Bin Fang ◽  
Tao Kong ◽  
Wenchang Zhang

The brain–computer interface-based rehabilitation robot has quickly become a very important research area due to its natural mode of interaction. One of the most important problems in brain–computer interfaces is that the large-scale annotated electroencephalography datasets required by advanced classifiers are almost impossible to acquire, because biological data acquisition is challenging and quality annotation is costly. Transfer learning relaxes the hypothesis that the training data must be independent and identically distributed with the test data. It can be considered a powerful tool for solving the problem of insufficient training data. There are two basic issues in transfer learning: under-transfer and negative transfer. We propose a novel brain–computer interface framework using autoencoder-based transfer learning, which includes three main components: an autoencoder framework, a joint adversarial network, and a regularized manifold constraint. The autoencoder framework automatically encodes and reconstructs data from the source and target domains and forces the neural network to learn to represent these domains reliably. The joint adversarial network forces the network to learn to encode appropriately for the source and target domains simultaneously, thereby overcoming the problem of under-transfer. The regularized manifold constraint aims to avoid negative transfer by preventing the geometric manifold structure of the target domain from being destroyed by the source domain. Experiments show that the proposed brain–computer interface framework achieves better results than state-of-the-art approaches in electroencephalography signal classification tasks. This helps our rehabilitation robot to understand the intention of patients and can help patients carry out rehabilitation exercises effectively.
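The following condensed PyTorch sketch conveys the autoencoder-plus-adversary idea: an autoencoder reconstructs feature vectors from both domains while a domain discriminator trained on the latent codes is fooled adversarially. The manifold regularizer is omitted, and all layer sizes and loss weights are assumptions, not the authors' architecture.

```python
# Sketch: autoencoder with a joint adversarial (domain-confusion) objective.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_in, n_latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(), nn.Linear(128, n_in))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def train_step(ae, disc, opt_ae, opt_d, x_src, x_tgt, lam=0.1):
    """One adversarial step: the discriminator learns to tell source from target
    codes, while the autoencoder learns to reconstruct both domains and to fool
    the discriminator."""
    bce = nn.BCEWithLogitsLoss()

    # Update the discriminator on detached latent codes.
    z_s, _ = ae(x_src)
    z_t, _ = ae(x_tgt)
    d_loss = bce(disc(z_s.detach()), torch.ones(len(x_src), 1)) + \
             bce(disc(z_t.detach()), torch.zeros(len(x_tgt), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Update the autoencoder: reconstruction plus confusing the discriminator.
    z_s, r_s = ae(x_src)
    z_t, r_t = ae(x_tgt)
    rec = nn.functional.mse_loss(r_s, x_src) + nn.functional.mse_loss(r_t, x_tgt)
    adv = bce(disc(z_t), torch.ones(len(x_tgt), 1))   # target codes should look like source
    ae_loss = rec + lam * adv
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return d_loss.item(), ae_loss.item()

# disc could be, e.g., nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1)).
```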

