A review on transfer learning approaches in brain–computer interface

Author(s): Ahmed M. Azab, Jake Toth, Lyudmila S. Mihaylova, Mahnaz Arvaneh

2019 · Vol 16 (2) · pp. 172988141984086
Author(s): Chuanqi Tan, Fuchun Sun, Bin Fang, Tao Kong, Wenchang Zhang

Brain–computer interface-based rehabilitation robots have quickly become an important research area owing to their natural mode of interaction. One of the most important problems in brain–computer interfaces is that the large-scale annotated electroencephalography data sets required by advanced classifiers are almost impossible to acquire, because biological data acquisition is challenging and quality annotation is costly. Transfer learning relaxes the hypothesis that the training data must be independent and identically distributed with the test data, and can therefore be considered a powerful tool for addressing insufficient training data. Transfer learning has two basic issues: under-transfer and negative transfer. We propose a novel brain–computer interface framework using autoencoder-based transfer learning, which includes three main components: an autoencoder framework, a joint adversarial network, and a regularized manifold constraint. The autoencoder framework automatically encodes and reconstructs data from the source and target domains, forcing the neural network to learn a reliable representation of both domains. The joint adversarial network forces the network to encode the source and target domains appropriately at the same time, thereby overcoming under-transfer. The regularized manifold constraint avoids negative transfer by preventing the geometric manifold structure of the target domain from being destroyed by the source domain. Experiments show that the proposed brain–computer interface framework achieves better results than state-of-the-art approaches on electroencephalography signal classification tasks. This helps our rehabilitation robot understand patients' intentions and enables patients to carry out rehabilitation exercises effectively.
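To make the interplay of the three components concrete, the following is a minimal sketch, not the authors' implementation: it assumes PyTorch, synthetic source- and target-domain feature vectors, and illustrative names (FEAT_DIM, LATENT_DIM, manifold_penalty). A small discriminator supplies the adversarial signal over the latent codes, and a pairwise-distance penalty stands in for the regularized manifold constraint.

```python
import torch
import torch.nn as nn

FEAT_DIM = 64      # hypothetical flattened EEG feature length
LATENT_DIM = 16    # hypothetical code size

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(FEAT_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, FEAT_DIM))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# Domain discriminator for the joint adversarial part:
# it tries to tell source codes (label 1) from target codes (label 0).
disc = nn.Sequential(nn.Linear(LATENT_DIM, 16), nn.ReLU(),
                     nn.Linear(16, 1), nn.Sigmoid())

ae = AutoEncoder()
opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce, mse = nn.BCELoss(), nn.MSELoss()

x_src = torch.randn(128, FEAT_DIM)   # placeholder source-domain EEG features
x_tgt = torch.randn(128, FEAT_DIM)   # placeholder target-domain EEG features
ones, zeros = torch.ones(128, 1), torch.zeros(128, 1)

def manifold_penalty(x, z):
    # Crude stand-in for the regularized manifold constraint: normalized
    # pairwise distances in the target domain should survive the encoding.
    dx, dz = torch.cdist(x, x), torch.cdist(z, z)
    return mse(dz / dz.mean(), dx / dx.mean())

for step in range(200):
    z_s, rec_s = ae(x_src)
    z_t, rec_t = ae(x_tgt)

    # 1) Discriminator update: learn to separate source and target codes.
    d_loss = bce(disc(z_s.detach()), ones) + bce(disc(z_t.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Autoencoder update: reconstruct both domains, fool the
    #    discriminator (counters under-transfer), and preserve the target
    #    geometry (counters negative transfer).
    adv = bce(disc(z_t), ones)
    loss = (mse(rec_s, x_src) + mse(rec_t, x_tgt)
            + 0.1 * adv + 0.1 * manifold_penalty(x_tgt, z_t))
    opt_ae.zero_grad()
    loss.backward()
    opt_ae.step()
```

In this sketch the manifold term only compares normalized pairwise distances before and after encoding, and the loss weights are arbitrary; the paper's actual constraint and training schedule may be formulated differently.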


2019 · Vol 29 (10) · pp. 1950025
Author(s): Pramod Gaur, Karl McCreadie, Ram Bilas Pachori, Hui Wang, Girijesh Prasad

The performance of a brain–computer interface (BCI) generally improves as the volume of training data increases. However, a classifier's generalization ability is often negatively affected when highly non-stationary data are collected across both sessions and subjects. The aim of this work is to reduce the long calibration time in BCI systems by proposing a transfer learning model that can evaluate unseen single trials for a subject without requiring training session data. The proposed method combines a generalization of the previously proposed subject-specific multivariate empirical-mode decomposition preprocessing technique, using a fixed 8–30 Hz band for all four motor imagery tasks, with a novel classification model that exploits the structure of tangent space features, drawn from the Riemannian geometry framework, shared among the training data of multiple sessions and subjects. Results demonstrate comparable performance improvement across multiple subjects without subject-specific calibration, when compared with other state-of-the-art techniques.
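A minimal sketch of a tangent-space classification pipeline in the spirit of this approach, assuming the pyriemann and scikit-learn libraries are available; the multivariate empirical-mode decomposition preprocessing is replaced here by placeholder band-limited epochs, and all shapes and variable names are illustrative rather than taken from the paper.

```python
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder epochs standing in for 8–30 Hz band-limited, preprocessed
# trials with shape (n_trials, n_channels, n_samples).
X_train = rng.standard_normal((40, 22, 250))   # pooled trials from other subjects/sessions
y_train = rng.integers(0, 4, 40)               # four motor imagery classes
X_test = rng.standard_normal((10, 22, 250))    # unseen trials from a new subject

# Spatial covariance matrices -> tangent-space projection at the Riemannian
# mean of the pooled training set -> linear classifier on the shared features.
clf = make_pipeline(Covariances(estimator="oas"),
                    TangentSpace(metric="riemann"),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```

Because the tangent-space mapping is computed at the Riemannian mean of the pooled training covariances, trials from a new subject can be projected and classified without a subject-specific calibration session, which is the transfer-learning effect described above.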

