Transductive Support Vector Machine
Recently Published Documents


TOTAL DOCUMENTS: 32 (FIVE YEARS: 3)

H-INDEX: 3 (FIVE YEARS: 2)

2019 ◽  
Vol 2019 ◽  
pp. 1-16 ◽  
Author(s):  
Yilu Xu ◽  
Jing Hua ◽  
Hua Zhang ◽  
Ronghua Hu ◽  
Xin Huang ◽  
...  

Long and tedious calibration times hinder the development of motor imagery- (MI-) based brain-computer interfaces (BCIs). To tackle this problem, we train on a limited labelled set together with a relatively large unlabelled set from the same subject, based on the transductive support vector machine (TSVM) framework. We first introduce an improved TSVM (ITSVM) method, in which a comprehensive feature for each sample combines its common spatial patterns (CSP) feature and its geometric feature. Moreover, we use the concave-convex procedure (CCCP) to solve the TSVM optimization problem under a new balancing constraint that handles the unknown distribution of the unlabelled set by considering various possible distributions. In addition, we propose an improved self-training TSVM (IST-TSVM) method that iteratively performs CSP feature extraction and ITSVM classification using an expanded labelled set. Extensive experiments on dataset IV-a from BCI competition III and dataset II-a from BCI competition IV show that our algorithms outperform the competing algorithms across varying sizes and distributions of the labelled set. In particular, IST-TSVM achieves average accuracies of 63.25% and 69.43% on these two datasets, respectively, using only four positive and sixteen negative labelled samples. Our algorithms therefore offer an alternative way to reduce calibration time.
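As a rough illustration of the self-training idea described in this abstract (pseudo-label confident unlabelled trials, expand the labelled set, retrain), the following minimal Python sketch uses a plain linear SVM on synthetic features. It is a hypothetical simplification, not the authors' ITSVM/IST-TSVM: the geometric features, CCCP solver, and balancing constraint are omitted, and the data is random rather than real MI recordings.

```python
# Minimal self-training sketch (hypothetical simplification of IST-TSVM):
# a plain linear SVM stands in for ITSVM, random vectors stand in for
# CSP + geometric features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Tiny synthetic stand-in for MI trial features: 20 labelled, 200 unlabelled.
X_lab = rng.normal(size=(20, 6)) + np.repeat([[1.0], [-1.0]], 10, axis=0)
y_lab = np.repeat([1, -1], 10)
X_unlab = rng.normal(size=(200, 6))

clf = SVC(kernel="linear", probability=True)
for _ in range(5):                           # a few self-training rounds
    clf.fit(X_lab, y_lab)                    # retrain on the expanded labelled set
    if len(X_unlab) == 0:
        break
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) > 0.8           # adopt only confident pseudo-labels
    if not keep.any():
        break
    pseudo = clf.classes_[proba[keep].argmax(axis=1)]
    X_lab = np.vstack([X_lab, X_unlab[keep]])
    y_lab = np.concatenate([y_lab, pseudo])
    X_unlab = X_unlab[~keep]

print("final labelled-set size:", len(y_lab))
```

The 0.8 confidence threshold and the number of rounds are arbitrary here; in the paper, the labelled set is expanded and CSP features are re-extracted at every iteration.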


2018 ◽  
Vol 27 (12) ◽  
pp. 1850185 ◽  
Author(s):  
Yanchao Li ◽  
Yongli Wang ◽  
Junlong Zhou ◽  
Xiaohui Jiang

Semi-Supervised Learning (SSL) aims to improve the performance of models trained with a small set of labeled data and a large collection of unlabeled data. Learning multi-view representations from different perspectives of the data has proved very effective for improving generalization performance. However, existing semi-supervised multi-view learning methods tend to ignore the specific difficulty of individual unlabeled examples, such as outliers and noise, leading to error-prone classification. To address this problem, this paper proposes the Robust Transductive Support Vector Machine (RTSVM), which introduces the margin distribution into TSVM and is thereby robust to outliers and noise. Specifically, the first-order (margin mean) and second-order (margin variance) statistics are regularized into TSVM, with the aim of achieving strong generalization performance. We then impose a global similarity constraint between distinct RTSVMs, each trained on one view of the data. Moreover, our algorithm converges quickly thanks to the concave-convex procedure. Finally, we validate the proposed method on a variety of multi-view datasets, and the experimental results demonstrate its effectiveness. By exploiting a large number of unlabeled examples and remaining robust to outliers and noise across different views, our method shows generalization performance superior to single-view learning and other semi-supervised multi-view learning methods.
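The margin-distribution regularization that RTSVM builds on (rewarding a large margin mean while penalizing margin variance, on top of the usual hinge loss and norm penalty) can be sketched for a single view. The snippet below is a hypothetical, supervised-only simplification with plain gradient descent on synthetic data; the transductive term, the cross-view similarity constraint, and the CCCP solver from the paper are not reproduced, and the weights lam1/lam2 are illustrative.

```python
# Margin-distribution regularization sketch (single view, labelled data only):
# minimize  (lam/2)*||w||^2 + mean(hinge(1 - m_i)) - lam1*mean(m) + lam2*var(m),
# where m_i = y_i * w.x_i are the functional margins.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.array([1.5, -1.0, 0.5, 0.0, 0.0]) + 0.3 * rng.normal(size=200))

w = np.zeros(5)
lam, lam1, lam2, lr = 1e-2, 0.5, 0.5, 0.05
for _ in range(300):
    m = y * (X @ w)                          # per-sample functional margins
    yx = y[:, None] * X                      # d m_i / d w
    g = lam * w                              # norm penalty
    g += -((m < 1).astype(float)[:, None] * yx).mean(axis=0)       # hinge loss
    g += -lam1 * yx.mean(axis=0)             # encourage a large margin mean
    g += lam2 * (2 * (m[:, None] * yx).mean(axis=0)
                 - 2 * m.mean() * yx.mean(axis=0))                  # penalize margin variance
    w -= lr * g

print(f"training accuracy: {(np.sign(X @ w) == y).mean():.2f}")
```

Larger lam2 shrinks the spread of the margins, which is the mechanism that makes the classifier less sensitive to a few noisy or outlying examples.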


2016 ◽  
Vol 173 ◽  
pp. 1288-1298 ◽  
Author(s):  
Xibin Wang ◽  
Junhao Wen ◽  
Shafiq Alam ◽  
Zhuo Jiang ◽  
Yingbo Wu
