Single-trial classification of vowel speech imagery using common spatial patterns
2009 · Vol 22 (9) · pp. 1334-1339
Author(s): Charles S. DaSalla, Hiroyuki Kambara, Makoto Sato, Yasuharu Koike

2021 · Vol 2021 · pp. 1-13
Author(s): Qian Cai, Weiqiang Gong, Yue Deng, Haixian Wang

As a multichannel spatial filtering technique, common spatial patterns (CSP) has been applied successfully in the electroencephalogram (EEG)-based brain-computer interface (BCI) community. However, CSP is sensitive to outliers because its formulation employs the L2-norm, so robust modelling of CSP is beneficial. In this paper, we propose a robust framework, called CSP-Lp/q, which formulates the variances of the two EEG classes with Lp- and Lq-norms (0 < p, q < 2) separately. By mixing Lp- and Lq-norms, CSP-Lp/q takes the class-wise difference into account when modelling the sample dispersion. We develop an iterative algorithm to optimize the objective function of CSP-Lp/q and theoretically show that it increases the objective monotonically. The superiority of the proposed CSP-Lp/q technique is demonstrated experimentally on three real EEG datasets from BCI competitions.
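The abstract does not spell out the iterative algorithm, but mixed-norm CSP variants of this kind are commonly optimized by iterative reweighting: at each step the Lp/Lq objective is locally approximated by a ratio of weighted (L2-style) covariances, whose maximizer is the leading generalized eigenvector. The sketch below is a minimal NumPy illustration of that generic scheme under the stated assumptions (samples as rows, one spatial filter, p = q = 1 by default); it is not the authors' exact CSP-Lp/q algorithm, and the function name `csp_lpq` is hypothetical.

```python
import numpy as np

def csp_lpq(X1, X2, p=1.0, q=1.0, n_iter=50, eps=1e-8, seed=0):
    """Sketch of a mixed Lp/Lq-norm CSP filter via iterative reweighting.

    X1, X2 : (n_samples, n_channels) centered EEG samples for the two
    classes.  Approximately maximizes
        sum_i |w^T x_i|^p  /  sum_j |w^T y_j|^q,   0 < p, q < 2,
    which reduces to ordinary (variance-ratio) CSP when p = q = 2.
    """
    rng = np.random.default_rng(seed)
    n_ch = X1.shape[1]
    w = rng.standard_normal(n_ch)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        # |z|^p = |z|^(p-2) * z^2, so the Lp/Lq objective equals a ratio of
        # *weighted* quadratic forms at the current w.
        a = np.minimum(np.abs(X1 @ w) ** (p - 2.0), 1.0 / eps)
        b = np.minimum(np.abs(X2 @ w) ** (q - 2.0), 1.0 / eps)
        S1 = (X1 * a[:, None]).T @ X1
        S2 = (X2 * b[:, None]).T @ X2 + eps * np.eye(n_ch)
        # Leading generalized eigenvector of S1 w = lam * S2 w, obtained by
        # whitening with the Cholesky factor of S2 (keeps the problem symmetric).
        L_inv = np.linalg.inv(np.linalg.cholesky(S2))
        _, vecs = np.linalg.eigh(L_inv @ S1 @ L_inv.T)
        w_new = L_inv.T @ vecs[:, -1]
        w_new /= np.linalg.norm(w_new)
        if w_new @ w < 0:          # fix the sign ambiguity of eigenvectors
            w_new = -w_new
        if np.linalg.norm(w_new - w) < 1e-10:
            w = w_new
            break
        w = w_new
    return w
```

On synthetic data where class 1 has excess variance along a known channel, the returned filter should align with that channel; choosing p and q below 2 down-weights large projections, which is the mechanism that gives such mixed-norm formulations their robustness to outlier trials.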
