L1-Norm Distance Discriminant Analysis with Multiple Adaptive Graphs and Sample Reconstruction

2021
Author(s): Guowan Shao, Chunjiang Peng, Wenchu Ou, Kai Duan

Linear discriminant analysis (LDA) is sensitive to noise, and its performance can degrade considerably in its presence. The recursive discriminative subspace learning method with an L1-norm distance constraint (RDSL) formulates LDA under the maximum margin criterion and gains robustness to noise by applying the L1-norm and slack variables. However, the method considers only inter-class separation and intra-class compactness, ignoring both the intra-class manifold structure and the global structure of the data. In this paper, we present L1-norm distance discriminant analysis with multiple adaptive graphs and sample reconstruction (L1-DDA) to address this limitation. We use multiple adaptive graphs to preserve the intra-class manifold structure and simultaneously apply a sample reconstruction technique to preserve the global structure of the data. Moreover, we use an alternating iterative technique to obtain the projection vectors. Experimental results on three real databases demonstrate that our method achieves better classification performance than RDSL.
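
To make the L1-norm criterion concrete, the following sketch optimizes a single projection vector under a plain L1-norm maximum margin criterion by subgradient ascent. It is a minimal illustration, not the authors' algorithm: the function name l1_mmc_direction is hypothetical, and the adaptive-graph and sample-reconstruction terms of L1-DDA (as well as the slack variables of RDSL) are omitted.

```python
import numpy as np

def l1_mmc_direction(X, y, n_iter=200, lr=0.01, seed=0):
    """Minimal sketch: maximize
        sum_c n_c * |w^T (mu_c - mu)|  -  sum_i |w^T (x_i - mu_{y_i})|
    over unit vectors w by (sub)gradient ascent. Hypothetical helper,
    not the full L1-DDA optimization."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X.mean(axis=0)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        grad = np.zeros(d)
        for c in np.unique(y):
            Xc = X[y == c]
            mu_c = Xc.mean(axis=0)
            # between-class term: push class means away from the global mean
            grad += len(Xc) * np.sign(w @ (mu_c - mu)) * (mu_c - mu)
            # within-class term: pull samples toward their class mean
            diffs = Xc - mu_c
            grad -= np.sign(diffs @ w) @ diffs
        w += lr * grad
        w /= np.linalg.norm(w)  # re-project onto the unit sphere
    return w
```

Further directions can be extracted in the usual deflation manner, removing the component along w from the data and repeating.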

2021
Author(s): Guowan Shao, Chunjiang Peng, Wenchu Ou, Kai Duan

Dimensionality reduction plays an important role in pattern recognition and computer vision. Recursive discriminative subspace learning with an L1-norm distance constraint (RDSL) was proposed to robustly extract features from contaminated data, using the L1-norm and slack variables to that end. However, its performance may decline when too many outliers are present. Moreover, the method ignores the global structure of the data. In this paper, we propose cutting L1-norm distance discriminant analysis with sample reconstruction (C-L1-DDA) to solve these two problems. We apply the cutting L1-norm to measure within-class and between-class distances, so that outliers are strongly suppressed. Moreover, we use the cutting squared L2-norm to measure reconstruction errors. In this way, outliers are constrained and the global structure of the data is approximately preserved. Finally, we give an alternating iterative algorithm to extract feature vectors. Experimental results on two publicly available real databases verify the feasibility and effectiveness of the proposed method.
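
The "cutting" norms can be read as capped norms: each distance is truncated at a threshold, so any single outlier contributes at most a constant to the objective. A minimal sketch of the two measures, with a hypothetical threshold parameter eps:

```python
import numpy as np

def cutting_l1(z, eps):
    """Capped ("cutting") L1 measure of projected distances: sum_i min(|z_i|, eps).
    Distances beyond eps are clipped, so outliers cannot dominate."""
    return np.minimum(np.abs(z), eps).sum()

def cutting_squared_l2(residuals, eps):
    """Capped squared L2 measure of reconstruction errors: sum_i min(||r_i||^2, eps),
    where each row of `residuals` is one sample's reconstruction residual."""
    return np.minimum(np.sum(residuals ** 2, axis=1), eps).sum()
```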


2016, Vol 2016, pp. 1-10
Author(s): Zhicheng Lu, Zhizheng Liang

Linear discriminant analysis has been widely studied in data mining and pattern recognition. However, when performing the eigendecomposition on the matrix pair of within-class and between-class scatter matrices, degenerated (repeated) eigenvalues can occur in some cases, and the information carried by the eigen-subspace of a degenerated eigenvalue then becomes indistinguishable. To address this problem, we revisit linear discriminant analysis in this paper and propose a stable and effective algorithm formulated through an optimization criterion. By analyzing the properties of this criterion, we find that the eigenvectors within an eigen-subspace may be indistinguishable when a degenerated eigenvalue occurs. Inspired by the maximum margin criterion (MMC), we embed MMC into the eigen-subspace corresponding to the degenerated eigenvalue to exploit the discriminability of its eigenvectors. Since the proposed algorithm handles the degenerated case of eigenvalues, it not only addresses the small-sample-size problem but also enables us to select projection vectors from the null space of the between-class scatter matrix. Extensive experiments on several face-image and microarray data sets evaluate the proposed algorithm in terms of classification performance, and the results show that our method has smaller standard deviations than other methods in most cases.
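
To make the tie-breaking idea concrete, the sketch below solves the generalized eigenproblem for the scatter-matrix pair and then, inside each group of (near-)degenerated eigenvalues, reorders the eigenvectors by their MMC score v^T (Sb - Sw) v. This is a schematic NumPy/SciPy rendering under mild regularization, not the authors' implementation; the function names are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

def lda_with_mmc_tiebreak(X, y, n_components, tol=1e-8, reg=1e-6):
    """Sketch: solve Sb v = lambda (Sw + reg*I) v, then rank eigenvectors
    inside each degenerated-eigenvalue group by the MMC score v^T (Sb - Sw) v."""
    Sw, Sb = scatter_matrices(X, y)
    evals, evecs = eigh(Sb, Sw + reg * np.eye(Sw.shape[0]))
    evals, evecs = evals[::-1], evecs[:, ::-1]   # descending eigenvalues
    order, i = [], 0
    while i < len(evals):
        j = i
        while j + 1 < len(evals) and abs(evals[j + 1] - evals[i]) < tol:
            j += 1                                # group near-equal eigenvalues
        group = list(range(i, j + 1))
        scores = [evecs[:, k] @ (Sb - Sw) @ evecs[:, k] for k in group]
        order += [group[k] for k in np.argsort(scores)[::-1]]
        i = j + 1
    return evecs[:, order[:n_components]]
```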


2019, Vol 337, pp. 80-96
Author(s): Chun-Na Li, Meng-Qi Shang, Yuan-Hai Shao, Yan Xu, Li-Ming Liu, ...

2020
Author(s): Jan Sosulski, Jan-Philipp Kemmer, Michael Tangermann

Electroencephalogram data used in the domain of brain–computer interfaces typically has a poor signal-to-noise ratio, and data acquisition is expensive. An effective and commonly used classifier for discriminating event-related potentials is linear discriminant analysis, which, however, requires an estimate of the feature distribution. While this information is provided by the feature covariance matrix, its large number of free parameters calls for regularization approaches such as Ledoit–Wolf shrinkage. Assuming that the noise of event-related potential recordings is not time-locked, we propose to decouple the time component from the covariance matrix of event-related potential data in order to further improve the covariance estimates for linear discriminant analysis. We compare three regularized variants thereof, and a feature representation based on Riemannian geometry, against our proposed novel linear discriminant analysis with time-decoupled covariance estimates. Extensive evaluations on 14 electroencephalogram datasets reveal that the novel approach increases classification performance by up to four percentage points for small training datasets and gracefully converges to the performance of standard shrinkage-regularized LDA for large training datasets. Given these results, practitioners in this field should consider using our proposed time-decoupled covariance estimation when they apply linear discriminant analysis to classify event-related potentials, especially when few training data points are available.
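
As a heavily simplified sketch of what decoupling time from the covariance can look like: if the channel noise covariance is assumed identical at every time point, a single Ledoit–Wolf-shrunk spatial covariance can be pooled over all time points and tiled block-diagonally over the stacked spatio-temporal feature vector. The helper below is hypothetical, ignores cross-time correlations, and is not the paper's exact estimator.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def time_decoupled_covariance(epochs):
    """Sketch: shared spatial covariance tiled over time.
    `epochs` has shape (n_epochs, n_channels, n_times); the feature
    vector is assumed to stack the channel vectors of successive
    time points. Simplifying assumption: noise is not time-locked,
    so every (epoch, time) slice is one observation of the channels."""
    n_epochs, n_channels, n_times = epochs.shape
    pooled = epochs.transpose(0, 2, 1).reshape(-1, n_channels)
    spatial_cov = LedoitWolf().fit(pooled).covariance_
    d = n_channels * n_times
    cov = np.zeros((d, d))
    for t in range(n_times):                  # block-diagonal tiling
        s = t * n_channels
        cov[s:s + n_channels, s:s + n_channels] = spatial_cov
    return cov
```

The resulting matrix can replace the sample covariance in the usual binary LDA weight computation, w = C^{-1}(mu_target - mu_nontarget), which is where better-conditioned estimates pay off for small training sets.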

