A Complete Subspace Analysis of Linear Discriminant Analysis and Its Robust Implementation

2016, Vol 2016, pp. 1-10
Author(s): Zhicheng Lu, Zhizheng Liang

Linear discriminant analysis has been widely studied in data mining and pattern recognition. However, when performing the eigen-decomposition on the matrix pair (within-class scatter matrix and between-class scatter matrix), one sometimes finds degenerate (repeated) eigenvalues, so that the information carried by the eigenvectors within the corresponding eigen-subspace becomes indistinguishable. To address this problem, we revisit linear discriminant analysis in this paper and propose a stable and effective algorithm for linear discriminant analysis built on an optimization criterion. By analyzing the properties of this criterion, we show that the eigenvectors in an eigen-subspace may indeed be indistinguishable when a degenerate eigenvalue occurs. Inspired by the maximum margin criterion (MMC), we embed MMC into the eigen-subspace of each degenerate eigenvalue to recover the discriminability of its eigenvectors. Since the proposed algorithm handles the degenerate case of eigenvalues, it not only copes with the small-sample-size problem but also allows projection vectors to be selected from the null space of the between-class scatter matrix. Extensive experiments on several face image and microarray data sets evaluate the proposed algorithm in terms of classification performance, and the results show that our method has smaller standard deviations than competing methods in most cases.
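
To make the degeneracy issue concrete, the following Python sketch (function names and the tolerance are illustrative, not the authors' code) performs the eigen-decomposition on the matrix pair and flags numerically repeated eigenvalues, whose eigen-subspaces admit any orthonormal basis and hence leave the projection directions ambiguous. It assumes numpy/scipy and a positive-definite Sw.

    import numpy as np
    from scipy.linalg import eigh

    def lda_eigenpairs(Sb, Sw):
        """Solve the generalized eigenproblem Sb w = lambda Sw w."""
        evals, evecs = eigh(Sb, Sw)          # requires Sw positive definite
        order = np.argsort(evals)[::-1]      # most discriminative first
        return evals[order], evecs[:, order]

    def degenerate_groups(evals, tol=1e-8):
        """Indices of numerically repeated (degenerate) eigenvalues."""
        groups, current = [], [0]
        for i in range(1, len(evals)):
            if abs(evals[i] - evals[i - 1]) < tol:
                current.append(i)
            else:
                groups.append(current)
                current = [i]
        groups.append(current)
        # Within each multi-index group, any orthonormal basis of the
        # eigen-subspace is equally valid, so the individual eigenvectors
        # carry no preference -- the ambiguity the paper resolves with MMC.
        return [g for g in groups if len(g) > 1]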

Author(s): WEN-SHENG CHEN, PONG C. YUEN, JIAN HUANG

This paper presents a new regularization technique to deal with the small sample size (S3) problem in linear discriminant analysis (LDA) based face recognition. Regularization of the within-class scatter matrix Sw has been shown to be a good direction for solving the S3 problem because the solution is found in the full space instead of a subspace. The main limitation of regularization is that determining the optimal parameters is computationally very expensive. In view of this limitation, this paper re-defines the three-parameter regularization of the within-class scatter matrix in a form suitable for parameter reduction. Based on this new definition, we derive an explicit expression that determines all three parameters from a single parameter t, and thus develop a one-parameter regularization of the within-class scatter matrix. A simple and efficient method is developed to determine the value of t. It is also proven that the new regularized within-class scatter matrix approaches the original within-class scatter matrix Sw as the single parameter tends to zero. A novel one-parameter regularized linear discriminant analysis (1PRLDA) algorithm is then developed. The proposed 1PRLDA method for face recognition has been evaluated on two publicly available databases, namely the ORL and FERET databases. The average recognition accuracies over 50 runs for ORL and FERET are 96.65% and 94.00%, respectively. Compared with existing LDA-based methods for solving the S3 problem, the proposed 1PRLDA method gives the best performance.
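
The abstract does not give the exact form of the regularized scatter matrix, so the numpy sketch below uses the common ridge form Sw + t*I only to illustrate the general mechanism: the perturbation makes Sw invertible for any t > 0 and vanishes as t tends to zero, mirroring the convergence property proven in the paper. It is a stand-in, not the paper's three-parameter construction.

    import numpy as np

    def regularized_lda(Sw, Sb, t, n_components):
        # Ridge-style stand-in for the paper's regularized scatter matrix:
        # full rank for any t > 0, and Sw_reg -> Sw as t -> 0.
        Sw_reg = Sw + t * np.eye(Sw.shape[0])
        # Fisher directions: leading eigenvectors of Sw_reg^{-1} Sb.
        evals, evecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb))
        order = np.argsort(evals.real)[::-1]
        return evecs[:, order[:n_components]].real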


Author(s): JUN LIU, SONGCAN CHEN, XIAOYANG TAN, DAOQIANG ZHANG

Pseudoinverse Linear Discriminant Analysis (PLDA) is a classical and pioneering method that deals with the Small Sample Size (SSS) problem in LDA when applied to such applications as face recognition. However, it is expensive in computation and storage due to direct manipulation of extremely large d × d matrices, where d is the dimension of the sample image. As a result, although frequently cited in the literature, PLDA is rarely compared in terms of classification performance with newer methods. In this paper, we propose a new feature extraction method named RSw + LDA, which is (1) much more efficient than PLDA in both computation and storage; and (2) theoretically equivalent to PLDA, meaning that it produces the same projection matrix as PLDA. Further, to make PLDA handle nonlinearly distributed data better, we propose a Kernel PLDA (KPLDA) method based on the well-known kernel trick. Finally, our experimental results on the AR face dataset, a challenging dataset with variations in expression, lighting and occlusion, show that PLDA (or RSw + LDA) can achieve significantly higher classification accuracy than the recently proposed Linear Discriminant Analysis via QR decomposition and Discriminant Common Vectors, and that KPLDA yields better classification performance than PLDA and Kernel PCA.
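
For orientation, here is a minimal numpy sketch of the classical PLDA baseline the paper accelerates: replace the inverse of Sw with its Moore-Penrose pseudoinverse. It deliberately forms the d × d scatter matrices, which is exactly the cost the RSw + LDA reformulation avoids; function and variable names are illustrative.

    import numpy as np

    def pinv_lda(X, y, n_components):
        """X: n x d data matrix; y: 1-D array of class labels."""
        d = X.shape[1]
        mean = X.mean(axis=0)
        Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)                    # within-class scatter
            Sb += len(Xc) * np.outer(mc - mean, mc - mean)   # between-class scatter
        # Classical PLDA: pseudoinverse in place of the (singular) inverse.
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(evals.real)[::-1]
        return evecs[:, order[:n_components]].real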


Author(s): David Zhang, Fengxi Song, Yong Xu, Zhizhen Liang

This chapter is a brief introduction to the biometric discriminant analysis technologies covered in Section I of the book. Section 2.1 describes two kinds of linear discriminant analysis (LDA) approaches: classification-oriented LDA and feature-extraction-oriented LDA. Section 2.2 discusses LDA for solving small sample size (SSS) pattern recognition problems. Section 2.3 outlines the organization of Section I.


2011, Vol 128-129, pp. 58-61
Author(s): Shi Ping Li, Yu Cheng, Hui Bin Liu, Lin Mu

Linear Discriminant Analysis (LDA) [1] is a well-known method for feature extraction and dimension reduction in face recognition. To address the small sample size problem of LDA, Two-Dimensional Linear Discriminant Analysis (2DLDA) [2] has recently been applied to face recognition, but it can hardly exploit the relationships between adjacent scatter matrices. In this paper, we improve the between-class scatter matrix and propose a paired between-class scatter matrix for face representation and recognition. In this new method, a distance metric over paired between-class scatter matrices is used to measure the distance between randomly paired between-class scatter matrices. The new method is tested on the ORL face database, and the results show that the paired between-class scatter matrix based 2DLDA method (N2DLDA) outperforms 2DLDA, achieving higher classification accuracy.
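
As background, the following numpy sketch shows the standard 2DLDA scatter computation on image matrices, operating on the columns of each image rather than on vectorized pixels. The paper's paired between-class scatter matrix (N2DLDA) modifies Sb below, but its exact form is not given in the abstract, so this is for orientation only and all names are illustrative.

    import numpy as np

    def twodlda(images, labels, n_components):
        """images: array of shape (n, h, w); labels: class label per image."""
        M = images.mean(axis=0)                         # global mean image
        w = M.shape[1]
        Sb, Sw = np.zeros((w, w)), np.zeros((w, w))
        for c in np.unique(labels):
            Ac = images[labels == c]
            Mc = Ac.mean(axis=0)                        # class mean image
            Sb += len(Ac) * (Mc - M).T @ (Mc - M)       # between-class, w x w
            for A in Ac:
                Sw += (A - Mc).T @ (A - Mc)             # within-class, w x w
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(evals.real)[::-1]
        return evecs[:, order[:n_components]].real      # right projection matrix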


Author(s): KULDIP K. PALIWAL, ALOK SHARMA

Pseudoinverse linear discriminant analysis (PLDA) is a classical method for solving the small sample size problem. However, its performance is limited. In this paper, we propose an improved PLDA method that is faster and produces better classification accuracy in experiments on several datasets.


Author(s): XIPENG QIU, LIDE WU

Linear Discriminant Analysis (LDA) is a popular feature extraction technique in statistical pattern recognition. However, it often suffers from the small sample size problem when dealing with high-dimensional data. Moreover, while LDA is guaranteed to find the best directions when each class has a Gaussian density with a common covariance matrix, it can fail when the class densities are more general. In this paper, a novel nonparametric linear feature extraction method, nearest neighbor discriminant analysis (NNDA), is proposed from the viewpoint of nearest neighbor classification. NNDA finds the important discriminant directions without assuming that the class densities belong to any particular parametric family, and it does not depend on the nonsingularity of the within-class scatter matrix either. We then give an approximate approach to optimizing NNDA and an extension to k-NN. We apply NNDA to simulated and real-world data; the results demonstrate that NNDA outperforms existing LDA variants.
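
A minimal numpy sketch of the nearest-neighbor scatter idea behind NNDA follows: build nonparametric between- and within-class scatters from each sample's nearest extra-class and intra-class neighbors, then extract directions with a difference criterion so no inversion of the within-class scatter matrix is needed. The paper's weighting scheme, approximate optimization, and k-NN extension are omitted, and all names are illustrative.

    import numpy as np

    def nnda_directions(X, y, n_components):
        """X: n x d data matrix; y: labels (each class needs >= 2 samples)."""
        n, d = X.shape
        Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
        D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
        np.fill_diagonal(D, np.inf)                          # exclude self-matches
        for i in range(n):
            same, diff = (y == y[i]), (y != y[i])
            j_in = np.where(same)[0][np.argmin(D[i, same])]  # nearest intra-class
            j_ex = np.where(diff)[0][np.argmin(D[i, diff])]  # nearest extra-class
            Sw += np.outer(X[i] - X[j_in], X[i] - X[j_in])
            Sb += np.outer(X[i] - X[j_ex], X[i] - X[j_ex])
        # Difference criterion: no inversion of Sw, so singular Sw is fine.
        evals, evecs = np.linalg.eigh(Sb - Sw)
        return evecs[:, np.argsort(evals)[::-1][:n_components]]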

