Parameterized Discriminant Analysis Methods

Author(s):  
David Zhang ◽  
Fengxi Song ◽  
Yong Xu ◽  
Zhizhen Liang

In this chapter, we mainly present three kinds of weighted LDA methods. In Sections 5.1, 5.2 and 5.3, we respectively present parameterized direct linear discriminant analysis, weighted nullspace linear discriminant analysis, and weighted LDA in the range of the within-class scatter matrix. We offer a brief summary of the chapter in Section 5.4.

2016 ◽  
Vol 2016 ◽  
pp. 1-10
Author(s):  
Zhicheng Lu ◽  
Zhizheng Liang

Linear discriminant analysis has been widely studied in data mining and pattern recognition. However, when performing the eigen-decomposition on the matrix pair (the within-class and between-class scatter matrices), one finds in some cases that degenerated (repeated) eigenvalues exist, so that information from the eigen-subspace corresponding to a degenerated eigenvalue becomes indistinguishable. In order to address this problem, we revisit linear discriminant analysis in this paper and propose a stable and effective algorithm for linear discriminant analysis in terms of an optimization criterion. By discussing the properties of the optimization criterion, we find that the eigenvectors in some eigen-subspaces may be indistinguishable when a degenerated eigenvalue occurs. Inspired by the idea of the maximum margin criterion (MMC), we embed MMC into the eigen-subspace corresponding to the degenerated eigenvalue to exploit the discriminability of the eigenvectors in that eigen-subspace. Since the proposed algorithm can deal with the degenerated case of eigenvalues, it not only handles the small-sample-size problem but also enables us to select projection vectors from the null space of the between-class scatter matrix. Extensive experiments on several face image and microarray data sets are conducted to evaluate the proposed algorithm in terms of classification performance, and the experimental results show that our method has smaller standard deviations than other methods in most cases.
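The eigen-decomposition this abstract refers to can be illustrated with a minimal baseline-LDA sketch: build the within-class and between-class scatter matrices and solve the generalized eigenproblem S_b v = λ S_w v. This is only the standard setup (the toy data and variable names are our own, and the MMC embedding for degenerate eigenvalues is not reproduced here); it shows where repeated eigenvalues would make the eigen-subspace non-unique.

```python
# Minimal LDA sketch (illustration only): solve S_b v = lambda S_w v.
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

# Tiny two-class toy data set (our own, for illustration).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
Sw, Sb = scatter_matrices(X, y)

# Generalized eigen-decomposition via Sw^{-1} Sb. Repeated ("degenerated")
# eigenvalues would make the corresponding eigen-subspace non-unique -- the
# situation the paper addresses by embedding MMC in that subspace.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(-eigvals.real)
W = eigvecs[:, order[:1]].real  # top discriminant direction
```

For two classes, Sb has rank one, so at most one eigenvalue is nonzero; the remaining (zero) eigenvalues are already a simple instance of a degenerate eigen-subspace.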


2014 ◽  
Vol 556-562 ◽  
pp. 4825-4829 ◽  
Author(s):  
Kai Li ◽  
Peng Tang

Linear discriminant analysis (LDA) is an important feature extraction method. This paper proposes an improved linear discriminant analysis method, which redefines the within-class scatter matrix and introduces a normalized parameter to control the bias and variance of its eigenvalues. In addition, it weights the between-class scatter matrix so as to avoid the overlapping of neighboring class samples. Experiments with the improved algorithm are performed on the ORL, FERET and YALE face databases, and it is compared with other commonly used methods. Experimental results show that the proposed algorithm is effective.
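The two ingredients the abstract names can be sketched as follows. Since the exact formulas are not given in the abstract, this sketch uses common stand-ins: (1) a shrinkage-style redefinition of Sw with a normalized parameter alpha in [0, 1] that pulls its eigenvalues toward their mean (trading bias against variance), and (2) a pairwise-weighted Sb that downweights already well-separated class pairs; the function name, the inverse-square weight, and the toy data are all our own assumptions.

```python
# Hedged sketch of a weighted LDA variant; not the paper's exact formulation.
import numpy as np

def weighted_lda_scatters(X, y, alpha=0.1):
    d = X.shape[1]
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    Sw = sum((X[y == c] - means[c]).T @ (X[y == c] - means[c]) for c in classes)
    # (1) Normalized parameter: shrink the eigenvalues of Sw toward their
    # mean; alpha = 0 recovers Sw, alpha = 1 gives a spherical matrix.
    Sw_reg = (1 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)
    # (2) Weighted between-class scatter over class pairs: distant (easy)
    # pairs get smaller weights so nearby classes dominate the criterion.
    Sb = np.zeros((d, d))
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = (means[ci] - means[cj]).reshape(-1, 1)
            w = 1.0 / np.linalg.norm(diff) ** 2  # assumed weighting scheme
            Sb += w * np.sum(y == ci) * np.sum(y == cj) * (diff @ diff.T)
    return Sw_reg, Sb

# Toy three-class usage example.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(i, 1.0, (10, 4)) for i in range(3)])
y = np.repeat([0, 1, 2], 10)
Sw_reg, Sb_w = weighted_lda_scatters(X, y, alpha=0.2)
```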


2011 ◽  
Vol 128-129 ◽  
pp. 58-61
Author(s):  
Shi Ping Li ◽  
Yu Cheng ◽  
Hui Bin Liu ◽  
Lin Mu

Linear Discriminant Analysis (LDA) [1] is a well-known method for feature extraction and dimension reduction in face recognition. To solve the "small sample" problem of LDA, Two-Dimensional Linear Discriminant Analysis (2DLDA) [2] has recently been used for face recognition, but it can hardly exploit the relationship between adjacent scatter matrices. In this paper, we improve the between-class scatter matrix and propose a paired-class scatter matrix for face representation and recognition. In this new method, a distance metric on paired between-class scatter matrices is used to measure the distance between randomly paired between-class scatter matrices. The new method is tested on the ORL face database, and the results show that the paired between-class scatter matrix based 2DLDA method (N2DLDA) outperforms the 2DLDA algorithm, achieving higher classification accuracy.
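For context, the baseline 2DLDA that N2DLDA improves on builds its scatter matrices directly from image matrices rather than vectorized images, which sidesteps the singular within-class scatter of classical LDA. A minimal sketch of that baseline (the paired-scatter N2DLDA variant is not reproduced; the toy "images" and names here are invented for illustration):

```python
# Baseline 2DLDA sketch (right-projection form), on invented toy images.
import numpy as np

rng = np.random.default_rng(2)
# Six 8x6 "images": three per class, classes differ in mean intensity.
imgs = np.array([rng.normal(c, 1.0, (8, 6)) for c in (0, 0, 0, 3, 3, 3)])
labels = np.array([0, 0, 0, 1, 1, 1])

m, n = imgs.shape[1:]
mean_all = imgs.mean(axis=0)
Gw = np.zeros((n, n))
Gb = np.zeros((n, n))
for c in np.unique(labels):
    cls = imgs[labels == c]
    mc = cls.mean(axis=0)
    for A in cls:
        Gw += (A - mc).T @ (A - mc)  # image-domain within-class scatter
    Gb += len(cls) * (mc - mean_all).T @ (mc - mean_all)

# Keep the top eigenvectors of Gw^{-1} Gb as the right projection; each
# image A is then reduced to the smaller feature matrix A @ W.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Gw, Gb))
W = eigvecs[:, np.argsort(-eigvals.real)[:2]].real  # n x 2 projection
features = imgs @ W                                  # each image -> 8 x 2
```

Because Gw is only n x n (here 6 x 6) rather than (m*n) x (m*n), it stays well-conditioned even with very few training images per class.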


Author(s):  
WEN-SHENG CHEN ◽  
PONG C. YUEN ◽  
JIAN HUANG

This paper presents a new regularization technique to deal with the small sample size (S3) problem in linear discriminant analysis (LDA) based face recognition. Regularization on the within-class scatter matrix Sw has been shown to be a good direction for solving the S3 problem because the solution is found in the full space instead of a subspace. The main limitation of regularization is that a very high computational cost is required to determine the optimal parameters. In view of this limitation, this paper re-defines the three-parameter regularization on the within-class scatter matrix [Formula: see text], which is suitable for parameter reduction. Based on the new definition of [Formula: see text], we derive a single-parameter (t) explicit expression formula for determining the three parameters and develop a one-parameter regularization on the within-class scatter matrix. A simple and efficient method is developed to determine the value of t. It is also proven that the new regularized within-class scatter matrix [Formula: see text] approaches the original within-class scatter matrix Sw as the single parameter tends to zero. A novel one-parameter regularization linear discriminant analysis (1PRLDA) algorithm is then developed. The proposed 1PRLDA method for face recognition has been evaluated on two publicly available databases, namely the ORL and FERET databases. The average recognition accuracies over 50 runs for the ORL and FERET databases are 96.65% and 94.00%, respectively. Compared with existing LDA-based methods for solving the S3 problem, the proposed 1PRLDA method gives the best performance.
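The abstract does not reproduce the paper's exact three-parameter form of the regularized scatter matrix, so the sketch below uses the common ridge-style surrogate Sw(t) = Sw + t·I as a stand-in. It shares the stated property that Sw(t) approaches Sw as t tends to zero, and it is invertible for any t > 0 even when Sw itself is singular, which is exactly the S3 (more dimensions than samples) situation; all names and data here are our own.

```python
# Hedged sketch of regularized LDA under the S3 problem, using the generic
# surrogate Sw(t) = Sw + t*I (not the paper's three-parameter formula).
import numpy as np

def scatter_matrices(X, y):
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

rng = np.random.default_rng(1)
d = 10  # dimensionality exceeds the per-class sample count: S3 problem
X = np.vstack([rng.normal(0, 1, (3, d)), rng.normal(2, 1, (3, d))])
y = np.array([0, 0, 0, 1, 1, 1])
Sw, Sb = scatter_matrices(X, y)  # Sw is singular here

t = 1e-3                      # single regularization parameter
Sw_t = Sw + t * np.eye(d)     # invertible for any t > 0; Sw_t -> Sw as t -> 0
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw_t, Sb))
W = eigvecs[:, np.argmax(eigvals.real)].real  # discriminant direction
```

Unregularized LDA would fail outright here because Sw cannot be inverted; the single parameter t restores invertibility while staying close to the original Sw.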

