Author(s):  
David Zhang ◽  
Fengxi Song ◽  
Yong Xu ◽  
Zhizhen Liang

This chapter is a brief introduction to the biometric discriminant analysis technologies covered in Section I of the book. Section 2.1 describes two kinds of linear discriminant analysis (LDA) approaches: classification-oriented LDA and feature extraction-oriented LDA. Section 2.2 discusses LDA for solving small sample size (SSS) pattern recognition problems. Section 2.3 outlines the organization of Section I.
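As a point of reference (not from the chapter itself), the distinction between the two roles of LDA can be illustrated with scikit-learn's LinearDiscriminantAnalysis, which can act either as a classifier or as a supervised feature extractor. The dataset and parameter choices below are illustrative assumptions only.

```python
# Illustrative sketch (not from the chapter): LDA used two ways with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Classification-oriented LDA: learn class boundaries and predict labels directly.
clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))

# Feature-extraction-oriented LDA: project the data onto at most C-1 discriminant
# axes (here C = 3 classes, so 2 components) and feed the features to any classifier.
extractor = LinearDiscriminantAnalysis(n_components=2)
X_reduced = extractor.fit_transform(X, y)
print("reduced shape:", X_reduced.shape)  # (150, 2)
```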


Author(s):  
Wen-Sheng Chen ◽  
Pong C. Yuen ◽  
Jian Huang

This paper presents a new regularization technique to deal with the small sample size (S3) problem in linear discriminant analysis (LDA) based face recognition. Regularization of the within-class scatter matrix Sw has been shown to be a good direction for solving the S3 problem because the solution is found in the full space instead of a subspace. The main limitation of regularization is the very high computational cost of determining the optimal parameters. In view of this limitation, this paper re-defines the three-parameter regularization of the within-class scatter matrix in a form that is suitable for parameter reduction. Based on this new definition, we derive an explicit expression in a single parameter t for determining the three parameters and thereby develop a one-parameter regularization of the within-class scatter matrix. A simple and efficient method is developed to determine the value of t. It is also proven that the new regularized within-class scatter matrix approaches the original within-class scatter matrix Sw as the single parameter tends to zero. A novel one-parameter regularized linear discriminant analysis (1PRLDA) algorithm is then developed. The proposed 1PRLDA method for face recognition has been evaluated on two publicly available databases, namely the ORL and FERET databases. The average recognition accuracies over 50 runs on the ORL and FERET databases are 96.65% and 94.00%, respectively. Compared with existing LDA-based methods for solving the S3 problem, the proposed 1PRLDA method gives the best performance.
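The abstract does not reproduce the paper's three-parameter re-definition of the regularized within-class scatter matrix, so the sketch below only illustrates the general idea of single-parameter regularization of Sw in the full space, using the common surrogate Sw + t·I. The function names, the choice of regularizer, and the default value of t are assumptions and are not the 1PRLDA formula.

```python
# Illustrative sketch of single-parameter regularized LDA in the full space
# (generic Sw + t*I surrogate, NOT the 1PRLDA re-definition from the paper).
import numpy as np
from scipy.linalg import eigh

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter for samples X (n x d), labels y."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

def regularized_lda(X, y, t=1e-3, n_components=None):
    """Solve the generalized eigenproblem Sb w = lambda (Sw + t I) w in the full space.

    As t -> 0 the regularized matrix approaches Sw, mirroring the limiting property
    proven in the paper for its own regularized scatter matrix.
    """
    Sw, Sb = scatter_matrices(X, y)
    Sw_reg = Sw + t * np.eye(Sw.shape[0])   # generic regularizer; 1PRLDA uses its own formula
    evals, evecs = eigh(Sb, Sw_reg)         # generalized symmetric eigenproblem
    order = np.argsort(evals)[::-1]         # largest discriminant ratio first
    W = evecs[:, order]
    return W[:, :n_components] if n_components else W
```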


2015 ◽  
Vol 14 (01) ◽  
pp. 59-66
Author(s):  
Ikuthen Gabriel Barus ◽  
Riko Arlando Saragih

This paper presents a simulation of holistic face image feature extraction using one of the Linear Discriminant Analysis (LDA) techniques, namely Direct Fractional-Step LDA (DF-LDA), for face recognition. The aim of this paper is to evaluate the performance of this technique against the small sample size (SSS) problem that frequently arises in face recognition. Essentially, this LDA-based technique (DF-LDA) is a combination of the D-LDA and F-LDA techniques, in which a weighting function can be added to the LDA process, directly and in fractional steps, so that face images are represented globally and efficiently. Matching is performed by finding the minimum Euclidean distance between the features of a test face image and the features of the training face images stored in the database. Simulation results on the Face Recognition Data database and the Database Mahasiswa Maranatha show better face recognition accuracy, for the case of one face image per person in the training process, when the face databases are processed separately.
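The matching step described above (minimum Euclidean distance between a test feature vector and the stored training feature vectors) can be sketched as follows. The array names, shapes, and toy values are illustrative assumptions, and the DF-LDA projection that produces the features is not shown.

```python
# Illustrative sketch of the matching step only: nearest stored feature by
# Euclidean distance. The DF-LDA feature extraction is assumed to have been
# applied to the images already.
import numpy as np

def match_face(test_feature, train_features, train_labels):
    """Return the label of the training feature closest (Euclidean) to test_feature.

    test_feature   : (d,) feature vector of the probe image
    train_features : (n, d) feature vectors stored in the database
    train_labels   : (n,) identity label for each stored feature
    """
    distances = np.linalg.norm(train_features - test_feature, axis=1)
    return train_labels[np.argmin(distances)]

# Toy usage with made-up 3-dimensional features for two enrolled persons.
gallery = np.array([[0.1, 0.9, 0.2], [0.8, 0.1, 0.4]])
labels = np.array(["person_A", "person_B"])
print(match_face(np.array([0.15, 0.85, 0.25]), gallery, labels))  # -> person_A
```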


2016 ◽  
Vol 2016 ◽  
pp. 1-10
Author(s):  
Zhicheng Lu ◽  
Zhizheng Liang

Linear discriminant analysis has been widely studied in data mining and pattern recognition. However, when performing the eigen-decomposition on the matrix pair (within-class scatter matrix and between-class scatter matrix), one may in some cases find degenerate eigenvalues, so that the information carried by the eigen-subspace corresponding to a degenerate eigenvalue becomes indistinguishable. In order to address this problem, we revisit linear discriminant analysis in this paper and propose a stable and effective algorithm for linear discriminant analysis in terms of an optimization criterion. By discussing the properties of the optimization criterion, we find that the eigenvectors in some eigen-subspaces may be indistinguishable when a degenerate eigenvalue occurs. Inspired by the idea of the maximum margin criterion (MMC), we embed MMC into the eigen-subspace corresponding to the degenerate eigenvalue to exploit the discriminability of the eigenvectors in that subspace. Since the proposed algorithm can deal with the degenerate case of eigenvalues, it not only handles the small-sample-size problem but also enables us to select projection vectors from the null space of the between-class scatter matrix. Extensive experiments on several face image and microarray data sets are conducted to evaluate the proposed algorithm in terms of classification performance, and the experimental results show that our method has smaller standard deviations than other methods in most cases.
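The abstract does not give the algorithmic details, so the following is only a rough sketch of the stated idea as it can be read from the abstract: after solving the LDA eigenproblem, eigenvectors whose eigenvalues coincide are re-ranked within their shared eigen-subspace by the maximum margin criterion w^T(Sb - Sw)w. The tolerance, the small ridge used to keep the generalized problem well posed, and the ordering scheme are assumptions, not the authors' algorithm.

```python
# Rough sketch (assumptions, not the authors' algorithm): re-rank eigenvectors that
# share a degenerate eigenvalue by the maximum margin criterion w^T (Sb - Sw) w.
import numpy as np
from scipy.linalg import eigh

def mmc_reordered_lda(Sw, Sb, tol=1e-8):
    """Solve Sb w = lambda (Sw + tol*I) w, then, inside every group of numerically
    equal eigenvalues, order the eigenvectors by their MMC score so that directions
    within a degenerate eigen-subspace remain distinguishable."""
    evals, evecs = eigh(Sb, Sw + tol * np.eye(Sw.shape[0]))  # small ridge: an assumption
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    W_cols = []
    i = 0
    while i < len(evals):
        # Collect the block of eigenvalues equal to evals[i] up to the tolerance.
        j = i
        while j < len(evals) and abs(evals[j] - evals[i]) <= tol * max(1.0, abs(evals[i])):
            j += 1
        block = evecs[:, i:j]
        # MMC score for each eigenvector in the (possibly degenerate) block.
        scores = np.array([w @ (Sb - Sw) @ w for w in block.T])
        W_cols.append(block[:, np.argsort(scores)[::-1]])
        i = j
    return np.hstack(W_cols), evals
```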

