Regularization studies of linear discriminant analysis in small sample size scenarios with application to face recognition

2005 ◽  
Vol 26 (2) ◽  
pp. 181-191 ◽  
Author(s):  
Juwei Lu ◽  
K.N. Plataniotis ◽  
A.N. Venetsanopoulos
Author(s):  
WEN-SHENG CHEN ◽  
PONG C. YUEN ◽  
JIAN HUANG

This paper presents a new regularization technique for the small sample size (S3) problem in linear discriminant analysis (LDA) based face recognition. Regularization of the within-class scatter matrix Sw has been shown to be a promising direction for solving the S3 problem because the solution is found in the full space rather than in a subspace. The main limitation of regularization is the very high computational cost of determining the optimal parameters. In view of this limitation, this paper re-defines the three-parameter regularization of the within-class scatter matrix Sw in a form suitable for parameter reduction. Based on this new definition, we derive an explicit single-parameter (t) expression for determining the three parameters and thereby develop a one-parameter regularization of the within-class scatter matrix. A simple and efficient method is developed to determine the value of t. It is also proven that the new regularized within-class scatter matrix approaches the original within-class scatter matrix Sw as the single parameter tends to zero. A novel one-parameter regularized linear discriminant analysis (1PRLDA) algorithm is then developed. The proposed 1PRLDA method for face recognition has been evaluated on two publicly available databases, namely the ORL and FERET databases. The average recognition accuracies over 50 runs are 96.65% for ORL and 94.00% for FERET. Compared with existing LDA-based methods for solving the S3 problem, the proposed 1PRLDA method gives the best performance.
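The abstract's key idea, regularizing Sw so that the LDA solution is sought in the full space, can be sketched as follows. Note this is a minimal illustrative sketch: the simple ridge-style form Sw + t·I used below stands in for the paper's own three-parameter expression (reduced to one parameter t), which is not given in the abstract.

```python
import numpy as np

def regularized_lda(X, y, t=1e-3):
    """Fit LDA with a regularized within-class scatter matrix.

    The single-parameter form Sw + t*I below is illustrative only;
    the paper derives its own one-parameter regularization of Sw.
    As t tends to zero, the regularized matrix approaches Sw.
    """
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                  # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
    Sw_reg = Sw + t * np.eye(d)          # full-space regularization of Sw
    # Generalized eigenproblem Sb w = lambda * Sw_reg w
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:len(classes) - 1]].real
```

Because Sw_reg is always invertible, the projection exists even when Sw itself is singular, which is exactly the S3 situation the paper targets.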


2015 ◽  
Vol 14 (01) ◽  
pp. 59-66
Author(s):  
Ikuthen Gabriel Barus ◽  
Riko Arlando Saragih

This paper presents a simulation of global face-image feature extraction using one of the Linear Discriminant Analysis (LDA) techniques, namely Direct Fractional-Step LDA (DF-LDA), for face recognition. The aim of this paper is to evaluate the performance of this technique on the small sample size (SSS) problem that frequently arises in face recognition. Essentially, this LDA-based technique (DF-LDA) combines the D-LDA and F-LDA techniques, in which a weighting function can be incorporated into the LDA process, directly and in fractional steps, to represent global face images efficiently. Matching is performed by finding the minimum Euclidean distance between the features of a test face image and the features of the training face images stored in the database. Simulation results on the Face Recognition Data database and the Maranatha student database show better face recognition accuracy for the case of one training face image per person when the face databases are processed separately.
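The matching step described above, minimum Euclidean distance between a test feature vector and the stored training features, can be sketched in a few lines. The function name and array layout here are assumptions for illustration, not the authors' code:

```python
import numpy as np

def match_face(test_feat, train_feats, labels):
    """Return the label of the training feature closest (in Euclidean
    distance) to the test feature: the matching rule described above.

    train_feats: (n, d) array of DF-LDA feature vectors
    labels:      length-n sequence of identity labels
    """
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    return labels[int(np.argmin(dists))]
```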


Author(s):  
David Zhang ◽  
Fengxi Song ◽  
Yong Xu ◽  
Zhizhen Liang

This chapter is a brief introduction to the biometric discriminant analysis technologies covered in Section I of the book. Section 2.1 describes two kinds of linear discriminant analysis (LDA) approaches: classification-oriented LDA and feature extraction-oriented LDA. Section 2.2 discusses LDA for solving the small sample size (SSS) pattern recognition problem. Section 2.3 shows the organization of Section I.


Author(s):  
JUN LIU ◽  
SONGCAN CHEN ◽  
XIAOYANG TAN ◽  
DAOQIANG ZHANG

Pseudoinverse Linear Discriminant Analysis (PLDA) is a classical and pioneering method that deals with the Small Sample Size (SSS) problem in LDA when applied to such applications as face recognition. However, it is expensive in computation and storage due to direct manipulation of extremely large d × d matrices, where d is the dimension of the sample image. As a result, although frequently cited in the literature, PLDA is rarely compared in terms of classification performance with newly proposed methods. In this paper, we propose a new feature extraction method named RSw + LDA, which is (1) much more efficient than PLDA in both computation and storage; and (2) theoretically equivalent to PLDA, meaning that it produces the same projection matrix as PLDA. Further, to make PLDA handle nonlinearly distributed data better, we propose a Kernel PLDA (KPLDA) method using the well-known kernel trick. Finally, our experimental results on the AR face dataset, a challenging dataset with variations in expression, lighting and occlusion, show that PLDA (or RSw + LDA) can achieve significantly higher classification accuracy than the recently proposed Linear Discriminant Analysis via QR decomposition and Discriminant Common Vectors, and that KPLDA can yield better classification performance than PLDA and Kernel PCA.
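The classical PLDA route the abstract describes, replacing the inverse of a singular Sw with its pseudoinverse and working directly with d × d matrices, can be sketched as below. This is the naive, storage-heavy formulation the paper's RSw + LDA method is shown to be equivalent to (not the paper's efficient algorithm itself):

```python
import numpy as np

def plda(X, y):
    """Pseudoinverse LDA: use pinv(Sw) when Sw is singular (the SSS case).

    Builds the full d x d scatter matrices directly, which is the
    computation/storage cost the abstract criticizes, and returns the
    leading eigenvectors of pinv(Sw) @ Sb as projection directions.
    """
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:len(classes) - 1]].real
```

For image data, d is the number of pixels, so Sw and Sb each take O(d^2) storage and the pseudoinverse costs O(d^3), which is why an equivalent method operating in the much smaller sample space is attractive.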

