PCA PLUS F-LDA: A NEW APPROACH TO FACE RECOGNITION

Author(s):  
HUIYUAN WANG ◽  
ZENGFENG WANG ◽  
YAN LENG ◽  
XIAOJUAN WU ◽  
QING LI

This paper presents a new feature extraction method for face recognition based on principal component analysis (PCA) and fractional-step linear discriminant analysis (F-LDA). To reduce computational complexity, PCA is first applied to reduce the dimensionality. In addition, before applying F-LDA, we transform the pooled within-class scatter matrix into an identity matrix. The proposed method is tested on the AR and UMIST face databases. Experimental results show that our method achieves higher classification accuracy than the other methods compared in the experiments.
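
The pipeline described above can be sketched roughly as follows: PCA for dimensionality reduction, a whitening step that turns the pooled within-class scatter into the identity, and a final discriminant projection. This is a minimal illustration only; an ordinary LDA step stands in for the full fractional-step (F-LDA) procedure, and the data, dimensions and component counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))      # synthetic stand-in: 200 face images, 32x32 pixels flattened
y = rng.integers(0, 10, size=200)     # 10 subjects

# Step 1: PCA to reduce dimensionality and computational cost.
X_pca = PCA(n_components=50).fit_transform(X)

# Step 2: whiten the pooled within-class scatter so it becomes the identity matrix.
classes = np.unique(y)
Sw = sum(np.cov(X_pca[y == c], rowvar=False) * (np.sum(y == c) - 1) for c in classes)
evals, evecs = np.linalg.eigh(Sw)
W_whiten = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-10)))
X_white = X_pca @ W_whiten            # within-class scatter is now (approximately) the identity

# Step 3: discriminant projection on the between-class scatter
# (plain LDA step standing in for the fractional-step procedure).
overall = X_white.mean(axis=0)
means = {c: X_white[y == c].mean(axis=0) for c in classes}
Sb = sum(np.sum(y == c) * np.outer(means[c] - overall, means[c] - overall) for c in classes)
_, V = np.linalg.eigh(Sb)
features = X_white @ V[:, -(len(classes) - 1):]   # keep the top (classes - 1) directions
```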

2014 ◽  
Vol 556-562 ◽  
pp. 4825-4829 ◽  
Author(s):  
Kai Li ◽  
Peng Tang

Linear discriminant analysis (LDA) is an important feature extraction method. This paper proposes an improved linear discriminant analysis method that redefines the within-class scatter matrix and introduces a normalization parameter to control the bias and variance of its eigenvalues. In addition, it weights the between-class scatter matrix to avoid the overlap of samples from neighboring classes. Experiments with the improved algorithm are performed on the ORL, FERET and YALE face databases, and it is compared with other commonly used methods. Experimental results show that the proposed algorithm is effective.
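
A rough sketch of what such modified scatter matrices might look like is given below; the shrinkage parameter alpha and the inverse-distance weighting of class pairs are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np
from scipy.linalg import eigh

def weighted_regularized_lda(X, y, alpha=0.1, n_components=2):
    """Shrunk within-class scatter plus a pairwise-weighted between-class scatter."""
    classes = np.unique(y)
    d = X.shape[1]

    # Within-class scatter shrunk toward the identity; alpha trades bias
    # against variance of the estimated eigenvalues.
    Sw = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1) for c in classes)
    Sw = (1 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)

    # Between-class scatter with weights that emphasise neighbouring
    # (close, easily overlapping) class pairs over well-separated ones.
    means = {c: X[y == c].mean(axis=0) for c in classes}
    Sb = np.zeros((d, d))
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = means[ci] - means[cj]
            w = 1.0 / (np.linalg.norm(diff) ** 2 + 1e-8)   # assumed weighting function
            Sb += w * np.outer(diff, diff)

    # Generalized eigenproblem Sb v = lambda * Sw v; keep the leading directions.
    evals, evecs = eigh(Sb, Sw)
    return evecs[:, np.argsort(evals)[::-1][:n_components]]
```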


2014 ◽  
Vol 568-570 ◽  
pp. 668-671
Author(s):  
Yi Long ◽  
Fu Rong Liu ◽  
Guo Qing Qiu

To address the problems that the dimension of the feature vector extracted by Local Binary Pattern (LBP) for face recognition is too high and that the features extracted by Principal Component Analysis (PCA) are not the best features for classification, an efficient feature extraction method combining LBP, PCA and Maximum Scatter Difference (MSD) is introduced in this paper. The original face image is first divided into sub-images, then the LBP operator is applied to extract histogram features, and the feature dimensionality is further reduced using PCA. Finally, MSD is performed on the reduced PCA-based features. Experimental results on the ORL and Yale databases demonstrate that the proposed method classifies more effectively and achieves a higher recognition rate than traditional recognition methods.
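
The sketch below outlines one plausible implementation of this LBP → PCA → MSD pipeline; the block grid, LBP parameters and the MSD balance constant c are assumptions chosen for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_block_histograms(img, blocks=(4, 4), P=8, R=1):
    """Split a grayscale image into blocks and concatenate per-block LBP histograms."""
    lbp = local_binary_pattern(img, P, R, method="uniform")
    h, w = img.shape
    bh, bw = h // blocks[0], w // blocks[1]
    feats = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            patch = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(patch, bins=P + 2, range=(0, P + 2), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def msd_projection(X, y, c=1.0, n_components=20):
    """Maximum scatter difference: top eigenvectors of Sb - c * Sw."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for cl in classes:
        Xc = X[y == cl]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - overall, mc - overall)
    evals, evecs = np.linalg.eigh(Sb - c * Sw)
    return evecs[:, np.argsort(evals)[::-1][:n_components]]

# Usage sketch (X_imgs: grayscale face images, y: subject labels -- assumed inputs):
# H = np.array([lbp_block_histograms(img) for img in X_imgs])
# H_pca = PCA(n_components=60).fit_transform(H)
# features = H_pca @ msd_projection(H_pca, y)
```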


Author(s):  
Haoran Li ◽  
Hua Xu

In this paper, we propose a new feature extraction method called hvnLBP-TOP for video-based sentiment analysis. Furthermore, we use principal component analysis (PCA) and a bidirectional long short-term memory network (bi-LSTM) for dimensionality reduction and classification. We achieved an average recognition accuracy of 71.1% on the MOUD dataset and 63.9% on the CMU-MOSI dataset.
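
Since the hvnLBP-TOP descriptor is specific to the paper, the sketch below covers only the later stage: per-frame features (random stand-ins here) reduced with PCA and classified with a bidirectional LSTM. Shapes, layer sizes and training settings are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras import layers, models

n_videos, n_frames, feat_dim, n_classes = 100, 30, 500, 2
rng = np.random.default_rng(0)
X = rng.normal(size=(n_videos, n_frames, feat_dim))   # stand-in per-frame descriptors
y = rng.integers(0, n_classes, size=n_videos)

# PCA on all frames (flattened across videos), then reshape back into sequences.
X_red = PCA(n_components=50).fit_transform(X.reshape(-1, feat_dim))
X_red = X_red.reshape(n_videos, n_frames, 50)

# Bidirectional LSTM classifier over the reduced frame sequences.
model = models.Sequential([
    layers.Input(shape=(n_frames, 50)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_red, y, epochs=5, batch_size=16, verbose=0)
```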


2011 ◽  
Vol 63-64 ◽  
pp. 55-58
Author(s):  
Yan Wang ◽  
Xiu Xia Wang ◽  
Sheng Lai

In ensemble learning, to improve both the performance of individual classifiers and the diversity among classifiers, this paper proposes a selective multi-classifier ensemble algorithm that combines feature division with a diversity measure, addressing both classifier generation and combination. The algorithm first applies a bagging method to create feature subsets; next, PCA-based feature extraction is performed on each feature subset and classifiers with high classification accuracy are selected; finally, before the classifiers are combined, a classifier diversity measure is used to select diverse classifiers. Experimental results show that the classification accuracy of the algorithm is clearly higher than that of the popular bagging algorithm.
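
A rough sketch of this selection procedure is shown below, with scikit-learn components standing in for the paper's exact choices; the number of subsets, the accuracy cut and the pairwise-disagreement threshold are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=40, n_informative=15, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: bagging-style generation of random feature subsets, PCA on each subset.
rng = np.random.default_rng(0)
candidates = []
for _ in range(20):
    feats = rng.choice(X.shape[1], size=20, replace=False)
    clf = make_pipeline(PCA(n_components=10), DecisionTreeClassifier(random_state=0))
    clf.fit(X_tr[:, feats], y_tr)
    pred = clf.predict(X_va[:, feats])
    candidates.append((clf.score(X_va[:, feats], y_va), feats, clf, pred))

# Step 2: keep the most accurate half of the candidate classifiers.
candidates.sort(key=lambda t: t[0], reverse=True)
accurate = candidates[:10]

# Step 3: greedily add members that disagree enough with those already selected.
selected = [accurate[0]]
for acc, feats, clf, pred in accurate[1:]:
    disagreement = np.mean([np.mean(pred != s[3]) for s in selected])
    if disagreement > 0.05:
        selected.append((acc, feats, clf, pred))

# Step 4: majority vote of the selected members (binary labels here).
votes = np.array([s[3] for s in selected])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", np.mean(ensemble_pred == y_va))
```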


Author(s):  
JUN LIU ◽  
SONGCAN CHEN ◽  
XIAOYANG TAN ◽  
DAOQIANG ZHANG

Pseudoinverse Linear Discriminant Analysis (PLDA) is a classical and pioneering method for dealing with the Small Sample Size (SSS) problem of LDA in applications such as face recognition. However, it is expensive in computation and storage because it directly manipulates extremely large d × d matrices, where d is the dimension of the sample image. As a result, although frequently cited in the literature, PLDA is rarely compared in terms of classification performance with newly proposed methods. In this paper, we propose a new feature extraction method named RSw + LDA, which is (1) much more efficient than PLDA in both computation and storage, and (2) theoretically equivalent to PLDA, meaning that it produces the same projection matrix as PLDA. Further, to make PLDA handle data with nonlinear distributions better, we propose a Kernel PLDA (KPLDA) method based on the well-known kernel trick. Finally, our experimental results on the AR face dataset, a challenging dataset with variations in expression, lighting and occlusion, show that PLDA (or RSw + LDA) achieves significantly higher classification accuracy than the recently proposed Linear Discriminant Analysis via QR decomposition and Discriminant Common Vectors, and that KPLDA yields better classification performance than PLDA and Kernel PCA.
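
For reference, a minimal sketch of the plain PLDA projection appears below; it makes visible why the method is costly, since it forms and pseudoinverts the full d × d within-class scatter. The shapes and helper name are illustrative, and the efficient RSw + LDA reformulation is not reproduced here.

```python
import numpy as np

def pseudoinverse_lda(X, y, n_components):
    """Pseudoinverse LDA: replace the inverse of the singular within-class
    scatter (more pixels than images) with its Moore-Penrose pseudoinverse."""
    classes = np.unique(y)
    d = X.shape[1]
    overall = X.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - overall, mc - overall)
    # Direct manipulation of d x d matrices: the step that makes PLDA expensive
    # when d is the number of pixels in a face image.
    M = np.linalg.pinv(Sw) @ Sb
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(evals.real)[::-1][:n_components]
    return evecs[:, order].real

# Usage sketch: features = X @ pseudoinverse_lda(X, y, n_components=10)
```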


Author(s):  
Wei Huang ◽  
Xiaohui Wang ◽  
Jianzhong Li ◽  
Zhong Jin

Representation-based classification has received much attention in the field of face recognition, and collaborative representation-based classification (CRC) has shown robustness and high performance. In this paper, we propose a new feature extraction method based on collaborative representation. First, we obtain the coefficients of all face samples by collaborative representation. Then we define inter-class and intra-class reconstruction errors for each sample. After that, the Fisher criterion is used to obtain discriminative features. Finally, CRC is executed in the new feature space to obtain the identification results. Unlike other feature extraction methods, the proposed method integrates the classification criterion into feature extraction, so the resulting feature space fits the classifier better. Experimental results on several face databases show that the proposed method is more effective than other state-of-the-art face recognition methods.
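
The first two steps (collaborative representation coefficients and per-sample class-wise reconstruction errors) can be sketched as below; the regularisation value and the leave-one-out dictionary construction are assumptions for illustration, and the subsequent Fisher-criterion projection is omitted.

```python
import numpy as np

def crc_reconstruction_errors(X, y, lam=0.01):
    """Collaboratively represent each sample over all other samples and
    return its reconstruction error with respect to each class."""
    n = X.shape[0]
    classes = np.unique(y)
    errors = np.zeros((n, len(classes)))
    for i in range(n):
        idx = np.arange(n) != i                   # leave the sample itself out
        D, labels = X[idx].T, y[idx]              # dictionary: columns are samples
        # Collaborative representation = ridge-regularised least squares.
        alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ X[i])
        for k, c in enumerate(classes):
            mask = labels == c
            errors[i, k] = np.linalg.norm(X[i] - D[:, mask] @ alpha[mask])
    # The column matching a sample's own class gives its intra-class error;
    # the remaining columns give its inter-class errors.
    return errors
```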


Author(s):  
Hsein Kew

In this paper, we propose a method to generate an audio output from spectroscopy data in order to discriminate between two classes of data based on the features of our spectral dataset. To do this, we first perform spectral pre-processing and then extract features, followed by machine learning for dimensionality reduction. The features are then mapped to the parameters of a sound synthesiser, as part of the audio processing, so as to generate audio samples from which statistical results are computed and important descriptors for the classification of the dataset are identified. To optimise the process, we compare Amplitude Modulation (AM) and Frequency Modulation (FM) synthesis, applied to two real-life datasets, to evaluate the performance of sonification as a method for discriminating data. FM synthesis provides higher subjective classification accuracy than AM synthesis. We then compare Principal Component Analysis (PCA) and Linear Discriminant Analysis as dimensionality reduction methods in order to optimise our sonification algorithm. Using FM synthesis as the sound synthesiser and PCA as the dimensionality reduction method yields mean classification accuracies of 93.81% and 88.57% for the coffee dataset and the fruit puree dataset, respectively. These results indicate that this spectroscopic analysis model provides relevant information on the spectral data and, most importantly, discriminates accurately between the two spectra, thus providing a complementary tool to supplement current methods.
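
As a rough illustration of the mapping stage, the sketch below projects spectra with PCA and maps the first two components to FM-synthesis parameters (modulation index and modulator frequency); the carrier frequency, parameter ranges and synthetic spectra are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def fm_tone(mod_index, mod_freq, carrier=440.0, sr=22050, dur=1.0):
    """Simple two-operator FM synthesis: a carrier phase-modulated by one sinusoid."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    return np.sin(2 * np.pi * carrier * t + mod_index * np.sin(2 * np.pi * mod_freq * t))

def rescale(v, lo, hi):
    """Map a feature vector linearly onto a usable synthesis-parameter range."""
    return lo + (hi - lo) * (v - v.min()) / (np.ptp(v) + 1e-12)

rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 256))           # stand-in for the pre-processed spectra
pcs = PCA(n_components=2).fit_transform(spectra)

# One audio clip per spectrum: PC1 -> modulation index, PC2 -> modulator frequency.
audio = [fm_tone(mi, mf)
         for mi, mf in zip(rescale(pcs[:, 0], 0.5, 8.0),
                           rescale(pcs[:, 1], 50.0, 400.0))]
```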


2020 ◽  
pp. 1-11
Author(s):  
Mayamin Hamid Raha ◽  
Tonmoay Deb ◽  
Mahieyin Rahmun ◽  
Tim Chen

Face recognition is one of the most widely studied image analysis applications, and dimensionality reduction is an essential requirement. The curse of dimensionality arises because, as dimensionality increases, sample density decreases exponentially. Dimensionality reduction is the process of reducing the dimensionality of the feature space by obtaining a set of principal features. The purpose of this manuscript is to present a comparative study of Principal Component Analysis and Linear Discriminant Analysis, two of the most popular appearance-based face recognition projection methods. PCA creates a low-dimensional data representation that describes as much of the data variance as possible, while LDA finds the vectors that best discriminate between classes in the underlying space. The main idea of PCA is to transform the high-dimensional input space into a feature space that exhibits the maximum variance, whereas traditional LDA obtains features by maximizing between-class differences and minimizing within-class distances.
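
The contrast between the two projections can be reproduced in a few lines; the sketch below uses scikit-learn's digits dataset as a stand-in for a face database and a nearest-neighbour classifier, both of which are assumptions rather than the manuscript's experimental setup.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA keeps the directions of maximum variance; LDA keeps the directions
# that best separate the classes. Both feed the same nearest-neighbour classifier.
for name, reducer in [("PCA (max-variance projection)", PCA(n_components=9)),
                      ("LDA (max class separation)", LinearDiscriminantAnalysis(n_components=9))]:
    pipe = make_pipeline(reducer, KNeighborsClassifier(n_neighbors=3)).fit(X_tr, y_tr)
    print(name, "accuracy:", round(pipe.score(X_te, y_te), 3))
```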

