Virtual Training Samples and CRC based Test Sample Reconstruction and Face Recognition Experiments

Author(s):  
Wei HUANG ◽  
Li-ming MIAO


Author(s):  
Wei Huang ◽  
Xiaohui Wang ◽  
Yinghui Zhu ◽  
Gengzhong Zheng

A lack of training samples always degrades the performance and robustness of face recognition. Generating virtual samples is an effective way to expand the training set: when the virtual samples can simulate the variations of facial images, including changes in illumination, pose, and expression, both the robustness and the accuracy of face recognition improve markedly. In this paper, an improved linear representation-based classification combined with virtual samples (ILRCVS) is proposed. First, we design a new objective function that simultaneously considers the information in the virtual training samples and the virtual test sample. Second, an alternating minimization algorithm is proposed to solve the resulting optimization problem. Finally, a new classification criterion that combines the virtual training and test samples is proposed. Experimental results on the Georgia Tech, FERET and Yale B face databases show that the proposed method is more robust than three state-of-the-art face recognition methods: LRC, SRC and CRC.
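For orientation, the collaborative representation-based classification (CRC) baseline that this paper compares against can be sketched as follows. This is a minimal illustration of the standard CRC decision rule only; it does not reproduce ILRCVS or its virtual-sample objective, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def crc_classify(X_train, y_train, x_test, lam=0.01):
    """Minimal CRC baseline: represent the test sample collaboratively over all
    training samples via ridge regression, then assign the class with the
    smallest regularized class-wise residual."""
    # X_train: (d, n) matrix whose columns are training samples,
    # y_train: length-n array of class labels, x_test: length-d vector.
    d, n = X_train.shape
    # Collaborative (ridge) coding: alpha = (X^T X + lam*I)^(-1) X^T x
    alpha = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n),
                            X_train.T @ x_test)
    best_class, best_score = None, np.inf
    for c in np.unique(y_train):
        idx = (y_train == c)
        residual = np.linalg.norm(x_test - X_train[:, idx] @ alpha[idx])
        # Regularized residual: reconstruction error scaled by coefficient energy.
        score = residual / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```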


Author(s):  
Shuhuan Zhao

Face recognition (FR) is a hotspot in pattern recognition and image processing because of its wide range of real-life applications. One of the most challenging problems in FR is single sample face recognition (SSFR). In this paper, we propose a novel algorithm based on nonnegative sparse representation, collaborative representation, and probabilistic graph estimation to address SSFR; the algorithm is named Nonnegative Sparse Probabilistic Estimation (NNSPE). To extract variation information from the generic training set, we first select neighbor samples from the generic training set for each sample in the gallery set, so that the generic training set is partitioned into reference subsets. To obtain a more meaningful reconstruction, the proposed method uses nonnegative sparse representation to reconstruct the training samples and, from the reconstruction coefficients, computes probabilistic label estimates for the samples of the generic training set. Then, for a given test sample, collaborative representation (CR) is used to acquire an adaptive variation subset. Finally, NNSPE classifies the test sample using the adaptive variation subset and the probabilistic label estimates. Experiments on the AR and PIE databases verify the effectiveness of the proposed method in both recognition rate and time cost.
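The nonnegative reconstruction and soft-label step can be illustrated with a short sketch. It uses SciPy's nonnegative least squares in place of the paper's nonnegative sparse solver, and the reference-subset construction is not reproduced; names such as `probabilistic_labels` are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def nonnegative_reconstruction(D, x):
    """Reconstruct sample x as a nonnegative combination of the columns of D.
    Larger coefficients indicate columns that contribute more to the
    reconstruction (used here to weight soft label estimates)."""
    coeffs, _residual = nnls(D, x)   # min ||D c - x||_2  s.t. c >= 0
    return coeffs

def probabilistic_labels(D_gallery, gallery_labels, x, n_classes):
    """Soft (probabilistic) label estimate for a generic-set sample x,
    accumulated from the labels of the gallery columns it reconstructs from."""
    c = nonnegative_reconstruction(D_gallery, x)
    probs = np.zeros(n_classes)
    for coef, lab in zip(c, gallery_labels):
        probs[lab] += coef
    total = probs.sum()
    return probs / total if total > 0 else probs
```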


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Minna Qiu ◽  
Jian Zhang ◽  
Jiayan Yang ◽  
Liying Ye

Face recognition has become a very active field of biometrics. Different pictures of the same face may exhibit various changes in expression, pose, and illumination, yet a face recognition system usually suffers from the problem that insufficient training samples cannot convey these possible changes effectively, mainly because the system has only limited storage space and limited time to capture training samples. Much of the previous literature has ignored this problem. In this paper, we address the insufficient-training-samples problem by fusing two kinds of virtual samples with the original samples to perform small-sample face recognition. The two kinds of virtual samples used are mirror faces and symmetrical faces. First, we transform the original face image to obtain mirror faces and symmetrical faces. Second, we fuse these two kinds of virtual samples to obtain matching scores between the test sample and each class. Finally, we integrate the matching scores to get the final classification result. We compare the proposed method with single-virtual-sample augmentation methods and with the original representation-based classification. Experiments on various face databases show that the proposed scheme achieves the best accuracy among the representation-based classification methods.
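The two kinds of virtual samples can be generated with simple image flips, as in the hedged sketch below. The fusion weight and the exact definition of the matching scores are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def mirror_face(img):
    """Horizontally flipped copy of a face image (H x W array)."""
    return img[:, ::-1]

def symmetrical_faces(img):
    """Two symmetrical virtual faces: the left half joined with its mirror,
    and the right half joined with its mirror."""
    h, w = img.shape
    half = w // 2
    left = img[:, :half]
    right = img[:, w - half:]
    left_sym = np.hstack([left, left[:, ::-1]])
    right_sym = np.hstack([right[:, ::-1], right])
    return left_sym, right_sym

def fuse_scores(score_mirror, score_symmetric, weight=0.5):
    """Weighted fusion of per-class matching scores from the two virtual-sample
    schemes; the test sample is then assigned to the class with the best fused score."""
    return weight * score_mirror + (1.0 - weight) * score_symmetric
```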


2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Rong Wang

In real-world applications, face images vary with illumination, facial expression, and pose, so more training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of available training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples: we generate mirror faces from the original training samples and put both kinds of samples into a new training set. Face recognition experiments show that our method achieves high classification accuracy.
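A minimal sketch of an MSE classifier trained on a mirror-augmented set is given below, assuming a ridge-regularized least-squares fit to one-hot class targets; the regularization parameter and helper names are illustrative, not the paper's.

```python
import numpy as np

def augment_with_mirrors(images, labels):
    """Append horizontally flipped copies of the training images as virtual samples.
    images: list of 2-D arrays, labels: integer array of the same length."""
    mirrored = [img[:, ::-1] for img in images]
    return images + mirrored, np.concatenate([labels, labels])

def train_msec(X, labels, n_classes, lam=0.01):
    """Minimum squared error classifier: fit a linear map from samples to
    one-hot class targets by ridge-regularized least squares.
    X: (n, d) matrix with one vectorized face per row."""
    Y = np.eye(n_classes)[labels]                                  # one-hot targets, (n, k)
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W                                                       # (d, k)

def classify_msec(W, x):
    """Assign the class whose target the linear map predicts most strongly."""
    return int(np.argmax(x @ W))
```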


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Zhengming Li ◽  
Qi Zhu ◽  
Binglei Xie ◽  
Jian Cao ◽  
Jin Zhang

We propose a new collaborative neighbor representation algorithm for face recognition based on a revised regularized reconstruction error (RRRE), called the two-phase collaborative neighbor representation algorithm (TCNR). Specifically, the RRRE divides the l2-norm of the reconstruction error of each class by a linear combination of the l2-norms of the reconstruction coefficients of that class, which increases the discriminative information available for classification. The algorithm works as follows: in the first phase, the test sample is represented as a linear combination of all the training samples, with neighbor information incorporated into the objective function; in the second phase, we use the k classes selected in the first phase to represent the test sample and calculate the collaborative neighbor representation coefficients. TCNR not only preserves the locality and similarity information of sparse coding but also eliminates the side effect, on the classification decision, of classes that are far from the test sample. Moreover, the rationale behind TCNR and an alternative scheme are given. Experimental results show that the TCNR algorithm achieves better performance than seven previous algorithms.
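A simplified two-phase scheme in the same spirit is sketched below, assuming plain ridge coding and a regularized class residual; it does not reproduce TCNR's neighbor-weighted objective or its exact RRRE, and the parameter names are illustrative.

```python
import numpy as np

def two_phase_classify(X, labels, x, k_classes=5, lam=0.01):
    """Simplified two-phase representation classifier.
    Phase 1: represent x over all training samples (ridge coding) and keep the
    k classes with the smallest regularized class residuals.
    Phase 2: re-represent x over only those classes and pick the best one.
    X: (d, n) columns are training samples; labels: length-n integer array."""
    def ridge_code(A, b):
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

    def class_scores(A, lab, code):
        scores = {}
        for c in np.unique(lab):
            idx = (lab == c)
            err = np.linalg.norm(x - A[:, idx] @ code[idx])
            scores[c] = err / (np.linalg.norm(code[idx]) + 1e-12)
        return scores

    # Phase 1: coarse screening over all classes.
    alpha = ridge_code(X, x)
    kept = sorted(class_scores(X, labels, alpha), key=class_scores(X, labels, alpha).get)[:k_classes]

    # Phase 2: refined representation over the kept classes only.
    mask = np.isin(labels, kept)
    beta = ridge_code(X[:, mask], x)
    final = class_scores(X[:, mask], labels[mask], beta)
    return min(final, key=final.get)
```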


2016 ◽  
Vol 25 (04) ◽  
pp. 1650017 ◽  
Author(s):  
Zhengming Li

Dictionary learning (DL) algorithms have shown very good performance in face recognition. However, conventional DL algorithms exploit only the training samples to obtain the dictionary and completely neglect the test sample during learning. If DL is instead coupled with the linear representation of the test sample, it may classify test samples better than conventional DL algorithms do. In this paper, we propose a test sample oriented dictionary learning (TSODL) algorithm for face recognition. We combine the linear representation of a test sample (formulated under three different norm constraints) with the basic DL model to learn a dedicated dictionary for each test sample; the dictionary and the representation coefficients of the test sample are thus obtained simultaneously by minimizing a single objective function. To make the learning procedure more efficient, we initialize the dictionary for a new test sample by selecting from the dictionaries of previous test samples. Experimental results on three public face databases show that, with a linear classifier, the TSODL algorithm classifies test samples more accurately than several state-of-the-art DL and sparse coding algorithms.
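A heavily simplified alternating scheme in this spirit is sketched below: it jointly fits a dictionary to the training matrix and to a single test sample, but it substitutes plain l2 (ridge) coding for the paper's norm constraints, and every name and update rule here is an illustrative assumption rather than the TSODL formulation.

```python
import numpy as np

def tsodl_sketch(X, y, n_atoms=40, lam=0.1, mu=1.0, n_iter=20, seed=0):
    """Toy alternating minimization: learn a dictionary D that reconstructs both
    the training matrix X (d x n, columns are samples) and the test sample y,
    while coding both with ridge regression. Returns D, the training codes A,
    and the test-sample code a."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)          # unit-norm atoms
    for _ in range(n_iter):
        # Coding step: ridge regression of X and y onto the current dictionary.
        G = D.T @ D + lam * np.eye(n_atoms)
        A = np.linalg.solve(G, D.T @ X)                    # codes for training samples
        a = np.linalg.solve(G, D.T @ y)                    # code for the test sample
        # Dictionary step: least squares over the training and (weighted) test terms.
        B = np.hstack([X, mu * y[:, None]])
        C = np.hstack([A, mu * a[:, None]])
        D = B @ C.T @ np.linalg.inv(C @ C.T + 1e-8 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, A, a
```

A test sample could then be classified from its code, for example by class-wise reconstruction residuals as in SRC/CRC-style classifiers or by the linear classifier the abstract mentions.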


2013 ◽  
Vol 2013 ◽  
pp. 1-6
Author(s):  
Zun-xiong Liu ◽  
Zhi-qiang Huang ◽  
Heng Zhang

The two-step test sample representation method is proposed for face recognition. It first identifies k "representative" samples for the test sample from the training samples of each class, and then produces a weighted sum of all the "representative" samples that closely approximates the test sample. The method assigns the test sample to the class whose training samples have the smallest deviation from the test sample. Because the method reduces the side effect, on the recognition decision, of training samples that are very "far" from the test sample, it achieves high recognition rates.
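The two steps described above can be sketched as follows, assuming Euclidean distance for selecting the per-class "representative" samples and a ridge-regularized weighted sum; these choices and the function name are assumptions for illustration.

```python
import numpy as np

def two_step_classify(X, labels, x, k=3, lam=0.01):
    """Illustrative two-step scheme: keep the k training samples per class that
    are closest to the test sample, represent the test sample as a weighted sum
    of all kept samples, and assign the class whose kept samples give the
    smallest reconstruction deviation.
    X: (d, n) columns are training samples; labels: length-n integer array."""
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        dists = np.linalg.norm(X[:, idx] - x[:, None], axis=0)
        keep.extend(idx[np.argsort(dists)[:k]])            # k "representative" samples
    keep = np.array(keep)
    Xk, labk = X[:, keep], labels[keep]
    # Step 2: weighted sum (ridge-regularized least squares) over the kept samples.
    w = np.linalg.solve(Xk.T @ Xk + lam * np.eye(Xk.shape[1]), Xk.T @ x)
    # Class deviation: residual of x from each class's contribution.
    deviations = {c: np.linalg.norm(x - Xk[:, labk == c] @ w[labk == c])
                  for c in np.unique(labk)}
    return min(deviations, key=deviations.get)
```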

