Subspace Learning Based on Data Distribution for Face Recognition

Author(s):  
Yong Ye

2006 ◽  
Vol 03 (01) ◽  
pp. 45-51 ◽  
Author(s):  
Yanwei Pang ◽  
Zhengkai Liu ◽  
Yuefang Sun

Subspace-based face recognition methods aim to find a low-dimensional subspace of face appearance embedded in a high-dimensional image space. Individual methods differ in their motivations and objective functions. The objective function of the proposed method is formed by combining the ideas of linear Laplacian eigenmaps and linear discriminant analysis, and the actual computation of the subspace reduces to a maximum eigenvalue problem. The major advantage of the proposed method over traditional methods is that it exploits both the local manifold structure and the discriminant information of the training data. Experimental results on the AR face database demonstrate the effectiveness of the proposed method.
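
The abstract states only that the objective combines linear Laplacian eigenmaps with linear discriminant analysis and that the solution reduces to an eigenvalue problem; it does not give the exact formulation. The following is a minimal sketch of that general recipe, assuming a heat-kernel affinity graph, a trade-off parameter alpha, and a generalized symmetric eigensolver; these are illustrative assumptions, not the authors' actual construction.

```python
import numpy as np
from scipy.linalg import eigh

def combined_subspace(X, y, n_components=30, alpha=0.5, t=1.0):
    """Sketch: find projection directions that favour an LDA-style
    between-class scatter while preserving local (Laplacian) structure.

    X : (n_samples, n_features) training data
    y : (n_samples,) class labels
    """
    n, d = X.shape

    # LDA scatter matrices
    mean_all = X.mean(axis=0)
    Sb = np.zeros((d, d))   # between-class scatter
    Sw = np.zeros((d, d))   # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)

    # Graph Laplacian over the training data (heat-kernel affinity, assumed)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / t)
    L = np.diag(W.sum(axis=1)) - W
    S_loc = X.T @ L @ X     # penalizes projections that break local structure

    # Combined criterion: maximize w' Sb w against w' (Sw + alpha * S_loc) w
    B = Sw + alpha * S_loc + 1e-6 * np.eye(d)   # regularized for stability
    evals, evecs = eigh(Sb, B)                  # generalized eigenvalue problem
    order = np.argsort(evals)[::-1]             # keep the largest eigenvalues
    return evecs[:, order[:n_components]]
```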


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 13050-13056 ◽  
Author(s):  
Mengqing Mei ◽  
Jianzhong Huang ◽  
Weiwei Xiong

2008 ◽  
Vol 41 (12) ◽  
pp. 3822-3833 ◽  
Author(s):  
Annalisa Franco ◽  
Dario Maio ◽  
Davide Maltoni

2013 ◽  
Vol 42 ◽  
pp. 28-43 ◽  
Author(s):  
Yu Chen ◽  
Wei-Shi Zheng ◽  
Xiao-Hong Xu ◽  
Jian-Huang Lai

Author(s):  
Zhaokui Li ◽  
Lixin Ding ◽  
Yan Wang ◽  
Jinrong He

This paper proposes a simple yet very powerful local face representation called Gradient Orientations and Euler Mapping (GOEM). GOEM consists of two stages: gradient orientations and Euler mapping. In the first stage, we calculate the gradient orientations around a central pixel and obtain the corresponding orientation representations by applying convolution operators. These representations exhibit spatial locality and orientation selectivity. To capture different spatial localities and orientations, we concatenate all of these representations into a single orientation feature vector. In the second stage, we define an explicit Euler mapping which maps the concatenated orientation space into a complex space. For a mapped image, the imaginary part and the real part characterize the high-frequency and low-frequency components, respectively. To capture both frequency bands, we concatenate the imaginary and real parts into a single mapping feature vector. For a given image, the two stages yield a GOEM image and an augmented feature vector that resides in a space of very high dimensionality. To derive a low-dimensional feature vector, we present a class of GOEM-based kernel subspace learning methods for face recognition. These methods, which are robust to changes in occlusion and illumination, apply a kernel subspace learning model with the explicit Euler mapping to the augmented feature vector derived from the GOEM representation of face images. Experimental results show that our methods significantly outperform popular methods and achieve state-of-the-art performance on difficult problems such as illumination- and occlusion-robust face recognition.
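
The abstract describes the two GOEM stages only at a high level. Below is a minimal sketch of how one grayscale image might be processed, assuming a single gradient scale, Sobel-style convolution kernels, and an Euler mapping of the form z = exp(i*alpha*pi*theta)/sqrt(2) with alpha = 1; none of these choices comes from the paper itself, and the subsequent kernel subspace learning stage is omitted.

```python
import numpy as np
from scipy.ndimage import convolve

def goem_features(image, alpha=1.0):
    """Sketch of a GOEM-style feature vector for one grayscale image.

    Stage 1: gradient orientations via convolution.
    Stage 2: explicit Euler mapping into a complex space; the real and
             imaginary parts are concatenated into one real-valued vector.
    """
    img = image.astype(np.float64)

    # Stage 1: gradient orientations (Sobel kernels assumed)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    gx = convolve(img, kx, mode="nearest")
    gy = convolve(img, ky, mode="nearest")
    theta = np.arctan2(gy, gx)               # orientation at each pixel

    # Stage 2: Euler mapping of the (flattened) orientation field
    z = np.exp(1j * alpha * np.pi * theta.ravel()) / np.sqrt(2.0)

    # Real part ~ low-frequency component, imaginary part ~ high-frequency;
    # concatenating both gives the augmented feature vector
    return np.concatenate([z.real, z.imag])
```

In the paper this augmented vector then feeds a GOEM-based kernel subspace learning model to obtain the final low-dimensional features; that stage is not shown here.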


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Pang Ying Han ◽  
Andrew Teoh Beng Jin ◽  
Lim Heng Siong

Graph-based subspace learning is a class of dimensionality reduction techniques used in face recognition. It reveals, via a linear projection, the local manifold structure of face data that is hidden in the image space. However, real-world face data may be too complex to model reliably because of external imaging noise and the intra-class variations of face images, so the features extracted by graph-based techniques can be noisy. An appropriate weight should therefore be imposed on the data features for better discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace attributed to imaging noise. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two. Experiments on the FERET and FRGC databases show the promising performance of the proposed technique.
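
The abstract names the Eigenvector Weighting Function but does not reproduce it, so the sketch below only illustrates the overall scheme: the projection basis learned by a graph-based method such as LPP or NPE is split into three eigenvalue-ordered partitions (intra-class variation, intrinsic face, imaging noise), and the projected features are re-weighted to emphasize the intrinsic partition. The partition boundaries and weight values are placeholders, not the paper's EWF.

```python
import numpy as np

def ewf_weighted_projection(X, eigvecs, eigvals, bounds=(0.15, 0.85),
                            weights=(0.3, 1.0, 0.1)):
    """Sketch: weight LPP/NPE-projected features per eigenvector partition.

    eigvecs : (n_features, k) projection basis, columns sorted by eigvals
    eigvals : (k,) corresponding eigenvalues
    bounds  : fractional split points for the three partitions (placeholder)
    weights : weights for (intra-class, intrinsic face, noise) partitions
    """
    k = eigvals.shape[0]
    lo, hi = int(bounds[0] * k), int(bounds[1] * k)

    # Piecewise weight per eigenvector (illustrative, not the paper's EWF)
    w = np.empty(k)
    w[:lo] = weights[0]      # subspace attributed to intra-class variations
    w[lo:hi] = weights[1]    # intrinsic face subspace (emphasized)
    w[hi:] = weights[2]      # subspace attributed to imaging noise (penalized)

    Y = X @ eigvecs          # project data into the learned subspace
    return Y * w             # apply the eigenvector-wise weights
```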


2020 ◽  
Vol 9 (4) ◽  
pp. 136-142 ◽  
Author(s):  
Yu‐Feng Yu ◽  
Qiangchang Wang ◽  
Min Jiang
