ILLUMINATION INSENSITIVE FACE REPRESENTATION FOR FACE RECOGNITION BASED ON MODIFIED WEBERFACE

2013 ◽  
Vol 6 (5) ◽  
pp. 1995-2005
Author(s):  
Min Yao ◽  
Hiroshi Nagahashi
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yunjun Nam ◽  
Takayuki Sato ◽  
Go Uchida ◽  
Ekaterina Malakhova ◽  
Shimon Ullman ◽  
...  

Abstract
Humans recognize individual faces regardless of variation in the facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors originating from natural image fragments. The resulting features reproduced the preference of these neurons for particular facial views. We also found that the faces of one identity were separable from the faces of other identities in a space where each axis represents one of these features. These results suggest that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features further suggests that, with respect to view-invariant face recognition, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network comprising layers of low-level processing for local orientations and colors (V1/V2-level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT-level).


2011 ◽  
Vol 74 (5) ◽  
pp. 741-748 ◽  
Author(s):  
Hui Yan ◽  
Jian Yang ◽  
Jingyu Yang

Author(s):  
Taehwa Hong ◽  
Hagbae Kim ◽  
Hyeonjoon Moon ◽  
Yongguk Kim ◽  
Jongweon Lee ◽  
...  

Author(s):  
Daniel Riccio ◽  
Andrea Casanova ◽  
Gianni Fenu

Face recognition in real-world applications is a very difficult task because of image misalignments, pose and illumination variations, and occlusions. Many researchers in this field have investigated both face representation and classification techniques able to cope with these drawbacks. However, none of them is free from limitations. Early algorithms were generally holistic, in the sense that they treat the face as a whole. Recently, challenging benchmarks have demonstrated that such methods are not adequate for unconstrained environments, despite their good performance under more controlled conditions. Researchers' attention is therefore turning to local features, which have been shown to be more robust to a large set of non-monotonic distortions. Nevertheless, although local operators partially overcome some drawbacks, they still raise new questions (e.g., which criteria should be used to select the most representative features?). This is why hybrid approaches, which integrate complementary information from both local and global features, are showing high potential in terms of recognition accuracy in uncontrolled settings. This chapter explores local, global, and hybrid approaches.


Author(s):  
Josef Kittler ◽  
Paul Koppen ◽  
Philipp Kopp ◽  
Patrik Huber ◽  
Matthias Ratsch

Author(s):  
ZHAOKUI LI ◽  
LIXIN DING ◽  
YAN WANG ◽  
JINRONG HE

This paper proposes a simple yet very powerful local face representation called Gradient Orientations and Euler Mapping (GOEM). GOEM consists of two stages: gradient orientations and Euler mapping. In the first stage, we calculate the gradient orientations around a central pixel and obtain the corresponding orientation representations by applying a convolution operator. These representations exhibit spatial locality and orientation selectivity. To encompass different spatial localities and orientations, we concatenate all of them into a single orientation feature vector. In the second stage, we define an explicit Euler mapping that maps the concatenated orientation space into a complex space. For a mapped image, we find that the imaginary part and the real part characterize the high-frequency and low-frequency components, respectively. To encompass different frequencies, we concatenate the imaginary and real parts into a single mapping feature vector. For a given image, these two stages yield a GOEM image and an augmented feature vector that resides in a space of very high dimensionality. To derive a low-dimensional feature vector, we present a class of GOEM-based kernel subspace learning methods for face recognition. These methods, which are robust to changes in occlusion and illumination, apply a kernel subspace learning model with explicit Euler mapping to the augmented feature vector derived from the GOEM representation of face images. Experimental results show that our methods significantly outperform popular methods and achieve state-of-the-art performance on difficult problems such as illumination- and occlusion-robust face recognition.
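The two GOEM stages described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the abstract's convolution over multiple orientations is collapsed into a single per-pixel orientation map (via finite differences), and the mapping scale `alpha` is a placeholder parameter.

```python
import numpy as np

def goem_features(img, alpha=1.9):
    """Sketch of GOEM on a grayscale image (2-D float array)."""
    # Stage 1: per-pixel gradient orientations (finite differences
    # stand in for the paper's orientation convolution bank).
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx)          # orientation in [-pi, pi]
    # Stage 2: explicit Euler mapping into a complex space.
    z = np.exp(1j * alpha * theta) / np.sqrt(2.0)
    # Concatenate real (low-frequency) and imaginary (high-frequency)
    # parts into the augmented feature vector.
    return np.concatenate([z.real.ravel(), z.imag.ravel()])

img = np.random.rand(32, 32)
feat = goem_features(img)       # length 2 * 32 * 32 = 2048
```

Note that every mapped pixel has constant modulus 1/sqrt(2), so the mapping encodes pure orientation phase; a kernel subspace method would then be applied to `feat` to reduce its dimensionality.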


2012 ◽  
Vol 12 (02) ◽  
pp. 1250011
Author(s):  
GANG XU ◽  
HUCHUAN LU ◽  
ZUNYI WANG

Robust face recognition is a challenging problem due to facial appearance variations caused by illumination, pose, expression, aging, partial occlusions and other changes. This paper proposes a novel face recognition approach in which face images are represented by a Gabor pixel-pattern-based texture feature (GPPBTF) and local binary patterns (LBP), and null space-based kernel Fisher discriminant analysis (NKFDA) is applied to the two features independently to obtain two recognition results, which are then combined for the final identification. To obtain the GPPBTF, we first transform an image into Gabor magnitude maps of different orientations and scales, and then use the pixel-pattern-based texture feature to extract texture features from the Gabor maps. To further improve classification performance, this paper also proposes a combination of multiple NKFDA classifiers. Extensive experiments on the FERET face database demonstrate that the proposed method not only greatly reduces the dimensionality of the face representation but also achieves more robust results and higher recognition accuracy.
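The front end of this pipeline (Gabor magnitude maps followed by a texture code) can be sketched as follows. This is a hedged illustration, not the paper's GPPBTF: a plain 8-neighbor LBP stands in for the pixel-pattern-based texture feature, the Gabor parameters are assumed, and a single scale with four orientations is used.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Complex Gabor kernel at one orientation/scale (parameters assumed)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.exp(2j * np.pi * xr / lam)

def conv_same(img, k):
    """Circular FFT convolution -- adequate for a sketch."""
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape))

def lbp_codes(img):
    """Basic 8-neighbor LBP codes (0..255) for interior pixels."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(int) << bit
    return codes

def gabor_lbp_features(img, n_orient=4):
    """Gabor magnitude maps -> texture codes -> concatenated histograms."""
    feats = []
    for i in range(n_orient):
        mag = np.abs(conv_same(img, gabor_kernel(theta=i * np.pi / n_orient)))
        codes = lbp_codes(mag)
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        feats.append(hist / codes.size)   # normalized histogram per map
    return np.concatenate(feats)

img = np.random.rand(64, 64)
feat = gabor_lbp_features(img)   # 4 orientations x 256 bins = 1024 dims
```

In the paper's full pipeline, features of this kind and plain LBP features would each be fed to an NKFDA classifier and the two decisions fused.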


Author(s):  
Xiang Wu ◽  
Huaibo Huang ◽  
Vishal M. Patel ◽  
Ran He ◽  
Zhenan Sun

Visible (VIS) to near-infrared (NIR) face matching is a challenging problem due to the significant discrepancy between the two domains and a lack of sufficient data for training cross-modal matching algorithms. Existing approaches attempt to tackle this problem by synthesizing visible faces from NIR faces, extracting domain-invariant features from the two modalities, or projecting heterogeneous data onto a common latent space for cross-modal matching. In this paper, we take a different approach in which we make use of a Disentangled Variational Representation (DVR) for cross-modal matching. First, we model a face representation by its intrinsic identity information and its within-person variations. Exploring the disentangled latent variable space, a variational lower bound is employed to optimize the approximate posterior for NIR and VIS representations. Second, to obtain a more compact and discriminative disentangled latent space, we impose a minimization of the identity information for the same subject and a relaxed correlation alignment constraint between the NIR and VIS modality variations. An alternating optimization scheme is proposed for the disentangled variational representation part and the heterogeneous face recognition network part. The mutual promotion between these two parts effectively reduces the NIR-VIS domain discrepancy and alleviates over-fitting. Extensive experiments on three challenging NIR-VIS heterogeneous face recognition databases demonstrate that the proposed method achieves significant improvements over state-of-the-art methods.
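The three loss ingredients named in this abstract can be sketched numerically: a Gaussian KL term for the variational lower bound, an identity-consistency penalty between modalities, and a CORAL-style correlation alignment between NIR and VIS variation statistics. Function names, the placeholder weights, and the exact form of each penalty are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ) per sample."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def identity_consistency(id_nir, id_vis):
    """Pull identity latents of the same subject together across modalities."""
    return float(np.mean(np.sum((id_nir - id_vis) ** 2, axis=1)))

def correlation_alignment(var_nir, var_vis):
    """Relaxed alignment of second-order statistics of the variation latents."""
    d = var_nir.shape[1]
    gap = np.cov(var_nir, rowvar=False) - np.cov(var_vis, rowvar=False)
    return float(np.sum(gap**2) / (4 * d * d))

rng = np.random.default_rng(0)
mu, logvar = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
id_nir, id_vis = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
var_nir, var_vis = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))

# Illustrative combined objective; the weights are placeholders.
loss = (gaussian_kl(mu, logvar).mean()
        + 1.0 * identity_consistency(id_nir, id_vis)
        + 1.0 * correlation_alignment(var_nir, var_vis))
```

In the actual method these terms would be optimized alternately with the heterogeneous face recognition network rather than computed on random arrays.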

