PF-cpGAN: Profile to Frontal Coupled GAN for Face Recognition in the Wild

Author(s): Fariborz Taherkhani, Veeru Talreja, Jeremy Dawson, Matthew C. Valenti, Nasser M. Nasrabadi
2019, Vol 28 (4), pp. 2051-2062
Author(s): Shiming Ge, Shengwei Zhao, Chenyu Li, Jia Li

Author(s): Lavika Goel, Lavanya B., Pallavi Panchal

This chapter applies a novel hybridized evolutionary algorithm to face recognition. Biogeography-based optimization (BBO) involves an element of randomness that can improve the feasibility of a solution but can equally degrade it. To overcome this drawback, the chapter hybridizes BBO with the gravitational search algorithm (GSA), another nature-inspired algorithm, replacing that randomness with problem knowledge. In particular, BBO's migration procedure, which transfers suitability index variables (SIVs) between solutions, is performed only when the migration would improve the receiving solution. The BBO-GSA algorithm is applied to face recognition on the LFW (Labelled Faces in the Wild) and ORL datasets to test its efficiency. Experimental results show that the proposed BBO-GSA algorithm outperforms, or is on par with, nature-inspired techniques previously applied to face recognition, achieving recognition rates of 80% on the LFW dataset and 99.75% on the ORL dataset.
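The knowledge-guided migration rule described above (accept an SIV transfer only if it improves the receiver) can be sketched as follows. This is a minimal toy illustration of that one idea, not the authors' BBO-GSA implementation; the function name, population layout, and the half/half split between donor and receiver habitats are all assumptions for the sketch.

```python
import random

def guarded_migration(population, fitness_fn, seed=0):
    """One BBO-style migration pass in which an SIV (a single solution
    feature) is copied from a fitter habitat to a less-fit one only if
    the copy actually improves the receiver -- the chapter's
    knowledge-guided replacement for random migration."""
    rng = random.Random(seed)
    pop = [list(h) for h in population]
    # rank habitats by fitness, best first
    ranked = sorted(range(len(pop)), key=lambda i: fitness_fn(pop[i]), reverse=True)
    for lo in ranked[len(pop) // 2:]:             # low-fitness habitats immigrate
        hi = rng.choice(ranked[: len(pop) // 2])  # donor drawn from the fitter half
        k = rng.randrange(len(pop[lo]))           # which SIV to migrate
        trial = list(pop[lo])
        trial[k] = pop[hi][k]
        if fitness_fn(trial) > fitness_fn(pop[lo]):  # accept only improvements
            pop[lo] = trial
    return pop
```

Because a migration is accepted only when it raises the receiver's fitness, no habitat can get worse in a pass, which is exactly the guarantee that plain randomized BBO migration lacks.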


Author(s): Jian Zhao, Yu Cheng, Yi Cheng, Yang Yang, Fang Zhao, et al.

Despite remarkable progress in face recognition technologies, reliably recognizing faces across ages remains a major challenge. The appearance of a human face changes substantially over time, resulting in significant intra-class variations. As opposed to current techniques for age-invariant face recognition, which either directly extract age-invariant features for recognition or first synthesize a face matching the target age before feature extraction, we argue that it is more desirable to perform both tasks jointly so that they can leverage each other. To this end, we propose a deep Age-Invariant Model (AIM) for face recognition in the wild with three distinct novelties. First, AIM presents a novel unified deep architecture that jointly performs cross-age face synthesis and recognition in a mutually boosting way. Second, AIM achieves continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties, avoiding the need for paired training data and for the true age of test samples. Third, we develop novel and effective training strategies for end-to-end learning of the whole deep architecture, which yields powerful age-invariant face representations explicitly disentangled from age variation. Extensive experiments on several cross-age datasets (MORPH, CACD, and FG-NET) demonstrate the superiority of the proposed AIM model over the state of the art. Benchmarking our model on IJB-C, one of the most popular unconstrained face recognition datasets, further verifies the promising generalizability of AIM in recognizing faces in the wild.
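The "mutual boosting" idea above amounts to training the shared representation under a single objective that sums an identity-recognition loss and a cross-age synthesis loss, so gradients from both tasks shape the same features. The sketch below illustrates only that combination on toy scalar data; the function name, the loss choices (softmax cross-entropy plus L1 reconstruction), and the weight `lam` are assumptions for illustration, not AIM's actual losses or weights.

```python
import math

def joint_aim_style_loss(id_logits, id_label, synth, target, lam=0.5):
    """Hedged sketch of a joint objective: identity classification loss
    plus an age-synthesis reconstruction loss on the same forward pass.
    `lam` (illustrative) trades off the two tasks."""
    # numerically stable softmax cross-entropy on the identity branch
    m = max(id_logits)
    log_z = m + math.log(sum(math.exp(v - m) for v in id_logits))
    ce = log_z - id_logits[id_label]
    # mean L1 reconstruction error on the synthesis branch
    l1 = sum(abs(s - t) for s, t in zip(synth, target)) / len(synth)
    return ce + lam * l1
```

Minimizing this single scalar with respect to a shared encoder is what lets the synthesis task regularize the recognition features and vice versa.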


2020, Vol 30 (10), pp. 3387-3397
Author(s): Shiming Ge, Chenyu Li, Shengwei Zhao, Dan Zeng

Sensors, 2019, Vol 19 (1), pp. 146
Author(s): Vittorio Cuculo, Alessandro D’Amelio, Giuliano Grossi, Raffaella Lanzarotti, Jianyi Lin

Face recognition using a single reference image per subject is challenging, above all when the gallery of subjects is large. The problem becomes considerably harder when the images are acquired in unconstrained conditions. In this paper we address the challenging Single Sample Per Person (SSPP) problem on large datasets of images acquired in the wild, which may therefore feature illumination, pose, facial expression, partial occlusion, and low-resolution hurdles. The proposed technique alternates a sparse dictionary learning step based on the method of optimal directions (MOD) with the iterative ℓ0-norm minimization algorithm k-LiMapS. It operates on robust deep-learned features, provided the image variability is extended by standard augmentation techniques. Experiments show the effectiveness of our method against the difficulties introduced above: first, we report extensive experiments on the unconstrained LFW dataset with galleries of up to 1680 subjects; second, we present experiments on very low-resolution test images down to 8 × 8 pixels; third, tests on the AR dataset are analyzed against specific disguises such as partial occlusions, facial expressions, and illumination problems. In all three scenarios our method outperforms state-of-the-art approaches adopting similar configurations.
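The alternation described above (a sparse coding step followed by a closed-form MOD dictionary update) can be illustrated on a toy scale. In this sketch a 1-sparse matching-pursuit step stands in for k-LiMapS, which makes the MOD least-squares update solvable per atom without any linear-algebra library; the function names and the two-atom test setup are assumptions for illustration, not the paper's pipeline.

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u)) or 1.0
def scale(u, s): return [a * s for a in u]

def learn_dictionary(signals, atoms, iters=10):
    """Toy MOD-style alternation: (1) sparse coding -- here each signal
    picks its single best-correlated atom (a 1-sparse stand-in for
    k-LiMapS); (2) MOD update -- each atom is refit by least squares to
    the signals that selected it, then renormalized."""
    atoms = [scale(a, 1.0 / norm(a)) for a in atoms]
    for _ in range(iters):
        # sparse coding step: signal -> (best atom index, coefficient)
        codes = []
        for x in signals:
            j = max(range(len(atoms)), key=lambda k: abs(dot(x, atoms[k])))
            codes.append((j, dot(x, atoms[j])))
        # MOD step: argmin_d sum ||x - c*d||^2  =>  d = sum(c*x) / sum(c^2)
        for j in range(len(atoms)):
            num = [0.0] * len(signals[0])
            den = 0.0
            for x, (k, c) in zip(signals, codes):
                if k == j and c != 0.0:
                    num = [n + c * xi for n, xi in zip(num, x)]
                    den += c * c
            if den > 0:
                atoms[j] = scale(scale(num, 1.0 / den), 1.0 / norm(scale(num, 1.0 / den)))
    return atoms
```

With signals clustered around the two coordinate axes and slightly rotated initial atoms, the alternation pulls each atom onto its cluster's dominant direction, which is the behavior the full MOD/k-LiMapS loop exploits at scale.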

