Disentangled Representation Learning GAN for Pose-Invariant Face Recognition

Author(s): Luan Tran, Xi Yin, Xiaoming Liu

Author(s): Jian Zhao, Yu Cheng, Yi Cheng, Yang Yang, Fang Zhao, ...

Despite the remarkable progress in face recognition technologies, reliably recognizing faces across ages remains a major challenge. The appearance of a human face changes substantially over time, resulting in significant intra-class variations. As opposed to current techniques for age-invariant face recognition, which either directly extract age-invariant features for recognition or first synthesize a face that matches the target age before feature extraction, we argue that it is more desirable to perform both tasks jointly so that they can leverage each other. To this end, we propose a deep Age-Invariant Model (AIM) for face recognition in the wild with three distinct novelties. First, AIM presents a novel unified deep architecture that jointly performs cross-age face synthesis and recognition in a mutually boosting way. Second, AIM achieves continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties, avoiding the requirement of paired training data and of knowing the true age of testing samples. Third, we develop effective and novel training strategies for end-to-end learning of the whole deep architecture, which generates powerful age-invariant face representations explicitly disentangled from the age variation. Extensive experiments on several cross-age datasets (MORPH, CACD, and FG-NET) demonstrate the superiority of the proposed AIM model over the state of the art. Benchmarking our model on one of the most popular unconstrained face recognition datasets, IJB-C, further verifies the promising generalizability of AIM in recognizing faces in the wild.
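
The joint synthesis-and-recognition design described in this abstract can be made concrete with a small sketch. The PyTorch code below is an illustrative assumption only, not the authors' architecture: it shows an encoder whose identity embedding feeds both a recognition head and an age-conditioned decoder, with an adversarial age classifier discouraging age information in the embedding. All module names, sizes, the age grouping, and the loss weights are hypothetical.

```python
# Hypothetical sketch of joint age-invariant recognition + cross-age synthesis
# with a disentangled identity embedding. Sizes, names, and weights are assumed.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a face image to an (intended) age-invariant identity embedding."""
    def __init__(self, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Synthesizes a face from an identity embedding plus a target-age code."""
    def __init__(self, emb_dim=256, age_dim=10, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(emb_dim + age_dim, 3 * img_size * img_size), nn.Tanh()
        )

    def forward(self, z_id, age_code):
        out = self.net(torch.cat([z_id, age_code], dim=1))
        return out.view(-1, 3, self.img_size, self.img_size)

encoder, decoder = Encoder(), Decoder()
id_head = nn.Linear(256, 1000)   # identity classifier (1000 subjects, assumed)
age_head = nn.Linear(256, 10)    # adversarial age classifier (10 age groups, assumed)

opt_model = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()) + list(id_head.parameters()),
    lr=1e-4,
)
opt_age = torch.optim.Adam(age_head.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
x = torch.randn(8, 3, 32, 32)             # input faces
id_labels = torch.randint(0, 1000, (8,))  # identity labels
age_labels = torch.randint(0, 10, (8,))   # age-group labels

z = encoder(x)

# (1) Train the age classifier to predict age from the *detached* embedding.
loss_age_clf = ce(age_head(z.detach()), age_labels)
opt_age.zero_grad()
loss_age_clf.backward()
opt_age.step()

# (2) Train encoder/decoder/identity head:
#     - the embedding must predict identity (recognition branch);
#     - the decoder must reconstruct the face at its own age code
#       (synthesis branch, no paired data required);
#     - the negative age term pushes the embedding toward age invariance
#       by making the age classifier fail (a simple adversarial proxy).
recon = decoder(z, torch.eye(10)[age_labels])
loss_id = ce(id_head(z), id_labels)
loss_rec = nn.functional.l1_loss(recon, torch.tanh(x))
loss_model = loss_id + loss_rec - 0.1 * ce(age_head(z), age_labels)
opt_model.zero_grad()
loss_model.backward()
opt_model.step()

# At test time, decoder(z, any_age_code) would render the same identity at a
# chosen age, while z itself serves as the age-invariant recognition feature.
```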


2021, Vol. 11 (24), pp. 11991
Author(s): Mayank Kejriwal

Despite recent Artificial Intelligence (AI) advances in narrow task areas such as face recognition and natural language processing, the emergence of general machine intelligence continues to be elusive. Such an AI must overcome several challenges, one of which is the ability to be aware of, and appropriately handle, context. In this article, we argue that context needs to be rigorously treated as a first-class citizen in AI research and discourse for achieving true general machine intelligence. Unfortunately, context is only loosely defined, if at all, within AI research. This article aims to synthesize the myriad pragmatic ways in which context has been used, or implicitly assumed, as a core concept in multiple AI sub-areas, such as representation learning and commonsense reasoning. While not all definitions are equivalent, we systematically identify a set of seven features associated with context in these sub-areas. We argue that such features are necessary for a sufficiently rich theory of context, as applicable to practical domains and applications in AI.

