Exploring aligned latent representations for cross-domain face recognition
2020 ◽ Vol 28 (10) ◽ pp. 2311-2322
Author(s): Yue MING, Shao-Ying WANG, Chun-Xiao FAN, Jiang-Wan ZHOU
2020
Author(s): Geoffrey Schau, Erik Burlingame, Young Hwan Chang

Abstract: Deep learning systems have emerged as powerful mechanisms for learning domain translation models. However, in many cases, complete information in one domain is assumed to be necessary for sufficient cross-domain prediction. In this work, we motivate a formal justification for domain-specific information separation in a simple linear case and illustrate that a self-supervised approach enables domain translation between data domains while filtering out domain-specific data features. We introduce a novel approach to identify domain-specific information from sets of unpaired measurements in complementary data domains by considering a deep learning cross-domain autoencoder architecture designed to learn shared latent representations of data while enabling domain translation. We introduce an orthogonal gate block designed to enforce orthogonality of input feature sets by explicitly removing non-sharable information specific to each domain, and illustrate separability of domain-specific information on a toy dataset.
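As an illustration of the general idea described in this abstract (a shared latent code for two domains, with domain-specific features kept out of it), the following PyTorch sketch is a minimal toy under assumed dimensions and module names, not the authors' architecture; the orthogonal gate block is approximated here by a simple soft orthogonality penalty between the shared and private parts of each encoding.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossDomainAutoencoder(nn.Module):
    def __init__(self, dim_a, dim_b, shared_dim=32, private_dim=8):
        super().__init__()
        # Each domain encoder produces a shared code plus a private (domain-specific) code.
        self.enc_a = nn.Linear(dim_a, shared_dim + private_dim)
        self.enc_b = nn.Linear(dim_b, shared_dim + private_dim)
        # Decoders reconstruct each domain from the shared code alone.
        self.dec_a = nn.Linear(shared_dim, dim_a)
        self.dec_b = nn.Linear(shared_dim, dim_b)
        self.shared_dim = shared_dim

    def split(self, z):
        # First shared_dim units are the shared code, the rest is private.
        return z[:, :self.shared_dim], z[:, self.shared_dim:]

    def forward(self, x_a, x_b):
        shared_a, private_a = self.split(self.enc_a(x_a))
        shared_b, private_b = self.split(self.enc_b(x_b))
        # Reconstruct each domain from its own shared code only.
        recon_a = self.dec_a(shared_a)
        recon_b = self.dec_b(shared_b)
        return recon_a, recon_b, (shared_a, private_a), (shared_b, private_b)

    def translate_a_to_b(self, x_a):
        # Domain translation: decode domain B from A's shared code.
        shared_a, _ = self.split(self.enc_a(x_a))
        return self.dec_b(shared_a)

def soft_orthogonality(shared, private):
    # Penalize cross-correlation between shared and private codes so that
    # domain-specific information is pushed out of the shared code.
    return (shared.T @ private).pow(2).mean()

# Toy usage with random stand-in data.
model = CrossDomainAutoencoder(dim_a=64, dim_b=48)
x_a, x_b = torch.randn(16, 64), torch.randn(16, 48)
recon_a, recon_b, (sa, pa), (sb, pb) = model(x_a, x_b)
loss = (F.mse_loss(recon_a, x_a) + F.mse_loss(recon_b, x_b)
        + 0.1 * (soft_orthogonality(sa, pa) + soft_orthogonality(sb, pb)))
loss.backward()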


2018 ◽ Vol 9
Author(s): Anna Krasotkina, Antonia Götz, Barbara Höhle, Gudrun Schwarzer

IEEE Access ◽ 2020 ◽ Vol 8 ◽ pp. 50452-50464
Author(s): Han Byeol Bae, Taejae Jeon, Yongju Lee, Sungjun Jang, Sangyoun Lee

1988 ◽ Vol 40 (3) ◽ pp. 561-580
Author(s): Andrew W. Young, Deborah Hellawell, Edward H. F. De Haan

Cross-domain semantic priming of person recognition (from face primes to name targets at a 500 ms SOA) is investigated in normal subjects and a brain-injured patient (PH) with a very severe impairment of overt face recognition ability. Experiment 1 demonstrates equivalent semantic priming effects for normal subjects from face primes to name targets (cross-domain priming) and from name primes to name targets (within-domain priming). Experiment 2 demonstrates cross-domain semantic priming effects from face primes that PH cannot recognize overtly. Experiment 3 shows that cross-domain semantic priming effects can be found for normal subjects when target names are repeated across all conditions. This (repeated targets) method is then used in Experiment 4 to establish that PH shows equivalent semantic priming to normal subjects from face primes which he is very poor at identifying overtly and from name primes which he can identify overtly. These findings demonstrate that automatic aspects of face recognition can remain intact even when all sense of overt recognition has been lost.


IEEE Access ◽ 2020 ◽ Vol 8 ◽ pp. 97503-97515
Author(s): Dongdong Zheng, Kaibing Zhang, Jian Lu, Junfeng Jing, Zenggang Xiong

2021
Author(s): Masoud Faraki, Xiang Yu, Yi-Hsuan Tsai, Yumin Suh, Manmohan Chandraker

2021 ◽ Vol 336 ◽ pp. 06007
Author(s): Yuying Shao, Lin Cao, Changwu Chen, Kangning Du

Because of the large modal difference between sketch images and optical images, and because traditional deep learning methods easily overfit when only a small amount of training data is available, a Cross Domain Meta-Network method for sketch face recognition is proposed. The method first designs a meta-learning training strategy to address the small-sample problem, and then proposes an entropy average loss and a cross-domain adaptive loss to reduce the modal difference between the sketch domain and the optical domain. Experimental results on the UoM-SGFS and PRIP-VSGC sketch face datasets compare this method with other sketch face recognition methods.
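A minimal sketch of what such an episodic (meta-learning) training loop might look like is given below. It is not the Cross Domain Meta-Network itself: the toy embedding network, the random episode sampler, and the mean-feature alignment term standing in for the paper's entropy average loss and cross-domain adaptive loss (whose exact forms are not given here) are all assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64))  # toy embedding network
optimizer = torch.optim.Adam(embed.parameters(), lr=1e-3)

def sample_episode(n_way=5, k_shot=1, q_queries=3):
    # Stand-in episode sampler: support images from the sketch domain,
    # query images of the same identities from the optical (photo) domain.
    support = torch.randn(n_way * k_shot, 1, 32, 32)
    query = torch.randn(n_way * q_queries, 1, 32, 32)
    support_y = torch.arange(n_way).repeat_interleave(k_shot)
    query_y = torch.arange(n_way).repeat_interleave(q_queries)
    return support, support_y, query, query_y

for episode in range(100):
    support, support_y, query, query_y = sample_episode()
    z_s, z_q = embed(support), embed(query)

    # Class prototypes computed from the sketch-domain support set.
    prototypes = torch.stack([z_s[support_y == c].mean(0) for c in range(5)])

    # Classify optical-domain queries by distance to the sketch prototypes.
    logits = -torch.cdist(z_q, prototypes)
    cls_loss = F.cross_entropy(logits, query_y)

    # Hypothetical stand-in for a cross-domain alignment term: pull the
    # mean embeddings of the two domains together.
    align_loss = (z_s.mean(0) - z_q.mean(0)).pow(2).sum()

    loss = cls_loss + 0.1 * align_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()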


2021 ◽ Vol 16 ◽ pp. 346-360
Author(s): Chunlei Peng, Nannan Wang, Jie Li, Xinbo Gao
