3D face representation using scale and transform invariant features

Author(s):  
Erdem Akagündüz ◽ 
Ilkay Ulusoy

2020 ◽  
Vol 128 (10-11) ◽  
pp. 2534-2551 ◽  
Author(s):  
Stylianos Moschoglou ◽  
Stylianos Ploumpis ◽  
Mihalis A. Nicolaou ◽  
Athanasios Papaioannou ◽  
Stefanos Zafeiriou

Abstract: Over the past few years, Generative Adversarial Networks (GANs) have garnered increased interest among researchers in Computer Vision, with applications including, but not limited to, image generation, translation, imputation, and super-resolution. Nevertheless, no GAN-based method has been proposed in the literature that can successfully represent, generate, or translate 3D facial shapes (meshes). This can be primarily attributed to two facts, namely that (a) publicly available 3D face databases are scarce as well as limited in terms of sample size and variability (e.g., few subjects, little diversity in race and gender), and (b) mesh convolutions for deep networks present several challenges that are not entirely tackled in the literature, leading to operator approximations and model instability, which often fail to preserve high-frequency components of the distribution. As a result, linear methods such as Principal Component Analysis (PCA) have mainly been utilized for 3D shape analysis, despite being unable to capture non-linearities and high-frequency details of the 3D face, such as eyelid and lip variations. In this work, we present 3DFaceGAN, the first GAN tailored to modeling the distribution of 3D facial surfaces while retaining the high-frequency details of 3D face shapes. We conduct an extensive series of both qualitative and quantitative experiments in which the merits of 3DFaceGAN are clearly demonstrated against other, state-of-the-art methods in tasks such as 3D shape representation, generation, and translation.
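For reference, the sketch below illustrates the linear (PCA) baseline the abstract contrasts with 3DFaceGAN: fitting a low-dimensional shape model to registered 3D face meshes and measuring what the reconstruction loses. It is a minimal illustration, not the paper's method; the input file name, array layout (n_samples, n_vertices, 3), and component count are assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch of a linear (PCA) 3D face shape model, the baseline the
# abstract contrasts with 3DFaceGAN. Assumes all meshes are registered to
# dense correspondence: `meshes` has shape (n_samples, n_vertices, 3).
meshes = np.load("registered_face_meshes.npy")  # hypothetical input file
n_samples, n_vertices, _ = meshes.shape

# Flatten each mesh into a single vector of xyz coordinates.
X = meshes.reshape(n_samples, n_vertices * 3)

# Fit the linear model; a small number of components captures coarse shape
# but, as the abstract notes, tends to smooth away high-frequency details
# such as eyelid and lip variations.
pca = PCA(n_components=50)
codes = pca.fit_transform(X)  # per-face low-dimensional codes
reconstructed = pca.inverse_transform(codes).reshape(n_samples, n_vertices, 3)

# Per-vertex reconstruction error shows what the linear model fails to keep.
err = np.linalg.norm(meshes - reconstructed, axis=-1)  # (n_samples, n_vertices)
print("mean vertex error:", err.mean())
```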


2010 ◽  
Vol 46 (13) ◽  
pp. 905 ◽  
Author(s):  
E. Akagündüz ◽  
I. Ulusoy

2020 ◽  
Vol 10 (2) ◽  
pp. 601 ◽  
Author(s):  
Huan Tu ◽  
Gesang Duoji ◽  
Qijun Zhao ◽  
Shuang Wu

Face recognition using a single sample per person is a challenging problem in computer vision. In this scenario, owing to the lack of training samples, it is difficult to distinguish between inter-class variations caused by identity and intra-class variations caused by external factors such as illumination and pose. To address this problem, we propose a scheme that improves the recognition rate both by generating additional samples to enrich the intra-class variation and by eliminating external factors to extract invariant features. First, a 3D face modeling module is proposed to recover the intrinsic properties of the input image, i.e., its 3D face shape and albedo. To obtain the complete albedo, we propose an end-to-end network that estimates the full albedo UV map from incomplete textures. The obtained albedo UV map not only eliminates the influence of illumination, pose, and expression but also retains the identity information. With the help of the recovered intrinsic properties, we then generate images under various illuminations, expressions, and poses. Finally, the albedo and the generated images are used to assist single-sample-per-person face recognition. Experimental results on the Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), Celebrities in Frontal-Profile (CFP), and other face databases demonstrate the effectiveness of the proposed method.
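To make the recognition stage of such a pipeline concrete, here is a minimal sketch of single-sample matching against an augmentation-enriched gallery: each enrolled identity is represented by embeddings of its one original image plus synthesized pose/illumination/expression variants, and a probe is matched by cosine similarity. The embedding vectors and identities below are placeholders; the paper's actual networks are not reproduced here.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_probe(probe_feat: np.ndarray, gallery: dict):
    """Return the best-matching identity and its score.

    `gallery` maps identity -> list of feature vectors extracted from the
    single enrolled image plus its synthesized variants (different poses,
    illuminations, expressions), as in the augmentation scheme above.
    """
    best_id, best_score = None, -1.0
    for identity, feats in gallery.items():
        # Score an identity by its best variant; the synthesized variants
        # give the matcher a chance to find a pose/lighting close to the probe.
        score = max(cosine_similarity(probe_feat, f) for f in feats)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id, best_score

# Hypothetical usage with 128-D embeddings from some face feature extractor:
rng = np.random.default_rng(0)
gallery = {"alice": [rng.normal(size=128) for _ in range(4)],
           "bob":   [rng.normal(size=128) for _ in range(4)]}
probe = rng.normal(size=128)
print(match_probe(probe, gallery))
```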


2019 ◽  
Vol 363 ◽  
pp. 375-397 ◽  
Author(s):  
Ying Cai ◽  
Yinjie Lei ◽  
Menglong Yang ◽  
Zhisheng You ◽  
Shiguang Shan
