Face to Face: Anthropometry-Based Interactive Face Shape Modeling Using Model Priors

2009, Vol. 2009, pp. 1-15
Author(s): Yu Zhang, Edmond C. Prakash

This paper presents a new anthropometry-based method for generating realistic, controllable face models. Our method establishes an intuitive and efficient interface to facilitate procedures for interactive 3D face modeling and editing. It takes 3D face scans as examples in order to exploit the variations present in the real faces of individuals. The system automatically learns a model prior from the datasets of example meshes of facial features using principal component analysis (PCA) and uses it to regulate the naturalness of synthesized faces. For each facial feature, we compute a set of anthropometric measurements to parameterize the example meshes into a measurement space. Using PCA coefficients as a compact shape representation, we formulate the face modeling problem in a scattered data interpolation framework which takes the user-specified anthropometric parameters as input. Solving the interpolation problem in a reduced subspace allows us to generate a natural face shape that satisfies the user-specified constraints. At runtime, the new face shape can be generated at an interactive rate. We demonstrate the utility of our method by presenting several applications, including analysis of facial features of subjects in different race groups, facial feature transfer, and adapting face models to a particular population group.
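The pipeline the abstract describes (learn a PCA prior from example meshes, then interpolate PCA coefficients from user-specified anthropometric measurements) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data is random toy data, and all array sizes, the Gaussian RBF kernel, the bandwidth, and the retained dimensionality are assumptions made here for the sketch.

```python
import numpy as np

# Toy stand-ins for the paper's data: 20 example feature meshes, each flattened
# to 30 coordinates, with 4 anthropometric measurements per example.
# (All sizes and the random data are illustrative assumptions.)
rng = np.random.default_rng(0)
n_examples, n_coords, n_meas = 20, 30, 4
meshes = rng.normal(size=(n_examples, n_coords))
measurements = rng.normal(size=(n_examples, n_meas))

# 1) Learn a PCA model prior from the example meshes.
mean_shape = meshes.mean(axis=0)
centered = meshes - mean_shape
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 5                                # reduced subspace dimension (assumed)
basis = Vt[:k]                       # principal components
coeffs = centered @ basis.T          # compact shape representation (PCA coefficients)

# 2) Scattered-data interpolation: map anthropometric measurements to PCA
#    coefficients with Gaussian radial basis functions (one common choice;
#    the paper's exact interpolant may differ).
def rbf(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

K = rbf(measurements, measurements)
weights = np.linalg.solve(K + 1e-8 * np.eye(n_examples), coeffs)

def synthesize(user_measurements):
    """Generate a face shape satisfying user-specified anthropometric parameters."""
    c = rbf(user_measurements[None, :], measurements) @ weights
    return mean_shape + (c @ basis).ravel()

new_shape = synthesize(measurements[0])
```

Because the interpolant passes through the training data, querying it at an example's own measurements reproduces that example's PCA reconstruction; solving only for the k coefficients (rather than all vertex coordinates) is what makes interactive-rate synthesis plausible.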

2002, Vol. 17 (3), pp. 243-259
Author(s): Taro Goto, Won-Sook Lee, Nadia Magnenat-Thalmann

1998
Author(s): Hiroyuki Sato, Masaharu Shimanuki, Takao Akatsuka

2020, Vol. 128 (10-11), pp. 2534-2551
Author(s): Stylianos Moschoglou, Stylianos Ploumpis, Mihalis A. Nicolaou, Athanasios Papaioannou, Stefanos Zafeiriou

Over the past few years, Generative Adversarial Networks (GANs) have garnered increased interest among researchers in Computer Vision, with applications including, but not limited to, image generation, translation, imputation, and super-resolution. Nevertheless, no GAN-based method has been proposed in the literature that can successfully represent, generate, or translate 3D facial shapes (meshes). This can be primarily attributed to two facts, namely that (a) publicly available 3D face databases are scarce as well as limited in terms of sample size and variability (e.g., few subjects, little diversity in race and gender), and (b) mesh convolutions for deep networks present several challenges that are not entirely tackled in the literature, leading to operator approximations and model instability, often failing to preserve high-frequency components of the distribution. As a result, linear methods such as Principal Component Analysis (PCA) have been mainly utilized for 3D shape analysis, despite being unable to capture non-linearities and high-frequency details of the 3D face, such as eyelid and lip variations. In this work, we present 3DFaceGAN, the first GAN tailored towards modeling the distribution of 3D facial surfaces, while retaining the high-frequency details of 3D face shapes. We conduct an extensive series of both qualitative and quantitative experiments, where the merits of 3DFaceGAN are clearly demonstrated against other, state-of-the-art methods in tasks such as 3D shape representation, generation, and translation.
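The abstract's claim that low-rank linear (PCA) models discard high-frequency shape detail can be illustrated numerically. The sketch below is not 3DFaceGAN; it is a toy 1D experiment (random "shapes" built from three low-frequency sinusoids plus a small 40-cycle component, all sizes assumed here) showing that the residual of a rank-3 PCA reconstruction is dominated by exactly the high-frequency component the linear model fails to capture.

```python
import numpy as np

# Illustrative toy data: each "shape" is a smooth low-frequency base plus a
# small amount of fine high-frequency detail. (All parameters are assumptions.)
rng = np.random.default_rng(1)
n, d = 200, 128
t = np.arange(d) / d

# Low-frequency base: random combinations of 1-, 2-, and 3-cycle sinusoids.
low_basis = np.stack([np.sin(2 * np.pi * f * t) for f in (1, 2, 3)])
low = rng.normal(size=(n, 3)) @ low_basis
# High-frequency detail: a small 40-cycle component per shape.
high = 0.1 * rng.normal(size=(n, 1)) * np.sin(2 * np.pi * 40 * t)
shapes = low + high

# Rank-3 PCA reconstruction keeps the dominant low-frequency modes...
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
k = 3
recon = mean + (shapes - mean) @ Vt[:k].T @ Vt[:k]

# ...so the residual spectrum peaks at the 40-cycle detail PCA dropped.
residual = shapes - recon
spec = np.abs(np.fft.rfft(residual, axis=1)).mean(axis=0)
```

Raising `k` would eventually recover the detail here because this toy data is exactly low-rank; real facial surfaces are not, which is the motivation the abstract gives for a non-linear generative model.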

