3DFaceGAN: Adversarial Nets for 3D Face Representation, Generation, and Translation

2020 · Vol 128 (10-11) · pp. 2534-2551
Author(s): Stylianos Moschoglou, Stylianos Ploumpis, Mihalis A. Nicolaou, Athanasios Papaioannou, Stefanos Zafeiriou

Abstract: Over the past few years, Generative Adversarial Networks (GANs) have garnered increased interest among researchers in Computer Vision, with applications including, but not limited to, image generation, translation, imputation, and super-resolution. Nevertheless, no GAN-based method has been proposed in the literature that can successfully represent, generate or translate 3D facial shapes (meshes). This can be primarily attributed to two facts, namely that (a) publicly available 3D face databases are scarce as well as limited in terms of sample size and variability (e.g., few subjects, little diversity in race and gender), and (b) mesh convolutions for deep networks present several challenges that are not entirely tackled in the literature, leading to operator approximations and model instability, often failing to preserve high-frequency components of the distribution. As a result, linear methods such as Principal Component Analysis (PCA) have been mainly utilized towards 3D shape analysis, despite being unable to capture non-linearities and high-frequency details of the 3D face, such as eyelid and lip variations. In this work, we present 3DFaceGAN, the first GAN tailored towards modeling the distribution of 3D facial surfaces, while retaining the high-frequency details of 3D face shapes. We conduct an extensive series of both qualitative and quantitative experiments, where the merits of 3DFaceGAN are clearly demonstrated against other, state-of-the-art methods in tasks such as 3D shape representation, generation, and translation.
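As a rough illustration of the adversarial setup the abstract describes, below is a minimal sketch of a GAN over fixed-topology 3D face meshes, treating each mesh as a flat vector of per-vertex coordinates. The vertex count, latent size, layer widths, and training loop are placeholder assumptions for illustration only; this is not the 3DFaceGAN architecture from the paper.

```python
# Minimal GAN over fixed-topology 3D face meshes, treating each mesh as a
# flat vector of per-vertex (x, y, z) coordinates. Illustrative sketch only;
# not the 3DFaceGAN architecture described in the paper.
import torch
import torch.nn as nn

N_VERTS = 5023   # hypothetical vertex count of the mesh template
LATENT = 128     # hypothetical latent dimension

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, N_VERTS * 3),  # per-vertex (x, y, z) output
        )

    def forward(self, z):
        return self.net(z).view(-1, N_VERTS, 3)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_VERTS * 3, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),             # real/fake logit
        )

    def forward(self, mesh):
        return self.net(mesh.view(mesh.size(0), -1))

# One adversarial step on a batch of meshes of shape (B, N_VERTS, 3).
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, N_VERTS, 3)   # placeholder for real 3D scan data
fake = G(torch.randn(8, LATENT))

# Discriminator: push real logits toward 1, generated logits toward 0.
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator: try to make the discriminator label generated meshes as real.
g_loss = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

A plain multilayer perceptron is used here purely for brevity; part of the paper's point is that naive operators like this tend to lose high-frequency detail, which is what motivates a GAN tailored to 3D facial surfaces.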

2009 · Vol 2009 · pp. 1-15
Author(s): Yu Zhang, Edmond C. Prakash

This paper presents a new anthropometrics-based method for generating realistic, controllable face models. Our method establishes an intuitive and efficient interface to facilitate procedures for interactive 3D face modeling and editing. It takes 3D face scans as examples in order to exploit the variations presented in the real faces of individuals. The system automatically learns a model prior from the datasets of example meshes of facial features using principal component analysis (PCA) and uses it to regulate the naturalness of synthesized faces. For each facial feature, we compute a set of anthropometric measurements to parameterize the example meshes into a measurement space. Using PCA coefficients as a compact shape representation, we formulate the face modeling problem in a scattered data interpolation framework which takes the user-specified anthropometric parameters as input. Solving the interpolation problem in a reduced subspace allows us to generate a natural face shape that satisfies the user-specified constraints. At runtime, the new face shape can be generated at an interactive rate. We demonstrate the utility of our method by presenting several applications, including analysis of facial features of subjects in different race groups, facial feature transfer, and adapting face models to a particular population group.
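To make the learn-then-interpolate pipeline concrete, here is a minimal sketch in which scikit-learn's PCA and SciPy's radial basis function interpolator stand in for the paper's exact formulation. The array shapes, measurement count, and placeholder data are illustrative assumptions, not values from the paper.

```python
# Sketch of the PCA + scattered-data-interpolation pipeline described above,
# using a radial basis function interpolant as one concrete choice.
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import RBFInterpolator

# Example meshes for one facial feature: n_examples scans, each flattened
# to a vector of vertex coordinates (placeholder random data).
n_examples, n_dims = 50, 3 * 400
meshes = np.random.randn(n_examples, n_dims)

# Learn the model prior: a low-dimensional PCA subspace over the examples.
pca = PCA(n_components=10)
coeffs = pca.fit_transform(meshes)            # (n_examples, 10)

# Parameterize each example by anthropometric measurements, e.g. nose
# length/width/depth (placeholder values here).
measurements = np.random.rand(n_examples, 3)

# Scattered data interpolation: measurements -> PCA coefficients.
interp = RBFInterpolator(measurements, coeffs)

# At runtime, user-specified measurements yield a natural face shape,
# since the result stays inside the learned PCA subspace.
user_params = np.array([[0.4, 0.6, 0.5]])
new_coeffs = interp(user_params)              # (1, 10)
new_mesh = pca.inverse_transform(new_coeffs)  # reconstructed vertex vector
```

Solving the interpolation in the 10-dimensional coefficient space, rather than over raw vertices, is what keeps the runtime interactive and the output regularized by the learned prior.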


Deep learning has recently become the state of the art in many pattern recognition tasks, and advances in computational power together with the availability of large datasets have made deep learning methods practical for image processing. We use deep convolutional generative adversarial networks (DCGANs) to perform several image processing tasks, namely deconvolution, denoising, and super-resolution. A single DCGAN architecture can perform all of these different tasks. Although the DCGAN sometimes shows slightly lower PSNR than traditional methods, it achieves competitive PSNR scores while producing results that are visually more appealing. Moreover, it learns very efficiently from big datasets and adds high-frequency details automatically, which traditional methods cannot. The DCGAN architecture consists of two neural networks, a generator and a discriminator, each trying to deceive the other; this adversarial training enables the model to generate more appealing and realistic images from the datasets.
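A minimal sketch of the generator/discriminator pairing for a restoration-style task follows, assuming PyTorch. The layer sizes, the residual generator, and the loss weighting are illustrative choices, not the architecture evaluated in this work.

```python
# Minimal convolutional generator/discriminator pair for image restoration
# with an adversarial loss, in the spirit of the DCGAN-based approach above.
# All layer sizes and loss weights are placeholder assumptions.
import torch
import torch.nn as nn

class Restorer(nn.Module):
    """Generator: maps a degraded image to a restored image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),   # predicted correction
        )

    def forward(self, x):
        # Residual connection: the network only has to add detail.
        return x + self.net(x)

class Critic(nn.Module):
    """Discriminator: one real/restored logit per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Restorer(), Critic()
bce = nn.BCEWithLogitsLoss()
degraded = torch.randn(4, 3, 64, 64)   # placeholder noisy/blurred batch
clean = torch.randn(4, 3, 64, 64)      # placeholder ground-truth batch

restored = G(degraded)
# Discriminator distinguishes clean images from restorations.
d_loss = bce(D(clean), torch.ones(4, 1)) + bce(D(restored.detach()), torch.zeros(4, 1))
# Generator balances pixel fidelity against the adversarial term that
# encourages high-frequency detail (trading a little PSNR for appeal).
g_loss = nn.functional.l1_loss(restored, clean) + 1e-3 * bce(D(restored), torch.ones(4, 1))
```

The small weight on the adversarial term reflects the trade-off described above: the L1 term protects PSNR, while the discriminator pushes the generator toward sharper, more realistic textures.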


Crisis · 2012 · Vol 33 (2) · pp. 113-119
Author(s): Michael S. Rodi, Lucas Godoy Garraza, Christine Walrath, Robert L. Stephens, D. Susanne Condron, ...

Background: In order to better understand the posttraining suicide prevention behavior of gatekeeper trainees, the present article examines the referral and service receipt patterns among gatekeeper-identified youths. Methods: Data for this study were drawn from 26 Garrett Lee Smith grantees funded between October 2005 and October 2009 who submitted data about the number, characteristics, and service access of identified youths. Results: The demographic characteristics of identified youths are not related to referral type or receipt. Furthermore, referral setting does not seem to be predictive of the type of referral. Demographic as well as other (nonrisk) characteristics of the youths are not key variables in determining identification or service receipt. Limitations: These data are not necessarily representative of all youths identified by gatekeepers represented in the dataset. The prevalence of risk among all members of the communities from which these data are drawn is unknown. Furthermore, these data likely disproportionately represent gatekeepers associated with systems that effectively track gatekeepers and youths. Conclusions: Gatekeepers appear to be identifying youth across settings, and those youths are being referred for services without regard for race and gender or the settings in which they are identified. Furthermore, youths that may be at highest risk may be more likely to receive those services.


2014
Author(s): Susana J. Ferradas, G. Nicole Rider, Johanna D. Williams, Brittany J. Dancy, Lauren R. Mcghee

2003
Author(s): Isis H. Settles, William A. Jellison, Joan R. Poulsen
