Age Progression and Regression with Spatial Attention Modules

2020 ◽  
Vol 34 (07) ◽  
pp. 11378-11385
Author(s):  
Qi Li ◽  
Yunfan Liu ◽  
Zhenan Sun

Age progression and regression refer to aesthetically rendering a given face image to present the effects of face aging and rejuvenation, respectively. Although numerous studies have been conducted on this topic, two major problems remain: 1) multiple models are usually trained to simulate different age mappings, and 2) the photo-realism of generated face images is heavily influenced by the variation of training images in terms of pose, illumination, and background. To address these issues, in this paper we propose a framework based on conditional Generative Adversarial Networks (cGANs) that achieves age progression and regression simultaneously. In particular, since face aging and rejuvenation differ substantially in their image translation patterns, we model the two processes with two separate generators, each dedicated to one direction of age change. In addition, we exploit spatial attention mechanisms to limit image modifications to regions closely related to age changes, so that images with high visual fidelity can be synthesized for in-the-wild cases. Experiments on multiple datasets demonstrate the ability of our model to synthesize lifelike face images at desired ages, with personalized features well preserved and age-irrelevant regions left unchanged.
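As a concrete illustration of the attention mechanism described above, the following is a minimal PyTorch sketch of a spatial-attention generator in the GANimation style that the abstract suggests: the network predicts a per-pixel mask alongside a content image, and unattended pixels are copied straight from the input. All module names and layer sizes here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a spatial-attention generator for age editing (assumed
# GANimation-style formulation, not the paper's exact architecture).
import torch
import torch.nn as nn

class AttentionGenerator(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Shared backbone (heavily simplified for illustration).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Two heads: an RGB content image and a single-channel attention mask.
        self.content_head = nn.Sequential(nn.Conv2d(channels, 3, 7, padding=3), nn.Tanh())
        self.mask_head = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        feat = self.backbone(x)
        content = self.content_head(feat)   # candidate aged/rejuvenated pixels
        mask = self.mask_head(feat)         # ~1 = modify, ~0 = copy input
        # Blend: age-irrelevant regions (mask near 0) are copied from the
        # input, which is how attention keeps background and pose untouched.
        return mask * content + (1.0 - mask) * x, mask

# Two separate generators, one per direction, as the abstract describes.
G_progress, G_regress = AttentionGenerator(), AttentionGenerator()
aged, mask = G_progress(torch.randn(1, 3, 128, 128))
```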

2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Zhiyi Cao ◽  
Shaozhang Niu ◽  
Jiwei Zhang

Generative Adversarial Networks (GANs) have achieved significant success in unsupervised image-to-image translation between given categories (e.g., zebras to horses). Previous GAN models assume that a shared latent space between the categories can be captured from the given categories. Unfortunately, beyond well-designed datasets of given categories, many examples come from wild categories (e.g., cats to dogs) with distinctive shapes and sizes (which we refer to as adversarial examples); for these, the shared latent space is difficult to capture, causing such models to collapse. To address this problem, we divide the shared latent space into global and local parts and design a weakly supervised Similar GAN (Sim-GAN) that captures the local shared latent space rather than the global one. For well-designed datasets, the local shared latent space is close to the global shared latent space. For wild datasets, capturing the local shared latent space prevents the model from collapsing. Experiments on four public datasets show that our model significantly outperforms state-of-the-art baseline methods.
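To make the global-versus-local latent space distinction concrete, here is a hedged PyTorch sketch of one plausible reading: each domain encoder emits a spatial (local) code and a pooled (global) code, and only the local codes are tied across domains. This is an illustrative assumption, not the authors' Sim-GAN architecture; all names and shapes are invented for the example.

```python
# Illustrative sketch of "local vs. global shared latent space"
# (assumed reading of the abstract, NOT the Sim-GAN architecture).
import torch
import torch.nn as nn

class DomainEncoder(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        local_code = self.conv(x)                  # spatial map: local structure
        global_code = local_code.mean(dim=(2, 3))  # pooled vector: global layout
        return local_code, global_code

enc_a, enc_b = DomainEncoder(), DomainEncoder()
x_a, x_b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
loc_a, glob_a = enc_a(x_a)
loc_b, glob_b = enc_b(x_b)
# For wild category pairs (e.g., cats vs. dogs) the global codes are left
# free to differ; only the local codes are pulled together, e.g. with L1:
local_sharing_loss = nn.functional.l1_loss(loc_a, loc_b)
```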


2020 ◽  
Vol 128 (10-11) ◽  
pp. 2629-2650
Author(s):  
Evangelos Ververas ◽  
Stefanos Zafeiriou

Abstract Image-to-image (i2i) translation is the dense regression problem of learning how to transform an input image into an output using aligned image pairs. Remarkable progress has been made in i2i translation with the advent of deep convolutional neural networks, in particular using the learning paradigm of generative adversarial networks (GANs). In the absence of paired images, i2i translation is tackled with one or multiple domain transformations (e.g., CycleGAN, StarGAN). In this paper, we study the problem of image-to-image translation under a set of continuous parameters that correspond to a model describing a physical process. In particular, we propose SliderGAN, which transforms an input face image into a new one according to the continuous values of a statistical blendshape model of facial motion. We show that it is possible to edit a facial image according to expression and speech blendshapes, using sliders that control the continuous values of the blendshape model. This provides much more flexibility in various tasks, including but not limited to face editing, expression transfer and face neutralisation, compared to models based on discrete expressions or action units.
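The continuous conditioning that distinguishes SliderGAN from discrete-attribute models can be sketched as follows: this PyTorch example tiles the continuous blendshape coefficients over the image grid and concatenates them with the input, StarGAN-style. The tiling scheme and all names are assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of conditioning an i2i generator on continuous blendshape
# coefficients (assumed StarGAN-style tiling, not SliderGAN's exact design).
import torch
import torch.nn as nn

class SliderConditionedGenerator(nn.Module):
    def __init__(self, n_sliders=10, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_sliders, channels, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, img, sliders):
        # Tile each continuous coefficient over the spatial grid and
        # concatenate it with the input image channels.
        b, _, h, w = img.shape
        cond = sliders.view(b, -1, 1, 1).expand(b, sliders.size(1), h, w)
        return self.net(torch.cat([img, cond], dim=1))

G = SliderConditionedGenerator(n_sliders=10)
face = torch.randn(1, 3, 128, 128)
# Sliders take continuous values, e.g. in [-1, 1]; varying one slider
# smoothly varies the corresponding expression blendshape in the output.
out = G(face, torch.zeros(1, 10))
```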


Author(s):  
Pallavi Madhukar ◽  
Rachana Chetan ◽  
Supriya Prasad ◽  
Mohamed Shayan ◽  
B. Niranjana Krupa
