In-the-Wild Facial Highlight Removal via Generative Adversarial Networks

Author(s):  
Zhibo Wang ◽  
Ming Lu ◽  
Feng Xu ◽  
Xun Cao


2020 ◽  
Vol 34 (07) ◽  
pp. 11378-11385
Author(s):  
Qi Li ◽  
Yunfan Liu ◽  
Zhenan Sun

Age progression and regression refer to aesthetically rendering a given face image to present the effects of face aging and rejuvenation, respectively. Although numerous studies have been conducted on this topic, two major problems remain: 1) multiple models are usually trained to simulate different age mappings, and 2) the photo-realism of generated face images is heavily influenced by the variation of training images in terms of pose, illumination, and background. To address these issues, in this paper we propose a framework based on conditional Generative Adversarial Networks (cGANs) that achieves age progression and regression simultaneously. In particular, since face aging and rejuvenation differ substantially in their image translation patterns, we model the two processes with two separate generators, each dedicated to one age-changing process. In addition, we exploit spatial attention mechanisms to limit image modifications to regions closely related to age changes, so that images with high visual fidelity can be synthesized for in-the-wild cases. Experiments on multiple datasets demonstrate the ability of our model to synthesize lifelike face images at desired ages with personalized features well preserved, while keeping age-irrelevant regions unchanged.
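A spatial attention mechanism of the kind described in this abstract is commonly realized as a convex combination of the generated image and the input, gated by a learned per-pixel mask. The following is a minimal NumPy sketch of that blending step only (function and variable names are illustrative, not from the paper; the mask would normally be predicted by the generator):

```python
import numpy as np

def attention_blend(input_img, generated_img, attention_mask):
    """Blend a generated face with the input using a spatial attention mask.

    Pixels where the mask is near 1 are taken from the generated
    (age-changed) image; pixels near 0 are copied from the input,
    leaving age-irrelevant regions (background, clothing) untouched.
    """
    a = np.clip(attention_mask, 0.0, 1.0)[..., None]  # broadcast over channels
    return a * generated_img + (1.0 - a) * input_img

# Toy example: the mask is active only in the upper-left quadrant,
# so only that region is replaced by generated content.
inp = np.zeros((4, 4, 3))
gen = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
out = attention_blend(inp, gen, mask)
```

With this formulation, regions where the mask is zero are copied through exactly, which is how such models keep age-irrelevant regions unchanged.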


Electronics ◽  
2019 ◽  
Vol 8 (7) ◽  
pp. 807 ◽  
Author(s):  
Weiwei Zhuang ◽  
Liang Chen ◽  
Chaoqun Hong ◽  
Yuxin Liang ◽  
Keshou Wu

Face recognition has been studied comprehensively. However, face recognition in the wild still suffers from unconstrained face orientations. Frontal face synthesis is a popular solution, but some facial features are lost after synthesis. This paper presents a novel method for pose-invariant face recognition based on face transformation with key-point alignment using generative adversarial networks (FT-GAN). In this method, we introduce CycleGAN for pixel-level transformation to obtain coarse face transformation results, which are then refined by key-point alignment. In this way, frontal face synthesis is modeled as a two-task process. Comprehensive experiments demonstrate the effectiveness of FT-GAN.
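CycleGAN-style pixel transformation, as used here for the coarse frontalization stage, is trained with a cycle-consistency loss: mapping a profile face to the frontal domain and back should recover the input. A schematic NumPy sketch of that loss follows (the identity-function generators are toy stand-ins, not the paper's networks):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss used by CycleGAN-style translation:
    translating to the frontal domain (G) and back (F)
    should reconstruct the original input."""
    return np.mean(np.abs(F(G(x)) - x))

# Toy stand-ins for the two generators (identity mappings),
# which reconstruct the input perfectly.
G = lambda x: x + 0.0   # profile -> frontal
F = lambda x: x - 0.0   # frontal -> profile
x = np.random.rand(8, 8)
loss = cycle_consistency_loss(x, G, F)  # 0.0 for perfect reconstruction
```

In the real method this loss constrains the unpaired translation, while the subsequent key-point alignment stage corrects facial features the coarse translation misses.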


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Zhiyi Cao ◽  
Shaozhang Niu ◽  
Jiwei Zhang

Generative Adversarial Networks (GANs) have achieved significant success in unsupervised image-to-image translation between given categories (e.g., zebras to horses). Previous GAN models assume that the shared latent space between different categories can be captured from the given categories. Unfortunately, beyond well-designed datasets from given categories, many examples come from different wild categories (e.g., cats to dogs) with unusual shapes and sizes (referred to here as adversarial examples), so the shared latent space is difficult to capture, which causes these models to collapse. To address this problem, we assume the shared latent space can be divided into global and local components, and we design a weakly supervised Similar GAN (Sim-GAN) to capture the local shared latent space rather than the global one. For well-designed datasets, the local shared latent space is close to the global shared latent space. For wild datasets, capturing the local shared latent space prevents the model from collapsing. Experiments on four public datasets show that our model significantly outperforms state-of-the-art baseline methods.
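The global-versus-local distinction can be illustrated schematically: a global latent code pools features over the whole image, while local codes pool over patches, so two categories that only share structure locally (e.g., eyes or ears of cats and dogs) can still be matched patch by patch. This NumPy sketch is an assumption-laden illustration of that idea, not the Sim-GAN architecture itself:

```python
import numpy as np

def global_code(features):
    """Global latent code: one vector pooled over the entire feature map."""
    return features.mean(axis=(0, 1))

def local_codes(features, patch=4):
    """Local latent codes: one vector per spatial patch, so categories
    that only share structure locally can still be aligned."""
    h, w, _ = features.shape
    codes = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            codes.append(features[i:i + patch, j:j + patch].mean(axis=(0, 1)))
    return np.stack(codes)

feat = np.random.rand(8, 8, 16)   # hypothetical encoder feature map
g = global_code(feat)             # shape (16,)
l = local_codes(feat)             # shape (4, 16): four 4x4 patches
```

Note that when the patches tile the feature map evenly, the mean of the local codes equals the global code, which is one way to see why the local space is "close to" the global one on well-behaved data.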


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Zhifeng Chen ◽  
Jinjin Liu

Recent works based on deep learning and facial priors have performed well in super-resolving severely degraded facial images. However, due to limitations of illumination, the pixels of the monitoring probe itself, the focusing area, and human motion, face images are often blurred or even deformed. To address this problem, we propose Face Restoration Generative Adversarial Networks to improve the resolution and restore the details of blurred faces. The framework comprises a Head Pose Estimation Network, a Postural Transformer Network, and Face Generative Adversarial Networks. In this paper, we employ the following: (i) a Swish-B activation function in the Face Generative Adversarial Networks to accelerate the convergence of the cross-entropy cost function, (ii) a special prejudgment monitor added to improve the accuracy of the discriminator, and (iii) a modified Postural Transformer Network used with a 3D face reconstruction network to correct faces at different expression and pose angles. Our method improves the resolution of face images and performs well in image restoration. We demonstrate that our method produces high-quality faces and is superior to state-of-the-art methods on the blind face reconstruction task for in-the-wild images; in particular, our 8× SR SSIM and PSNR are 0.078 and 1.16 higher, respectively, than FSRNet on AFLW.
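Assuming "Swish-B" denotes the Swish activation with a tunable coefficient β (the abstract does not define it), the function is x · sigmoid(βx): β = 1 recovers standard Swish (SiLU), and larger β pushes it toward ReLU. A minimal NumPy sketch:

```python
import numpy as np

def swish_b(x, beta=1.0):
    """Swish activation with a tunable beta: x * sigmoid(beta * x).

    beta = 1 gives standard Swish (SiLU); as beta grows the curve
    approaches ReLU. Unlike ReLU it is smooth, which can help
    gradient flow and convergence in GAN training.
    """
    return x * (1.0 / (1.0 + np.exp(-beta * x)))

x = np.array([-2.0, 0.0, 2.0])
y = swish_b(x, beta=1.0)
```

In a trainable setting, β would typically be a learned per-layer parameter rather than a fixed constant.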

