Weakly Supervised GAN for Image-to-Image Translation in the Wild

2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Zhiyi Cao ◽  
Shaozhang Niu ◽  
Jiwei Zhang

Generative Adversarial Networks (GANs) have achieved significant success in unsupervised image-to-image translation between given categories (e.g., zebras to horses). Previous GAN models assume that a shared latent space between the different categories can be captured from the given categories. Unfortunately, beyond well-designed datasets from given categories, many examples come from wild categories (e.g., cats to dogs) with markedly different shapes and sizes (referred to as adversarial examples), so the shared latent space is difficult to capture and these models tend to collapse. To address this problem, we assume the shared latent space can be divided into a global part and a local part, and we design a weakly supervised Similar GAN (Sim-GAN) that captures the local shared latent space rather than the global one. For well-designed datasets, the local shared latent space is close to the global shared latent space. For wild datasets, capturing the local shared latent space prevents the model from collapsing. Experiments on four public datasets show that our model significantly outperforms state-of-the-art baseline methods.
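The abstract does not specify Sim-GAN's architecture, so the sketch below only illustrates the general contrast between a global and a local shared latent space: per-domain encoders produce a spatial (local) code, and a PatchGAN-style critic scores local patches rather than the whole image. All module names, layer shapes, and the patch critic itself are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the "local shared latent space" idea only;
# the actual Sim-GAN architecture and losses are not given in the abstract.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)          # spatial feature map = "local" latent code

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class PatchCritic(nn.Module):
    """PatchGAN-style discriminator: judges local patches, not the whole image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )
    def forward(self, x):
        return self.net(x)          # one real/fake score per local patch

enc_a = Encoder()                   # domain-A encoder
dec_b = Decoder()                   # decodes the shared code into domain B
critic_b = PatchCritic()

x_a = torch.randn(1, 3, 64, 64)     # dummy domain-A image
fake_b = dec_b(enc_a(x_a))          # translate A -> B through the local shared code
patch_scores = critic_b(fake_b)     # local adversarial feedback per patch
```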

2020 ◽  
Vol 34 (07) ◽  
pp. 11378-11385
Author(s):  
Qi Li ◽  
Yunfan Liu ◽  
Zhenan Sun

Age progression and regression refer to aesthetically rendering a given face image to present effects of face aging and rejuvenation, respectively. Although numerous studies have been conducted on this topic, there are two major problems: 1) multiple models are usually trained to simulate different age mappings, and 2) the photo-realism of generated face images is heavily influenced by the variation of training images in terms of pose, illumination, and background. To address these issues, in this paper we propose a framework based on conditional Generative Adversarial Networks (cGANs) that achieves age progression and regression simultaneously. In particular, since face aging and rejuvenation differ largely in their image translation patterns, we model these two processes with two separate generators, each dedicated to one age-changing process. In addition, we exploit spatial attention mechanisms to limit image modifications to regions closely related to age changes, so that images with high visual fidelity can be synthesized for in-the-wild cases. Experiments on multiple datasets demonstrate the ability of our model to synthesize lifelike face images at desired ages, with personalized features well preserved and age-irrelevant regions kept unchanged.
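As a hedged illustration of the spatial attention mechanism described above, the PyTorch sketch below has the generator predict both a pixel change and an attention mask, blending them so that age-irrelevant regions copy the input unchanged. The layer sizes, module names, and the blending formula are assumptions for illustration; the paper's exact architecture is not given in the abstract.

```python
# Sketch only: generator predicts an RGB change and a spatial attention mask,
# and only the attended regions of the input face are modified.
import torch
import torch.nn as nn

class AttentionGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.to_rgb = nn.Conv2d(64, 3, 3, padding=1)    # proposed pixel changes
        self.to_attn = nn.Conv2d(64, 1, 3, padding=1)   # where to apply them

    def forward(self, face):
        feats = self.backbone(face)
        change = torch.tanh(self.to_rgb(feats))
        attn = torch.sigmoid(self.to_attn(feats))       # values in [0, 1]
        # Blend: age-relevant regions take the generated change,
        # age-irrelevant regions copy the input unchanged.
        return attn * change + (1.0 - attn) * face

g_progress = AttentionGenerator()   # aging generator
g_regress = AttentionGenerator()    # rejuvenation generator (separate weights)

young = torch.randn(1, 3, 128, 128)
aged = g_progress(young)
recovered = g_regress(aged)         # round trip across the two mappings
```

Using two separate generators with shared structure but independent weights mirrors the two-generator design described in the abstract, with each network dedicated to one direction of the age mapping.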


2018 ◽  
Vol 7 (3.12) ◽  
pp. 864
Author(s):  
Tanvi Bhandarkar ◽  
A Murugan

Generative Adversarial Networks (GANs) have made a major contribution to the field of Artificial Intelligence. They are becoming increasingly powerful and are finding their way into numerous applications of intelligent systems, primarily because of their capacity to learn and solve complex, high-dimensional problems from the latent space. With the growing demand for GANs, it is necessary to examine their potential and impact in practical implementations. In a short span of time, the field has seen several variants and extensions in image translation, domain adaptation, and other research areas. This paper provides an understanding of these important GAN variants and surveys existing adversarial models that are prominent in their applied fields.


2019 ◽  
Vol 2 (93) ◽  
pp. 64-68
Author(s):  
I. Konarieva ◽  
D. Pydorenko ◽  
O. Turuta

This work reviews existing methods of text compression (extracting keywords or creating a summary) using the RAKE, LexRank, Luhn, LSA, and TextRank algorithms, as well as image generation and text-to-image and image-to-image translation, including GANs (generative adversarial networks). Different types of GANs are described, such as StyleGAN, GauGAN, Pix2Pix, CycleGAN, BigGAN, and AttnGAN. The aim of the work is to show ways to create illustrations for a text. First, key information is extracted from the text; second, this key information is transformed into images. Several ways to transform keywords into images are proposed: generating images, or selecting them from a dataset with further transformation, such as generating new images based on the selected ones or combining selected images, e.g., by applying the style of one image to another. Based on the results, possibilities for further improving the quality of image generation are also outlined: combining image generation with selecting images from a dataset and limiting the topics of image generation.
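As a minimal, self-contained sketch of the keyword-extraction step (a simplified TextRank-style co-occurrence graph with PageRank scoring), the example below illustrates how key information could be pulled from a text before being handed to an image-generation model. The window size, damping factor, and stopword list are assumptions; this is not the authors' pipeline.

```python
# Simplified TextRank-style keyword extraction: build a word co-occurrence
# graph over a sliding window, then score words with PageRank iterations.
import re
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "on", "with", "is", "are"}

def extract_keywords(text, top_k=5, window=4, damping=0.85, iters=50):
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

    # Undirected co-occurrence graph over a sliding window.
    neighbors = defaultdict(set)
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            if words[i] != words[j]:
                neighbors[words[i]].add(words[j])
                neighbors[words[j]].add(words[i])

    # PageRank-style score propagation.
    scores = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        scores = {
            w: (1 - damping) + damping * sum(scores[u] / len(neighbors[u]) for u in neighbors[w])
            for w in neighbors
        }

    return sorted(scores, key=scores.get, reverse=True)[:top_k]

sample = ("Generative adversarial networks translate text descriptions into images; "
          "keyword extraction selects the words that drive the image generation.")
print(extract_keywords(sample))   # top-ranked words, to be passed to an image generator
```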


2021 ◽  
Author(s):  
Van Bettauer ◽  
Anna CBP Costa ◽  
Raha Parvizi Omran ◽  
Samira Massahi ◽  
Eftyhios Kirbizakis ◽  
...  

We present deep learning-based approaches for exploring the complex array of morphologies exhibited by the opportunistic human pathogen C. albicans. Our system, entitled Candescence, automatically detects C. albicans cells in differential interference contrast (DIC) microscopy images and labels each detected cell with one of nine vegetative, mating-competent, or filamentous morphologies. The software is based upon a fully convolutional one-stage object detector and exploits a novel cumulative curriculum-based learning strategy that stratifies our images by difficulty, from simple vegetative forms to more complex filamentous architectures. Candescence achieves very good performance on this difficult learning set, which has substantial intermixing between the predicted classes. To capture the essence of each C. albicans morphology, we develop models using generative adversarial networks and identify subcomponents of the latent space that control technical variables, developmental trajectories, or morphological switches. We envision Candescence as a community meeting point for quantitative explorations of C. albicans morphology.
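The abstract describes a cumulative curriculum that stratifies images by difficulty; the sketch below shows one plausible way such a schedule could be organized, progressively adding harder morphology classes while keeping earlier ones in the training pool. The class names, stage boundaries, and data interface are illustrative assumptions, not Candescence's actual configuration.

```python
# Sketch of a cumulative curriculum: each stage adds harder classes to the
# training pool without dropping the simpler ones learned earlier.
from typing import Dict, List

# Assumed difficulty ordering, from simple vegetative forms to filamentous ones.
STAGES: List[List[str]] = [
    ["yeast"],                       # simple vegetative forms
    ["budding", "opaque"],           # further vegetative / mating-competent forms
    ["shmoo"],
    ["pseudohyphae", "hyphae"],      # complex filamentous architectures
]

def curriculum_pool(samples_by_class: Dict[str, List[str]], stage: int) -> List[str]:
    """Return the cumulative training pool for a given curriculum stage."""
    pool: List[str] = []
    for classes in STAGES[: stage + 1]:          # cumulative: keep earlier stages
        for cls in classes:
            pool.extend(samples_by_class.get(cls, []))
    return pool

# Toy usage with dummy image IDs.
data = {cls: [f"{cls}_{i}.png" for i in range(3)]
        for group in STAGES for cls in group}
for stage in range(len(STAGES)):
    print(f"stage {stage}: {len(curriculum_pool(data, stage))} training images")
```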


2018 ◽  
Vol 25 (4) ◽  
pp. 551-555 ◽  
Author(s):  
Xiaofeng Han ◽  
Jianfeng Lu ◽  
Chunxia Zhao ◽  
Shaodi You ◽  
Hongdong Li
