GANs with Multiple Constraints for Image Translation

Complexity, 2018, Vol 2018, pp. 1-12
Author(s): Yan Gan, Junxin Gong, Mao Ye, Yang Qian, Kedi Liu, et al.

Unpaired image translation is a challenging problem in computer vision. Existing generative adversarial network (GAN) models rely mainly on the adversarial loss plus a few additional constraints, but the constraints imposed on the generator and the discriminator are too weak, which degrades image quality. In addition, we find that current GAN-based models do not yet add an auxiliary domain to constrain the generator. To address these problems, we propose a multiscale and multilevel GANs (MMGANs) model for image translation. In this model, we add an auxiliary domain that is combined with the original domains during modelling and helps the generator learn the detailed content of the image. We then use multiscale and multilevel feature matching to constrain the discriminator, with the aim of making the training process as stable as possible. Finally, we conduct experiments on six image translation tasks. The results verify the validity of the proposed model.
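
The discriminator feature-matching constraint can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration: `disc.features` returning a list of intermediate activations, the choice of image scales, and L1 matching are our choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(disc, real, fake, scales=(1.0, 0.5, 0.25)):
    """Multiscale (several image resolutions) and multilevel (several
    discriminator layers) feature matching between real and generated
    images. `disc.features(x)` is assumed to return a list of
    intermediate activations, one per layer."""
    loss = 0.0
    for s in scales:
        r, f = real, fake
        if s != 1.0:
            r = F.interpolate(real, scale_factor=s, mode="bilinear",
                              align_corners=False)
            f = F.interpolate(fake, scale_factor=s, mode="bilinear",
                              align_corners=False)
        # Match activations layer by layer; real features are detached
        # so only the generator receives this gradient.
        for fr, ff in zip(disc.features(r), disc.features(f)):
            loss = loss + F.l1_loss(ff, fr.detach())
    return loss
```

Matching intermediate discriminator statistics rather than only the final real/fake score gives the generator a denser training signal, which is the stabilising effect the abstract refers to.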

Author(s): Jianfu Zhang, Yuanyuan Huang, Yaoyi Li, Weijie Zhao, Liqing Zhang

Recent studies show significant progress on the image-to-image translation task, especially facilitated by generative adversarial networks: they can synthesize highly realistic images and alter attribute labels of images. However, these works employ attribute vectors to specify the target domain, which diminishes image-level attribute diversity. In this paper, we propose a novel model that formulates disentangled representations by projecting images to latent units (grouped feature channels of a convolutional neural network) to disassemble the information between different attributes. Thanks to the disentangled representation, we can transfer attributes according to the attribute labels and, moreover, retain the diversity beyond the labels, namely, the styles inside each image. This is achieved by specifying some attributes and swapping the corresponding latent units to "swap" the attributes' appearance, or by applying channel-wise interpolation to blend different attributes. To verify the motivation of our proposed model, we train and evaluate it on the face dataset CelebA. Furthermore, evaluation on another facial expression dataset, RaFD, demonstrates the generalizability of our proposed model.
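
The unit-swapping and blending operations can be sketched directly on latent tensors. This is a minimal sketch under our own assumptions: the mapping from attribute names to channel slices (`unit_slices`) and the tensor layout are hypothetical, not the paper's interface.

```python
import torch

def swap_attribute(z_a, z_b, unit_slices, attr):
    """Exchange the latent unit (a group of feature channels) encoding
    one attribute between two images' latent codes, leaving all other
    units, i.e. the remaining styles, untouched."""
    sl = unit_slices[attr]                  # e.g. {"smile": slice(0, 32)}
    za, zb = z_a.clone(), z_b.clone()
    za[:, sl], zb[:, sl] = z_b[:, sl], z_a[:, sl]
    return za, zb

def blend_attribute(z_a, z_b, unit_slices, attr, alpha=0.5):
    """Channel-wise interpolation inside one attribute's latent unit."""
    sl = unit_slices[attr]
    z = z_a.clone()
    z[:, sl] = (1.0 - alpha) * z_a[:, sl] + alpha * z_b[:, sl]
    return z
```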


2019
Author(s): Atin Sakkeer Hussain

Generative adversarial networks (GANs) are trained to generate images from random noise vectors, but these images often turn out poorly for any of several reasons, such as mode collapse, a lack of proper training data, or insufficient training. To combat this issue, this paper makes use of a variational autoencoder (VAE). The VAE is trained on a combination of the training and generated data; afterwards, it can be used to map images generated by the GAN to better versions of them. (This is similar to denoising, but with small variations in the image.) In addition to improving quality, the proposed model is shown to work better than a plain WGAN on sparse datasets with higher variety, given an equal number of training epochs.
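
The refinement step the abstract describes, passing a GAN sample through the trained VAE and keeping the reconstruction, can be sketched as follows. The `encode`/`decode` interface and the use of the mean latent are our assumptions, not the paper's code.

```python
import torch

@torch.no_grad()
def refine_samples(generator, vae, noise):
    """Map raw GAN outputs through a VAE trained on real + generated
    images; the reconstruction acts as a learned denoiser that pulls
    samples toward the data manifold."""
    fake = generator(noise)            # raw GAN samples
    mu, logvar = vae.encode(fake)      # assumed VAE encoder interface
    return vae.decode(mu)              # decode the mean latent for a
                                       # deterministic refinement
```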


2022, pp. 98-110
Author(s): Md Fazle Rabby, Md Abdullah Al Momin, Xiali Hei

Generative adversarial networks have been a highly active research topic in computer vision, especially for image synthesis and image-to-image translation. There are many variations of generative nets, and different GANs suit different applications. In this chapter, the authors investigate conditional generative adversarial networks for generating fake images, such as handwritten signatures. They demonstrate an implementation of conditional generative adversarial networks that can generate fake handwritten signatures according to a condition vector tailored by humans.
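
A conditional GAN differs from a plain GAN only in that the condition vector is fed to both networks; a minimal generator sketch follows. The layer sizes and fully connected architecture are illustrative assumptions, not the chapter's network.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal cGAN generator: the condition vector c (e.g. a one-hot
    label describing the desired signature) is concatenated with the
    noise vector z before synthesis."""
    def __init__(self, z_dim=100, c_dim=10, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),  # pixels in [-1, 1]
        )

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))
```

The discriminator receives the same condition vector alongside the image, so it learns to judge (image, condition) pairs rather than images alone.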


2021, Vol 54 (2), pp. 1-38
Author(s): Zhengwei Wang, Qi She, Tomás E. Ward

Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation, and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stabilizing training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of this field based on progress toward addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress toward critical computer vision application requirements. In doing so, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success, along with some suggestions for future research directions. Code related to the GAN variants studied in this work is summarized at https://github.com/sheqi/GAN_Review.
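
For reference, the loss-variant GANs surveyed here are modifications of the original minimax objective (a standard formulation, not text from this review):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Architecture-variant GANs keep this objective and change G and D, while loss-variant GANs (e.g. WGAN, LSGAN) replace the log terms to ease the vanishing-gradient and instability problems the review discusses.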


2020, Vol 1, pp. 6
Author(s): Alexander Hepburn, Valero Laparra, Ryan McConville, Raul Santos-Rodriguez

In recent years, there has been growing interest in image generation through deep learning. While an important part of the evaluation of the generated images usually involves visual inspection, the inclusion of human perception as a factor in the training process is often overlooked. In this paper, we propose an alternative perceptual regulariser for image-to-image translation using conditional generative adversarial networks (cGANs). To do so automatically (avoiding visual inspection), we use the Normalised Laplacian Pyramid Distance (NLPD) to measure the perceptual similarity between the generated image and the original image. The NLPD is based on the principle of normalising the value of coefficients with respect to a local estimate of mean energy at different scales, and it has already been tested successfully in experiments involving human perception. We compare this regulariser with the originally proposed L1 distance and find that, when using NLPD, the generated images contain more realistic values for both local and global contrast.
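
A rough sketch of NLPD as a training regulariser follows, with our own simplifications: a binomial blur kernel stands in for the published filters, and the local normalisation constant `eps` is illustrative; the exact coefficients in the paper differ.

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(x, levels=5):
    """Band-pass decomposition via repeated blur-and-downsample."""
    k = torch.tensor([1.0, 4.0, 6.0, 4.0, 1.0])
    k = (k[:, None] * k[None, :]) / 256.0            # 5x5 binomial kernel
    k = k[None, None].repeat(x.shape[1], 1, 1, 1).to(x)  # one per channel
    bands = []
    for _ in range(levels):
        blur = F.conv2d(F.pad(x, (2, 2, 2, 2), mode="reflect"),
                        k, groups=x.shape[1])
        down = blur[..., ::2, ::2]
        up = F.interpolate(down, size=x.shape[-2:], mode="bilinear",
                           align_corners=False)
        bands.append(x - up)                         # detail at this scale
        x = down
    bands.append(x)                                  # low-pass residual
    return bands

def nlpd_loss(generated, target, eps=0.17):
    """Normalise each band by a local estimate of its mean energy,
    then compare generated and target images band by band."""
    loss = 0.0
    bands_g = laplacian_pyramid(generated)
    bands_t = laplacian_pyramid(target)
    for bg, bt in zip(bands_g, bands_t):
        ng = bg / (eps + F.avg_pool2d(bg.abs(), 3, stride=1, padding=1))
        nt = bt / (eps + F.avg_pool2d(bt.abs(), 3, stride=1, padding=1))
        loss = loss + F.mse_loss(ng, nt)
    return loss / len(bands_g)
```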


Mathematics, 2021, Vol 9 (4), pp. 325
Author(s): Ángel González-Prieto, Alberto Mozo, Edgar Talavera, Sandra Gómez-Canaval

Generative Adversarial Networks (GANs) are powerful machine learning models capable of generating fully synthetic samples of a desired phenomenon at high resolution. Despite their success, the training process of a GAN is highly unstable, and typically several accessory heuristics must be applied to the networks to reach acceptable convergence. In this paper, we introduce a novel method to analyze convergence and stability in the training of generative adversarial networks. For this purpose, we propose to decompose the objective function of the adversarial min–max game defining a periodic GAN into its Fourier series. By studying the dynamics of the truncated Fourier series for the continuous alternating gradient descent algorithm, we are able to approximate the real flow and to identify the main features of GAN convergence. This approach is confirmed empirically by studying the training flow in a 2-parametric GAN aiming to generate an unknown exponential distribution. As a by-product, we show that convergent orbits in GANs are small perturbations of periodic orbits, so the Nash equilibria are spiral attractors. This theoretically justifies the slow and unstable training observed in GANs.
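
To make the idea concrete, here is the shape of the decomposition in our own (assumed) notation, not necessarily the paper's: for a GAN whose objective V is periodic in the generator and discriminator parameters, it admits a Fourier expansion, and the continuous alternating-descent dynamics follow the gradient flow of its truncation.

```latex
V(\theta_g, \theta_d) \;=\; \sum_{k,\,l \in \mathbb{Z}} c_{k,l}\,
    e^{\,i (k \theta_g + l \theta_d)},
\qquad
\dot{\theta}_g = -\frac{\partial V}{\partial \theta_g}, \quad
\dot{\theta}_d = +\frac{\partial V}{\partial \theta_d}.
```

Truncating the series to a few low-order harmonics yields an analytically tractable approximation of the real training flow, whose orbits can then be compared with the spiral behaviour near the Nash equilibria.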


Author(s): Johannes Haubold, René Hosch, Lale Umutlu, Axel Wetter, Patrizia Haubold, et al.

Objectives: To reduce the dose of intravenous iodine-based contrast media (ICM) in CT through virtual contrast-enhanced images generated by generative adversarial networks.
Methods: Dual-energy CTs in the arterial phase of 85 patients were randomly split into an 80/20 train/test collective. Four generative adversarial networks (GANs) were trained on image pairs, each comprising one image with virtually reduced ICM and the original full-ICM CT slice, testing two input formats (2D and 2.5D) and two reduced ICM dose levels (−50% and −80%). The amount of intravenous ICM was virtually reduced by creating a virtual non-contrast series from the dual-energy data and adding the corresponding percentage of the iodine map. The evaluation was based on scores that assess image quality and similarity (L1 loss, SSIM, PSNR, FID). Additionally, a visual Turing test (VTT) with three radiologists was used to assess similarity and pathological consistency.
Results: The −80% models reach an SSIM of >98%, a PSNR of >48, an L1 loss between 7.5 and 8, and an FID between 1.6 and 1.7. In comparison, the −50% models reach an SSIM of >99%, a PSNR of >51, an L1 loss between 6.0 and 6.1, and an FID between 0.8 and 0.95. For the crucial question of pathological consistency, only the −50% ICM reduction networks achieved 100% consistency, which is required for clinical use.
Conclusions: The required amount of ICM for CT can be reduced by 50% while maintaining image quality and diagnostic accuracy using GANs. Further phantom studies and animal experiments are required to confirm these initial results.
Key Points:
• The amount of contrast media required for CT can be reduced by 50% using generative adversarial networks.
• Not only the image quality but especially the pathological consistency must be evaluated to assess safety.
• Too pronounced a contrast media reduction (−80% in our collective) could compromise pathological consistency.
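
The construction of the virtually reduced-dose training inputs follows directly from the dual-energy description in the Methods; a schematic sketch (array handling and names are ours, not the study's pipeline):

```python
import numpy as np

def virtual_reduced_icm(vnc: np.ndarray, iodine_map: np.ndarray,
                        dose_fraction: float) -> np.ndarray:
    """Simulate a reduced-contrast-dose CT slice from dual-energy data:
    virtual non-contrast (VNC) series plus a scaled iodine map.
    dose_fraction=0.5 corresponds to the -50% ICM level."""
    return vnc + dose_fraction * iodine_map

# The GAN is then trained on pairs (reduced, full):
#   reduced = virtual_reduced_icm(vnc, iodine_map, 0.5)   # input
#   full    = virtual_reduced_icm(vnc, iodine_map, 1.0)   # target
```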

