Smooth Deep Image Generator from Noises

Author(s):  
Tianyu Guo ◽  
Chang Xu ◽  
Boxin Shi ◽  
Chao Xu ◽  
Dacheng Tao

Generative Adversarial Networks (GANs) have demonstrated a strong ability to fit complex distributions since they were introduced, especially in the field of generating natural images. Linear interpolation in the noise space produces a continuous change in the image space, which is an impressive property of GANs. However, this property receives no special consideration in the objective function of GANs or their derived models. This paper analyzes perturbations of the generator's input and their influence on the generated images. A smooth generator is then developed by investigating the tolerable input perturbation. We further integrate this smooth generator with a gradient-penalized discriminator, and design a smooth GAN that generates stable and high-quality images. Experiments on real-world image datasets demonstrate the necessity of studying the smooth generator and the effectiveness of the proposed algorithm.
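To make the interpolation property concrete, the following minimal sketch walks a straight line between two latent codes and decodes each point with a generator. The generator `G`, the latent dimensionality of 128, and the number of steps are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: linear interpolation in a GAN's noise (latent) space.
# `G` is a hypothetical pre-trained generator mapping latent vectors to images.
import torch

def interpolate_latents(G, z_start, z_end, steps=8):
    """Generate images along a straight line between two latent codes."""
    alphas = torch.linspace(0.0, 1.0, steps)
    images = []
    with torch.no_grad():
        for a in alphas:
            z = (1 - a) * z_start + a * z_end   # linear interpolation in noise space
            images.append(G(z.unsqueeze(0)))    # decode each interpolated code
    return torch.cat(images, dim=0)

# Illustrative usage:
# z0, z1 = torch.randn(128), torch.randn(128)
# frames = interpolate_latents(G, z0, z1, steps=16)
```

Under a smooth generator, small changes of the interpolation coefficient should produce correspondingly small changes in the decoded images, which is the property the paper's objective makes explicit.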

2021 ◽  
Author(s):  
Zhengyang Wang ◽  
Qingchang Guo ◽  
Min Lei ◽  
Shuxiang Guo ◽  
Xiufen Ye

2021 ◽  
Vol 8 (1) ◽  
pp. 3-31
Author(s):  
Yuan Xue ◽  
Yuan-Chen Guo ◽  
Han Zhang ◽  
Tao Xu ◽  
Song-Hai Zhang ◽  
...  

In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. Classically, works that allow such automatic image content generation have followed a framework of image retrieval and composition; recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works on image synthesis from intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross-fertilization between major image generation paradigms, and evaluation and comparison of generation methods.


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260308
Author(s):  
Mauro Castelli ◽  
Luca Manzoni ◽  
Tatiane Espindola ◽  
Aleš Popovič ◽  
Andrea De Lorenzo

Wireless networks are among the fundamental technologies used to connect people. Given the constant advancements in the field, telecommunication operators must guarantee a high-quality service to retain their customer portfolio. To ensure this quality of service, it is common to establish partnerships with specialized technology companies that deliver software services to monitor the networks and identify faults and their solutions. A common barrier faced by these specialized companies is the lack of data with which to develop and test their products. This paper investigates the use of generative adversarial networks (GANs), which are state-of-the-art generative models, for generating synthetic telecommunication data related to Wi-Fi signal quality. We developed, trained, and compared two of the most widely used GAN architectures: the Vanilla GAN and the Wasserstein GAN (WGAN). Both models produced satisfactory results and were able to generate synthetic data similar to the real data. In particular, the distribution of the synthetic data overlaps the distribution of the real data for all of the considered features, and the generative models reproduce, in the synthetic features, the associations observed among the real features. We chose the WGAN as the final model, but both models are suitable for addressing the problem at hand.
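As a rough illustration of the WGAN objective applied to tabular data such as Wi-Fi quality features, the sketch below implements the critic and generator updates with weight clipping, as in the original WGAN formulation. The network sizes, feature dimensionality, and hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Minimal WGAN sketch for feature-vector (tabular) data.
import torch
import torch.nn as nn

FEATURES, LATENT = 10, 32  # assumed dimensionality of the features and noise

generator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, FEATURES))
critic    = nn.Sequential(nn.Linear(FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def critic_step(real_batch):
    """One critic update: maximize E[critic(real)] - E[critic(fake)]."""
    z = torch.randn(real_batch.size(0), LATENT)
    fake = generator(z).detach()
    loss = -(critic(real_batch).mean() - critic(fake).mean())
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    # Weight clipping enforces a crude Lipschitz constraint (original WGAN).
    for p in critic.parameters():
        p.data.clamp_(-0.01, 0.01)
    return loss.item()

def generator_step(batch_size):
    """One generator update: maximize E[critic(fake)]."""
    z = torch.randn(batch_size, LATENT)
    loss = -critic(generator(z)).mean()
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```

In practice the critic is usually updated several times per generator update; the Vanilla GAN baseline differs mainly in replacing the Wasserstein losses with the standard binary cross-entropy objective.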


2021 ◽  
Vol 11 (5) ◽  
pp. 2013
Author(s):  
Euihyeok Lee ◽  
Seungwoo Kang

What if the windows of our cars were magic windows that transform the dark views outside at night into bright ones, as we would see them in the daytime? To realize such a window, one important requirement is that the stream of transformed images displayed on the window be of high enough quality that users perceive it as a real daytime scene. Although image-to-image translation techniques based on Generative Adversarial Networks (GANs) have been widely studied, night-to-day image translation is still a challenging task. In this paper, we propose Daydriex, a processing pipeline that generates enhanced daytime translations focusing on road views. Our key idea is to supplement the missing information in dark areas of the input image frames by using existing daytime images, corresponding to the input images, obtained from street view services. We present a detailed processing flow and address several issues that arise in realizing our idea. Our evaluation shows that the results produced by Daydriex achieve lower Fréchet Inception Distance (FID) scores and higher user perception scores than those obtained by CycleGAN alone.
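For reference, FID compares the Gaussian statistics of Inception features extracted from real and generated images. The sketch below computes the distance from pre-extracted feature matrices; the feature-extraction step is omitted, and the array names `feats_real` / `feats_fake` are assumptions for illustration.

```python
# Minimal FID sketch from pre-extracted Inception feature matrices
# (rows = images, columns = feature dimensions).
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2*(C_r C_f)^(1/2))."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerical error
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower values indicate that the generated image statistics are closer to those of real daytime images, which is how the comparison against the CycleGAN-only baseline is scored.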


2018 ◽  
Author(s):  
Ingo Fruend ◽  
Elee Stalker

Humans are remarkably well tuned to the statistical properties of natural images. However, quantitative characterization of processing within the domain of natural images has been difficult because most parametric manipulations of a natural image make that image appear less natural. We used generative adversarial networks (GANs) to constrain parametric manipulations to remain within an approximation of the manifold of natural images. In the first experiment, 7 observers decided which one of two perturbed synthetic images matched an unperturbed synthetic comparison image. Observers were significantly more sensitive to perturbations that were constrained to an approximate manifold of natural images than to perturbations applied directly in pixel space. Trial-by-trial errors were consistent with the idea that these perturbations disrupt configural aspects of visual structure used in image segmentation. In a second experiment, 5 observers discriminated paths along the image manifold as recovered by the GAN. Observers were remarkably good at this task, confirming that they were tuned to fairly detailed properties of an approximate manifold of natural images. We conclude that human tuning to natural images is more general than detecting deviations from natural appearance, and that humans have, to some extent, access to detailed interrelations between natural images.
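A minimal sketch of the two kinds of perturbation contrasted in the first experiment is given below: noise added directly in pixel space versus noise added to a latent code and decoded by a generator, so that the perturbed image stays near the generator's approximation of the natural-image manifold. The generator `G` and the noise scale are hypothetical placeholders, not the authors' actual stimulus-generation procedure.

```python
# Sketch: pixel-space perturbation vs. manifold-constrained perturbation.
import torch

def perturb_pixels(image, eps=0.05):
    """Add small noise directly in pixel space (generally leaves the manifold)."""
    return (image + eps * torch.randn_like(image)).clamp(0.0, 1.0)

def perturb_on_manifold(G, z, eps=0.05):
    """Perturb the latent code and re-generate, so the result stays near the
    generator's approximation of the natural-image manifold."""
    with torch.no_grad():
        return G((z + eps * torch.randn_like(z)).unsqueeze(0))
```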

