Hierarchical Modes Exploring in Generative Adversarial Networks

2020 ◽  
Vol 34 (07) ◽  
pp. 10981-10988
Author(s):  
Mengxiao Hu ◽  
Jinlong Li ◽  
Maolin Hu ◽  
Tao Hu

In conditional Generative Adversarial Networks (cGANs), when two different initial noises are concatenated with the same conditional information, the distance between their outputs is relatively small, which makes minor modes likely to collapse into large modes. To prevent this from happening, we propose a hierarchical mode exploring method to alleviate mode collapse in cGANs by introducing a diversity measurement into the objective function as a regularization term. We also introduce the Expected Ratios of Expansion (ERE) into the regularization term; by minimizing the sum of differences between the real change of distance and the ERE, we can control the diversity of generated images w.r.t. specific-level features. We validated the proposed algorithm on four conditional image synthesis tasks, including categorical generation, paired and unpaired image translation, and text-to-image generation. Both qualitative and quantitative results show that the proposed method is effective in alleviating the mode collapse problem in cGANs, and can control the diversity of output images w.r.t. specific-level features.
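A minimal PyTorch sketch of the regularization idea (not the authors' implementation): it compares the realized expansion ratio, i.e. output distance over noise distance under the same condition, against a target Expected Ratio of Expansion. The function name, signature, and the exact distance measure are assumptions made for illustration.

```python
import torch

def ere_regularizer(generator, cond, z1, z2, ere, eps=1e-8):
    """Hedged sketch: penalize the gap between the realized expansion ratio
    (output distance / noise distance) and a target Expected Ratio of
    Expansion (ERE), so diversity of outputs can be tuned."""
    out1, out2 = generator(z1, cond), generator(z2, cond)
    expansion = torch.mean(torch.abs(out1 - out2)) / (torch.mean(torch.abs(z1 - z2)) + eps)
    return torch.abs(expansion - ere)

# Usage sketch: total generator loss = adversarial loss + lambda * regularizer
# loss_G = loss_adv + lam * ere_regularizer(G, y, z1, z2, ere=2.0)
```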

2021 ◽  
Vol 8 (1) ◽  
pp. 3-31
Author(s):  
Yuan Xue ◽  
Yuan-Chen Guo ◽  
Han Zhang ◽  
Tao Xu ◽  
Song-Hai Zhang ◽  
...  

In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While such automatic image content generation has classically followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works on image synthesis given intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross-fertilization between major image generation paradigms, and the evaluation and comparison of generation methods.


2021 ◽  
Vol 40 ◽  
pp. 03017
Author(s):  
Amogh Parab ◽  
Ananya Malik ◽  
Arish Damania ◽  
Arnav Parekhji ◽  
Pranit Bari

Through various examples in history, such as early man's carvings on caves, the dependence on diagrammatic representations, and the immense popularity of comic books, we have seen that vision has a greater reach in communication than written words. In this paper, we analyse and propose a new task of transferring information from text to image synthesis. We aim to generate a story from a single sentence and convert the generated story into a sequence of images, using state-of-the-art techniques to implement this task. With the advent of Generative Adversarial Networks, text-to-image synthesis has found a new awakening. We take this task a step further in order to automate the entire process: our system generates a multi-lined story from a single sentence using a deep neural network, and this story is then fed into a network of multi-stage GANs to produce a photorealistic image sequence.
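The two-stage pipeline described above can be summarised with the following toy Python sketch; both stages are stubs standing in for the actual story-generation network and the stacked text-to-image GANs, and all names are illustrative.

```python
from typing import List

def expand_to_story(sentence: str, num_lines: int = 5) -> List[str]:
    """Stand-in for the story-generation network: expands one input
    sentence into a multi-line story (a trivial stub for illustration)."""
    return [f"{sentence} (scene {i + 1})" for i in range(num_lines)]

def story_to_images(story: List[str], text_to_image) -> List[str]:
    """Feed each generated story line into a (multi-stage) text-to-image GAN."""
    return [text_to_image(line) for line in story]

if __name__ == "__main__":
    # A dummy synthesiser stands in for the stacked GAN stages here.
    dummy_gan = lambda text: f"<image for: {text}>"
    print(story_to_images(expand_to_story("A boy finds a lost puppy."), dummy_gan))
```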


2022 ◽  
pp. 98-110
Author(s):  
Md Fazle Rabby ◽  
Md Abdullah Al Momin ◽  
Xiali Hei

Generative adversarial networks have been a highly active research topic in computer vision, especially in image synthesis and image-to-image translation. There are many variants of generative networks, and different GANs suit different applications. In this chapter, the authors investigated conditional generative adversarial networks to generate fake images, such as handwritten signatures. The authors demonstrated an implementation of conditional generative adversarial networks that can generate fake handwritten signatures according to a condition vector tailored by humans.
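For illustration, a minimal PyTorch sketch of a conditional generator of the kind described, with the noise vector concatenated to a human-specified condition vector; the layer sizes and architecture are assumptions, not the chapter's actual network.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal cGAN generator sketch: noise z concatenated with a
    condition vector y, mapped to a flattened image. Sizes are
    illustrative, not the chapter's actual architecture."""
    def __init__(self, z_dim=100, cond_dim=10, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, img_dim),
            nn.Tanh(),                      # outputs in [-1, 1]
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))

# Usage: one fake "signature" image for a chosen condition vector.
G = ConditionalGenerator()
z = torch.randn(1, 100)
y = torch.zeros(1, 10); y[0, 3] = 1.0       # e.g. one-hot condition
fake = G(z, y)                               # shape (1, 784)
```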


2021 ◽  
Vol 54 (2) ◽  
pp. 1-38
Author(s):  
Zhengwei Wang ◽  
Qi She ◽  
Tomás E. Ward

Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation, and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stabilizing training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state-of-the-art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of this field based on progress toward addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress toward critical computer vision application requirements. As we do this, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success, along with some suggestions for future research directions. Code related to the GAN variants studied in this work is summarized at https://github.com/sheqi/GAN_Review.


In the recent past, text-to-image translation has been an active field of research. The ability of a network to understand a sentence's context and to create a specific picture that represents the sentence demonstrates the model's ability to think more like humans. Common text-to-image translation methods employ Generative Adversarial Networks to generate high-quality images, but the images produced do not always represent the meaning of the phrase provided to the model as input. Using a captioning network to caption the generated images, we tackle this problem and exploit the gap between ground-truth captions and generated captions to further improve the network. We present detailed comparisons between our system and existing methods. Text-to-image synthesis remains a difficult problem with plenty of room for progress despite the current state-of-the-art results. Images synthesized by current methods give a rough sketch of the described image but do not capture the true essence of what the text describes. The recent achievements of Generative Adversarial Networks (GANs) demonstrate that they are a good candidate architecture for approaching this problem.
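A hedged PyTorch sketch of how the described caption-consistency signal could be computed; the captioner interface, token format, and loss form are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def caption_consistency_loss(captioner, fake_images, gt_caption_tokens):
    """Hypothetical sketch: run a pretrained captioning network on generated
    images and penalize disagreement with the ground-truth caption tokens
    (integer class indices), pushing the generator to render the sentence's
    actual content. `captioner` is assumed to return per-token logits of
    shape (batch, seq_len, vocab)."""
    logits = captioner(fake_images)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        gt_caption_tokens.reshape(-1),
    )

# Usage sketch: loss_G = loss_adv + lambda_cap * caption_consistency_loss(captioner, G(z, text), tokens)
```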


As adversarial networks gain attention as a general-purpose solution to image-to-image translation problems, we extend our research in this direction. Image-to-image translation is a computer-vision problem that we address with neural networks. We adopt Generative Adversarial Networks as a solution for image-to-image translation, where the aim is to learn a mapping between an input image (X) and an output image (Y) using a set of predefined pairs [4]. A paired dataset is, however, not always available, which is where adversarial methods come into play. We therefore present a method that can convert an image from a domain X to another domain Y, and recover it, in the absence of paired datasets. Our objective is to learn a mapping function G: A → B such that the distribution of images G(A) is indistinguishable from the distribution of B under an adversarial loss [1]. Because this mapping alone is highly under-constrained, we introduce an inverse mapping function F: B → A and a cycle-consistency loss [7], as sketched below. Furthermore, we wish to extend our research to various domains and combine it with neural style transfer and semantic image synthesis. Our essential contribution is to show that on a wide assortment of problems, conditional GANs produce sensible results. This paper thus draws attention to the task of converting image X to image Y, and we commit to transfer learning on the training dataset and to optimising our code. You can find the source code for the same here.
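A minimal PyTorch sketch of the cycle-consistency term described above, used alongside the adversarial losses on G(A) and F(B); the choice of an L1 distance is an assumption for illustration.

```python
import torch

def cycle_consistency_loss(G, F, real_a, real_b):
    """Cycle-consistency for unpaired translation: an image mapped A -> B by G
    and back B -> A by F should return to itself, and vice versa."""
    loss_a = torch.mean(torch.abs(F(G(real_a)) - real_a))   # A -> B -> A
    loss_b = torch.mean(torch.abs(G(F(real_b)) - real_b))   # B -> A -> B
    return loss_a + loss_b
```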


2021 ◽  
Vol 37 ◽  
pp. 01005
Author(s):  
M. Krithika alias Anbu Devi ◽  
K. Suganthi

Generative Adversarial Networks (GANs) are one of the most effective methods for generating large numbers of high-quality artificial images. For diagnosing particular diseases from medical images, a general problem is that collecting data is expensive, time-consuming, and may involve high radiation dosage. GANs are therefore a deep learning method that has been developed for image-to-image translation, i.e. from low-resolution to high-resolution images, for example generating magnetic resonance images (MRI) from computed tomography (CT) images, or 7T from 3T MRI, which can be used to obtain multimodal datasets from a single modality. In this review paper, different GAN architectures for medical image analysis are discussed.


2018 ◽  
Vol 35 (12) ◽  
pp. 2141-2149 ◽  
Author(s):  
Hao Yuan ◽  
Lei Cai ◽  
Zhengyang Wang ◽  
Xia Hu ◽  
Shaoting Zhang ◽  
...  

Motivation: Cellular function is closely related to the localizations of its sub-structures. It is, however, challenging to experimentally label all sub-cellular structures simultaneously in the same cell. This raises the need to build a computational model that learns the relationships among these sub-cellular structures and uses reference structures to infer the localizations of other structures.
Results: We formulate such a task as a conditional image generation problem and propose to use conditional generative adversarial networks for tackling it. We employ an encoder-decoder network as the generator and propose to use skip connections between the encoder and decoder to provide spatial information to the decoder. To incorporate the conditional information in a variety of different ways, we develop three different types of skip connections, known as the self-gated connection, encoder-gated connection, and label-gated connection. The proposed skip connections are built based on the conditional information using gating mechanisms. By learning a gating function, the network is able to control what information should be passed through the skip connections from the encoder to the decoder. Since the gate parameters are also learned automatically, we expect that only useful spatial information is transmitted to the decoder to help image generation. We perform both qualitative and quantitative evaluations to assess the effectiveness of our proposed approaches. Experimental results show that our cGAN-based approaches are able to generate the desired sub-cellular structures correctly. Our results also demonstrate that the proposed approaches outperform the existing approach based on adversarial auto-encoders, and that the new skip connections lead to improved performance. In addition, the localizations of sub-cellular structures generated by our approaches are consistent with observations in biological experiments.
Availability and implementation: The source code and more results are available at https://github.com/divelab/cgan/.
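As an illustration of the gating idea, a minimal PyTorch sketch of a label-gated skip connection; shapes, layer choices, and names are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class LabelGatedSkip(nn.Module):
    """Illustrative gated skip connection: a learned gate, computed from the
    conditional label, decides how much of an encoder feature map is passed
    to the decoder."""
    def __init__(self, channels, label_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(label_dim, channels), nn.Sigmoid())

    def forward(self, enc_feat, label):
        # enc_feat: (B, C, H, W); label: (B, label_dim)
        g = self.gate(label).unsqueeze(-1).unsqueeze(-1)   # gate per channel: (B, C, 1, 1)
        return g * enc_feat                                 # gated skip features

# Usage sketch: gated = LabelGatedSkip(64, 10)(encoder_features, condition_label)
```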


2020 ◽  
Vol 1 ◽  
pp. 6
Author(s):  
Alexander Hepburn ◽  
Valero Laparra ◽  
Ryan McConville ◽  
Raul Santos-Rodriguez

In recent years there has been a growing interest in image generation through deep learning. While an important part of the evaluation of the generated images usually involves visual inspection, the inclusion of human perception as a factor in the training process is often overlooked. In this paper we propose an alternative perceptual regulariser for image-to-image translation using conditional generative adversarial networks (cGANs). To do so automatically (avoiding visual inspection), we use the Normalised Laplacian Pyramid Distance (NLPD) to measure the perceptual similarity between the generated image and the original image. The NLPD is based on the principle of normalising the value of coefficients with respect to a local estimate of mean energy at different scales and has already been successfully tested in different experiments involving human perception. We compare this regulariser with the originally proposed L1 distance and note that when using NLPD the generated images contain more realistic values for both local and global contrast.
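A simplified PyTorch sketch of an NLPD-style distance, using average pooling both for the pyramid construction and for the local mean-energy estimate; this is an approximation for illustration, not the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(x, levels=3):
    """Simple Laplacian pyramid via average-pool downsampling and
    nearest-neighbour upsampling (a simplification of the original scheme)."""
    bands = []
    for _ in range(levels):
        down = F.avg_pool2d(x, 2)
        up = F.interpolate(down, size=x.shape[-2:], mode="nearest")
        bands.append(x - up)     # band-pass residual at this scale
        x = down
    bands.append(x)              # low-pass residual
    return bands

def nlpd(img_a, img_b, levels=3, eps=1e-6):
    """Hedged sketch of a Normalised Laplacian Pyramid Distance: each band is
    divided by a local estimate of mean energy before comparison."""
    bands_a = laplacian_pyramid(img_a, levels)
    bands_b = laplacian_pyramid(img_b, levels)
    dist = 0.0
    for a, b in zip(bands_a, bands_b):
        norm_a = a / (F.avg_pool2d(a.abs(), 3, stride=1, padding=1) + eps)
        norm_b = b / (F.avg_pool2d(b.abs(), 3, stride=1, padding=1) + eps)
        dist = dist + torch.mean((norm_a - norm_b) ** 2)
    return dist / len(bands_a)
```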


Author(s):  
J. D. Bermudez ◽  
P. N. Happ ◽  
D. A. B. Oliveira ◽  
R. Q. Feitosa

<p><strong>Abstract.</strong> Optical imagery is often affected by the presence of clouds. Aiming to reduce their effects, different reconstruction techniques have been proposed in the last years. A common alternative is to extract data from active sensors, like Synthetic Aperture Radar (SAR), because they are almost independent on the atmospheric conditions and solar illumination. On the other hand, SAR images are more complex to interpret than optical images requiring particular handling. Recently, Conditional Generative Adversarial Networks (cGANs) have been widely used in different image generation tasks presenting state-of-the-art results. One application of cGANs is learning a nonlinear mapping function from two images of different domains. In this work, we combine the fact that SAR images are hardly affected by clouds with the ability of cGANS for image translation in order to map optical images from SAR ones so as to recover regions that are covered by clouds. Experimental results indicate that the proposed solution achieves better classification accuracy than SAR based classification.</p>

