Review of Medical Image Synthesis using GAN Techniques

2021 · Vol 37 · pp. 01005
Author(s): M. Krithika alias Anbu Devi, K. Suganthi

Generative Adversarial Networks (GANs) are among the most efficient methods for generating massive quantities of high-quality artificial images. A general problem when diagnosing particular diseases from medical images is that collecting data is expensive, time-consuming, and can require high radiation dosages. GANs are deep learning models developed for image-to-image translation, i.e., from a low-resolution to a high-resolution image, for example generating a magnetic resonance image (MRI) from a computed tomography (CT) image, or a 7T MRI from a 3T MRI, which can be used to obtain multimodal datasets from a single modality. In this review paper, different GAN architectures for medical image analysis are discussed.
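As a concrete illustration of the image-to-image translation these architectures perform, below is a minimal PyTorch sketch of an encoder-decoder generator that maps one modality (e.g., CT) to another (e.g., MRI). The architecture, layer sizes, and 256x256 input are illustrative assumptions, not any specific network from the review.

```python
# Minimal encoder-decoder generator sketch for cross-modality translation
# (e.g., CT -> MRI). Sizes are illustrative assumptions, not the networks
# surveyed in the review.
import torch
import torch.nn as nn

class TranslationGenerator(nn.Module):
    def __init__(self, in_channels=1, out_channels=1, base=64):
        super().__init__()
        # Downsample: learn a compact representation of the source modality.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Upsample: decode into the target modality at the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    g = TranslationGenerator()
    ct_batch = torch.randn(2, 1, 256, 256)  # random stand-in for CT slices
    mri_like = g(ct_batch)
    print(mri_like.shape)  # torch.Size([2, 1, 256, 256])
```

In a full system this generator would be trained against a discriminator on real target-modality images; only the shape-preserving mapping is shown here.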

2022 · pp. 98-110
Author(s): Md Fazle Rabby, Md Abdullah Al Momin, Xiali Hei

Generative adversarial networks have been a highly focused research topic in computer vision, especially in image synthesis and image-to-image translation. There are many variations of generative nets, and different GANs are suitable for different applications. In this chapter, the authors investigate conditional generative adversarial networks for generating fake images, such as handwritten signatures. They demonstrate an implementation of conditional generative adversarial networks that can generate fake handwritten signatures according to a condition vector tailored by humans.
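A minimal PyTorch sketch of the conditioning mechanism described above: the human-tailored condition vector is concatenated with the noise before generation. The layer sizes, 100-dimensional noise, 10-dimensional one-hot condition, and flattened 28x28 output are assumptions for illustration, not the authors' implementation.

```python
# Minimal conditional-GAN generator sketch: the condition vector is
# concatenated with the noise before generation. All dimensions are
# illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, cond_dim=10, img_pixels=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, img_pixels),
            nn.Tanh(),
        )

    def forward(self, z, cond):
        # cond is the human-specified condition vector (e.g., signer identity)
        return self.net(torch.cat([z, cond], dim=1))

if __name__ == "__main__":
    g = ConditionalGenerator()
    z = torch.randn(4, 100)
    cond = torch.zeros(4, 10)
    cond[:, 3] = 1.0  # one-hot condition chosen by a human
    fakes = g(z, cond)
    print(fakes.shape)  # torch.Size([4, 784])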


In the recent past, text-to-image translation has been an active field of research. The ability of a network to understand the context of a sentence and to create a specific picture that represents it demonstrates the model's ability to think more like humans. Common text-to-image translation methods employ Generative Adversarial Networks to generate high-quality images, but the images produced do not always represent the meaning of the phrase given to the model as input. We tackle this problem by using a captioning network to caption the generated images, and we exploit the gap between the ground-truth captions and the generated captions to further enhance the network. We present detailed comparisons between our system and existing methods. Text-to-image synthesis is a difficult problem with plenty of room for progress despite the current state-of-the-art results: images synthesized by current methods give a rough sketch of the described image but do not capture the true essence of what the text describes. The recent success of Generative Adversarial Networks (GANs) demonstrates that they are a good candidate architecture for approaching this problem.
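To make the captioning-feedback idea concrete, here is a hedged PyTorch sketch of the extra generator loss: a captioning network re-captions the generated images, and the gap to the ground-truth captions is penalized. The captioner interface and the loss weight are hypothetical stand-ins, not the paper's actual model.

```python
# Hedged sketch of the caption-feedback loss described above. `captioner`
# is a hypothetical stand-in for any captioning network that returns
# per-token logits for a batch of images.
import torch
import torch.nn.functional as F

def caption_consistency_loss(captioner, fake_images, gt_tokens):
    """
    captioner(fake_images) -> logits of shape (batch, seq_len, vocab_size)
    gt_tokens: ground-truth caption token ids, shape (batch, seq_len)
    """
    logits = captioner(fake_images)
    # Cross-entropy between re-generated captions and ground-truth captions:
    # a large gap means the image does not reflect the input sentence.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch*seq_len, vocab)
        gt_tokens.reshape(-1),                # (batch*seq_len,)
    )

# Assumed usage: total generator loss = adversarial loss plus a weighted
# caption term, e.g.
#   g_loss = adv_loss + 0.5 * caption_consistency_loss(captioner, fakes, toks)
```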


In this burgeoning age, where society is increasingly interested in the benefits of adversarial networks, we extend our research toward adversarial networks as a general-purpose solution to image-to-image translation problems. Image-to-image translation is a computer vision task, extending our work into the field of neural networks. We adopt generative adversarial networks as a solution for image-to-image translation, where our motive is to learn a mapping between an input image (X) and an output image (Y) using a set of predefined pairs [4]. However, a paired dataset is not always available, and this is where unpaired adversarial methods come into play. We therefore advance a method that can convert and recover an image from a domain X to another domain Y in the absence of paired datasets. Our objective is to learn a mapping function G: A→B such that the distribution of images from G(A) is indistinguishable from the distribution of B under an adversarial loss [1]. Because this mapping is highly under-constrained, we introduce an inverse mapping function F: B→A and a cycle-consistency loss [7], sketched below. Furthermore, we wish to extend our research to various domains and combine them with neural style transfer and semantic image synthesis. Our essential contribution is to show that, on a wide assortment of problems, conditional GANs produce sensible outcomes. This paper hence calls attention to the task of converting image X to image Y, and we commit to transfer learning on the training dataset and to optimising our code. The source code can be found here.
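The cycle-consistency objective described above can be written in a few lines of PyTorch. `G` and `F_inv` stand for the two mappings G: A→B and F: B→A; the L1 reconstruction penalty and the weight lambda_cyc = 10 follow common practice and are assumptions here, not details confirmed by this text.

```python
# Minimal sketch of the cycle-consistency objective for unpaired translation:
# G: A -> B and F: B -> A should approximately invert each other.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, real_a, real_b, lambda_cyc=10.0):
    # forward cycle: a -> G(a) -> F(G(a)) should recover a
    rec_a = F_inv(G(real_a))
    # backward cycle: b -> F(b) -> G(F(b)) should recover b
    rec_b = G(F_inv(real_b))
    # L1 reconstruction error on both cycles, scaled by an assumed weight
    return lambda_cyc * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))
```

This term is added to the adversarial losses of both generators so that each mapping stays invertible rather than collapsing many inputs to one output.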


2020 · Vol 34 (07) · pp. 10981-10988
Author(s): Mengxiao Hu, Jinlong Li, Maolin Hu, Tao Hu

In conditional Generative Adversarial Networks (cGANs), when two different initial noises are concatenated with the same conditional information, the distance between their outputs is relatively small, which makes minor modes likely to collapse into large modes. To prevent this from happening, we propose a hierarchical mode-exploring method that alleviates mode collapse in cGANs by introducing a diversity measurement into the objective function as a regularization term. We also introduce the Expected Ratios of Expansion (ERE) into the regularization term: by minimizing the sum of differences between the real change of distance and the ERE, we can control the diversity of the generated images with respect to specific-level features. We validated the proposed algorithm on four conditional image synthesis tasks, including categorical generation, paired and unpaired image translation, and text-to-image generation. Both qualitative and quantitative results show that the proposed method is effective in alleviating mode collapse in cGANs and can control the diversity of output images with respect to specific-level features.
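Since the full hierarchical/ERE scheme involves per-level scheduling that the abstract does not spell out, below is only a simplified mode-seeking-style regularizer in the spirit of the diversity term described above: it penalizes the generator when two different noises, paired with the same condition, produce outputs that are too close relative to the noise distance. This is an assumption-laden sketch, not the paper's ERE formulation.

```python
# Simplified diversity regularizer in the spirit of the term described
# above (ERE scheduling omitted). Works with any generator(z, cond), e.g.
# the ConditionalGenerator sketched earlier.
import torch

def diversity_regularizer(generator, cond, z1, z2, eps=1e-8):
    out1 = generator(z1, cond)
    out2 = generator(z2, cond)
    # mean L1 distance between the two generated outputs
    img_dist = torch.mean(torch.abs(out1 - out2))
    # mean L1 distance between the two noise inputs
    noise_dist = torch.mean(torch.abs(z1 - z2))
    # Encouraging a large output distance per unit of noise distance keeps
    # distinct noises mapped to distinct modes; we return the negative ratio
    # so that minimizing it increases diversity.
    return -img_dist / (noise_dist + eps)
```

Added to the usual cGAN generator loss with a small weight, this term discourages distinct noise vectors under the same condition from collapsing onto the same output mode.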

