A SAR-to-Optical Image Translation Method Based on Conditional Generation Adversarial Network (cGAN)

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 60338-60343 ◽  
Author(s):  
Yu Li ◽  
Randi Fu ◽  
Xiangchao Meng ◽  
Wei Jin ◽  
Feng Shao


Author(s):  
Masoumeh Zareapoor ◽  
Jie Yang

Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges are associated with these problems and need to be dealt with: a lack of paired datasets, multimodality, and diversity. Convolutional neural networks (CNNs), despite their great performance in many computer vision tasks, fail to capture the hierarchy of spatial relationships between different parts of an object and thus do not form the ideal representative model we look for. This article presents a new variation of generative models that aims to remedy this problem. We use a trainable transformer, which explicitly allows the spatial manipulation of data during training. This differentiable module can be augmented into the convolutional layers of the generative model, and it allows the generated distributions to be freely altered for image-to-image translation. To reap the benefits of the proposed module in the generative model, our architecture incorporates a new loss function that facilitates effective end-to-end generative learning for image-to-image translation. The proposed model is evaluated through comprehensive experiments on image synthesis and image-to-image translation, along with comparisons with several state-of-the-art algorithms.
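As an illustration of the differentiable spatial-manipulation idea this abstract describes, here is a minimal NumPy sketch of a spatial-transformer-style module (an affine sampling grid followed by bilinear sampling); it is a generic sketch, not the authors' implementation:

```python
import numpy as np

def affine_grid(theta, h, w):
    # Build normalized target coordinates in [-1, 1] and map them through
    # the 2x3 affine matrix theta, as in a spatial transformer.
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # (3, h*w)
    src = theta @ coords                                          # (2, h*w)
    return src[0].reshape(h, w), src[1].reshape(h, w)

def bilinear_sample(img, sx, sy):
    # Differentiable (here: plain NumPy) bilinear sampling at normalized coords.
    h, w = img.shape
    px = (sx + 1) * (w - 1) / 2   # back to pixel coordinates
    py = (sy + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(px).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, h - 2)
    dx, dy = px - x0, py - y0
    return (img[y0, x0] * (1 - dx) * (1 - dy)
            + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy
            + img[y0 + 1, x0 + 1] * dx * dy)

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
sx, sy = affine_grid(identity, 4, 4)
warped = bilinear_sample(img, sx, sy)  # identity transform reproduces the input
```

In a generator, `theta` would be predicted by a small localization network, letting the model learn spatial manipulations end to end.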


2020 ◽  
Vol 34 (07) ◽  
pp. 11490-11498
Author(s):  
Che-Tsung Lin ◽  
Yen-Yi Wu ◽  
Po-Hao Hsu ◽  
Shang-Hong Lai

Unpaired image-to-image translation has proven quite effective in boosting a CNN-based object detector for a different domain by means of data augmentation that can well preserve the objects in the translated images. Recently, multimodal GAN (generative adversarial network) models have been proposed and were expected to further boost detector accuracy by generating a diverse collection of images in the target domain, given only a single labelled image in the source domain. However, images generated by multimodal GANs can achieve even worse detection accuracy than those produced by a unimodal GAN with better object preservation. In this work, we introduce cycle-structure consistency for generating diverse and structure-preserving translated images across complex domains, such as between day and night, for object detector training. Qualitative results show that our model, Multimodal AugGAN, can generate diverse and realistic images for the target domain. For quantitative comparison, we train YOLO, Faster R-CNN and FCN models on the images generated by our method and by competing methods, and show that our model achieves significant improvement and outperforms the others on detection accuracy and FCN scores. We also demonstrate that our model provides more diverse object appearances in the target domain through comparison on a perceptual distance metric.
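The cycle-consistency idea underlying such unpaired translation models can be sketched with toy linear "generators"; this is a generic illustration of the cycle penalty, not the paper's cycle-structure loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generators": G maps domain X -> Y, F maps Y -> X.
G = np.array([[2.0, 0.0], [0.0, 0.5]])
F = np.linalg.inv(G)  # a perfect inverse, so the cycle loss is ~0

def cycle_loss(x, G, F):
    # L1 cycle-consistency: how far the round trip F(G(x)) drifts from x.
    return np.abs(F @ (G @ x) - x).mean()

x = rng.normal(size=(2, 8))     # a batch of 8 two-dimensional "images"
loss_good = cycle_loss(x, G, F)          # ~0: the mappings invert each other
loss_bad = cycle_loss(x, G, np.eye(2))   # > 0: a mismatched backward mapping
```

In training, this penalty is added to the adversarial losses so that translated images remain mappable back to their source, which is what preserves object structure for detector training.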


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 395 ◽  
Author(s):  
Naeem Ul Islam ◽  
Sungmin Lee ◽  
Jaebyung Park

Image-to-image translation based on deep learning has attracted interest in the robotics and vision community because of its potential impact on terrain analysis and image representation, interpretation, modification, and enhancement. Currently, the most successful approach to generating a translated image is a conditional generative adversarial network (cGAN) that trains an autoencoder with skip connections. Despite its impressive performance, this approach suffers from low accuracy, a lack of consistency, and imbalanced training. This paper proposes a balanced training strategy for image-to-image translation that yields an accurate and consistent network. The proposed approach uses two generators and a single discriminator. The generators translate images from one domain to another. The discriminator takes inputs in three different configurations and guides both generators to produce realistic images in their corresponding domains while ensuring high accuracy and consistency. Experiments are conducted on several datasets; in particular, the proposed approach outperforms the cGAN in realistic image translation in terms of accuracy and training consistency.
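The two-generator/one-discriminator setup can be sketched abstractly in NumPy; the linear "generators", the pair scorer, and the three input configurations below are assumptions made for illustration, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: two "generators" mapping between domains A and B, and one
# "discriminator" scoring pairs. Real models would be deep networks.
G_ab = lambda a: 0.9 * a + 0.1      # hypothetical generator A -> B
G_ba = lambda b: (b - 0.1) / 0.9    # hypothetical generator B -> A (its inverse)

def discriminator(pair):
    # Hypothetical pair scorer in (-1, 1); shape of the idea only.
    return float(np.tanh(np.concatenate(pair).mean()))

a = rng.normal(size=4)              # a "real" sample in domain A
b = G_ab(a)                         # its counterpart in domain B

# Three input configurations for the single shared discriminator
# (an assumption for illustration):
configs = [(a, b), (a, G_ab(a)), (G_ba(b), b)]
scores = [discriminator(c) for c in configs]
```

With exact inverse mappings, translating A → B → A recovers the original sample, which is the consistency property the single discriminator is meant to enforce in both generators.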


2021 ◽  
Vol 11 (5) ◽  
pp. 1334-1340
Author(s):  
K. Gokul Kannan ◽  
T. R. Ganesh Babu

A generative adversarial network (GAN) is a neural network architecture widely used in many computer vision applications such as super-resolution image generation, art creation, and image-to-image translation. A conventional GAN model consists of two sub-models: a generative model, which generates new samples through an unsupervised learning task, and a discriminative model, which classifies those samples as real or fake. Though GANs are most commonly used for training generative models, they can also be used to develop a classifier. The main objective here is to extend the effectiveness of the GAN to semi-supervised learning, i.e., to the classification of fundus images for diagnosing glaucoma. The discriminator of the conventional GAN is improved via transfer learning to predict n + 1 classes by training the model for both supervised classification (n classes) and unsupervised classification (real or fake). Both models share all feature extraction layers and differ only in their output layers, so any update to one model impacts both. Results show that the semi-supervised GAN performs better than a standalone convolutional neural network (CNN) model.
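The (n + 1)-class discriminator head can be illustrated with a small NumPy sketch: one softmax serves both the supervised n-class decision and the unsupervised real-vs-fake decision. The class count and logit values below are made up for the example:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

n = 3  # number of real (supervised) classes, e.g. glaucoma grades
logits = np.array([2.0, 0.5, 0.1, -1.0])  # n real-class logits + 1 "fake" logit

p = softmax(logits)
p_real_classes, p_fake = p[:n], p[n]

# Unsupervised decision: probability mass on the n real classes vs "fake".
p_real = p_real_classes.sum()
# Supervised decision: argmax over the n real classes only.
predicted_class = p_real_classes.argmax()
```

Because both decisions read off the same output vector, gradients from labelled and unlabelled data both update the shared feature extractor, which is the mechanism behind the semi-supervised gain.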


2020 ◽  
Vol 18 (8) ◽  
pp. 9-17
Author(s):  
Sung-Woon Jung ◽  
Hyuk-Ju Kwon ◽  
Young-Choon Kim ◽  
Sang-Ho Ahn ◽  
Sung-Hak Lee

2018 ◽  
Vol 7 (10) ◽  
pp. 389 ◽  
Author(s):  
Wei He ◽  
Naoto Yokoya

In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep-learning-based methods. Two models, i.e., optical image simulation directly from SAR data and from multi-temporal SAR-optical data, are proposed to test the possibilities. The deep-learning-based methods we chose to realize the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using Sentinel-1 and Sentinel-2 datasets. The experiments demonstrate that the model with multi-temporal SAR-optical data can successfully simulate the optical image, while the state-of-the-art model with SAR data alone as input fails. The optical image simulation results indicate the possibility of SAR-optical information blending for subsequent applications such as large-scale cloud removal and temporal super-resolution of optical data. We also investigate the sensitivity of the proposed models to the training samples and point out possible future directions.
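The difference between the two input settings largely comes down to what is stacked into the network's input channels. The band counts below are illustrative assumptions (e.g. two SAR polarizations, four optical bands), not the paper's exact configuration:

```python
import numpy as np

h = w = 8  # toy spatial size
sar_t0 = np.zeros((h, w, 2))   # SAR at the target date (e.g. Sentinel-1 VV/VH)
sar_t1 = np.zeros((h, w, 2))   # SAR at an earlier date
opt_t1 = np.zeros((h, w, 4))   # earlier cloud-free optical (e.g. Sentinel-2 bands)

# Multi-temporal SAR-optical model: stack everything as input channels.
x_multi = np.concatenate([sar_t0, sar_t1, opt_t1], axis=-1)
# SAR-only model: just the target-date SAR.
x_sar_only = sar_t0
```

The richer channel stack gives the network a spectral prior for the scene, which is consistent with the finding that the multi-temporal model succeeds where the SAR-only model fails.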


2021 ◽  
pp. 108208
Author(s):  
Xi Yang ◽  
Jingyi Zhao ◽  
Ziyu Wei ◽  
Nannan Wang ◽  
Xinbo Gao
