Domain Adaptation via Image to Image Translation

Author(s):  
Zak Murez ◽  
Soheil Kolouri ◽  
David Kriegman ◽  
Ravi Ramamoorthi ◽  
Kyungnam Kim

Author(s):  
Hongjie Zhang ◽  
Ang Li ◽  
Xu Han ◽  
Zhaoming Chen ◽  
Yang Zhang ◽  
...  

2021 ◽  
Author(s):  
Sachin Chhabra ◽  
Hemanth Venkateswara ◽  
Baoxin Li

2018 ◽  
Vol 7 (3.12) ◽  
pp. 864
Author(s):  
Tanvi Bhandarkar ◽  
A Murugan

Generative Adversarial Networks (GANs) have made a major contribution to the field of Artificial Intelligence and are becoming increasingly powerful, paving their way into numerous applications of intelligent systems. This is primarily due to their capacity to learn and solve complex, high-dimensional problems from the latent space. With the growing demand for GANs, it is necessary to assess their potential and impact across implementations. In a short span of time, the field has witnessed several variants and extensions in image translation, domain adaptation, and other academic fields. This paper provides an understanding of these important GAN variants and surveys the existing adversarial models that are prominent in their applied fields.
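
For readers new to the adversarial scheme these variants all build on, a minimal GAN training sketch in PyTorch follows; the network sizes, learning rates, and names here are illustrative assumptions rather than details from any surveyed model.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not taken from the surveyed papers).
LATENT_DIM, DATA_DIM = 64, 784

# Generator: maps a latent noise vector z to a synthetic sample.
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, DATA_DIM), nn.Tanh())
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real):                     # real: (batch, DATA_DIM)
    batch = real.size(0)
    fake = G(torch.randn(batch, LATENT_DIM))

    # Discriminator step: push real samples toward 1, fakes toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator (fakes toward 1).
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```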


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Zhuorong Li ◽  
Wanliang Wang ◽  
Yanwei Zhao

Image translation, where an input image is mapped to a synthetic counterpart, is attractive because of its wide applications in computer graphics and computer vision. Despite significant progress on this problem, driven largely by a surge of interest in conditional generative adversarial networks (cGANs), most cGAN-based approaches require supervised data, which are rarely available and expensive to provide. Instead, we elaborate a common framework that is also applicable to the unsupervised case, learning the image prior by conditioning the discriminator on unaligned targets to reduce the mapping space and improve generation quality. In addition, we propose domain-adversarial training, inspired by domain adaptation, to capture discriminative and expressive features and thereby improve fidelity. The effectiveness of our method is demonstrated by compelling experimental results and comparisons with several baselines. Its generality can be analyzed from two perspectives: adaptation to both the supervised and unsupervised settings, and the diversity of tasks.
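
One way to read the abstract's "conditioning the discriminator on unaligned targets" is a discriminator that scores a candidate image alongside a randomly drawn, unpaired target-domain image. The PyTorch sketch below illustrates that reading only; the layer configuration and shapes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Hypothetical image shape; the abstract does not specify one.
C, H, W = 3, 128, 128

class CondDiscriminator(nn.Module):
    """Scores a candidate image concatenated channel-wise with an
    unaligned target-domain image (2*C input channels in total)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * C, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # per-patch logits
        )

    def forward(self, candidate, unaligned_target):
        return self.net(torch.cat([candidate, unaligned_target], dim=1))

D = CondDiscriminator()
fake = torch.randn(4, C, H, W)   # generator output (placeholder)
cond = torch.randn(4, C, H, W)   # randomly sampled target image, NOT paired
logits = D(fake, cond)           # (4, 1, 16, 16) patch logits
```

Because the conditioning image is sampled rather than aligned, no paired supervision is required, which is what lets the same discriminator serve the unsupervised setting.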


Computation ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 20 ◽  
Author(s):  
Yunfei Teng ◽  
Anna Choromanska

Unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training time. This is an ill-posed learning problem, since it requires inferring the joint probability distribution from marginals. Joint learning of the coupled mappings F_AB : A → B and F_BA : B → A is commonly used by state-of-the-art methods such as CycleGAN, which learns this translation by introducing a cycle consistency requirement into the learning problem, i.e., F_AB(F_BA(B)) ≈ B and F_BA(F_AB(A)) ≈ A. Cycle consistency enforces the preservation of mutual information between input and translated images. However, it does not explicitly enforce F_BA to be an inverse operation to F_AB. We propose a new deep architecture that we call the invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing the encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture reduces the number of trainable parameters (by up to 2 times). We present image translation results on benchmark datasets and demonstrate the state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that videos converted with InvAuto have high quality, and show that the NVIDIA neural-network-based end-to-end learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.
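
The core InvAuto mechanism, a decoder layer that reuses the encoder weight transposed, under an orthonormality constraint so that the transpose acts as an inverse, can be sketched as follows in PyTorch. This is a minimal single-pair illustration; the paper's full model stacks such pairs with invertible nonlinearities, which the sketch omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedInvertiblePair(nn.Module):
    """Minimal sketch of the InvAuto idea (illustrative, not the paper's
    exact architecture): the decoder reuses the encoder weight transposed,
    so with an (approximately) orthonormal W, decode inverts encode."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        nn.init.orthogonal_(self.weight)  # start orthonormal: W^T = W^-1

    def encode(self, x):
        return F.linear(x, self.weight)       # y = x W^T

    def decode(self, y):
        return F.linear(y, self.weight.t())   # x ≈ y W (opposite mapping)

    def orthonormal_penalty(self):
        # Soft constraint keeping W^T W close to the identity while training.
        wtw = self.weight.t() @ self.weight
        eye = torch.eye(wtw.size(0), device=wtw.device)
        return ((wtw - eye) ** 2).sum()

pair = TiedInvertiblePair(dim=8)
x = torch.randn(2, 8)
assert torch.allclose(pair.decode(pair.encode(x)), x, atol=1e-5)
```

Because the decoder applies the same (transposed) matrix rather than a separately learned one, each such pair contributes one weight matrix instead of two, which is the source of the parameter reduction noted above.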


2022 ◽  
Vol 14 (1) ◽  
pp. 190
Author(s):  
Yuxiang Cai ◽  
Yingchun Yang ◽  
Qiyi Zheng ◽  
Zhengwei Shen ◽  
Yongheng Shang ◽  
...  

When segmenting massive amounts of remote sensing images collected from different satellites or geographic locations (cities), pre-trained deep learning models cannot always produce satisfactory predictions. To deal with this issue, domain adaptation has been widely utilized to enhance the generalization ability of segmentation models. Most existing domain adaptation methods based on image-to-image translation first transfer the source images to pseudo-target images and then adapt the classifier from the source domain to the target domain. However, these unidirectional methods suffer from two limitations: (1) they do not consider the inverse procedure, so they cannot fully take advantage of the information from the other domain, which our experiments confirm is also beneficial; (2) they may fail in cases where transferring the source images to pseudo-target images is difficult. In this paper, to solve these problems, we propose BiFDANet, a novel framework for unsupervised bidirectional domain adaptation in the semantic segmentation of remote sensing images. It optimizes the segmentation models in two opposite directions. In the source-to-target direction, BiFDANet learns to transfer the source images to pseudo-target images and adapts the classifier to the target domain. In the opposite direction, BiFDANet transfers the target images to pseudo-source images and optimizes the source classifier. At the test stage, we make the best of both the source classifier and the target classifier, which complement each other through a simple linear combination, further improving the performance of BiFDANet. Furthermore, we propose a new bidirectional semantic consistency loss for BiFDANet to maintain semantic consistency during the bidirectional image-to-image translation process. Experiments on two datasets, comprising satellite images and aerial images, demonstrate the superiority of our method over existing unidirectional methods.
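
The test-stage fusion described above ("a simple linear combination" of the source and target classifiers) might look like the following sketch; the equal-weight mixing of per-pixel softmax probabilities is an assumption, as the abstract does not give the exact combination rule.

```python
import torch

def fuse_predictions(source_logits, target_logits, alpha=0.5):
    """Hypothetical sketch of the test-stage fusion: a linear combination
    of the two classifiers' per-pixel class probabilities. `alpha` is an
    assumed mixing weight; the paper's exact rule may differ."""
    p_src = torch.softmax(source_logits, dim=1)   # (B, classes, H, W)
    p_tgt = torch.softmax(target_logits, dim=1)
    fused = alpha * p_src + (1.0 - alpha) * p_tgt
    return fused.argmax(dim=1)                    # (B, H, W) class labels

# Usage with dummy per-pixel logits (batch, classes, H, W):
src = torch.randn(1, 5, 64, 64)  # source classifier on the pseudo-source image
tgt = torch.randn(1, 5, 64, 64)  # target classifier on the original target image
labels = fuse_predictions(src, tgt)
```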

