A SAR-to-Optical Image Translation Method Based on PIX2PIX

Author(s):  
Zongcheng Zuo ◽  
Yuanxiang Li
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 60338-60343 ◽  
Author(s):  
Yu Li ◽  
Randi Fu ◽  
Xiangchao Meng ◽  
Wei Jin ◽  
Feng Shao

2020 ◽  
Vol 18 (8) ◽  
pp. 9-17
Author(s):  
Sung-Woon Jung ◽  
Hyuk-Ju Kwon ◽  
Young-Choon Kim ◽  
Sang-Ho Ahn ◽  
Sung-Hak Lee

2021 ◽  
pp. 108208
Author(s):  
Xi Yang ◽  
Jingyi Zhao ◽  
Ziyu Wei ◽  
Nannan Wang ◽  
Xinbo Gao

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Xu Yin ◽  
Yan Li ◽  
Byeong-Seok Shin

Image-to-image translation methods aim to learn inter-domain mappings from paired or unpaired data. Although this technique has been widely used for visual prediction tasks such as classification and image segmentation, and has achieved great results, existing models still fail to perform flexible translations when learning different mappings, especially for images containing multiple instances. To tackle this problem, we propose a generative framework, DAGAN (Domain-aware Generative Adversarial Network), that enables domains to learn diverse mapping relationships. We assume that an image is composed of a background domain and an instance domain, and we feed the two domains into different translation networks. Lastly, we integrate the translated domains into a complete image, using smoothed labels to maintain realism. We evaluated the instance-aware framework on datasets generated with YOLO and confirmed that it generates images of equal or better diversity compared with current translation models.
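
The core instance-aware idea can be illustrated in a few lines. Below is a minimal sketch, not the paper's DAGAN implementation: `TinyGenerator` and `translate_domain_aware` are hypothetical names, and the soft mask compositing stands in for the smoothed-label integration step. A detector such as YOLO would supply the instance mask.

```python
# Minimal sketch of domain-aware translation: background and instance
# regions go through separate generators, then are composited with a
# soft mask. Names and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder encoder-decoder standing in for a full translation network."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def translate_domain_aware(image, instance_mask, g_background, g_instance):
    """Translate background and instance domains separately, then blend
    them back into one image using the (soft) instance mask."""
    bg = g_background(image * (1.0 - instance_mask))
    inst = g_instance(image * instance_mask)
    # Soft mask blending keeps the seam between the two domains realistic.
    return bg * (1.0 - instance_mask) + inst * instance_mask

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)        # toy input image
    mask = torch.zeros(1, 1, 64, 64)
    mask[..., 16:48, 16:48] = 1.0         # e.g. a detector box rendered as a mask
    out = translate_domain_aware(img, mask, TinyGenerator(), TinyGenerator())
    print(out.shape)                      # torch.Size([1, 3, 64, 64])
```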


2021 ◽  
Vol 13 (18) ◽  
pp. 3575
Author(s):  
Jie Guo ◽  
Chengyu He ◽  
Mingjin Zhang ◽  
Yunsong Li ◽  
Xinbo Gao ◽  
...  

With the ability for all-day, all-weather acquisition, synthetic aperture radar (SAR) remote sensing is an important technique in modern Earth observation. However, interpreting SAR images is highly challenging, even for well-trained experts, due to the SAR imaging principle and high-frequency speckle noise. Some image-to-image translation methods are used to convert SAR images into optical images that are closer to what we perceive with our eyes. These methods have two weaknesses: (1) they are not designed for the SAR-to-optical translation task and therefore overlook the complexity of SAR images and their speckle noise; (2) a standard convolution layer applies the same filters across the whole feature map, ignoring local detail in SAR images and producing images of unsatisfactory quality. In this paper, we propose an edge-preserving convolutional generative adversarial network (EPCGAN) that enhances the structure and aesthetics of the output image by leveraging the edge information of the SAR image and implementing content-adaptive convolution. The proposed edge-preserving convolution (EPC) decomposes the convolution input into texture components and content components, then generates a content-adaptive kernel that modifies the standard convolutional filter weights for the content components. Built on the EPC, EPCGAN is presented for SAR-to-optical image translation; it uses a gradient branch to assist in recovering structural image information. Experiments on the SEN1-2 dataset demonstrate that the proposed method outperforms other SAR-to-optical methods, recovering more structure and yielding superior evaluation indices.
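
A rough sketch of a content-adaptive convolution in the spirit of the EPC follows. The decomposition (average-pool low-pass for the content component, residual for texture) and the sigmoid modulation of shared weights are illustrative assumptions; the paper's actual EPC and its gradient branch are more elaborate.

```python
# Toy content-adaptive convolution: a small branch predicts a per-image
# modulation of the shared filter weights from the smooth (content) part
# of the input, while the texture (edge) residual keeps the fixed weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentAdaptiveConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
        self.kernel_gen = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch * in_ch * k * k, 1),
        )

    def forward(self, x):
        # Decompose the input: low-pass "content" plus high-frequency "texture".
        content = F.avg_pool2d(x, 3, stride=1, padding=1)
        texture = x - content
        # Predict a content-adaptive modulation of the shared kernel.
        b = x.size(0)
        mod = torch.sigmoid(self.kernel_gen(content)).view(b, *self.weight.shape)
        outs = []
        for i in range(b):  # the kernel differs per sample, so loop over the batch
            w = self.weight * mod[i]
            outs.append(F.conv2d(content[i:i + 1], w, padding=self.k // 2))
        # The texture path keeps the unmodified weights, preserving edge structure.
        return torch.cat(outs, 0) + F.conv2d(texture, self.weight, padding=self.k // 2)

if __name__ == "__main__":
    layer = ContentAdaptiveConv(1, 8)
    sar = torch.rand(2, 1, 64, 64)  # toy single-channel SAR patch
    print(layer(sar).shape)         # torch.Size([2, 8, 64, 64])
```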


2022 ◽  
Vol 130 (3) ◽  
pp. 1-16
Author(s):  
Jong-In Choi ◽  
Soo-Kyun Kim ◽  
Shin-Jin Kang

2020 ◽  
Vol 12 (21) ◽  
pp. 3472
Author(s):  
Jiexin Zhang ◽  
Jianjiang Zhou ◽  
Minglei Li ◽  
Huiyu Zhou ◽  
Tianzhu Yu

Synthetic aperture radar (SAR) images contain severe speckle noise and weak texture, which make them unsuitable for visual interpretation. Many studies have explored SAR-to-optical image translation to obtain near-optical representations. However, evaluating the translation quality remains a challenge. In this paper, we combine image quality assessment (IQA) with SAR-to-optical image translation to pursue a suitable evaluation approach. First, several machine-learning baselines for SAR-to-optical image translation are established and evaluated. Then, perceptual IQA models are compared extensively in terms of their use as objective functions for optimizing image restoration. To study feature extraction from images translated from the SAR to the optical modality, an application in scene classification is presented. Finally, the attributes of the translated image representations are evaluated through visual inspection and the proposed IQA methods.
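
To make concrete the idea of using a perceptual IQA measure as an optimization objective, here is a minimal sketch: a simplified single-scale SSIM (with a uniform window rather than the usual Gaussian) serves both as a quality score and as a loss term blended with L1. The 0.84 blend weight is a common choice in the image-restoration loss literature, not a value from this paper.

```python
# Hedged sketch: a simplified SSIM used both to score translated images
# and as a training objective blended with pixel-wise L1. The uniform
# window and the blend weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=7):
    """Simplified single-scale SSIM over [0, 1] images, uniform window."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def translation_loss(fake_optical, real_optical, alpha=0.84):
    """IQA-driven objective: blend an SSIM term with pixel-wise L1."""
    return alpha * (1.0 - ssim(fake_optical, real_optical)) \
        + (1.0 - alpha) * F.l1_loss(fake_optical, real_optical)

if __name__ == "__main__":
    fake = torch.rand(2, 3, 64, 64)
    real = torch.rand(2, 3, 64, 64)
    print(ssim(fake, real).item(), translation_loss(fake, real).item())
```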


2020 ◽  
Vol 891 (1) ◽  
pp. L4 ◽  
Author(s):  
Eunsu Park ◽  
Yong-Jae Moon ◽  
Daye Lim ◽  
Harim Lee
