GAN-Based SAR-to-Optical Image Translation with Region Information

Author(s):  
Kento Doi ◽  
Ken Sakurada ◽  
Masaki Onishi ◽  
Akira Iwasaki
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 60338-60343 ◽  
Author(s):  
Yu Li ◽  
Randi Fu ◽  
Xiangchao Meng ◽  
Wei Jin ◽  
Feng Shao

2021 ◽  
pp. 108208
Author(s):  
Xi Yang ◽  
Jingyi Zhao ◽  
Ziyu Wei ◽  
Nannan Wang ◽  
Xinbo Gao

2021 ◽  
Vol 13 (18) ◽  
pp. 3575
Author(s):  
Jie Guo ◽  
Chengyu He ◽  
Mingjin Zhang ◽  
Yunsong Li ◽  
Xinbo Gao ◽  
...  

With its all-day, all-weather acquisition capability, synthetic aperture radar (SAR) remote sensing is an important technique in modern Earth observation. However, interpreting SAR images is highly challenging, even for well-trained experts, because of the SAR imaging principle and the high-frequency speckle noise. Image-to-image translation methods have been used to convert SAR images into optical images that are closer to what we perceive with our eyes. These methods have two weaknesses: (1) they are not designed specifically for the SAR-to-optical translation task and therefore overlook the complexity of SAR images and their speckle noise; (2) a standard convolution layer applies the same filters across the whole feature map, which ignores the local details of SAR images in each window and generates images of unsatisfactory quality. In this paper, we propose an edge-preserving convolutional generative adversarial network (EPCGAN) that enhances the structure and visual quality of the output image by leveraging the edge information of the SAR image and implementing content-adaptive convolution. The proposed edge-preserving convolution (EPC) decomposes the convolution input into texture components and content components, and then generates a content-adaptive kernel that modifies the standard convolutional filter weights for the content components. Based on the EPC, EPCGAN is presented for SAR-to-optical image translation; it uses a gradient branch to assist in recovering structural image information. Experiments on the SEN1-2 dataset demonstrate that the proposed method outperforms other SAR-to-optical methods, recovering more structure and achieving better evaluation scores.
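The content-adaptive convolution described in this abstract can be illustrated with a toy example. The sketch below is not the authors' EPC layer: it simply re-weights a base filter per window according to each tap's deviation from the window mean (a stand-in for the texture component), so that taps across a strong edge contribute less smoothing. The function name, the `alpha` parameter, and the Gaussian weighting are illustrative assumptions.

```python
import numpy as np

def edge_adaptive_conv(img, base_kernel, alpha=0.5):
    """Toy content-adaptive convolution (illustrative, not the paper's EPC).

    Each window's "texture" is taken as its deviation from the window mean;
    taps with large deviation (i.e. across an edge) are exponentially
    down-weighted, giving edge-preserving smoothing.
    """
    k = base_kernel.shape[0]
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            texture = window - window.mean()          # crude texture component
            adapt = np.exp(-alpha * texture ** 2)     # content-adaptive weights
            kernel = base_kernel * adapt
            kernel /= kernel.sum()                    # keep the filter normalized
            out[i, j] = (kernel * window).sum()
    return out
```

On a flat region the adaptive weights are uniform and the filter reduces to plain averaging; across a step edge the filter leans toward the side the pixel belongs to, which is the edge-preserving behaviour the abstract motivates.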


2020 ◽  
Vol 12 (21) ◽  
pp. 3472
Author(s):  
Jiexin Zhang ◽  
Jianjiang Zhou ◽  
Minglei Li ◽  
Huiyu Zhou ◽  
Tianzhu Yu

Synthetic aperture radar (SAR) images contain severe speckle noise and weak texture, making them unsuitable for visual interpretation. Many studies have explored SAR-to-optical image translation to obtain near-optical representations; however, evaluating the translation quality remains a challenge. In this paper, we combine image quality assessment (IQA) with SAR-to-optical image translation in pursuit of a suitable evaluation approach. First, several machine-learning baselines for SAR-to-optical image translation are established and evaluated. Then, perceptual IQA models are compared extensively in terms of their use as objective functions for optimizing image restoration. To study feature extraction from images translated from the SAR to the optical modality, an application to scene classification is presented. Finally, the attributes of the translated image representations are evaluated through visual inspection and the proposed IQA methods.
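The simplest full-reference IQA baseline of the family this abstract studies is PSNR, sketched below. This is a generic metric, not the perceptual models the paper compares; the function name and the `data_range` parameter are conventional choices, not taken from the paper.

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio between a reference optical image and a
    translated image, both assumed to share the same value range."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Perceptual IQA models (e.g. SSIM-style or learned metrics) replace the pixel-wise MSE here with structure- or feature-aware comparisons, which is why the paper evaluates them both as quality scores and as optimization objectives.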


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 70925-70937 ◽  
Author(s):  
Jiexin Zhang ◽  
Jianjiang Zhou ◽  
Xiwen Lu

2019 ◽  
Vol 11 (17) ◽  
pp. 2067 ◽  
Author(s):  
Mario Fuentes Reyes ◽  
Stefan Auer ◽  
Nina Merkle ◽  
Corentin Henry ◽  
Michael Schmitt

Due to its all-time imaging capability, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, because the human eye is not accustomed to distance-dependent imaging effects, signal intensities measured in the radar spectrum, or image characteristics caused by speckle and post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in support of the interpretation and analysis of the original data. A conditional adversarial network is adopted and optimized to generate alternative SAR image representations, trained on pairs of SAR images (starting point) and optical images (reference). Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of the results on follow-up applications, and a discussion of the opportunities and drawbacks of this application of deep learning. Case-study results are shown for high-resolution (SAR: TerraSAR-X; optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representations are evaluated based on feedback from experts in SAR remote sensing and on their impact on road extraction as an example of a follow-up application. The results provide a basis for explaining fundamental limitations of the SAR-to-optical image translation idea, but also indicate benefits of alternative SAR image representations.
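The conditional-adversarial training described here typically combines an adversarial term with a reconstruction term, as in the widely used pix2pix formulation. The sketch below computes that combined generator objective from precomputed arrays; it is a minimal illustration, not this paper's exact loss, and `lam=100.0` is the common pix2pix default rather than a value reported in the abstract.

```python
import numpy as np

def cgan_generator_loss(d_fake, fake_opt, real_opt, lam=100.0):
    """pix2pix-style generator objective for SAR-to-optical translation.

    d_fake   : discriminator probabilities for generated (SAR-conditioned)
               optical images, in (0, 1]
    fake_opt : generated optical image
    real_opt : reference optical image

    Returns adversarial BCE (target label 1, i.e. "fool the discriminator")
    plus lam times the L1 distance to the reference.
    """
    eps = 1e-12                                   # numerical safety for log
    adv = -np.mean(np.log(d_fake + eps))          # adversarial term
    l1 = np.mean(np.abs(fake_opt - real_opt))     # reconstruction term
    return adv + lam * l1
```

The L1 term anchors the output to the paired optical reference, while the adversarial term pushes it toward the optical image manifold; the weighting between the two controls the trade-off the paper's case studies examine qualitatively.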


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 129136-129149 ◽  
Author(s):  
Lei Wang ◽  
Xin Xu ◽  
Yue Yu ◽  
Rui Yang ◽  
Rong Gui ◽  
...  

Author(s):  
Javier Noa Turnes ◽  
Jose David Bermudez Castro ◽  
Daliana Lobo Torres ◽  
Pedro Juan Soto Vega ◽  
Raul Queiroz Feitosa ◽  
...  
