Information Fusion of Optical Image and SAR Image Based on DEM

Author(s):  
Yue Zhang ◽  
Xue Yao ◽  
Shirong Jiang ◽  
Hong Yang ◽  
Xiangfei Nie
Author(s):  
F. N. Numbisi ◽  
F. Van Coillie ◽  
R. De Wulf

Abstract. Synthetic Aperture Radar (SAR) provides consistent information on target land features, especially in tropical conditions that restrict the penetration of optical imaging sensors. Because the radar response signal is influenced by the geometric and dielectric properties of surface features, different land-cover types may appear similar in radar images. To discriminate perennial cocoa agroforestry land cover, we compare a multi-spectral optical image from RapidEye, acquired in the dry season, with multi-seasonal C-band SAR from Sentinel-1: a final set of 10 (out of 50) images representing six dry and four wet seasons from 2015 to 2017. We ran eight Random Forest (RF) models for different input band combinations: multi-spectral reflectance, vegetation indices, co-polarised (VV) and cross-polarised (VH) SAR intensity, and Grey Level Co-occurrence Matrix (GLCM) texture measures. Following a pixel-based image analysis, we evaluated accuracy metrics and classification uncertainty via Shannon entropy. The model comprising co- and cross-polarised texture bands had the highest accuracy of 88.07 % (95 % CI: 85.52–90.31), a kappa of 85.37, and the lowest class uncertainty for perennial agroforests and transition forests. The optical image had low classification uncertainty for the entire image, but it performed better in discriminating non-vegetated areas. The measured uncertainty provides a reliable basis for comparing class discrimination across different image resolutions. The GLCM texture measures that are crucial for delineating vegetation cover differed with the season and polarisation of the SAR image. Given the high mapping accuracies, our approach has value for landscape monitoring and for an improved valuation of the agroforestry contribution to REDD+ strategies in the Congo basin sub-region.
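The GLCM-texture-plus-RF pipeline described above can be sketched in a few lines. This is a minimal NumPy/scikit-learn illustration with a hand-rolled co-occurrence matrix and toy patches; the feature choices, offsets, and data here are assumptions for demonstration, not the paper's actual processing chain.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def glcm(patch, levels=16, dx=1, dy=0):
    """Normalised, symmetric Grey Level Co-occurrence Matrix for one offset."""
    m = np.zeros((levels, levels))
    h, w = patch.shape
    for r in range(h - dy):
        for c in range(w - dx):
            m[patch[r, c], patch[r + dy, c + dx]] += 1
    m = m + m.T                       # make symmetric
    return m / m.sum()

def texture_features(patch, levels=16):
    """Three common GLCM measures: contrast, homogeneity, entropy."""
    p = glcm(patch, levels)
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([contrast, homogeneity, entropy])

# Toy stand-ins for quantised VV/VH intensity patches of two cover classes:
# class 0 is texturally smooth, class 1 is rough (purely illustrative data).
rng = np.random.default_rng(0)
smooth = rng.integers(0, 2, size=(20, 9, 9))
rough = rng.integers(0, 16, size=(20, 9, 9))
X = np.stack([texture_features(p) for p in np.concatenate([smooth, rough])])
y = np.repeat([0, 1], 20)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

On these toy patches the contrast and entropy features alone separate the two classes, which is the intuition behind using GLCM bands to tell apart land covers that look similar in raw SAR intensity.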


Author(s):  
Lei Jiang ◽  
Xianqing Ling ◽  
Jian Geng ◽  
Yongsheng Cheng

Author(s):  
Abdelrahman Yehia ◽  
Mohamed Safy ◽  
Ahmed S. Amein

Multi-sensor remote sensing data can significantly improve the interpretation and usage of large-volume data sources. Combining satellite Synthetic Aperture Radar (SAR) data with optical sensors enables complementary features of the same scene to be exploited. In this paper, SAR data is injected into the optical image using a combined fusion method based on the integration of the wavelet transform and the IHS (Intensity, Hue, and Saturation) transform, the goal being not only to preserve the spectral information of the original multispectral (MS) image but also to maintain the spatial content of the high-resolution SAR image. Two data sets are used to evaluate the proposed fusion algorithm: one of them is Pleiades, Turkey, and the other is Boulder, Colorado, USA. The different fused outputs are compared using several image quality indices. Visual and statistical assessment of the fused outputs shows that the proposed approach effectively transfers information from the SAR image to the optical image and hence enhances SAR image interpretability.
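The combined IHS/wavelet idea can be illustrated with a single-level Haar transform and the fast intensity-substitution step. This is a minimal NumPy-only sketch under the assumption that intensity is the band mean; the paper's exact transforms, wavelet choice, and parameters may differ.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar transform: approximation + three detail bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return (a + b + c + d) / 2, ((a - b + c - d) / 2,
                                 (a + b - c - d) / 2,
                                 (a - b - c + d) / 2)

def haar_idwt2(cA, details):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    cH, cV, cD = details
    out = np.empty((2 * cA.shape[0], 2 * cA.shape[1]))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2
    out[0::2, 1::2] = (cA - cH + cV - cD) / 2
    out[1::2, 0::2] = (cA + cH - cV - cD) / 2
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return out

def ihs_wavelet_fuse(optical_rgb, sar):
    """Keep the optical approximation band, substitute SAR detail bands."""
    intensity = optical_rgb.mean(axis=2)       # simple I of an IHS-style split
    cA_opt, _ = haar_dwt2(intensity)
    _, det_sar = haar_dwt2(sar)
    fused_i = haar_idwt2(cA_opt, det_sar)      # spectral base + SAR spatial detail
    # Fast IHS-substitution step: add the intensity change to every band
    return optical_rgb + (fused_i - intensity)[..., None]

rng = np.random.default_rng(1)
rgb = rng.random((8, 8, 3))    # toy multispectral image
sar = rng.random((8, 8))       # toy high-resolution SAR band
fused = ihs_wavelet_fuse(rgb, sar)
```

Substituting only the detail sub-bands is what preserves the MS spectral content (carried by the approximation band) while importing the SAR spatial content, which is the stated goal of the combined method.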


Author(s):  
Chunhui Liu ◽  
Yue Qi ◽  
Wenrui Ding

Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for the saliency detection task in SAR images. First, we extract four features of the SAR image as input to the MSD model: intensity, orientation, uniqueness, and global contrast. Then, the saliency map is generated by multitask sparsity pursuit (MTSP), which integrates the multiple features collaboratively. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps of the source images, an image fusion method is proposed for SAR and color optical image fusion. Experimental results on real data show that the proposed image fusion method is superior to existing methods in terms of several universal quality evaluation indexes, as well as in visual quality. Salient areas in the SAR image are highlighted while the spatial and spectral details of the color optical image are preserved in the fusion result.
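The role a saliency map plays in guiding fusion can be illustrated with a toy version of one of the four features (global contrast) used as a per-pixel weight. This is a hypothetical simplification in plain NumPy; the actual MSD/MTSP model combines four features via multitask sparse learning, which is not reproduced here.

```python
import numpy as np

def global_contrast_saliency(img):
    """Toy version of one MSD feature: each pixel's distance from the global mean."""
    s = np.abs(img - img.mean())
    return s / (s.max() + 1e-12)

def saliency_weighted_fusion(sar, optical_rgb):
    # Per-pixel convex combination: salient SAR pixels dominate the result,
    # elsewhere the spectral detail of the optical image is kept.
    w = global_contrast_saliency(sar)[..., None]   # weights in [0, 1]
    return w * sar[..., None] + (1.0 - w) * optical_rgb

rng = np.random.default_rng(2)
sar = rng.random((16, 16))        # toy SAR band in [0, 1]
rgb = rng.random((16, 16, 3))     # toy color optical image in [0, 1]
fused = saliency_weighted_fusion(sar, rgb)
```

Because the weights form a convex combination, the fused image highlights the SAR's salient areas while leaving low-saliency regions dominated by the optical image's spectral content, mirroring the behavior the abstract describes.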

