Fusing Remote Sensing Images Using a Statistical Model

2012 ◽  
Vol 263-266 ◽  
pp. 416-420 ◽  
Author(s):  
Xiao Qing Luo ◽  
Xiao Jun Wu

Enhancing spectral fusion quality is one of the most significant goals in the field of remote sensing image fusion. In this paper, a statistical-model-based fusion method is proposed; it improves on fusing remote sensing images within the framework of Principal Component Analysis (PCA) and wavelet-decomposition-based image fusion. PCA is applied to the source images, and in order to retain the entropy information of the data, the principal component axes are selected according to their entropy contribution (ECA). The first entropy component and the panchromatic image (PAN) undergo a multiresolution decomposition using the wavelet transform. The low-frequency subband is fused by a weighted aggregation approach, and the high-frequency subbands are fused by the statistical model. The high-resolution multispectral image is then obtained by an inverse wavelet and inverse ECA transform. The experimental results demonstrate that the proposed method retains both the spectral and the spatial information when fusing PAN and multispectral (MS) images.
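A minimal sketch of this kind of pipeline is shown below, assuming a simple entropy-based component selection and a plain absolute-maximum rule standing in for the paper's statistical model on the high-frequency subbands; the wavelet, decomposition level, and weighting are illustrative choices, not the authors' exact settings.

```python
# Sketch: PCA/entropy-component selection + wavelet fusion of PAN and MS.
import numpy as np
import pywt

def band_entropy(band, bins=256):
    hist, _ = np.histogram(band, bins=bins, density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def fuse_pca_wavelet(ms, pan, wavelet="db2", level=2):
    """ms: (H, W, B) multispectral cube, pan: (H, W) panchromatic image."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(np.float64)
    x_c = x - x.mean(axis=0)
    # PCA via eigen-decomposition of the band covariance matrix
    cov = np.cov(x_c, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    comps = x_c @ vecs
    # pick the component axis with the largest entropy (ECA-style selection)
    ent = [band_entropy(comps[:, i]) for i in range(b)]
    k = int(np.argmax(ent))
    pc = comps[:, k].reshape(h, w)
    # match PAN statistics to the selected component before fusion
    pan = (pan - pan.mean()) / (pan.std() + 1e-12) * pc.std() + pc.mean()
    c_pc = pywt.wavedec2(pc, wavelet, level=level)
    c_pan = pywt.wavedec2(pan, wavelet, level=level)
    fused = [0.5 * c_pc[0] + 0.5 * c_pan[0]]          # low frequency: weighted aggregation
    for d_pc, d_pan in zip(c_pc[1:], c_pan[1:]):      # high frequency: abs-max stand-in
        fused.append(tuple(np.where(np.abs(a) >= np.abs(p), a, p)
                           for a, p in zip(d_pc, d_pan)))
    pc_new = pywt.waverec2(fused, wavelet)[:h, :w]
    comps[:, k] = pc_new.reshape(-1)
    out = comps @ vecs.T + x.mean(axis=0)             # inverse ECA/PCA transform
    return out.reshape(h, w, b)
```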

2020 ◽  
Vol 9 (4) ◽  
pp. 256 ◽  
Author(s):  
Liguo Weng ◽  
Yiming Xu ◽  
Min Xia ◽  
Yonghong Zhang ◽  
Jia Liu ◽  
...  

Changes in lakes and rivers are of great significance for the study of global climate change, and accurate segmentation of lakes and rivers is critical to studying those changes. However, traditional water-area segmentation methods almost all share the following deficiencies: high computational requirements, poor generalization performance, and low extraction accuracy. In recent years, semantic segmentation algorithms based on deep learning have been emerging. Addressing the problems of a very large number of parameters, low accuracy, and network degradation during training, this paper proposes a separable residual SegNet (SR-SegNet) to perform water-area segmentation on remote sensing images. On the one hand, without compromising the ability of feature extraction, the problem of network degradation is alleviated by adding modified residual blocks to the encoder, the number of parameters is limited by introducing depthwise separable convolutions, and feature extraction is improved by using dilated convolutions to expand the receptive field. On the other hand, SR-SegNet removes the convolution layers with relatively many convolution kernels in the encoding stage and uses a cascading method to fuse the low-level and high-level features of the image. As a result, the whole network obtains more spatial information. Experimental results show that the proposed method exhibits significant improvements over several traditional methods, including FCN, DeconvNet, and SegNet.
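The following PyTorch sketch illustrates the kind of encoder block described here: depthwise separable convolutions to limit parameters, a dilated convolution to enlarge the receptive field, and an identity shortcut to counter degradation. Layer widths and the dilation rate are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return self.bn(self.pointwise(self.depthwise(x)))

class SeparableResidualBlock(nn.Module):
    """Residual block built from separable convolutions with a dilated branch."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv1 = SeparableConv2d(channels, channels, dilation=1)
        self.conv2 = SeparableConv2d(channels, channels, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)   # identity shortcut eases network degradation

# usage: feats = SeparableResidualBlock(64)(torch.randn(1, 64, 128, 128))
```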


2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Mengxing Huang ◽  
Shi Liu ◽  
Zhenfeng Li ◽  
Siling Feng ◽  
Di Wu ◽  
...  

A two-stream remote sensing image fusion network (RCAMTFNet) based on the residual channel attention mechanism (RCAM) is proposed in this paper. In RCAMTFNet, the spatial features of the PAN image and the spectral features of the MS image are extracted separately by a two-channel feature extraction layer. Multi-residual connections allow the network to adopt a deeper structure without degradation. The residual channel attention mechanism is introduced to learn the interdependence between channels, and the correlated features among channels are then adapted on the basis of that dependency. In this way, spatial information and spectral information are each extracted effectively, and the pansharpened images are fully reconstructed. Experiments are conducted on two satellite datasets, GaoFen-2 and WorldView-2. The experimental results show that the proposed algorithm is superior to algorithms from some of the existing literature in terms of both reference and no-reference evaluation indicators.
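A hedged sketch of a residual channel attention module in this spirit is given below: global average pooling "squeezes" each channel, a small bottleneck learns inter-channel dependence, and the resulting weights rescale the features before a residual connection. The reduction ratio and exact placement are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResidualChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # per-channel global statistics
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # channel weights in (0, 1)
        )

    def forward(self, x):
        feat = self.body(x)
        # rescale features by learned channel weights, then add the residual path
        return x + feat * self.attn(feat)
```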


Author(s):  
Reham Gharbia ◽  
Aboul Ella Hassanien

This chapter presents remote sensing image fusion based on swarm intelligence. Image fusion combines multi-sensor images into a single image that is more informative, and remote sensing image fusion is an effective way to extract a large volume of data from multisource images. However, traditional image fusion approaches cannot meet the requirements of applications because they can lose spatial information or distort spectral characteristics. The core of image fusion is the fusion rule, and the main challenge is obtaining suitable weights for that rule. This chapter therefore proposes swarm intelligence to optimize the image fusion rule. Swarm intelligence algorithms are a family of global optimizers inspired by swarm phenomena in nature and have shown good performance. In this chapter, two swarm-intelligence-based remote sensing image fusion methods, Particle Swarm Optimization (PSO) and the Flower Pollination Algorithm, are presented to obtain an adaptive image fusion rule and are compared against each other.
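As a minimal sketch of the idea, the PSO loop below tunes a single fusion-rule weight w in f = w*A + (1 - w)*B, scored by the entropy of the fused image; the fitness function and the one-dimensional search space are illustrative assumptions, not the chapter's exact setup.

```python
import numpy as np

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def pso_fusion_weight(img_a, img_b, n_particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, n_particles)        # candidate fusion weights
    vel = np.zeros(n_particles)
    fit = np.array([entropy(w * img_a + (1 - w) * img_b) for w in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    gbest = pos[np.argmax(fit)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # standard PSO velocity update with inertia and cognitive/social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        fit = np.array([entropy(w * img_a + (1 - w) * img_b) for w in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmax(pbest_fit)]
    return gbest, gbest * img_a + (1 - gbest) * img_b
```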


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 303 ◽  
Author(s):  
Xiaole Ma ◽  
Shaohai Hu ◽  
Shuaiqi Liu ◽  
Jing Fang ◽  
Shuwen Xu

In this paper, a remote sensing image fusion method is presented, motivated by the wide use of sparse representation (SR) in image processing and especially in image fusion. Firstly, the source images are used to learn an adaptive dictionary, and sparse coefficients are obtained by sparsely coding the source images with that dictionary. Then, with the help of an improved hyperbolic tangent function (tanh) and the l0-max rule, these sparse coefficients are fused together, and an initial fused image is obtained by the SR-based fusion method. To take full advantage of the spatial information of the source images, a fused image based on the spatial domain (SF) is obtained at the same time. Lastly, the final fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms some state-of-the-art methods in both visual and quantitative evaluations.
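The final combination step can be sketched with a standard box-filter guided filter (He et al.), where one fused image acts as the guide and the other as the input; which image guides which, and the radius and epsilon values, are assumptions for illustration rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gs / (var_g + eps)          # local linear coefficients of the guide
    b = mean_s - a * mean_g
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b

# e.g. final = guided_filter(fused_sr, fused_sf)  # SR result guiding the spatial-domain result
```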


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1756
Author(s):  
Liangliang Li ◽  
Hongbing Ma

The rapid development of remote sensing and space technology provides multisource remote sensing image data for earth observation of the same area. The information provided by these images is often complementary and cooperative, yet multisource image fusion remains challenging. This paper proposes a novel multisource remote sensing image fusion algorithm. It integrates the contrast saliency map (CSM) and the sum-modified-Laplacian (SML) in the nonsubsampled shearlet transform (NSST) domain. The NSST is utilized to decompose the source images into low-frequency sub-bands and high-frequency sub-bands. Low-frequency sub-bands reflect the contrast and brightness of the source images, while high-frequency sub-bands reflect their texture and details. Using this information, the contrast saliency map and SML fusion rules are applied to the corresponding sub-bands. Finally, the inverse NSST reconstructs the fused image. Experimental results demonstrate that the proposed multisource remote sensing image fusion technique performs well in terms of contrast enhancement and detail preservation.
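A small sketch of a sum-modified-Laplacian fusion rule for the high-frequency sub-bands is shown below: the coefficient whose local SML activity is larger wins. The window size is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sml(band, window=3):
    """Sum-modified-Laplacian: local focus/detail activity of a sub-band."""
    ml = (np.abs(2 * band - np.roll(band, 1, axis=0) - np.roll(band, -1, axis=0)) +
          np.abs(2 * band - np.roll(band, 1, axis=1) - np.roll(band, -1, axis=1)))
    return uniform_filter(ml, window)   # windowed sum (mean) of the modified Laplacian

def fuse_high_freq_sml(sub_a, sub_b):
    pick_a = sml(sub_a) >= sml(sub_b)
    return np.where(pick_a, sub_a, sub_b)
```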


Author(s):  
Kang Zhang ◽  
Yongdong Huang ◽  
Cheng Zhao

In order to improve the fused image quality of multispectral (MS) and panchromatic (PAN) images, a new remote sensing image fusion algorithm based on robust principal component analysis (RPCA) and the non-subsampled shearlet transform (NSST) is proposed. First, the first principal component PC1 of the MS image is extracted via principal component analysis (PCA). Then, the component PC1 and the PAN image are decomposed by NSST to obtain their low- and high-frequency subbands, respectively. For the low-frequency subband, the sparse matrix of the PAN image obtained by RPCA decomposition is used to guide the fusion rule; for the high-frequency subbands, the fusion rule is based on an adaptive PCNN model. Finally, the fused image is obtained by the inverse NSST and inverse PCA transforms. The experimental results illustrate that the proposed fusion algorithm outperforms other classical fusion algorithms (PCA, Curvelet, NSCT, NSST, and NSCT-PCNN) in terms of overall visual quality and objective evaluation, achieving better fusion performance.
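The RPCA step can be sketched with a minimal principal component pursuit solver, assuming the PAN image (or its low-frequency subband) is treated as a matrix M split into a low-rank part L and a sparse part S; the sparse part could then steer the low-frequency fusion rule as described above. The solver constants below are common PCP defaults, not the paper's settings.

```python
import numpy as np

def rpca_pcp(M, max_iter=200, tol=1e-6):
    """Split M into low-rank L and sparse S via inexact ALM principal component pursuit."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    norm_M = np.linalg.norm(M, "fro") + 1e-12
    for _ in range(max_iter):
        # singular value thresholding -> low-rank estimate
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # soft thresholding -> sparse estimate
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S, "fro") / norm_M < tol:
            break
    return L, S
```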


Author(s):  
Srinivasa Rao Dammavalam ◽  
Shadi A. Aljawarneh ◽  
N. Rajasekhar

Image fusion merges multispectral (MS) and panchromatic (PAN) images into a single fused image that is more informative. Soft-computing-based image fusion techniques, namely fuzzy and neuro-fuzzy methods, are exploited to reduce uncertainty and vagueness in the output. Fused images obtained after synthesis are used in image analysis, medical applications, the military domain, and computer vision. In this research, we present iterative image fusion based on fuzzy and neuro-fuzzy methods, applied to source images acquired from different sources, to improve visualization. We also compare the proposed techniques with image fusion based on principal component analysis (PCA) and the wavelet transform. The fused results produced by these image fusion methods are assessed through standard quality evaluation parameters. The result obtained from iterative fusion is improved in terms of spectral and spatial information compared with the one-time fused image. Owing to the structure of neural networks and the capability of fuzzy and neuro-fuzzy logic, the proposed method surpasses conventional methods. The complete experimental results from the proposed methodology establish that the proposed approaches enhance image content.
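A very small sketch of a fuzzy-weighted fusion step is given below: the local activity (variance) of each source is mapped through a membership function and used as a pixel-wise fusion weight. The membership shape and window size are illustrative assumptions only; the chapter's iterative neuro-fuzzy scheme is considerably richer.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_activity(img, window=5):
    mean = uniform_filter(img, window)
    return uniform_filter(img * img, window) - mean * mean   # local variance

def fuzzy_fuse(ms_band, pan, window=5):
    a = local_activity(ms_band, window)
    b = local_activity(pan, window)
    # sigmoidal membership of "the MS band is more detailed here"
    mu = 1.0 / (1.0 + np.exp(-(a - b) / (a.std() + b.std() + 1e-12)))
    return mu * ms_band + (1.0 - mu) * pan
```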

