Remote Sensing Image Fusion of Worldview-2 Satellite Data

2013 ◽  
Vol 333-335 ◽  
pp. 1159-1163
Author(s):  
Lei Cao ◽  
Jun Liu ◽  
Shu Guang Liu

Since most image fusion methods introduce some degree of spectral distortion, this paper proposes a spatial projection method based on Gaussian scale space theory. Following the mechanism of the human visual system that Gaussian scale space models, spatial detail features are extracted from the original panchromatic (PAN) and multispectral (MS) images; the feature differences are then projected into the original MS image to obtain the fused image. Experimental results on WorldView-2 images show that the proposed method effectively improves the spatial resolution of the fused MS image while introducing little spectral distortion, thereby preserving the great majority of the spectral information.
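
A minimal sketch of the detail-projection idea, assuming single-band float arrays already co-registered and resampled to the PAN size; the Gaussian scale, band handling, and unit projection weight are illustrative assumptions, not the authors' exact formulation:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def project_details(pan, ms, sigma=2.0):
        """Inject PAN detail differences into each MS band (illustrative sketch)."""
        pan = pan.astype(np.float64)
        # Detail = image minus its Gaussian-scale-space approximation.
        pan_detail = pan - gaussian_filter(pan, sigma)
        fused = np.empty_like(ms, dtype=np.float64)
        for b in range(ms.shape[2]):  # ms: (rows, cols, bands), upsampled to PAN size
            band = ms[:, :, b].astype(np.float64)
            band_detail = band - gaussian_filter(band, sigma)
            # Project the feature difference into the MS band.
            fused[:, :, b] = band + (pan_detail - band_detail)
        return fused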

2011 ◽  
Vol 255-260 ◽  
pp. 2072-2076
Author(s):  
Yi Yong Han ◽  
Jun Ju Zhang ◽  
Ben Kang Chang ◽  
Yi Hui Yuan ◽  
Hui Xu

Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we present a new approach that uses a structural similarity index for assessing quality in image fusion. The advantages of our measures are that they do not require a reference image and can be easily computed. Numerous simulations demonstrate that our measures conform to subjective evaluations and can assess different image fusion methods.
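
A minimal sketch of such a no-reference measure, assuming the overall score is an average of the structural similarity between the fused image and each source; the averaging rule is an assumption for illustration, and skimage's SSIM stands in for the authors' exact index:

    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def fusion_quality(src_a, src_b, fused):
        """No-reference quality: how much structure the fused image
        shares with each source image (illustrative averaging rule)."""
        rng = float(fused.max() - fused.min())
        q_a = ssim(src_a, fused, data_range=rng)
        q_b = ssim(src_b, fused, data_range=rng)
        return 0.5 * (q_a + q_b)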


2021 ◽  
Author(s):  
Anuyogam Venkataraman

With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher-quality images with lower radiation exposure is a highly challenging task in image processing. Sparse-representation-based image fusion is one of the most sought-after fusion techniques among current researchers. A novel image fusion algorithm based on focused vector detection is proposed in this thesis. First, the initial fused vector is acquired by combining the common and innovative sparse components of a multi-dosage ensemble using the Joint Sparse PCA fusion method, with an overcomplete dictionary trained on high-dose images of the same region of interest from different patients. Then, the strongly focused vector is obtained by determining the pixels of the low-dose and medium-dose vectors that have high similarity with the pixels of the initial fused vector, using certain quantitative metrics. The final fused image is obtained by denoising and simultaneously integrating the strongly focused vector, the initial fused vector, and the source image vectors in the joint sparse domain, thereby preserving the edges and other critical information needed for diagnosis. This thesis demonstrates the effectiveness of the proposed algorithm on different images; the qualitative and quantitative results are compared with some widely used image fusion methods.
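
A minimal sketch of the similarity-selection step only, assuming per-patch correlation as the "quantitative metric" and a fixed threshold; both are illustrative assumptions, and the Joint Sparse PCA stage is omitted entirely:

    import numpy as np

    def strongly_focused(initial_fused, low_dose, medium_dose, thresh=0.9, patch=8):
        """Reinforce pixels whose local patches correlate strongly with the
        initial fused vector (illustrative stand-in for the thesis metric)."""
        out = initial_fused.copy()
        rows, cols = initial_fused.shape
        for i in range(0, rows - patch + 1, patch):
            for j in range(0, cols - patch + 1, patch):
                ref = initial_fused[i:i+patch, j:j+patch].ravel()
                for cand in (low_dose, medium_dose):
                    p = cand[i:i+patch, j:j+patch].ravel()
                    c = np.corrcoef(ref, p)[0, 1]
                    if np.isfinite(c) and c >= thresh:
                        # Average in highly similar patches to strengthen focus.
                        out[i:i+patch, j:j+patch] = 0.5 * (
                            out[i:i+patch, j:j+patch] + cand[i:i+patch, j:j+patch])
        return out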


2020 ◽  
Vol 12 (6) ◽  
pp. 1009
Author(s):  
Xiaoxiao Feng ◽  
Luxiao He ◽  
Qimin Cheng ◽  
Xiaoyi Long ◽  
Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution and low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) and low spectral resolution. HS-MS image fusion technology can combine both advantages, which is beneficial for accurate feature classification. Nevertheless, in real cases heterogeneous sensors introduce temporal differences between the LSR-HS and HSR-MS images, so classical fusion methods cannot produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the difference between the two images, we first extract the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. We then obtain the endmembers of the HSR-MS image, based on the fact that the HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of an HSR-HS image. The fused image is obtained from the two resulting matrices. A series of experimental results on simulated and real datasets substantiates the effectiveness of our method both quantitatively and visually.
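
A minimal sketch of the final reconstruction under the linear mixing model, assuming the two result matrices are an endmember matrix E (HS bands x endmembers, estimated from the LSR-HS image) and an abundance matrix A (endmembers x HSR pixels, estimated from the HSR-MS image); the endmember extraction and masking stages are omitted:

    import numpy as np
    from scipy.optimize import nnls

    def abundances(E_ms, Y_ms):
        """Per-pixel nonnegative least squares against the MS-domain endmembers.
        E_ms: (n_ms_bands, n_endmembers), Y_ms: (n_ms_bands, n_pixels)."""
        return np.stack([nnls(E_ms, Y_ms[:, k])[0]
                         for k in range(Y_ms.shape[1])], axis=1)

    def reconstruct_hsr_hs(E, A, rows, cols):
        """Each fused HSR-HS pixel is E @ abundances.
        E: (n_bands, n_endmembers), A: (n_endmembers, rows*cols)."""
        X = E @ A                                            # (n_bands, rows*cols)
        return X.reshape(-1, rows, cols).transpose(1, 2, 0)  # (rows, cols, bands)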


Author(s):  
Chengfang Zhang

Multifocus image fusion can produce an image with all objects in focus, which is beneficial for understanding the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is lost after multiscale reconstruction, and fine details tend to be smoothed in SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. CSR is then applied in the low-pass fusion, while the high-pass bands are fused using the popular "max-absolute" rule as the activity-level measurement. The fused image is finally obtained by performing the inverse MST on the fused coefficients. Experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of definition.
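
A minimal one-level sketch of the MST side of this pipeline, assuming a Gaussian low-pass split as the transform and a plain average for the low-pass bands (the paper fuses those with CSR, which is omitted here); the high-pass bands use the max-absolute rule the abstract names:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mst_fuse(img_a, img_b, sigma=2.0):
        a, b = img_a.astype(np.float64), img_b.astype(np.float64)
        low_a, low_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
        high_a, high_b = a - low_a, b - low_b
        # Low-pass: simple average stands in for the paper's CSR fusion.
        low_f = 0.5 * (low_a + low_b)
        # High-pass: max-absolute rule as the activity-level measurement.
        high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
        return low_f + high_f  # inverse of the one-level split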


2020 ◽  
Vol 2020 ◽  
pp. 1-16 ◽  
Author(s):  
Bing Huang ◽  
Feng Yang ◽  
Mengxiao Yin ◽  
Xiaoying Mo ◽  
Cheng Zhong

Medical image fusion is the process of coalescing multiple images from multiple imaging modalities into a single fused image that carries a large amount of information, increasing the clinical applicability of medical images. In this paper, we give an overview of multimodal medical image fusion methods, emphasizing the most recent advances in the domain in terms of (1) current fusion methods, including those based on deep learning, (2) the imaging modalities used in medical image fusion, and (3) performance analysis of medical image fusion on the main datasets. We conclude that current multimodal medical image fusion research has produced significant results and the field is on an upward trend, but many challenges remain.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, it is essential that the fused image contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method, and the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, our method offers (i) better visual quality in subjective evaluation of the fused images and (ii) better objective assessment scores.
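
A minimal sketch of the decomposition-and-recombination skeleton, assuming an edge-preserving median smoother as a stand-in for the paper's guided-filter variant and a local-energy choice in place of the CNN and DCT feature stages:

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def fuse_ir_vis(ir, vis, size=7):
        ir, vis = ir.astype(np.float64), vis.astype(np.float64)
        # Edge-preserving smoothing stands in for the guided-filter decomposition.
        base_ir, base_vis = median_filter(ir, size), median_filter(vis, size)
        det_ir, det_vis = ir - base_ir, vis - base_vis
        # Detail fusion: keep the detail with higher local energy
        # (stand-in for the CNN feature extraction + DCT fusion).
        e_ir = uniform_filter(det_ir ** 2, size)
        e_vis = uniform_filter(det_vis ** 2, size)
        det_f = np.where(e_ir >= e_vis, det_ir, det_vis)
        # Base fusion by weighting, then add fused detail and base parts.
        base_f = 0.5 * (base_ir + base_vis)
        return base_f + det_f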


2018 ◽  
Vol 215 ◽  
pp. 01002
Author(s):  
Yuhendra ◽  
Minarni

Image fusion is a useful tool for integrating a low spatial resolution multispectral (MS) image with a high spatial resolution panchromatic (PAN) image, producing a high-resolution multispectral image for a better understanding of the observed earth surface. The main aim of the research was to evaluate the effectiveness of different image fusion methods when filtering methods are added for speckle suppression in synthetic aperture radar (SAR) images. The quality of the filtered fused images was assessed with statistical parameters, namely the mean, standard deviation, bias, universal image quality index (UIQI), and root mean squared error (RMSE). To test the robustness of the image quality, speckle noise was intentionally added to the fused images and suppressed with a Gamma MAP filter. The comparative tests show that the Gram-Schmidt (GS) method gives better colour reproduction than high-pass filtering (HPF). Moreover, GS and wavelet intensity-hue-saturation (W-IHS) preserve the colour of the original image well for Landsat TM data.
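
A minimal sketch of two of the listed quality measures, following Wang and Bovik's universal image quality index and the usual RMSE definition; the windowing is simplified to a single global window, whereas UIQI is normally averaged over sliding windows:

    import numpy as np

    def uiqi(x, y):
        """Universal image quality index over one global window:
        Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y))*(mean(x)^2+mean(y)^2))."""
        x, y = x.astype(np.float64).ravel(), y.astype(np.float64).ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cxy = ((x - mx) * (y - my)).mean()
        return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

    def rmse(x, y):
        d = x.astype(np.float64) - y.astype(np.float64)
        return float(np.sqrt(np.mean(d ** 2)))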


2020 ◽  
Vol 8 (5) ◽  
pp. 4388-4392

Image fusion is a pioneering technique in image processing for generating high-quality images. In image fusion, several images of a scene captured under varying conditions are combined to produce an image with more detail than any of the inputs; the required features and details are enhanced in the fused image. The technique is used in many application areas to correct and enhance details that are not visible in the individual sources. This paper gives a clear overview of different image fusion approaches and provides a detailed review of advanced multi-exposure image fusion methods. A comparative study of important multi-exposure image fusion methods is carried out both quantitatively and qualitatively.
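
A minimal sketch of the core idea behind many multi-exposure fusion methods: weight each exposure per pixel by how well exposed it is. A Gaussian "well-exposedness" weight in the style of Mertens et al. is assumed here; practical methods add contrast and saturation terms plus multiscale blending:

    import numpy as np

    def exposure_fuse(stack, sigma=0.2):
        """stack: (n_exposures, rows, cols), intensities scaled to [0, 1]."""
        stack = stack.astype(np.float64)
        # Pixels near mid-grey (0.5) are considered well exposed.
        w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
        w /= w.sum(axis=0, keepdims=True) + 1e-12  # normalize weights per pixel
        return (w * stack).sum(axis=0)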


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Chaowei Duan ◽  
Yiliu Liu ◽  
Changda Xing ◽  
Zhisheng Wang

An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual-saliency-based threshold optimization. The method merges complementary information from multimodality source images into a more informative composite image in a two-scale domain, in which significant objects and regions are highlighted and rich feature information is preserved. First, the source images are decomposed into two-scale representations, namely approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Second, a visual-saliency-based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in infrared images and retain the high-intensity regions in visible images. A sparse-representation-based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail and texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with more natural visual effects. Extensive experimental results demonstrate that the proposed method achieves comparable or superior performance to several state-of-the-art fusion methods in both visual results and objective assessments.
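
A minimal sketch of the approximate-layer fusion step, assuming Gaussian smoothing as a stand-in for truncated Huber penalty smoothing and a mean-contrast saliency map with a simple per-pixel choice in place of the paper's threshold optimization; the SR fusion of the residual layers is omitted:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse_approximate(ir, vis, sigma=3.0):
        ir, vis = ir.astype(np.float64), vis.astype(np.float64)
        # Two-scale split; Gaussian stands in for truncated Huber smoothing.
        approx_ir, approx_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
        # Saliency: deviation from each image's mean intensity.
        sal_ir = np.abs(approx_ir - approx_ir.mean())
        sal_vis = np.abs(approx_vis - approx_vis.mean())
        # Per-pixel choice keeps salient IR targets and bright visible regions.
        return np.where(sal_ir >= sal_vis, approx_ir, approx_vis)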

