Comparison of Image Fusion Methods to Merge KOMPSAT-2 Panchromatic and Multispectral Images

2012, Vol. 28(1), pp. 39-54. Author(s): Kwan-Young Oh, Hyung-Sup Jung, Kwang-Jae Lee
2021, Vol. 11(1). Author(s): Lei Yan, Qun Hao, Jie Cao, Rizvi Saad, Kun Li, ...

Image fusion integrates information from multiple images (of the same scene) to generate a (more informative) composite image suitable for human and computer vision perception. Methods based on multiscale decomposition are among the most commonly used fusion methods. In this study, a new fusion framework based on the octave Gaussian pyramid principle is proposed. In comparison with conventional multiscale decomposition, the proposed octave Gaussian pyramid framework retrieves more information by decomposing an image into two scale spaces (octave and interval spaces). Unlike traditional multiscale decomposition, which produces one set of detail and base layers, the proposed method decomposes an image into multiple sets of detail and base layers, and it efficiently retains high- and low-frequency information from the original image. The qualitative and quantitative comparison with five existing methods (on publicly available image databases) demonstrates that the proposed method has better visual effects and scores the highest in objective evaluation.
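A minimal sketch of the general idea of multiscale (Gaussian-pyramid-style) decomposition and fusion described above: each image is split into detail layers (differences between successive Gaussian blurs) and a base layer, detail layers are fused by an absolute-max rule and base layers by averaging. The layer count, sigma schedule, and fusion rules are illustrative assumptions, not the authors' exact octave/interval settings.

```python
# Sketch of multiscale decomposition-based fusion (illustrative, not the paper's exact method).
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, intervals=3, sigma0=1.6):
    """Split an image into detail layers (differences between successive
    Gaussian blurs) plus one base layer (the most-blurred version)."""
    blurred = [img.astype(np.float64)]
    for i in range(1, intervals + 1):
        blurred.append(gaussian_filter(img.astype(np.float64), sigma0 * (2 ** i)))
    details = [blurred[i] - blurred[i + 1] for i in range(intervals)]
    return details, blurred[-1]

def fuse(img_a, img_b, intervals=3):
    det_a, base_a = decompose(img_a, intervals)
    det_b, base_b = decompose(img_b, intervals)
    # Detail layers: keep the coefficient with the larger magnitude (abs-max rule).
    fused_details = [np.where(np.abs(da) >= np.abs(db), da, db)
                     for da, db in zip(det_a, det_b)]
    # Base layers: simple average.
    fused_base = 0.5 * (base_a + base_b)
    return fused_base + sum(fused_details)
```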


2011, Vol. 255-260, pp. 2072-2076. Author(s): Yi Yong Han, Jun Ju Zhang, Ben Kang Chang, Yi Hui Yuan, Hui Xu

Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we present a new approach that uses the structural similarity index to assess image fusion quality. The advantages of our measures are that they do not require a reference image and can be easily computed. Numerous simulations demonstrate that our measures conform to subjective evaluations and can assess different image fusion methods.
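A minimal sketch of a no-reference, SSIM-based fusion quality score in the spirit of the abstract: the fused result is compared against each source image and the similarities are combined. The equal weighting and the use of scikit-image's SSIM are illustrative assumptions, not necessarily the authors' exact formulation.

```python
# Sketch of a no-reference fusion quality measure (illustrative assumption: mean SSIM over sources).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fusion_quality(sources, fused, data_range=255):
    """Mean structural similarity between the fused image and each source image."""
    scores = [ssim(src.astype(np.float64), fused.astype(np.float64),
                   data_range=data_range)
              for src in sources]
    return float(np.mean(scores))
```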


2021. Author(s): Anuyogam Venkataraman

With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining a higher-quality image with lower radiation exposure is a highly challenging task in image processing. Sparse-representation-based image fusion is one of the most sought-after fusion techniques among current researchers. A novel image fusion algorithm based on focused vector detection is proposed in this thesis. Firstly, the initial fused vector is acquired by combining common and innovative sparse components of the multi-dosage ensemble using the Joint Sparse PCA fusion method, with an overcomplete dictionary trained on high-dose images of the same region of interest from different patients. Then, the strongly focused vector is obtained by determining the pixels of the low-dose and medium-dose vectors that have high similarity with the pixels of the initial fused vector, using certain quantitative metrics. The final fused image is obtained by denoising and simultaneously integrating the strongly focused vector, the initial fused vector, and the source image vectors in the joint sparse domain, thereby preserving the edges and other critical information needed for diagnosis. This thesis demonstrates the effectiveness of the proposed algorithms on different images, and the qualitative and quantitative results are compared with some widely used image fusion methods.
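A minimal sketch of the "strongly focused vector" selection step described above: pixels of a low- or medium-dose image that closely match the initial fused result are retained. The block-wise Pearson-correlation metric, window size, and threshold are illustrative assumptions; the thesis' exact quantitative metrics and the Joint Sparse PCA stage are not reproduced here.

```python
# Sketch of focused-vector selection (illustrative assumptions: block correlation, fixed threshold).
import numpy as np

def focused_vector(initial_fused, source, window=8, threshold=0.9):
    """Keep source blocks whose correlation with the initial fused image is high."""
    focused = np.zeros_like(source, dtype=np.float64)
    h, w = source.shape
    for y in range(0, h, window):
        for x in range(0, w, window):
            a = initial_fused[y:y + window, x:x + window].ravel()
            b = source[y:y + window, x:x + window].ravel()
            if a.std() > 0 and b.std() > 0:
                if np.corrcoef(a, b)[0, 1] >= threshold:
                    focused[y:y + window, x:x + window] = b
    return focused
```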


2020, Vol. 12(6), pp. 1009. Author(s): Xiaoxiao Feng, Luxiao He, Qimin Cheng, Xiaoyi Long, Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution but low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) but low spectral resolution. HS-MS image fusion technology can combine both advantages, which is beneficial for accurate feature classification. Nevertheless, heterogeneous sensors often introduce temporal differences between LSR-HS and HSR-MS images in real cases, which means that classical fusion methods cannot produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the difference between the two images, we first extract the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. We then obtain the endmembers of the HSR-MS image based on the theory that HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of the HSR-HS image. The fused image is obtained from the two resulting matrices. A series of experimental results on simulated and real datasets substantiates the effectiveness of our method both quantitatively and visually.
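A minimal sketch of the unmixing-based fusion idea underlying the abstract: endmembers are estimated from the low-spatial-resolution HS image, high-resolution abundances are estimated from the MS image via the spectrally degraded endmembers, and the fused cube is their product. The use of NMF for endmember extraction, a known spectral response matrix R, and least-squares abundance estimation are illustrative assumptions; the paper's mask-based handling of temporal change is omitted.

```python
# Sketch of spectral-unmixing-based HS-MS fusion (illustrative assumptions noted above).
import numpy as np
from sklearn.decomposition import NMF

def unmix_fuse(hs_lr, ms_hr, R, n_endmembers=10):
    """hs_lr: (pix_lr, bands_hs), ms_hr: (pix_hr, bands_ms), R: (bands_ms, bands_hs).
    Inputs are assumed nonnegative (e.g., reflectance)."""
    # Endmembers (and low-resolution abundances) from the HS image.
    nmf = NMF(n_components=n_endmembers, init='nndsvda', max_iter=500)
    abund_lr = nmf.fit_transform(hs_lr)            # (pix_lr, n_endmembers)
    endmembers = nmf.components_                   # (n_endmembers, bands_hs)
    # High-resolution abundances from the MS image via the degraded endmembers.
    endmembers_ms = endmembers @ R.T               # (n_endmembers, bands_ms)
    abund_hr, *_ = np.linalg.lstsq(endmembers_ms.T, ms_hr.T, rcond=None)
    abund_hr = np.clip(abund_hr.T, 0, None)        # enforce nonnegativity
    return abund_hr @ endmembers                   # fused cube, (pix_hr, bands_hs)
```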


2019, Vol. 45, pp. 153-178. Author(s): Jiayi Ma, Yong Ma, Chang Li
