Image Fusion Method Based on Universal Hidden Markov Tree Model in the Contourlet Domain

2013 ◽  
Vol 684 ◽  
pp. 491-494
Author(s):  
Xi Cai ◽  
Guang Han ◽  
Jin Kuan Wang

Considering the statistical characteristics of contourlet coefficients of images, we propose an image fusion method based on the universal contourlet hidden Markov tree (uHMT) model. A salience measure and a match measure are derived from the probability that a contourlet coefficient belongs to the high state of the uHMT model, which requires no training. Experimental results demonstrate the effectiveness of our method in terms of both visual quality and objective evaluations.
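
The abstract does not give the exact formulas, so the following Python sketch only illustrates the idea: per-coefficient posterior probabilities of the "high" (large-variance) state of a two-state Gaussian mixture act as activity indicators, from which window-based salience and match measures are built. The parameter values (sigma_low, sigma_high, p_high) and the specific salience/match definitions are assumptions, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_state_probability(coeff, sigma_low=1.0, sigma_high=4.0, p_high=0.3):
    """Posterior probability that each contourlet coefficient belongs to the
    'high' (large-variance) state of a two-state zero-mean Gaussian mixture,
    the building block of HMT-type models.  In a universal (uHMT) model these
    parameters are fixed rather than trained; the values here are placeholders."""
    def gauss(x, sigma):
        return np.exp(-x ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    num = p_high * gauss(coeff, sigma_high)
    den = num + (1.0 - p_high) * gauss(coeff, sigma_low)
    return num / den

def salience_and_match(p_a, p_b, win=3):
    """Assumed salience and match measures built from the high-state
    probability maps p_a, p_b of the same subband of two source images:
    salience = local mean of the probabilities (activity level),
    match    = normalised local correlation of the two probability maps."""
    s_a = uniform_filter(p_a, win)
    s_b = uniform_filter(p_b, win)
    cross = uniform_filter(p_a * p_b, win)
    match = 2.0 * cross / (uniform_filter(p_a ** 2, win)
                           + uniform_filter(p_b ** 2, win) + 1e-12)
    return s_a, s_b, match
```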

2013 ◽  
Vol 287 ◽  
pp. 63-72 ◽  
Author(s):  
Wei Wu ◽  
Xiaomin Yang ◽  
Yu Pang ◽  
Jian Peng ◽  
Gwanggil Jeon

2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Feng Zhu ◽  
Yingkun Hou ◽  
Jingyu Yang

A new multifocus image fusion method is proposed. Two image blocks are selected by sliding a window over the two source images at the same position, the discrete cosine transform (DCT) is applied to each block, and the alternating component (AC) energy of the two blocks is then compared to decide which one is well focused. In addition, block matching is used to find a group of image blocks that are all similar to the well-focused reference block. Finally, all the blocks are returned to their original positions through a weighted average, where the weights are determined by the AC energy of the well-focused block. Experimental results demonstrate that, unlike other spatial-domain methods, the proposed method effectively avoids block artifacts; it also significantly improves on the objective evaluation results obtained by several transform-domain methods.
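
As a rough illustration of the focus-decision step, the sketch below computes the AC energy of a block as the sum of squared 2-D DCT coefficients with the DC term removed and compares two co-located blocks; the function names and the tie-breaking choice are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.fft import dctn

def ac_energy(block):
    """AC energy of an image block: the sum of squared 2-D DCT coefficients
    with the DC term excluded.  A larger value indicates a sharper block."""
    c = dctn(block.astype(np.float64), norm='ortho')
    c[0, 0] = 0.0              # remove the DC (mean) component
    return float(np.sum(c ** 2))

def pick_focused(block_a, block_b):
    """Return the better-focused of two co-located blocks by comparing
    their AC energies (the decision step described in the abstract)."""
    return block_a if ac_energy(block_a) >= ac_energy(block_b) else block_b
```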


Author(s):  
Liu Xian-Hong ◽  
Chen Zhi-Bin

Background: A multi-scale, multidirectional image fusion method is proposed that introduces the Nonsubsampled Directional Filter Bank (NSDFB) into a multi-scale edge-preserving decomposition based on the fast guided filter, so that edges are preserved and directional information is extracted simultaneously. Methods: To obtain better fused sub-band coefficients, a Convolutional Sparse Representation (CSR) based fusion rule is introduced for the approximation sub-bands, and a Pulse Coupled Neural Network (PCNN) based fusion strategy, with the New Sum of Modified Laplacian (NSML) as its external input, is presented for the detail sub-bands. Results: Experimental results demonstrate the superiority of the proposed method over conventional methods in terms of visual effects and objective evaluations. Conclusion: Combining the fast guided filter and the nonsubsampled directional filter bank yields a multi-scale, directional, edge-preserving image fusion method that preserves edges while extracting directional information.
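
The abstract names the NSML only as the PCNN's external input without defining it, so the sketch below uses a common sum-of-modified-Laplacian formulation with diagonal terms and a local window sum; the paper's exact NSML (weights, step size, window) may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nsml(subband, win=3):
    """Sum-of-modified-Laplacian style focus measure for a detail sub-band,
    intended as the PCNN external input.  Horizontal, vertical and (scaled)
    diagonal second differences are accumulated, then summed over a local
    window.  This is a common variant, not necessarily the paper's NSML."""
    p = np.pad(subband.astype(np.float64), 1, mode='reflect')
    c = p[1:-1, 1:-1]
    ml = (np.abs(2.0 * c - p[1:-1, :-2] - p[1:-1, 2:])                  # horizontal
          + np.abs(2.0 * c - p[:-2, 1:-1] - p[2:, 1:-1])                # vertical
          + np.abs(2.0 * c - p[:-2, :-2] - p[2:, 2:]) / np.sqrt(2.0)    # diagonal
          + np.abs(2.0 * c - p[:-2, 2:] - p[2:, :-2]) / np.sqrt(2.0))   # anti-diagonal
    return uniform_filter(ml, win) * win * win   # windowed sum of the modified Laplacian
```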


2021 ◽  
Vol 92 ◽  
pp. 107174
Author(s):  
Yang Zhou ◽  
Xiaomin Yang ◽  
Rongzhu Zhang ◽  
Kai Liu ◽  
Marco Anisetti ◽  
...  
