Shearlet transform based technique for image fusion using median fusion rule

Author(s): Ashish Khare, Manish Khare, Richa Srivastava
2018 · Vol 12 (2) · pp. 73-84

Author(s): Peng-Fei Wang, Xiao-Qing Luo, Xin-Yi Li, Zhan-Cheng Zhang

The stacked sparse autoencoder is an efficient unsupervised feature extraction method with an excellent ability to represent complex data. In addition, the shift invariant shearlet transform is a state-of-the-art multiscale decomposition tool that is superior to traditional tools in many respects. Motivated by these advantages, a novel image fusion method based on the stacked sparse autoencoder and the shift invariant shearlet transform is proposed. First, the source images are decomposed into low- and high-frequency subbands by the shift invariant shearlet transform; second, a two-layer stacked sparse autoencoder is adopted as a feature extractor to obtain a deep, sparse representation of the high-frequency subbands; third, a choose-max fusion rule driven by the stacked sparse autoencoder features is proposed to fuse the high-frequency subband coefficients; then, a weighted-average fusion rule is adopted to merge the low-frequency subband coefficients; finally, the fused image is obtained by the inverse shift invariant shearlet transform. Experimental results show that the proposed method is superior to conventional methods in terms of both subjective and objective evaluations.
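The two fusion rules described above can be sketched with plain numpy. This is a minimal illustration, not the authors' implementation: the subbands below are toy arrays standing in for a real shift invariant shearlet decomposition, and the absolute coefficient value is used as a crude stand-in for the SSAE feature activity.

```python
import numpy as np

def choose_max_fuse(hiA, hiB, featA, featB):
    """Choose-max rule: at each position keep the high-frequency
    coefficient whose feature activity is larger."""
    return np.where(featA >= featB, hiA, hiB)

def weighted_average_fuse(loA, loB, wA=0.5, wB=0.5):
    """Weighted-average rule for the low-frequency subbands."""
    return wA * loA + wB * loB

# Toy subbands standing in for a SIST decomposition of two source images.
rng = np.random.default_rng(0)
hiA, hiB = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
loA, loB = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

# |coefficient| as the activity feature (the paper uses SSAE features here).
fused_hi = choose_max_fuse(hiA, hiB, np.abs(hiA), np.abs(hiB))
fused_lo = weighted_average_fuse(loA, loB)
```

With equal weights the low-frequency rule reduces to a simple average; in practice the weights and the activity feature are where the method's quality comes from.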


2010 · Vol 121-122 · pp. 373-378

Author(s): Jia Zhao, Li Lü, Hui Sun

According to the different frequency bands produced by the shearlet decomposition, selection principles for the lowpass and highpass subbands are discussed separately. The lowpass subband coefficients of the fused image are obtained by a fusion rule based on region variance, while the highpass subband coefficients are selected by a fusion rule based on region energy. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach produces a more satisfactory fusion outcome.
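The region-based selection rules can be sketched as follows. This is an illustrative numpy sketch under assumed choices (a 3×3 region, edge padding, ties broken toward image A), not the paper's exact implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def region_stat(x, r, stat):
    """Local statistic over a (2r+1) x (2r+1) window with edge padding."""
    k = 2 * r + 1
    padded = np.pad(x, r, mode='edge')
    windows = sliding_window_view(padded, (k, k))
    return stat(windows, axis=(-2, -1))

def fuse_lowpass(loA, loB, r=1):
    """Lowpass rule: keep the coefficient whose region variance is larger."""
    vA = region_stat(loA, r, np.var)
    vB = region_stat(loB, r, np.var)
    return np.where(vA >= vB, loA, loB)

def fuse_highpass(hiA, hiB, r=1):
    """Highpass rule: keep the coefficient whose region energy is larger."""
    eA = region_stat(hiA ** 2, r, np.sum)
    eB = region_stat(hiB ** 2, r, np.sum)
    return np.where(eA >= eB, hiA, hiB)
```

Both rules share the same structure: compute a local activity measure per pixel, then select coefficients pointwise from whichever source is more active.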


Author(s):  
Hui Zhang ◽  
Xinning Han ◽  
Rui Zhang

In multimodal image fusion, how to improve the visual effect of the fused image while preserving energy and extracting details has attracted increasing attention in recent years. Based on research into visual saliency and activity-level measurement of the base layer, a multimodal image fusion method based on a guided filter is proposed in this paper. First, multi-scale decomposition with a guided filter is used to decompose the two source images into a small-scale layer, a large-scale layer and a base layer. The maximum-absolute-value fusion rule is adopted in the small-scale layer, a weighted fusion rule based on visual parameters is adopted in the large-scale layer, and a fusion rule based on activity-level measurement is adopted in the base layer. Finally, the three fused layers are combined into the final fused image. Experimental results show that the proposed method improves edge handling and visual effect in multimodal image fusion.
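The guided-filter pipeline above can be sketched in numpy. The guided filter itself is the standard formulation; the decomposition radii, the epsilon value, and the plain averages used for the large-scale and base layers are assumptions standing in for the paper's saliency-weighted and activity-level rules.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box(x, r):
    """Mean filter over a (2r+1) x (2r+1) window with edge padding."""
    k = 2 * r + 1
    padded = np.pad(x, r, mode='edge')
    return sliding_window_view(padded, (k, k)).mean(axis=(-2, -1))

def guided_filter(guide, p, r, eps):
    """Standard guided filter: q = mean(a) * guide + mean(b)."""
    mI, mp = box(guide, r), box(p, r)
    cov = box(guide * p, r) - mI * mp
    var = box(guide * guide, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * guide + box(b, r)

def decompose(img, r1=2, r2=4, eps=1e-2):
    """Three-layer decomposition: small-scale, large-scale, base."""
    smooth1 = guided_filter(img, img, r1, eps)
    smooth2 = guided_filter(smooth1, smooth1, r2, eps)
    return img - smooth1, smooth1 - smooth2, smooth2

def fuse(imgA, imgB):
    sA, lA, bA = decompose(imgA)
    sB, lB, bB = decompose(imgB)
    small = np.where(np.abs(sA) >= np.abs(sB), sA, sB)  # max-absolute rule
    large = 0.5 * (lA + lB)  # stand-in for the visual-parameter weights
    base = 0.5 * (bA + bB)   # stand-in for the activity-level rule
    return small + large + base
```

By construction the three layers of each image sum back to the original, so fusing an image with itself reproduces it; the interesting behaviour appears when the two sources carry complementary detail.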

