Multimodal Image Fusion Method Based on Guided Filter

Author(s):  
Hui Zhang ◽  
Xinning Han ◽  
Rui Zhang

In multimodal image fusion, improving the visual effect of the fused image while balancing energy preservation and detail extraction has attracted increasing attention in recent years. Based on visual saliency and an activity-level measurement of the base layer, a multimodal image fusion method based on a guided filter is proposed in this paper. Firstly, multi-scale decomposition with a guided filter is used to decompose the two source images into a small-scale layer, a large-scale layer and a base layer. The maximum-absolute-value fusion rule is adopted in the small-scale layer, a weighted fusion rule based on visual saliency is adopted in the large-scale layer, and a fusion rule based on activity-level measurement is adopted in the base layer. Finally, the three fused scales are combined into the final fused image. The experimental results show that the proposed method improves edge handling and visual quality in multimodal image fusion.
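As a rough illustration of this pipeline, the sketch below decomposes two registered grayscale images (float32, scaled to [0, 1]) into the three layers using the standard box-filter guided filter of He et al., then applies simple stand-ins for the three rules. The saliency and activity-level measures here are illustrative assumptions, not the authors' exact definitions.

```python
import cv2
import numpy as np

def guided_filter(I, p, r, eps):
    """Classical guided filter (He et al.) built from box filters."""
    k = (2 * r + 1, 2 * r + 1)
    mean_I = cv2.boxFilter(I, -1, k)
    mean_p = cv2.boxFilter(p, -1, k)
    cov_Ip = cv2.boxFilter(I * p, -1, k) - mean_I * mean_p
    var_I = cv2.boxFilter(I * I, -1, k) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return cv2.boxFilter(a, -1, k) * I + cv2.boxFilter(b, -1, k)

def decompose(img, r1=4, r2=16, eps=0.01):
    """Split an image into small-scale, large-scale and base layers."""
    fine = guided_filter(img, img, r1, eps)    # removes small-scale detail
    base = guided_filter(fine, fine, r2, eps)  # removes large-scale detail
    return img - fine, fine - base, base

def fuse(img1, img2):
    s1, l1, b1 = decompose(img1)
    s2, l2, b2 = decompose(img2)
    # Small-scale layer: maximum absolute value.
    s = np.where(np.abs(s1) >= np.abs(s2), s1, s2)
    # Large-scale layer: saliency weights (assumed: smoothed Laplacian magnitude).
    w1 = cv2.GaussianBlur(np.abs(cv2.Laplacian(img1, cv2.CV_32F)), (11, 11), 0)
    w2 = cv2.GaussianBlur(np.abs(cv2.Laplacian(img2, cv2.CV_32F)), (11, 11), 0)
    w = w1 / (w1 + w2 + 1e-12)
    l = w * l1 + (1 - w) * l2
    # Base layer: activity-level weights (assumed: local average gradient).
    a1 = cv2.boxFilter(np.abs(cv2.Sobel(img1, cv2.CV_32F, 1, 0))
                       + np.abs(cv2.Sobel(img1, cv2.CV_32F, 0, 1)), -1, (31, 31))
    a2 = cv2.boxFilter(np.abs(cv2.Sobel(img2, cv2.CV_32F, 1, 0))
                       + np.abs(cv2.Sobel(img2, cv2.CV_32F, 0, 1)), -1, (31, 31))
    wa = a1 / (a1 + a2 + 1e-12)
    b = wa * b1 + (1 - wa) * b2
    return np.clip(s + l + b, 0, 1)
```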

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zongping Li ◽  
Wenxin Lei ◽  
Xudong Li ◽  
Tingting Liao ◽  
Jianming Zhang

Image fusion aims to effectively enhance the accuracy, stability, and comprehensiveness of information. Generally, infrared images lack sufficient background detail to describe the target scene accurately, while visible sensors have difficulty detecting radiation under adverse conditions, such as low light. It is hoped that the richness of image details can be improved through effective fusion algorithms. In this paper, we propose an infrared and visible image fusion algorithm that aims to overcome some common defects in the image fusion process. Firstly, we use a fast approximate bilateral filter to decompose the infrared and visible images into small-scale layers, a large-scale layer, and a base layer. Then, the fused base layer is obtained from local energy characteristics, which avoids the information loss of traditional fusion rules. The fused small-scale layers are acquired by selecting the absolute maximum, and the fused large-scale layer is obtained by a summation rule. Finally, the fused small-scale layers, large-scale layer, and base layer are merged to reconstruct the final fused image. Experimental results show that our method retains more detailed appearance information in the fused image and achieves good results in both qualitative and quantitative evaluations.
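A compressed sketch of this three-layer scheme follows, with OpenCV's cv2.bilateralFilter standing in for the paper's fast approximate bilateral filter; the filter parameters and the local-energy window size are assumptions, and the weighted base-layer blend is one plausible reading of "based on local energy characteristics".

```python
import cv2
import numpy as np

def fuse_ir_vis(ir, vis, d=9, sc=0.1, ss=7, win=9):
    """ir, vis: registered float32 grayscale images in [0, 1]."""
    layers = []
    for img in (ir, vis):
        fine = cv2.bilateralFilter(img, d, sc, ss)       # strips small-scale detail
        base = cv2.bilateralFilter(fine, d, sc, ss * 3)  # strips large-scale detail
        layers.append((img - fine, fine - base, base))
    (s1, l1, b1), (s2, l2, b2) = layers
    # Small-scale layers: absolute-maximum selection.
    s = np.where(np.abs(s1) >= np.abs(s2), s1, s2)
    # Large-scale layer: summation rule.
    l = l1 + l2
    # Base layer: weights from local energy (windowed mean of squared values).
    e1 = cv2.boxFilter(b1 * b1, -1, (win, win))
    e2 = cv2.boxFilter(b2 * b2, -1, (win, win))
    w = e1 / (e1 + e2 + 1e-12)
    b = w * b1 + (1 - w) * b2
    return np.clip(s + l + b, 0, 1)
```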


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 570 ◽  
Author(s):  
Jingchun Piao ◽  
Yunfan Chen ◽  
Hyunchul Shin

In this paper, we present a new and effective infrared (IR) and visible (VIS) image fusion method that uses a deep neural network. In our method, a Siamese convolutional neural network (CNN) is applied to automatically generate a weight map which represents the saliency of each pixel for a pair of source images. The CNN automatically encodes an image into a feature domain for classification. With the proposed method, the key problems in image fusion, namely activity-level measurement and fusion rule design, can be solved in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstruction result is more perceptually consistent with the human visual system. In addition, the visual effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that our proposed method achieves competitive results in terms of both quantitative assessment and visual quality.
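The sketch below mimics the structure of such a pipeline with PyWavelets: a per-pixel weight map steers a multi-scale wavelet fusion. A hand-crafted local-contrast softmax stands in for the Siamese CNN output, and the per-band rules are assumptions rather than the paper's exact design.

```python
import cv2
import numpy as np
import pywt

def weight_map(a, b, sigma=5.0):
    """Stand-in for the Siamese CNN: softmax over smoothed local contrast."""
    ca = cv2.GaussianBlur(np.abs(cv2.Laplacian(a, cv2.CV_32F)), (0, 0), sigma)
    cb = cv2.GaussianBlur(np.abs(cv2.Laplacian(b, cv2.CV_32F)), (0, 0), sigma)
    return np.exp(ca) / (np.exp(ca) + np.exp(cb))

def wavelet_fuse(a, b, wavelet='db2', level=3):
    w = weight_map(a, b)
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    # Approximation band: weighted average steered by the (resized) weight map.
    wr = cv2.resize(w, ca[0].shape[::-1])
    fused = [wr * ca[0] + (1 - wr) * cb[0]]
    # Detail bands: choose-max on coefficient magnitude.
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```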


2021 ◽  
Vol 38 (3) ◽  
pp. 607-617
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Multimodal image fusion is now widely used as an important processing tool in various image-related applications. Different sensors have been developed to capture useful information, chiefly infrared (IR) and visible (VI) image sensors. Fusing the images from both sensors provides better and more accurate scene information. The fused image is used mostly in military, surveillance, and remote-sensing applications. For better identification of targets and understanding of the overall scene, the fused image has to provide better contrast and more edge information. This paper introduces a novel multimodal image fusion method aimed at improving both contrast and edge information. The first step of the algorithm is to resize the source images. A 3×3 sharpen filter and a morphological hat transform are then applied separately to the resized IR and VI images. The discrete wavelet transform (DWT) is used to produce "low-frequency" and "high-frequency" sub-bands. A "filters based mean-weighted fusion rule" and a "filters based max-weighted fusion rule" are newly introduced in this algorithm for combining the "low-frequency" and "high-frequency" sub-bands, respectively. The fused image is reconstructed with the IDWT. The proposed method outperforms similar existing techniques in both subjective and objective evaluations.
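A skeletal version of these preprocessing and transform steps is sketched below. The paper's "filters based" weighted rules are not specified in the abstract, so a plain mean and an absolute-maximum rule stand in for them, and applying both the sharpening and the top-hat/bottom-hat boost to each input is an assumption.

```python
import cv2
import numpy as np
import pywt

SHARPEN = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)

def enhance(img, se_size=9):
    """3x3 sharpening plus a top-hat/bottom-hat contrast boost."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    sharp = cv2.filter2D(img, -1, SHARPEN)
    top = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)       # bright small structures
    bottom = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, se)  # dark small structures
    return sharp + top - bottom

def fuse_dwt(ir, vis, size=(256, 256), wavelet='haar'):
    ir = enhance(cv2.resize(ir, size).astype(np.float32))
    vis = enhance(cv2.resize(vis, size).astype(np.float32))
    a1, (h1, v1, d1) = pywt.dwt2(ir, wavelet)
    a2, (h2, v2, d2) = pywt.dwt2(vis, wavelet)
    a = 0.5 * (a1 + a2)  # low-frequency band: mean rule (stand-in)
    highs = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # high-frequency: abs-max
                  for x, y in ((h1, h2), (v1, v2), (d1, d2)))
    return pywt.idwt2((a, highs), wavelet)
```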


Author(s):  
Liu Xian-Hong ◽  
Chen Zhi-Bin

Background: A multi-scale, multidirectional image fusion method is proposed that introduces the Nonsubsampled Directional Filter Bank (NSDFB) into a multi-scale edge-preserving decomposition based on the fast guided filter. Methods: The proposed method preserves edges and extracts directional information simultaneously. To obtain better-fused sub-band coefficients, a Convolutional Sparse Representation (CSR)-based fusion rule is introduced for the approximation sub-bands, and a Pulse Coupled Neural Network (PCNN)-based fusion strategy, with the New Sum of Modified Laplacian (NSML) as the external input, is presented for the detail sub-bands. Results: Experimental results demonstrate the superiority of the proposed method over conventional methods in terms of visual effects and objective evaluations. Conclusion: Combining the fast guided filter and the nonsubsampled directional filter bank, this paper proposes a multi-scale, directional, edge-preserving image fusion method that both preserves edges and extracts directional information.
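As a point of reference for the PCNN's external input, the snippet below computes the classical Sum of Modified Laplacian over a window; the paper's "New" SML variant presumably adjusts this definition, which the abstract does not give.

```python
import cv2
import numpy as np

def sum_modified_laplacian(img, step=1, win=3):
    """Classical SML focus measure; borders wrap via np.roll (fine for a sketch)."""
    f = img.astype(np.float32)
    ml = (np.abs(2 * f - np.roll(f, step, axis=1) - np.roll(f, -step, axis=1))
          + np.abs(2 * f - np.roll(f, step, axis=0) - np.roll(f, -step, axis=0)))
    # Sum the modified Laplacian over a (win x win) neighborhood.
    return cv2.boxFilter(ml, -1, (win, win), normalize=False)
```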


Author(s):  
Peng Guo ◽  
Guoqi Xie ◽  
Renfa Li ◽  
Hui Hu

In feature-level image fusion, deep learning technology, particularly convolutional sparse representation (SR) theory, has emerged as a new topic over the past three years. This paper proposes an effective image fusion method based on convolutional SR, namely convolutional sparsity-based morphological component analysis with a guided filter (CS-MCA-GF). The guided filter operator and the choose-max coefficient fusion scheme introduced in this method effectively eliminate the artifacts generated by the morphological components in linear fusion and maintain the pixel saliency of the source images. Experiments show that the proposed method achieves excellent performance in multi-modal image fusion, including medical image fusion.
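The fragment below illustrates one common way to combine these two ingredients, as in guided-filtering-based fusion schemes: a choose-max decision map is smoothed by a guided filter (here cv2.ximgproc.guidedFilter, which requires opencv-contrib-python) before blending, suppressing blocking artifacts. It is a generic sketch, not the exact CS-MCA-GF procedure.

```python
import cv2  # requires opencv-contrib-python for cv2.ximgproc
import numpy as np

def fuse_component(c1, c2, guide, r=8, eps=0.1):
    """Fuse one morphological component pair (e.g., cartoon or texture layers).

    c1, c2 : float32 component images; guide : float32 source used as the guide.
    """
    # Binary choose-max decision map on coefficient magnitude.
    m = (np.abs(c1) >= np.abs(c2)).astype(np.float32)
    # Guided filtering turns the hard map into edge-aligned soft weights.
    w = np.clip(cv2.ximgproc.guidedFilter(guide, m, r, eps), 0.0, 1.0)
    return w * c1 + (1.0 - w) * c2
```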


2019 ◽  
Vol 34 (6) ◽  
pp. 605-612
Author(s):  
郭 盼 GUO Pan ◽  
何文超 HE Wen-chao ◽  
梁龙凯 LIANG Long-kai ◽  
张 萌 ZHANG Meng ◽  
吕绪浩 LYU Xu-hao ◽  
...  

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Yong Yang ◽  
Wenjuan Zheng ◽  
Shuying Huang

The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses into a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Thirdly, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in terms of both objective and subjective evaluations.
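A toy version of the train-then-classify step is sketched below with scikit-learn's MLPClassifier serving as the BP network. The three clarity features used (energy of Laplacian, a spatial-frequency proxy, and local visibility) are plausible stand-ins, since the abstract does not list the paper's exact features, and the training pair is synthesized by blurring a textured image so the focus labels are known.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier  # trained by back-propagation

def clarity_features(img, win=7):
    """Three per-pixel clarity cues (assumed features, not the paper's exact set)."""
    f = img.astype(np.float32)
    lap = cv2.Laplacian(f, cv2.CV_32F)
    eol = cv2.boxFilter(lap * lap, -1, (win, win))               # energy of Laplacian
    gy, gx = np.gradient(f)
    sf = cv2.boxFilter(gx * gx + gy * gy, -1, (win, win))        # spatial-frequency proxy
    local_mean = cv2.boxFilter(f, -1, (win, win))
    vis = cv2.boxFilter(np.abs(f - local_mean), -1, (win, win))  # local visibility
    return np.stack([eol, sf, vis], axis=-1).reshape(-1, 3)

# Synthetic supervision: blur a textured image so ground-truth clarity is known.
rng = np.random.default_rng(0)
sharp = cv2.GaussianBlur(rng.random((128, 128)).astype(np.float32), (0, 0), 1.0)
blurred = cv2.GaussianBlur(sharp, (0, 0), 3.0)
fa, fb = clarity_features(sharp), clarity_features(blurred)
X = np.concatenate([fa - fb, fb - fa])                      # feature differences
y = np.concatenate([np.ones(len(fa)), np.zeros(len(fb))])   # 1 = first image clearer
net = MLPClassifier(hidden_layer_sizes=(12,), max_iter=300).fit(X, y)

def initial_fused(img1, img2):
    """Pick the pixel the network judges clearer (the initial fusion step)."""
    diff = clarity_features(img1) - clarity_features(img2)
    mask = net.predict(diff).reshape(img1.shape).astype(bool)
    return np.where(mask, img1, img2)
```

The paper's subsequent region-detection and morphological refinement steps would operate on this initial fused image.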


2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

At present, image fusion commonly suffers from fuzzy edges and sparse texture. To address this problem, this study proposes an image fusion method that combines the lifting wavelet and the median filter, adopting different fusion rules for the two kinds of sub-bands. For the low-frequency coefficients, the coefficients are convolved and squared to enhance the edges of the fused image, and the detail information of the original images is then extracted by measuring regional characteristics. For the high-frequency coefficients, the sub-bands are first denoised by the median filter, and a fusion rule based on neighborhood spatial frequency with consistency verification is then applied to fuse the detail sub-images. Compared with the Weighted Average and Regional Energy methods, experimental results show that the proposed method retains the most edge and texture information. The method alleviates fuzzy edges and sparse texture to a certain degree and has strong practical value in image fusion.
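The sketch below captures the high-frequency half of this pipeline, with PyWavelets' standard DWT standing in for the lifting wavelet: the detail sub-bands are median-filtered, then selected by neighborhood spatial frequency, with a median-filtered decision map as a simple consistency check. The low-frequency rule is simplified here to a regional-energy weighted average.

```python
import cv2
import numpy as np
import pywt
from scipy.ndimage import median_filter

def neighborhood_sf(c, win=5):
    """Neighborhood spatial frequency: windowed row+column squared differences."""
    rf = np.zeros_like(c)
    cf = np.zeros_like(c)
    rf[:, 1:] = (c[:, 1:] - c[:, :-1]) ** 2
    cf[1:, :] = (c[1:, :] - c[:-1, :]) ** 2
    return np.sqrt(cv2.boxFilter(rf + cf, -1, (win, win)))

def fuse(img1, img2, wavelet='db2'):
    a1, (h1, v1, d1) = pywt.dwt2(img1.astype(np.float32), wavelet)
    a2, (h2, v2, d2) = pywt.dwt2(img2.astype(np.float32), wavelet)
    # Low frequency: regional-energy weighted average (simplified rule).
    e1 = cv2.boxFilter(a1 * a1, -1, (5, 5))
    e2 = cv2.boxFilter(a2 * a2, -1, (5, 5))
    w = e1 / (e1 + e2 + 1e-12)
    a = w * a1 + (1 - w) * a2
    highs = []
    for c1, c2 in ((h1, h2), (v1, v2), (d1, d2)):
        c1, c2 = median_filter(c1, 3), median_filter(c2, 3)  # denoise details
        choose1 = neighborhood_sf(c1) >= neighborhood_sf(c2)
        # Consistency verification: majority vote via a median filter on the map.
        choose1 = median_filter(choose1.astype(np.float32), 3) > 0.5
        highs.append(np.where(choose1, c1, c2))
    return pywt.idwt2((a, tuple(highs)), wavelet)
```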

