An Image Fusion Method Based on Curvelet Transform and Guided Filter Enhancement

2020, Vol. 2020, pp. 1-8
Author(s): Hui Zhang, Xu Ma, Yanshan Tian

To improve the clarity of fused images and address the sensitivity of visible-light imaging to illumination and weather, a fusion method for infrared and visible images aimed at night-vision context enhancement is proposed. First, a guided filter is used to enhance the details of the visible image. Then, the enhanced visible image and the infrared image are decomposed by the curvelet transform. An improved sparse representation is used to fuse the low-frequency part, while the high-frequency part is fused with a parameter-adaptive pulse-coupled neural network. Finally, the fusion result is obtained by the inverse curvelet transform. Experimental results show that the proposed method performs well in detail processing, edge protection, and preservation of source image information.
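The guided-filter detail-enhancement step described above can be sketched in NumPy. This is a minimal illustration of the standard guided filter (He et al.) used for self-filtering, not the paper's exact implementation; the radius, regularization, and gain values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving guided filter: I is the guide image, p the input."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I       # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p      # local covariance guide/input
    a = cov_Ip / (var_I + eps)              # local linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b

def enhance_details(img, r=8, eps=1e-3, gain=2.0):
    """Boost the detail layer (input minus its guided-filter smoothing)."""
    base = guided_filter(img, img, r, eps)
    return np.clip(base + gain * (img - base), 0.0, 1.0)
```

With `gain > 1` the detail layer (fine structure the filter smooths away) is amplified before fusion, while the edge-preserving base layer is left intact.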

Sensors, 2019, Vol. 19 (20), pp. 4556
Author(s): Yaochen Liu, Lili Dong, Yuanyuan Ji, Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods lose details because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, a multi-layer feature fusion strategy based on the discrete cosine transform (DCT) is presented, which not only highlights significant features but also enhances details. The base parts are fused by a weighted-average method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, this method not only retains the target regions of the source images but also enhances the background in the fused image. Compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused image and (ii) better objective assessment scores.
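A DCT-domain fusion step of the kind mentioned above can be sketched as follows. This is a hypothetical, simplified rule (keep the DCT coefficient with larger magnitude from either detail layer), not the paper's multi-layer strategy:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fusion(d1, d2):
    """Fuse two detail layers in the DCT domain by coefficient magnitude."""
    C1 = dctn(d1, norm='ortho')
    C2 = dctn(d2, norm='ortho')
    # keep the stronger coefficient at each frequency position
    fused = np.where(np.abs(C1) >= np.abs(C2), C1, C2)
    return idctn(fused, norm='ortho')
```

Because strong DCT coefficients correspond to dominant structures, the max-magnitude selection highlights significant features from both inputs in one pass.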


2012, Vol. 546-547, pp. 806-810
Author(s): Xu Zhang, Yun Hui Yan, Wen Hui Chen, Jun Jun Chen

To suppress the pseudo-Gibbs phenomena that arise around singularities when fusing images of strip surface defects captured from different angles, a novel image fusion method based on the Bandelet transform and pulse-coupled neural networks (Bandelet-PCNN) is proposed. The low-pass sub-band coefficients of each source image, obtained by the Bandelet transform, are fed into a PCNN, and coefficients are selected according to the ignition frequency produced by the neuron iterations. Finally, the fused image is obtained by the inverse Bandelet transform using the selected coefficients and the geometric flow parameters. Experimental results on strip surface defects such as scratches, abrasions, and pits demonstrate that the fused image effectively combines defect information from multiple source images. Compared with the classical wavelet transform and the plain Bandelet transform, the method preserves more detailed and comprehensive defect information and is therefore more effective.
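The ignition-frequency selection described above can be sketched with a simplified PCNN. This is a generic simplified PCNN model, not the paper's exact network; all parameter values (decay constants, linking strength, threshold magnitude) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(S, iterations=30, beta=0.2,
                     aL=1.0, aT=0.2, VL=1.0, VT=20.0):
    """Simplified PCNN: return how often each neuron fires (ignites)."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # linking weights to neighbors
    L = np.zeros_like(S)                     # linking input
    Y = np.zeros_like(S)                     # pulse output
    theta = np.ones_like(S)                  # dynamic threshold
    fires = np.zeros_like(S)
    for _ in range(iterations):
        L = np.exp(-aL) * L + VL * convolve(Y, W, mode='constant')
        U = S * (1.0 + beta * L)             # internal activity
        Y = (U > theta).astype(S.dtype)      # neuron fires when U > theta
        theta = np.exp(-aT) * theta + VT * Y # firing raises the threshold
        fires += Y
    return fires

def fuse_by_ignition(c1, c2, **kw):
    """Select the sub-band coefficient whose neuron fires more often."""
    return np.where(pcnn_fire_counts(np.abs(c1), **kw) >=
                    pcnn_fire_counts(np.abs(c2), **kw), c1, c2)
```

Larger coefficients drive their neurons over the threshold earlier and more often, so the firing count acts as an activity measure for coefficient selection.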


2018, Vol. 7 (2.19), pp. 55
Author(s): Gandla Maharnisha, R Veerasundari, Gandla Roopesh Kumar, Arunraj .

The fused image will have the structural details of the higher-spatial-resolution panchromatic (PAN) image as well as the rich spectral information of the multispectral (MS) image. Before fusion, a mean-adjustment algorithm with an Adaptive Median Filter (AMF) and a hybrid enhancer (a combination of AMF and Contrast Limited Adaptive Histogram Equalization, CLAHE) are used in pre-processing. The conventional Principal Component image fusion method is compared with a newly modified Curvelet transform fusion method. Principal Component fusion improves spatial resolution but may introduce spectral degradation in the output image. To overcome this degradation, Curvelet transform fusion can be used: the Curvelet transform represents edges with curves and extracts detailed information from the image. The Curvelet transform of the low-frequency approximate component of the PAN image and the high-frequency detail components of the PAN and MS images is used. Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE) are measured to evaluate fusion accuracy.
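The two accuracy metrics named above are standard and can be computed directly; a minimal sketch (assuming 8-bit images with peak value 255):

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a fused image."""
    diff = ref.astype(float) - test.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum pixel value."""
    e = rmse(ref, test)
    if e == 0:
        return float('inf')
    return 20.0 * np.log10(peak / e)
```

Lower RMSE and higher PSNR indicate that the fused image is closer to the reference.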


Entropy, 2019, Vol. 21 (12), pp. 1135
Author(s): Xinghua Huang, Guanqiu Qi, Hongyan Wei, Yi Chai, Jaesung Sim

In multi-modality image fusion, source image decomposition, such as by multi-scale transform (MST), is a necessary and widely used step. However, when MST is used directly to decompose source images into high- and low-frequency components, the decomposed components are not precise enough for the subsequent infrared-visible fusion operations. This paper proposes a non-subsampled contourlet transform (NSCT) based decomposition method for image fusion, by which source images are decomposed into corresponding high- and low-frequency sub-bands. Unlike MST, the obtained high-frequency sub-bands have different decomposition layers, and each layer contains different information. To obtain a more informative fused high-frequency component, maximum-absolute-value and pulse coupled neural network (PCNN) fusion rules are applied to the different sub-bands of the high-frequency components. Activity measures, such as phase congruency (PC), local measure of sharpness change (LSCM), and local signal strength (LSS), are designed to enhance the detailed features of the fused low-frequency components. The fused high- and low-frequency components are integrated to form the fused image. Experimental results show that the fused images obtained by the proposed method achieve good performance in clarity, contrast, and image information entropy.
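The maximum-absolute-value rule mentioned above is simple enough to state in a few lines; a minimal sketch:

```python
import numpy as np

def fuse_max_abs(h1, h2):
    """Max-absolute-value rule for high-frequency sub-bands:
    at each position keep the coefficient with larger magnitude."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)
```

The rule assumes that larger-magnitude coefficients carry the salient edges and textures, so each position keeps the stronger response of the two sub-bands.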


2013, Vol. 427-429, pp. 1589-1592
Author(s): Zhong Jie Xiao

This study proposes an improved NSCT fusion method based on the characteristics and fusion requirements of infrared and visible light images, improving the fusion rules for both the high-frequency and low-frequency coefficients. The low-frequency sub-band images adopt a pixel-feature energy-weighted fusion rule; the high-frequency sub-band images adopt a neighborhood-variance feature-information fusion rule. Fusion experiments show that the algorithm is robust and effectively extracts edge and texture information; the fused images have abundant scene information and clear targets. The algorithm is therefore an effective infrared and visible image fusion method.
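The two fusion rules named above can be sketched as follows. This is an illustrative reading of "pixel-feature energy weighting" and "neighborhood variance" (window size and the energy definition are assumptions), not the paper's exact formulas:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=3):
    """Neighborhood variance: E[x^2] - E[x]^2 over a size x size window."""
    m = uniform_filter(img, size)
    return uniform_filter(img * img, size) - m * m

def fuse_high(h1, h2, size=3):
    """High-frequency rule: pick the coefficient whose neighborhood
    variance (local detail activity) is larger."""
    return np.where(local_variance(h1, size) >= local_variance(h2, size),
                    h1, h2)

def fuse_low(l1, l2, size=3):
    """Low-frequency rule: weight sub-bands by local pixel energy."""
    e1 = uniform_filter(l1 * l1, size)
    e2 = uniform_filter(l2 * l2, size)
    w = e1 / (e1 + e2 + 1e-12)               # energy-proportional weight
    return w * l1 + (1.0 - w) * l2
```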


Author(s): Cheng Zhao, Yongdong Huang

Rolling guidance filtering (RGF) can smooth texture while preserving edges, and the non-subsampled shearlet transform (NSST) offers translation invariance and direction selectivity; on this basis a new infrared and visible image fusion method is proposed. First, the rolling guidance filter decomposes the infrared and visible images into base and detail layers. Then, the NSST is applied to the base layer to obtain high-frequency and low-frequency coefficients. The low-frequency coefficients are fused using a visual saliency map as the fusion rule, while the high-frequency sub-band coefficients are fused using gradient domain guided filtering (GDGF) and an improved sum of modified Laplacian. Finally, the detail layers are fused using phase congruency combined with gradient domain guided filtering. As a result, the proposed method not only extracts the infrared targets but also fully preserves the background information of the visible images. Experimental results indicate that the method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
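The base/detail decomposition step can be sketched in the rolling guidance style: an initial Gaussian blur removes small structures, then repeated joint filtering recovers large edges. Using a guided filter as the joint filter is an assumption of this sketch (the original RGF uses a joint bilateral filter), and the parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def guided(I, p, r, eps):
    """Guided filter: I is the guide, p the input (box-mean formulation)."""
    size = 2 * r + 1
    mI, mp = uniform_filter(I, size), uniform_filter(p, size)
    vI = uniform_filter(I * I, size) - mI * mI
    cIp = uniform_filter(I * p, size) - mI * mp
    a = cIp / (vI + eps)
    b = mp - a * mI
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def rolling_guidance(img, sigma=2.0, r=4, eps=0.01, iterations=4):
    """Rolling-guidance-style smoothing: small structures are removed by
    the initial Gaussian; large edges recover over the iterations."""
    J = gaussian_filter(img, sigma)          # step 1: small structure removal
    for _ in range(iterations):              # step 2: iterative edge recovery
        J = guided(J, img, r, eps)           # current result guides the filter
    return J

def base_detail(img, **kw):
    """Split an image into an edge-preserving base layer and a detail layer."""
    base = rolling_guidance(img, **kw)
    return base, img - base
```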


Author(s): Yahui Zhu, Li Gao

To overcome the shortcomings of traditional multiscale-transform-based image fusion algorithms, an infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy sets is proposed. First, the non-subsampled contourlet transform decomposes the source image into low-frequency and high-frequency coefficients. The latent low-rank representation model then decomposes the low-frequency coefficients into basic sub-bands and salient sub-bands, with the visual saliency map serving as the weighting coefficient. The basic sub-bands are fused by weighted summation and the salient sub-bands by the maximum-absolute-value rule; the two results are superimposed to obtain the low-frequency fusion coefficients. Intuitionistic fuzzy entropy, which measures the texture and edge information of the high-frequency coefficients, serves as the high-frequency fusion rule. Finally, the fused infrared-visible image is obtained by the inverse non-subsampled contourlet transform. Objective and subjective evaluations on several sets of fused images show that the method effectively preserves edge information and the rich information of the source images, producing better visual quality and objective scores than other fusion methods.
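The superposition of the two low-frequency rules can be sketched as below. The saliency proxy here (distance from a blurred mean) is a deliberately simple stand-in for the paper's visual saliency map, and the whole routine is an illustrative assumption rather than the authors' formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_weight(img):
    """Crude saliency proxy: deviation from the locally blurred intensity."""
    return np.abs(img - gaussian_filter(img, 5))

def fuse_low_frequency(basic1, basic2, salient1, salient2):
    """Weighted sum for basic sub-bands, max-abs for salient sub-bands,
    then superimpose the two fused results."""
    s1, s2 = saliency_weight(basic1), saliency_weight(basic2)
    w = s1 / (s1 + s2 + 1e-12)               # saliency-proportional weight
    fused_basic = w * basic1 + (1.0 - w) * basic2
    fused_salient = np.where(np.abs(salient1) >= np.abs(salient2),
                             salient1, salient2)
    return fused_basic + fused_salient
```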


Author(s): Liu Xian-Hong, Chen Zhi-Bin

Background: A multi-scale, multidirectional image fusion method is proposed that introduces the Nonsubsampled Directional Filter Bank (NSDFB) into a multi-scale edge-preserving decomposition based on the fast guided filter. Methods: The proposed method preserves edges and extracts directional information simultaneously. To obtain better fused sub-band coefficients, a Convolutional Sparse Representation (CSR) based fusion rule is introduced for the approximation sub-bands, and a Pulse Coupled Neural Network (PCNN) based strategy, with the New Sum of Modified Laplacian (NSML) as the external input, is presented for the detail sub-bands. Results: Experimental results demonstrate the superiority of the proposed method over conventional methods in terms of visual effects and objective evaluations. Conclusion: Combining the fast guided filter and the nonsubsampled directional filter bank yields a multi-scale, directional, edge-preserving image fusion method that preserves edges and extracts directional information.
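The sum-of-modified-Laplacian activity measure that feeds the PCNN can be sketched as follows. This is the classical SML focus measure with a window sum; the paper's "New" variant may differ, so treat the window size and padding as assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(img):
    """|2f(x,y) - f(x-1,y) - f(x+1,y)| + |2f(x,y) - f(x,y-1) - f(x,y+1)|."""
    p = np.pad(img, 1, mode='edge')
    ml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def sum_modified_laplacian(img, size=3):
    """Window sum of the modified Laplacian, a sharpness/activity measure."""
    return uniform_filter(modified_laplacian(img), size) * size * size
```

High SML marks locally sharp, detail-rich regions, which is why it is a natural external stimulus for a PCNN-based detail fusion strategy.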


2021, Vol. 11 (1)
Author(s): Lei Yan, Qun Hao, Jie Cao, Rizvi Saad, Kun Li, ...

Image fusion integrates information from multiple images of the same scene to generate a more informative composite image suitable for human and computer vision. Multiscale decomposition is one of the most common fusion approaches. In this study, a new fusion framework based on the octave Gaussian pyramid principle is proposed. Compared with conventional multiscale decomposition, the octave Gaussian pyramid framework retrieves more information by decomposing an image into two scale spaces (octave and interval spaces). Unlike traditional multiscale decomposition, which produces one set of detail and base layers, the proposed method decomposes an image into multiple sets of detail and base layers, efficiently retaining the high- and low-frequency information of the original image. Qualitative and quantitative comparisons with five existing methods on publicly available image databases demonstrate that the proposed method has better visual effects and scores highest in objective evaluation.
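An octave/interval Gaussian decomposition of the kind described above can be sketched as follows. The sigma schedule and the number of octaves/intervals are illustrative assumptions; the point is that each octave yields several detail layers plus a base layer, and the image is downsampled between octaves:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def octave_decompose(img, octaves=2, intervals=3, sigma0=1.0):
    """Per octave: `intervals` detail layers (differences of successive
    Gaussian blurs) plus one base layer; downsample 2x between octaves."""
    layers = []
    current = img.astype(float)
    for _ in range(octaves):
        blurred = [current]
        for i in range(1, intervals + 1):
            # sigma grows geometrically across the interval space
            blurred.append(gaussian_filter(current,
                                           sigma0 * 2 ** (i / intervals)))
        details = [blurred[i] - blurred[i + 1] for i in range(intervals)]
        layers.append({'details': details, 'base': blurred[-1]})
        current = blurred[-1][::2, ::2]      # downsample for the next octave
    return layers
```

Within one octave the decomposition is exactly invertible: summing the detail layers and adding the base layer reconstructs the octave's input.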

