Infrared and Visible Image Fusion Based on Nonlinear Enhancement and NSST Decomposition

2020 ◽  
Author(s):  
Xiaoxue XING ◽  
Cheng LIU ◽  
Cong LUO ◽  
Tingfa XU

Abstract In Multi-scale Geometric Analysis (MGA)-based fusion methods for infrared and visible images, adopting the same representation for both types of images results in a thermal radiation target that is not obvious in the fused image and can hardly be distinguished from the background. To solve this problem, a novel fusion algorithm based on nonlinear enhancement and Non-Subsampled Shearlet Transform (NSST) decomposition is proposed. Firstly, the NSST is used to decompose the two source images into low- and high-frequency sub-bands. Then, the Wavelet Transform (WT) is used to decompose the high-frequency sub-bands into approximate sub-bands and directional detail sub-bands. The "average" fusion rule is applied to the approximate sub-bands, and the "max-absolute" fusion rule is applied to the directional detail sub-bands. The inverse WT is used to reconstruct the high-frequency sub-bands. To highlight the thermal radiation target, we construct a non-linear transform function to determine the fusion weight of the low-frequency sub-bands, whose parameters can be further adjusted to meet different fusion requirements. Finally, the inverse NSST is used to reconstruct the fused image. The experimental results show that the proposed method can simultaneously enhance the thermal target in infrared images and preserve the texture details in visible images, and that it is competitive with or even superior to state-of-the-art fusion methods in terms of both visual and quantitative evaluations.
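A minimal sketch of the wavelet-level fusion step described above, assuming the NSST high-frequency sub-bands are already available as 2-D float arrays `hf_ir` and `hf_vis` (the NSST decomposition and the nonlinear low-frequency weighting are not reproduced here; function and variable names are illustrative):

```python
import numpy as np
import pywt

def fuse_high_frequency(hf_ir, hf_vis, wavelet="haar"):
    """Fuse one pair of high-frequency sub-bands via a single-level 2-D DWT:
    'average' rule on the approximation, 'max-absolute' rule on the details."""
    cA_ir, (cH_ir, cV_ir, cD_ir) = pywt.dwt2(hf_ir, wavelet)
    cA_vis, (cH_vis, cV_vis, cD_vis) = pywt.dwt2(hf_vis, wavelet)

    # "average" fusion rule for the approximate sub-band
    cA = 0.5 * (cA_ir + cA_vis)

    # "max-absolute" fusion rule for the directional detail sub-bands
    def max_abs(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    details = (max_abs(cH_ir, cH_vis),
               max_abs(cV_ir, cV_vis),
               max_abs(cD_ir, cD_vis))

    # inverse WT reconstructs the fused high-frequency sub-band
    return pywt.idwt2((cA, details), wavelet)
```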



Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Chaowei Duan ◽  
Yiliu Liu ◽  
Changda Xing ◽  
Zhisheng Wang

An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual saliency based threshold optimization. The method merges complementary information from the multimodality source images into a more informative composite image in a two-scale domain, in which the significant objects/regions are highlighted and rich feature information is preserved. Firstly, the source images are decomposed into two-scale representations, namely the approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Secondly, a visual saliency based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in infrared images and retain the high-intensity regions in visible images. A sparse representation based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with more natural visual effects. Extensive experimental results demonstrate that the proposed method achieves comparable or superior performance compared with several state-of-the-art fusion methods in both visual results and objective assessments.
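A simplified illustration of saliency-weighted fusion of the approximate layers, assuming `base_ir` and `base_vis` are the two approximate layers as float arrays. The paper's truncated-Huber smoothing and threshold optimization are not reproduced; saliency is approximated here by deviation from the mean intensity, purely as a stand-in:

```python
import numpy as np

def fuse_base_layers(base_ir, base_vis, eps=1e-8):
    # crude saliency proxy: absolute deviation from the layer's mean intensity
    sal_ir = np.abs(base_ir - base_ir.mean())
    sal_vis = np.abs(base_vis - base_vis.mean())

    # per-pixel weight in [0, 1] favoring the more salient source
    w_ir = sal_ir / (sal_ir + sal_vis + eps)
    return w_ir * base_ir + (1.0 - w_ir) * base_vis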


2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Baoqing Guo ◽  
Xingfang Zhou ◽  
Yingzi Lin ◽  
Liqiang Zhu ◽  
Zujun Yu

Objects intruding into the high-speed railway clearance pose a great threat to running trains. In order to improve the accuracy of railway intrusion detection, an automatic multimodal registration and fusion algorithm for infrared and visible images with different fields of view is presented. The nearest-to-next-nearest distance ratio, geometric, similar-triangle, and RANSAC constraints are applied successively to refine the matched SURF feature points. Correct matching points are accumulated over multiple frames to overcome the shortage of matching points in a single image pair. After registration, an improved Contourlet transform fusion algorithm combining total variation and local region energy is proposed. The low-frequency sub-band coefficients are fused with a total variation model, the high-frequency sub-band coefficients are fused by local region energy, and the inverse Contourlet transform is used to reconstruct the fused image. Comparison with four other popular fusion methods shows that our algorithm has the best overall performance for multimodal railway image fusion.
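A hedged sketch of the feature-matching stage: SURF keypoints, the nearest-to-next-nearest distance ratio test, and RANSAC homography estimation with OpenCV. SURF requires opencv-contrib built with the non-free modules; the geometric and similar-triangle constraints and the multi-frame accumulation described above are not reproduced, and all parameter values are illustrative:

```python
import cv2
import numpy as np

def register_pair(ir_gray, vis_gray, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(ir_gray, None)
    kp2, des2 = surf.detectAndCompute(vis_gray, None)

    # ratio of nearest to next-nearest match distance
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    # RANSAC constraint on the remaining correspondences
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```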


2014 ◽  
Vol 989-994 ◽  
pp. 3763-3767
Author(s):  
Hai Feng Tan ◽  
Tian Wen Luo ◽  
Jing Jun Zhu ◽  
Guan Zhong Li ◽  
Quan Xi Zhang

A novel fusion algorithm based on the Nonsubsampled Contourlet Transform (NSCT) is proposed according to the characteristics of infrared and visible images. Firstly, the registered infrared and visible images of the same scene are decomposed by the NSCT; then the low-frequency coefficients are fused by combining local energy with a normalized correlation matrix, and the high-frequency coefficients are fused by regional energy matching with regional variance; finally, the fused image is obtained by performing the inverse NSCT. Experimental results indicate that the proposed algorithm can effectively extract more detail information and that its fusion performance is markedly better than that of traditional fusion methods.
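A minimal sketch of a regional-energy "choose-max" rule for high-frequency coefficients, assuming `c_ir` and `c_vis` are corresponding NSCT sub-band coefficient arrays; the NSCT itself, the normalized-correlation weighting of the low-frequency band, and the variance matching are not reproduced here:

```python
import cv2
import numpy as np

def fuse_by_region_energy(c_ir, c_vis, win=3):
    # regional energy = local sum of squared coefficients over a win x win window
    e_ir = cv2.boxFilter(c_ir * c_ir, ddepth=-1, ksize=(win, win), normalize=False)
    e_vis = cv2.boxFilter(c_vis * c_vis, ddepth=-1, ksize=(win, win), normalize=False)

    # keep, at each position, the coefficient from the source with larger energy
    return np.where(e_ir >= e_vis, c_ir, c_vis)
```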


2010 ◽  
Vol 20-23 ◽  
pp. 45-51
Author(s):  
Xiang Li ◽  
Yue Shun He ◽  
Xuan Zhan ◽  
Feng Yu Liu

Abstract Based on an analysis of the features of infrared and visible images, this paper proposes an improved fusion algorithm using the Directionlet transform. It works as follows: firstly, the color visible image is separated into its component images; then anisotropic decomposition is performed on the component images and the infrared image; after analysing these images, they are processed according to regional energy rules; finally, the color intensity is incorporated to obtain the fused image. The simulation results show that this algorithm can effectively fuse infrared and visible images: the fused images not only maintain the environmental details but also underline the edge features, which makes the algorithm suitable for fusion of images with strong edges; it is therefore robust and convenient.
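A rough sketch of the color-handling idea only: separate the visible image into luminance and chrominance, fuse the luminance with the infrared image, then reinject the color. The Directionlet decomposition and regional-energy rule from the paper are replaced here by a plain weighted average purely for brevity, and all names are illustrative:

```python
import cv2
import numpy as np

def fuse_color_visible_with_ir(vis_bgr, ir_gray, alpha=0.5):
    # split the color visible image into luminance (Y) and chrominance (Cr, Cb)
    ycrcb = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)

    # fuse only the luminance with the infrared image (stand-in for the
    # Directionlet/regional-energy fusion of the component images)
    y_fused = alpha * y + (1.0 - alpha) * ir_gray.astype(np.float32)

    # incorporate the color back and return a displayable image
    fused = cv2.merge([np.clip(y_fused, 0, 255), cr, cb]).astype(np.uint8)
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)
```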


2012 ◽  
Vol 424-425 ◽  
pp. 223-226 ◽  
Author(s):  
Zheng Hong Cao ◽  
Yu Dong Guan ◽  
Peng Wang ◽  
Chun Li Ti

This paper focuses on the fusion of visible and infrared images, discusses the existing algorithms in depth, and proposes a novel method based on new fusion rules. Each image is first decomposed into low-frequency and high-frequency coefficients by the NSCT, and the characteristics of the visible and infrared images are then taken into account to complete the fusion. Finally, the quality of the fused images produced by different algorithms is compared using several existing criteria. MATLAB is employed for the simulation, and the results demonstrate that the algorithm can effectively improve the quality of the fused image without losing image features.


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3827 ◽  
Author(s):  
Qinglei Du ◽  
Han Xu ◽  
Yong Ma ◽  
Jun Huang ◽  
Fan Fan

In infrared and visible image fusion, existing methods typically have a prerequisite that the source images share the same resolution. However, due to limitations of hardware devices and application environments, infrared images frequently suffer from markedly lower resolution compared with the corresponding visible images. In this case, current fusion methods inevitably cause texture information loss in visible images or blur thermal radiation information in infrared images. Moreover, the principle of existing fusion rules typically focuses on preserving texture details in source images, which may be inappropriate for fusing infrared thermal radiation information because it is characterized by pixel intensities, possibly neglecting the prominence of targets in fused images. To address these difficulties, we propose a novel method that fuses infrared and visible images of different resolutions and generates high-resolution, clear, and accurate fused images. Specifically, the fusion problem is formulated as a total variation (TV) minimization problem. The data fidelity term constrains the pixel intensity similarity of the downsampled fused image with respect to the infrared image, and the regularization term compels the gradient similarity of the fused image with respect to the visible image. The fast iterative shrinkage-thresholding algorithm (FISTA) framework is applied to improve the convergence rate. Our resulting fused images are similar to super-resolved infrared images sharpened by the texture information from the visible images. The advantages and innovations of our method are demonstrated by qualitative and quantitative comparisons with six state-of-the-art methods on publicly available datasets.
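A simplified numerical sketch of the "intensity fidelity to the infrared image plus gradient fidelity to the visible image" idea, assuming both images already share one resolution. The paper's formulation uses an l1 total-variation objective with a downsampling operator solved by FISTA; here a smooth quadratic surrogate is minimized by plain gradient descent purely for illustration:

```python
import numpy as np

def _laplacian(x):
    # 5-point Laplacian with periodic boundaries (via np.roll)
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def fuse_gradient_transfer(ir, vis, lam=2.0, iters=300):
    """Minimize 0.5*||F - IR||^2 + 0.5*lam*||grad F - grad VIS||^2."""
    ir = ir.astype(np.float64)
    lap_vis = _laplacian(vis.astype(np.float64))
    f = ir.copy()
    tau = 1.0 / (1.0 + 8.0 * lam)   # conservative step size for this quadratic
    for _ in range(iters):
        # gradient of the objective: data fidelity + gradient-transfer term
        grad = (f - ir) - lam * (_laplacian(f) - lap_vis)
        f -= tau * grad
    return f
```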


2013 ◽  
Vol 427-429 ◽  
pp. 1589-1592
Author(s):  
Zhong Jie Xiao

The study proposes an improved NSCT fusion method based on the characteristics of infrared and visible-light images and the fusion requirements. The high-frequency and low-frequency coefficient fusion rules are improved: the low-frequency sub-band images adopt a pixel-feature-energy-weighted fusion rule, and the high-frequency sub-band images adopt a neighborhood-variance feature-information fusion rule. Experimental results show that the algorithm is robust, effectively extracts edge and texture information, and produces fused images with abundant scene information and clear targets. The algorithm is therefore an effective infrared and visible image fusion method.
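A short sketch of a neighborhood-variance rule for the high-frequency sub-bands, assuming `c_ir` and `c_vis` are matching sub-band coefficient arrays; the NSCT decomposition and the energy-weighted low-frequency rule are not reproduced, and window size and names are illustrative:

```python
import cv2
import numpy as np

def fuse_by_local_variance(c_ir, c_vis, win=3):
    def local_var(x):
        # neighborhood variance over a win x win window: E[x^2] - E[x]^2
        mean = cv2.blur(x, (win, win))
        return cv2.blur(x * x, (win, win)) - mean * mean

    # keep the coefficient from the source with larger local variance
    return np.where(local_var(c_ir) >= local_var(c_vis), c_ir, c_vis)
```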


Author(s):  
Cheng Zhao ◽  
Yongdong Huang

Rolling guidance filtering (RGF) can smooth texture while preserving edges, and the non-subsampled shearlet transform (NSST) offers translation invariance and direction selectivity; based on these properties, a new infrared and visible image fusion method is proposed. Firstly, the rolling guidance filter is used to decompose the infrared and visible images into base and detail layers. Then, the NSST is applied to the base layer to obtain high-frequency and low-frequency coefficients. The low-frequency coefficients are fused using a visual saliency map as the fusion rule, while the high-frequency sub-band coefficients are fused using gradient domain guided filtering (GDGF) and the improved Laplacian sum. Finally, the detail layers are fused using a rule that combines phase congruency and gradient domain guided filtering. As a result, the proposed method not only extracts the infrared targets but also fully preserves the background information of the visible images. Experimental results indicate that our method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
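A brief sketch of the base/detail split via rolling guidance filtering, assuming opencv-contrib (the ximgproc module) is available; the NSST stage, GDGF, and the saliency/phase-congruency fusion rules from the paper are not reproduced, and the filter parameters are illustrative:

```python
import cv2
import numpy as np

def base_detail_split(img, d=9, sigma_color=25.0, sigma_space=3.0, iters=4):
    src = img.astype(np.float32)
    # edge-preserving, texture-smoothing base layer
    base = cv2.ximgproc.rollingGuidanceFilter(
        src, d=d, sigmaColor=sigma_color, sigmaSpace=sigma_space, numOfIter=iters)
    # detail layer = original minus smoothed base
    detail = src - base
    return base, detail
```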


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing often requires either the full image or only the part of the image that is relevant from the user's point of view, such as the region of an object. The main purpose of fusion is to minimize the difference between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image is worth investigating in image fusion. An image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, namely Local Energy Match NSCT based on the discrete contourlet transform, which is well suited to capturing the details of curved edges and improves the edge information of the fused image by reducing distortion. This transform decomposes the multimodal images into finer and coarser details, and the finest details are further decomposed into different resolutions along different orientations. The input multimodal images, namely CT and MRI images, are first transformed by the Non-Subsampled Contourlet Transform (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our system, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, while the high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-match-based coefficients. To evaluate the fusion accuracy, the Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and Correlation Coefficient are used in this work.
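An illustrative sketch of a Gabor-filter-bank activity measure for weighting the low-frequency coefficients, assuming `low_ct` and `low_mri` are the low-frequency NSCT sub-bands of the CT and MRI images; the NSCT itself, the gradient-based high-frequency rule, and the local-energy-match reconstruction are not reproduced, and all parameters are illustrative:

```python
import cv2
import numpy as np

def gabor_activity(x, ksize=9, sigma=2.0, lambd=8.0, gamma=0.5, n_orient=4):
    """Sum of absolute Gabor responses over several orientations."""
    act = np.zeros_like(x, dtype=np.float32)
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
        act += np.abs(cv2.filter2D(x.astype(np.float32), -1, kern))
    return act

def fuse_low_frequency(low_ct, low_mri, eps=1e-8):
    # weight each source by its Gabor activity, then blend (weighted averaging)
    a_ct, a_mri = gabor_activity(low_ct), gabor_activity(low_mri)
    w = a_ct / (a_ct + a_mri + eps)
    return w * low_ct + (1.0 - w) * low_mri
```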

