A Rapid Fusion Algorithm of Infrared and the Visible Images Based on Directionlet Transform

2010 ◽  
Vol 20-23 ◽  
pp. 45-51
Author(s):  
Xiang Li ◽  
Yue Shun He ◽  
Xuan Zhan ◽  
Feng Yu Liu

Keywords: Directionlet transform; image fusion; infrared images; fusion rule; anisotropic

Abstract Based on an analysis of the characteristics of infrared and visible images, this paper proposes an improved fusion algorithm using the Directionlet transform. The algorithm proceeds as follows: first, the colour visible image is separated into its component images; then anisotropic decomposition is applied to the component images and the infrared image; the resulting coefficients are fused according to a regional-energy rule; finally, the colour components are reincorporated to obtain the fused image. Simulation results show that the algorithm effectively fuses infrared and visible images: the fused images preserve environmental details while emphasizing edge features, which makes the method well suited to scenes with strong edges; the algorithm is also robust and convenient.
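The regional-energy rule named above can be illustrated compactly. The Directionlet transform is not available in common Python libraries, so the sketch below assumes the anisotropic coefficient maps `c_ir` and `c_vis` have already been computed; the window size and the winner-take-all form of the rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a regional-energy fusion rule applied to two
# same-shape coefficient maps (e.g. anisotropic decomposition outputs).
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy_fuse(c_ir: np.ndarray, c_vis: np.ndarray, win: int = 3) -> np.ndarray:
    """Per coefficient, keep the source whose local (regional) energy is larger."""
    e_ir = uniform_filter(c_ir ** 2, size=win)    # local energy of IR coefficients
    e_vis = uniform_filter(c_vis ** 2, size=win)  # local energy of visible coefficients
    return np.where(e_ir >= e_vis, c_ir, c_vis)
```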

2020 ◽  
Author(s):  
Xiaoxue XING ◽  
Cheng LIU ◽  
Cong LUO ◽  
Tingfa XU

Abstract In Multi-scale Geometric Analysis (MGA)-based fusion methods for infrared and visible images, adopting the same representation for the two types of images results in a fused image whose thermal radiation target is not obvious and can hardly be distinguished from the background. To solve this problem, a novel fusion algorithm based on nonlinear enhancement and Non-Subsampled Shearlet Transform (NSST) decomposition is proposed. Firstly, NSST is used to decompose the two source images into low- and high-frequency sub-bands. Then, the Wavelet Transform (WT) is used to decompose the high-frequency sub-bands into approximate sub-bands and directional detail sub-bands. The “average” fusion rule is applied to the approximate sub-bands, and the “max-absolute” fusion rule is applied to the directional detail sub-bands; the inverse WT then reconstructs the fused high-frequency sub-bands. To highlight the thermal radiation target, a nonlinear transform function is constructed to determine the fusion weights of the low-frequency sub-bands, whose parameters can be further adjusted to meet different fusion requirements. Finally, the inverse NSST reconstructs the fused image. Experimental results show that the proposed method simultaneously enhances the thermal target from the infrared image and preserves the texture details of the visible image, and that it is competitive with or even superior to state-of-the-art fusion methods in both visual and quantitative evaluations.
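A minimal sketch of the two stated fusion rules, assuming the NSST high-frequency sub-bands are already available as 2-D arrays (NSST itself is not part of standard Python packages); `pywt` supplies the wavelet stage, and the `db2` wavelet is an arbitrary choice.

```python
# "Average" rule for the approximate sub-band, "max-absolute" rule for the
# directional detail sub-bands, applied to one pair of high-frequency sub-bands.
import numpy as np
import pywt

def fuse_highfreq_subband(ir_band: np.ndarray, vis_band: np.ndarray) -> np.ndarray:
    ir_approx, ir_details = pywt.dwt2(ir_band, "db2")
    vis_approx, vis_details = pywt.dwt2(vis_band, "db2")

    # "average" rule for the approximate sub-band.
    fused_approx = 0.5 * (ir_approx + vis_approx)

    # "max-absolute" rule for each directional detail sub-band (LH, HL, HH).
    fused_details = tuple(
        np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
        for d_ir, d_vis in zip(ir_details, vis_details)
    )

    # Inverse WT reconstructs the fused high-frequency sub-band.
    return pywt.idwt2((fused_approx, fused_details), "db2")
```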


Author(s):  
Han Xu ◽  
Pengwei Liang ◽  
Wei Yu ◽  
Junjun Jiang ◽  
Jiayi Ma

In this paper, we propose a new end-to-end model, called dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike the pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through the adversarial process between a generator and two discriminators, in addition to the specially designed content loss. The generator is trained to generate real-like fused images to fool discriminators. The two discriminators are trained to calculate the JS divergence between the probability distribution of downsampled fused images and infrared images, and the JS divergence between the probability distribution of gradients of fused images and gradients of visible images, respectively. Thus, the fused images can compensate for the features that are not constrained by the single content loss. Consequently, the prominence of thermal targets in the infrared image and the texture details in the visible image can be preserved or even enhanced in the fused image simultaneously. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN can be preferably applied to the fusion of different resolution images. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state-of-the-art.
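To make the dual-discriminator constraint concrete, here is a hedged PyTorch sketch of the generator's adversarial loss: one discriminator judges the downsampled fused image against the infrared distribution, the other judges gradients of the fused image against gradients of the visible image. The discriminator callables `d_ir` and `d_vis`, the downsampling factor, and the forward-difference gradient are illustrative assumptions, not the published DDcGAN components, and the content-loss term is omitted.

```python
import torch
import torch.nn.functional as F

def gradient(img: torch.Tensor) -> torch.Tensor:
    """Forward-difference gradient magnitude of an NCHW batch (illustrative)."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))

def generator_adv_loss(d_ir, d_vis, fused: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Generator tries to fool both discriminators (non-saturating GAN loss)."""
    fused_lowres = F.avg_pool2d(fused, scale)                 # match IR resolution
    loss_ir = -torch.log(d_ir(fused_lowres) + 1e-8).mean()    # intensity branch
    loss_vis = -torch.log(d_vis(gradient(fused)) + 1e-8).mean()  # gradient branch
    return loss_ir + loss_vis
```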


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Chaowei Duan ◽  
Yiliu Liu ◽  
Changda Xing ◽  
Zhisheng Wang

An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual saliency based threshold optimization. The method merges complementary information from multimodality source images into a more informative composite image in a two-scale domain, in which the significant objects/regions are highlighted and rich feature information is preserved. Firstly, source images are decomposed into two-scale image representations, namely, the approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Secondly, a visual saliency based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in infrared images and retain the high-intensity regions in visible images. A sparse representation based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with more natural visual effects. Extensive experimental results demonstrate that the proposed method achieves comparable or superior performance compared with several state-of-the-art fusion methods in both visual results and objective assessments.
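The two-scale structure can be sketched as follows. The truncated Huber penalty smoother and the optimized saliency threshold are specialized components; in this illustration an edge-preserving bilateral filter and a fixed percentile threshold stand in for them, and a max-absolute rule replaces the sparse-representation rule for the residual layers, so this is an approximation of the pipeline's shape, not the published method.

```python
import cv2
import numpy as np

def two_scale_fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Fuse two same-size 8-bit grayscale images via a two-scale decomposition."""
    # Approximate (base) layers via edge-preserving smoothing.
    base_ir = cv2.bilateralFilter(ir, 9, 75, 75).astype(np.float32)
    base_vis = cv2.bilateralFilter(vis, 9, 75, 75).astype(np.float32)

    # Residual (detail) layers.
    res_ir = ir.astype(np.float32) - base_ir
    res_vis = vis.astype(np.float32) - base_vis

    # Crude saliency threshold: keep hot IR regions, otherwise favour bright VIS.
    mask = base_ir > np.percentile(base_ir, 90)
    fused_base = np.where(mask, base_ir, np.maximum(base_vis, 0.5 * (base_ir + base_vis)))

    # Max-absolute detail fusion (stand-in for the sparse-representation rule).
    fused_res = np.where(np.abs(res_ir) >= np.abs(res_vis), res_ir, res_vis)

    return np.clip(fused_base + fused_res, 0, 255).astype(np.uint8)
```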


2020 ◽  
Vol 8 (6) ◽  
pp. 1525-1529

Image fusion is the process of combining two or more images of the same scene, taken from different sensors, to produce a composite image with rich details. The progression of infrared (IR) and visible (VI) image fusion and its ever-growing demands have driven algorithmic development in the field over the last several years. The two modalities have to be integrated, together with the necessary information, to form a single image. In this article, a novel image fusion algorithm is introduced that combines bilateral and Roberts filters (method I) and moving-average and bilateral filters (method II) to fuse infrared and visible images. The proposed algorithm performs a two-scale decomposition using an average filter, and the detail information is obtained by subtracting the base layer from the source image. Smooth and detail weights of the source images are obtained using the two methods mentioned above. A weight-based fusion rule is then used to amalgamate the source image information into a single image. The performance of both methods is compared qualitatively and quantitatively; experimental results favour method I over method II.
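A minimal sketch of method I under stated assumptions: the abstract does not give the exact weight formulas, so the Roberts-magnitude weighting, its bilateral smoothing, and the normalization below are illustrative choices.

```python
import cv2
import numpy as np

ROBERTS_X = np.array([[1, 0], [0, -1]], dtype=np.float32)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], dtype=np.float32)

def decompose(img: np.ndarray, ksize: int = 31):
    """Two-scale decomposition: base = average filter, detail = residual."""
    base = cv2.blur(img.astype(np.float32), (ksize, ksize))
    return base, img.astype(np.float32) - base

def roberts_magnitude(img: np.ndarray) -> np.ndarray:
    """Roberts cross-gradient edge strength."""
    gx = cv2.filter2D(img, -1, ROBERTS_X)
    gy = cv2.filter2D(img, -1, ROBERTS_Y)
    return np.abs(gx) + np.abs(gy)

def fuse_method1(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)
    # Detail weights from Roberts edge strength, smoothed by a bilateral filter.
    w_ir = cv2.bilateralFilter(roberts_magnitude(det_ir), 9, 75, 75)
    w_vis = cv2.bilateralFilter(roberts_magnitude(det_vis), 9, 75, 75)
    w = w_ir / (w_ir + w_vis + 1e-8)
    fused = 0.5 * (base_ir + base_vis) + w * det_ir + (1 - w) * det_vis
    return np.clip(fused, 0, 255).astype(np.uint8)
```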


2011 ◽  
Vol 128-129 ◽  
pp. 589-593 ◽  
Author(s):  
Yi Feng Niu ◽  
Sheng Tao Xu ◽  
Wei Dong Hu

Infrared and visible image fusion is an important precondition for realizing target perception in unmanned aerial vehicles (UAVs), on the basis of which a UAV can perform various missions. The details in visible images are abundant, while the target information is more prominent in infrared images. However, conventional fusion methods are mostly based on region segmentation, so a fused image suitable for target recognition cannot actually be acquired. In this paper, a novel fusion method for infrared and visible images based on target regions in the discrete wavelet transform (DWT) domain is proposed, which can capture more target information while preserving the details. Experimental results show that our method generates better fused images for target recognition.
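For reference, a minimal `pywt` sketch of DWT-domain fusion in this spirit; the paper's target-region step is not reproduced here, and the average/max-absolute rules are common defaults rather than the authors' exact rules.

```python
import numpy as np
import pywt

def dwt_fuse(ir: np.ndarray, vis: np.ndarray, wavelet: str = "db2", level: int = 2):
    """Multi-level DWT fusion of two same-size grayscale images."""
    c_ir = pywt.wavedec2(ir.astype(np.float32), wavelet, level=level)
    c_vis = pywt.wavedec2(vis.astype(np.float32), wavelet, level=level)
    fused = [0.5 * (c_ir[0] + c_vis[0])]           # average the approximation
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):   # per-level (H, V, D) tuples
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(d_ir, d_vis)
        ))
    return pywt.waverec2(fused, wavelet)
```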


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3827 ◽  
Author(s):  
Qinglei Du ◽  
Han Xu ◽  
Yong Ma ◽  
Jun Huang ◽  
Fan Fan

In infrared and visible image fusion, existing methods typically have a prerequisite that the source images share the same resolution. However, due to limitations of hardware devices and application environments, infrared images commonly suffer from markedly lower resolution compared with the corresponding visible images. In this case, current fusion methods inevitably cause texture information loss in visible images or blur thermal radiation information in infrared images. Moreover, the principle of existing fusion rules typically focuses on preserving texture details in source images, which may be inappropriate for fusing infrared thermal radiation information because it is characterized by pixel intensities, possibly neglecting the prominence of targets in fused images. Faced with such difficulties and challenges, we propose a novel method to fuse infrared and visible images of different resolutions and generate high-resolution fused images that are clear and accurate. Specifically, the fusion problem is formulated as a total variation (TV) minimization problem. The data fidelity term constrains the pixel intensity similarity of the downsampled fused image with respect to the infrared image, and the regularization term compels the gradient similarity of the fused image with respect to the visible image. The fast iterative shrinkage-thresholding algorithm (FISTA) framework is applied to improve the convergence rate. Our resulting fused images are similar to super-resolved infrared images, which are sharpened by the texture information from visible images. Advantages and innovations of our method are demonstrated by the qualitative and quantitative comparisons with six state-of-the-art methods on publicly available datasets.
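As a sketch of the optimization just described (notation mine, not necessarily the authors' exact norms): with fused image x, downsampling operator D matching the infrared resolution, infrared image u and visible image v, the problem takes the form

```latex
\min_{x}\;
\underbrace{\tfrac{1}{2}\,\lVert D x - u \rVert_2^2}_{\text{data fidelity: intensities follow the IR image}}
\;+\;
\lambda\, \underbrace{\lVert \nabla x - \nabla v \rVert_1}_{\text{regularization: gradients follow the visible image}}
```

Since gradients are linear, the regularizer equals the total variation of x - v; FISTA then minimizes this non-smooth objective with accelerated proximal-gradient steps.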


2017 ◽  
Vol 17 (02) ◽  
pp. 1750008 ◽  
Author(s):  
Meenu Manchanda ◽  
Rajiv Sharma

Extensive development has taken place in the field of image fusion, and fusion algorithms have attracted the attention of many researchers in the recent past. Such algorithms combine information from multiple source images into a single fused image. In this paper, fusion of multiple images using the fuzzy transform is proposed. The images to be fused are first decomposed into blocks of the same size. These blocks are then fuzzy-transformed and fused using a maxima-coefficient-value-based fusion rule. Finally, the fused image is obtained by performing the inverse fuzzy transform. The performance of the proposed algorithm is evaluated through experiments on multifocus, medical and visible/infrared images, and is further compared with state-of-the-art image fusion algorithms both subjectively and objectively. Experimental results and the comparative study show that the proposed algorithm fuses multiple images effectively and produces better fusion results for medical and visible/infrared images.
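A genuine fuzzy (F-)transform uses overlapping fuzzy basis functions; the simplified stand-in below keeps only the block decomposition and the maxima-coefficient selection, choosing per block the source with the stronger coefficients, purely to illustrate the selection rule.

```python
import numpy as np

def block_fuse(img_a: np.ndarray, img_b: np.ndarray, block: int = 8) -> np.ndarray:
    """Block-wise fusion: per block, keep the source with larger coefficient activity."""
    h, w = img_a.shape
    out = np.empty((h, w), dtype=np.float32)
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i + block, j:j + block].astype(np.float32)
            b = img_b[i:i + block, j:j + block].astype(np.float32)
            # Maxima-coefficient rule: mean-removed magnitude as "coefficient" strength.
            if np.abs(a - a.mean()).sum() >= np.abs(b - b.mean()).sum():
                out[i:i + block, j:j + block] = a
            else:
                out[i:i + block, j:j + block] = b
    return out
```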


Merging multiple imaging modalities yields a single image with high information content, which finds useful applications in disease diagnosis and treatment planning. The IHS-PCA method is a spatial-domain fusion approach that offers the finest visibility but demands vast memory and lacks steering information. We propose an integrated approach that incorporates NSCT combined with PCA, utilizing IHS space and histogram matching. The fusion algorithm is applied to MRI and PET images, and improved functional properties were obtained. The IHS transform is a sharpening technique that converts a multispectral image from RGB channels to independent Intensity, Hue and Saturation values. Histogram matching is performed on the intensity values of the two input images. Pathological details in images can be emphasized at multiple scales and in multiple directions by using PCA with NSCT. The fusion rule applied is weighted averaging, and principal components are used for dimensionality reduction. Inverse NSCT and inverse IHS are performed to obtain the fused image in the new RGB space. Visual and subjective investigation against existing methods demonstrates that our proposed technique gives high structural data content with high spatial and spectral resolution compared with earlier methods.
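A sketch of the colour-space front end under stated assumptions: `skimage`'s HSV conversion stands in for the IHS transform, `match_histograms` aligns the intensities, and the NSCT+PCA fusion of the intensity channels (not available in standard libraries) is reduced to a weighted-average placeholder.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.exposure import match_histograms

def ihs_fuse(mri_gray: np.ndarray, pet_rgb: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """mri_gray: float grayscale in [0, 1]; pet_rgb: RGB image of the same size."""
    hsv = rgb2hsv(pet_rgb)                    # HSV stands in for IHS here
    intensity = hsv[..., 2]
    mri_matched = match_histograms(mri_gray, intensity)  # align intensity histograms
    # Placeholder for the NSCT+PCA fusion of the two intensity images.
    hsv[..., 2] = np.clip(alpha * mri_matched + (1 - alpha) * intensity, 0, 1)
    return hsv2rgb(hsv)                       # back to RGB with the fused intensity
```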


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Yifeng Niu ◽  
Shengtao Xu ◽  
Lizhen Wu ◽  
Weidong Hu

Infrared and visible image fusion is an important precondition for realizing target perception in unmanned aerial vehicles (UAVs), which can then perform various assigned missions. Texture and color information in visible images is abundant, while target information in infrared images is more prominent. Conventional fusion methods are mostly based on region segmentation; as a result, a fused image suitable for target recognition cannot actually be acquired. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which can capture more target information and preserve more background information. Fusion experiments are conducted under three conditions: the target is unmoving and observable in both visible and infrared images; targets are moving and observable in both visible and infrared images; and the target is observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
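To illustrate the target-region idea in isolation: the sketch below segments the hot target with an Otsu threshold (an assumption; the paper's segmentation is more elaborate and is combined with the DWT) and composes target pixels from the infrared image with background from the visible image.

```python
import cv2
import numpy as np

def target_region_fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """ir, vis: same-size 8-bit grayscale images."""
    # Otsu threshold segments the bright (hot) target region in the IR image.
    _, mask = cv2.threshold(ir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))  # pad the target region
    # Target pixels from IR, background pixels from the visible image.
    return np.where(mask > 0, ir, vis)
```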

