Infrared and Visible Image Fusion via Fast Approximate Bilateral Filter and Local Energy Characteristics

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zongping Li ◽  
Wenxin Lei ◽  
Xudong Li ◽  
Tingting Liao ◽  
Jianming Zhang

Image fusion aims to effectively enhance the accuracy, stability, and comprehensiveness of information. Generally, infrared images lack sufficient background detail to provide an accurate description of the target scene, while visible images struggle to capture radiation under adverse conditions such as low light. The richness of image detail can be improved by effective fusion algorithms. In this paper, we propose an infrared and visible image fusion algorithm that aims to overcome some common defects in the image fusion process. Firstly, we use a fast approximate bilateral filter to decompose the infrared and visible images into small-scale layers, a large-scale layer, and a base layer. Then, the fused base layer is obtained from local energy characteristics, which avoids the information loss of traditional fusion rules. The fused small-scale layers are acquired by selecting the absolute maximum, and the fused large-scale layer is obtained by a summation rule. Finally, the fused small-scale layers, large-scale layer, and base layer are merged to reconstruct the final fused image. Experimental results show that our method retains more detailed appearance information in the fused image and achieves good results in both qualitative and quantitative evaluations.
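The local-energy rule for the base layer can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the authors' implementation; the paper's actual rule may blend the layers by energy-derived weights rather than hard selection.

```python
def local_energy(img, r, c, win=1):
    """Sum of squared intensities in a (2*win+1)^2 window, clamped at borders."""
    rows, cols = len(img), len(img[0])
    e = 0.0
    for i in range(max(0, r - win), min(rows, r + win + 1)):
        for j in range(max(0, c - win), min(cols, c + win + 1)):
            e += img[i][j] ** 2
    return e

def fuse_base_layers(base_ir, base_vis, win=1):
    """Per pixel, keep the base-layer value whose neighborhood carries more energy."""
    rows, cols = len(base_ir), len(base_ir[0])
    return [[base_ir[r][c]
             if local_energy(base_ir, r, c, win) >= local_energy(base_vis, r, c, win)
             else base_vis[r][c]
             for c in range(cols)]
            for r in range(rows)]
```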

Author(s):  
Hui Zhang ◽  
Xinning Han ◽  
Rui Zhang

In multimodal image fusion, how to improve the visual effect of the fused image while balancing energy preservation and detail extraction has attracted increasing attention in recent years. Based on research into visual saliency and activity-level measurement of the base layer, a multimodal image fusion method based on a guided filter is proposed in this paper. Firstly, multi-scale decomposition with a guided filter is used to decompose the two source images into a small-scale layer, a large-scale layer, and a base layer. The maximum-absolute-value fusion rule is adopted for the small-scale layer, a weighted fusion rule based on visual parameters is adopted for the large-scale layer, and a fusion rule based on activity-level measurement is adopted for the base layer. Finally, the three fused layers are combined into the final fused image. The experimental results show that the proposed method improves edge processing and visual effect in multimodal image fusion.
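The maximum-absolute-value rule used for the small-scale layer is simple enough to state directly; a minimal sketch on nested lists (not the authors' code):

```python
def fuse_small_scale(layer_a, layer_b):
    """Per-pixel selection of the coefficient with the larger absolute value."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(layer_a, layer_b)]
```

Large absolute values in a detail layer mark strong local structure, so this rule keeps whichever modality contributed the stronger detail at each pixel.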


2021 ◽  
pp. 1-14
Author(s):  
Feiqiang Liu ◽  
Lihui Chen ◽  
Lu Lu ◽  
Gwanggil Jeon ◽  
Xiaomin Yang

Infrared (IR) and visible (VIS) image fusion technology combines complementary information about the same scene from IR and VIS imaging sensors into a composite image, which benefits subsequent image-processing tasks. To achieve good fusion performance, a method combining the rolling guidance filter (RGF) and convolutional sparse representation (CSR) is proposed. In the proposed method, RGF is applied to each pair of pre-registered IR and VIS source images to obtain their detail layers and base layers. Then, the detail layers are fused with a series of weighted coefficients produced by a joint bilateral filter (JBF). The base layer is decomposed into a sub-detail layer and a sub-base layer; CSR is applied to fuse the sub-detail layer, and an averaging strategy is used to fuse the sub-base layer. Finally, the fused image is reconstructed by adding the fused detail and base layers. Experimental results demonstrate the superiority of the proposed method in both subjective and objective assessments.
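The rolling guidance filter iterates a joint bilateral filter, using the previous output as the guidance signal. A toy 1-D version under assumed parameter names (`sigma_s`, `sigma_r`, `radius`), not the paper's 2-D implementation:

```python
import math

def joint_bilateral_1d(signal, guide, sigma_s=2.0, sigma_r=1.0, radius=3):
    """One pass of a joint bilateral filter on a 1-D signal (toy version)."""
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

def rolling_guidance_1d(signal, iterations=4, **kw):
    """RGF: start from a flat guide (pure Gaussian smoothing), then repeatedly
    joint-bilateral-filter the input using the previous result as guidance."""
    guide = [0.0] * len(signal)
    for _ in range(iterations):
        guide = joint_bilateral_1d(signal, guide, **kw)
    return guide
```

Small-scale structures are removed by the first Gaussian-like pass, while the iterations progressively recover large edges from the guidance.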


2020 ◽  
Vol 13 (1) ◽  
pp. 51-61
Author(s):  
Dawa C. Lepcha ◽  
Bhawna Goyal ◽  
Ayush Dogra

Introduction: Image fusion is a method that combines complementary information from source images into a single fused image. Image fusion has numerous current applications, such as remote sensing, medical diagnosis, machine vision systems, astronomy, robotics, military units, biometrics, and surveillance. Objective: Multi-sensor or multi-focus devices capture images of a particular scene that are complementary to each other in information content. The details from these complementary images are combined into a single image through a fusion algorithm. The main goal of image fusion is to transfer as much relevant information as possible from the source images to the fused image while minimizing the loss of detail and, in doing so, to reduce artifacts in the final image. Methodology: In this paper, we propose a new method that fuses images by applying a cross bilateral filter, which accounts for the gray-level similarity and geometric closeness of neighboring pixels without smoothing edges. The detail images, obtained by subtracting the cross bilateral filter output from the original images, are then filtered through a rolling guidance filter for scale-aware processing. In particular, this removes small-scale structures while preserving the other contents of the image and successfully recovers the edges of the detail images. Finally, the images are fused using computed weights with weight normalization. Results: The results have been validated and compared with various existing state-of-the-art methods, both subjectively and quantitatively. Conclusion: The proposed method was observed to outperform existing image fusion methods.
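Two of the steps above reduce to one-liners on image arrays: detail extraction by subtraction, and the final weighted fusion with weight normalization. A minimal sketch on nested lists (hypothetical helper names; the weight maps themselves would come from the filtering stages described in the abstract):

```python
def extract_detail(image, smoothed):
    """Detail layer = source minus its cross-bilateral-filtered version."""
    return [[x - y for x, y in zip(r1, r2)] for r1, r2 in zip(image, smoothed)]

def weighted_fusion(img_a, img_b, w_a, w_b, eps=1e-12):
    """Pixel-wise weighted average; dividing by the weight sum normalizes
    the weights, and eps guards against all-zero weights."""
    return [[(wa * a + wb * b) / (wa + wb + eps)
             for a, b, wa, wb in zip(ra, rb, rwa, rwb)]
            for ra, rb, rwa, rwb in zip(img_a, img_b, w_a, w_b)]
```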


Author(s):  
Zixiang Zhao ◽  
Shuang Xu ◽  
Chunxia Zhang ◽  
Junmin Liu ◽  
Jiangshe Zhang ◽  
...  

Infrared and visible image fusion, a hot topic in the field of image processing, aims at obtaining fused images that keep the advantages of the source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps carrying low- and high-frequency information, respectively, and the decoder recovers the original image. To this end, the loss function makes the background feature maps of the source images similar and their detail feature maps dissimilar. In the test phase, the background and detail feature maps are merged via a fusion module, and the fused image is recovered by the decoder. Qualitative and quantitative results illustrate that our method can generate fused images containing highlighted targets and abundant detail texture with strong reproducibility, and meanwhile surpasses state-of-the-art (SOTA) approaches.
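The similar/dissimilar objective can be captured by a scalar with two opposing terms. This is only a hypothetical toy form on flattened feature vectors to make the idea concrete; the paper's actual loss (and its network-level terms) is more elaborate:

```python
def decomposition_loss(bg_ir, bg_vis, det_ir, det_vis, alpha=0.5):
    """Toy decomposition loss: penalize the gap between the two background
    feature maps, reward the gap between the two detail feature maps."""
    bg_gap = sum((a - b) ** 2 for a, b in zip(bg_ir, bg_vis)) / len(bg_ir)
    det_gap = sum((a - b) ** 2 for a, b in zip(det_ir, det_vis)) / len(det_ir)
    return bg_gap - alpha * det_gap
```

Minimizing this drives the background maps together (shared low-frequency content) while pushing the detail maps apart (modality-specific high-frequency content).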


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing applications may require either the full image or only the part of the image relevant to the user, such as the region around an object. The main purpose of fusion is to minimize the difference between the fused image and the input images. In medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image is a worthwhile goal for image fusion. Images with higher contrast contain more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the contourlet transform, which is well suited to representing the details of curved edges. It improves the edge information of the fused image by reducing distortion. The transform decomposes the multimodal image into finer and coarser details, and the finest details are further decomposed into different resolutions at different orientations. The input multimodal images, CT and MRI, are first transformed by the nonsubsampled contourlet transform (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our system, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, while the processed high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by inverse NSCT with local-energy-match-based coefficients. To evaluate fusion accuracy, the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and correlation coefficient are used in this work.
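The two coefficient-level rules can be sketched in a few lines. A 1-D simplification with hypothetical helper names; the paper additionally passes coefficients through Gabor filtering and averaging, which is omitted here:

```python
def fuse_low_frequency(a, b):
    """Average the low-frequency coefficients of the two modalities."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]

def fuse_high_frequency(a, b):
    """Select the coefficient whose local gradient magnitude is larger (1-D sketch)."""
    def grad(seq, i):
        left = seq[i] - seq[i - 1] if i > 0 else 0.0
        right = seq[i + 1] - seq[i] if i < len(seq) - 1 else 0.0
        return abs(left) + abs(right)
    return [a[i] if grad(a, i) >= grad(b, i) else b[i] for i in range(len(a))]
```

Averaging stabilizes the smooth content, while the gradient test favors whichever modality carries the stronger edge at each position.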


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Yifeng Niu ◽  
Shengtao Xu ◽  
Lizhen Wu ◽  
Weidong Hu

Infrared and visible image fusion is an important precondition for target perception by unmanned aerial vehicles (UAVs), enabling UAVs to perform their assigned missions. Visible images are rich in texture and color information, while infrared images make target information more prominent. Conventional fusion methods are mostly based on region segmentation; as a result, a fused image well suited to target recognition often cannot be obtained. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which can capture more target information while preserving more background information. Fusion experiments cover three conditions: a stationary target observable in both the visible and infrared images, moving targets observable in both images, and a target observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
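The final composition step implied by a target-region approach is a masked blend: infrared pixels inside the segmented target, DWT-fused pixels elsewhere. A simplified sketch (binary mask assumed; the segmentation and DWT stages are outside this snippet):

```python
def fuse_with_target_mask(ir, dwt_fused, mask):
    """Inside the target region (mask == 1) take the infrared pixel;
    elsewhere keep the DWT-fused background."""
    return [[m * a + (1 - m) * b for a, b, m in zip(ra, rb, rm)]
            for ra, rb, rm in zip(ir, dwt_fused, mask)]
```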


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method, and the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target region of the source image but also enhances the background in the fused image. Compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused images and (ii) better objective assessment scores.
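A DCT-based strategy typically compares features in the transform domain, where energy concentrates in few coefficients. A toy 1-D stand-in, not the paper's multi-layer scheme: an unnormalized DCT-II plus an energy comparison used to pick between two feature vectors.

```python
import math

def dct_1d(x):
    """Unnormalized 1-D DCT-II."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n)) for t in range(n))
            for k in range(n)]

def select_by_dct_energy(feat_a, feat_b):
    """Keep the feature vector whose DCT coefficients carry more energy
    (a simplified stand-in for the DCT-based fusion strategy)."""
    energy = lambda c: sum(v * v for v in c)
    return feat_a if energy(dct_1d(feat_a)) >= energy(dct_1d(feat_b)) else feat_b
```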


Author(s):  
Sanjoy Das ◽  
Yunlong Zhang

Fusion of registered images of night scenery obtained from cameras tuned to different bandwidths will be a significant component of future night-vision devices. A new algorithm for such multispectral image fusion is described. The algorithm performs gray-scale image fusion using a method based on principal components. The monochrome fused image is then colored by means of a suitable pseudocoloring technique to produce the fused color output image. The approach can easily be extended to any number of bandwidths. Examples illustrate the algorithm's use in fusing an intensified low-light visible image with an image obtained from a single forward-looking infrared camera. The algorithm may readily be implemented in hardware for use in night-vision devices as an important aid to surveillance and navigation in total darkness. The applicability of the technology to transportation is also discussed.
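For two bands, principal-components fusion reduces to a 2x2 eigenproblem that has a closed form. A minimal sketch (hypothetical helper name; the paper's exact weighting and multi-band generalization may differ):

```python
def pca_fusion_weights(a, b):
    """Fusion weights from the leading eigenvector of the 2x2 covariance matrix
    of two flattened gray images, in closed form (no linear-algebra library)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    caa = sum((x - ma) ** 2 for x in a) / n
    cbb = sum((y - mb) ** 2 for y in b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    # Largest eigenvalue of [[caa, cab], [cab, cbb]].
    lam = 0.5 * (caa + cbb) + ((0.5 * (caa - cbb)) ** 2 + cab ** 2) ** 0.5
    if abs(cab) > 1e-12:
        v1, v2 = cab, lam - caa
    else:
        v1, v2 = (1.0, 0.0) if caa >= cbb else (0.0, 1.0)
    s = v1 + v2
    if abs(s) < 1e-12:          # degenerate direction: fall back to averaging
        return 0.5, 0.5
    return v1 / s, v2 / s
```

The fused monochrome image is then `[w1 * x + w2 * y for x, y in zip(a, b)]`, giving more weight to the band that carries more variance.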


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Chaowei Duan ◽  
Yiliu Liu ◽  
Changda Xing ◽  
Zhisheng Wang

An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual-saliency-based threshold optimization. The method merges complementary information from multimodality source images into a more informative composite image in a two-scale domain, in which significant objects and regions are highlighted and rich feature information is preserved. Firstly, the source images are decomposed into two-scale representations, namely approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Secondly, a visual-saliency-based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in the infrared images and retain the high-intensity regions in the visible images. A sparse representation based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with more natural visual effects. Extensive experimental results demonstrate that the proposed method achieves performance comparable or superior to several state-of-the-art fusion methods in both visual results and objective assessments.
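A saliency-thresholded rule for the approximate layers can be illustrated with a simple global-contrast saliency proxy. This is only a hypothetical stand-in: the paper optimizes its thresholds and uses its own saliency model, neither of which is reproduced here.

```python
def saliency(signal):
    """Global-contrast saliency proxy: distance of each pixel from the mean."""
    m = sum(signal) / len(signal)
    return [abs(x - m) for x in signal]

def fuse_approximate(ir, vis, tau=0.5):
    """Where one modality is clearly more salient (margin > tau) take its pixel;
    otherwise average the two (simplified threshold rule)."""
    s_ir, s_vis = saliency(ir), saliency(vis)
    out = []
    for a, b, sa, sb in zip(ir, vis, s_ir, s_vis):
        if sa - sb > tau:
            w = 1.0
        elif sb - sa > tau:
            w = 0.0
        else:
            w = 0.5
        out.append(w * a + (1 - w) * b)
    return out
```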


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shuai Hao ◽  
Beiyi An ◽  
Hu Wen ◽  
Xu Ma ◽  
Keping Yu

Unmanned aerial vehicles, with inherent attributes such as flexibility, mobility, and autonomy, play an increasingly important role in the Internet of Things (IoT). Airborne infrared and visible image fusion, which constitutes an important data basis for the perception layer of IoT, has been widely used in fields such as electric power inspection, military reconnaissance, emergency rescue, and traffic management. However, traditional infrared and visible image fusion methods suffer from weak detail resolution. To better preserve useful information from the source images and produce a more informative image for human observation or unmanned aerial vehicle vision tasks, a novel fusion method based on the discrete cosine transform (DCT) and anisotropic diffusion is proposed. First, the infrared and visible images are denoised using DCT. Second, anisotropic diffusion is applied to the denoised infrared and visible images to obtain the detail and base layers. Third, the base layers are fused by weighted averaging, and the detail layers are fused by the Karhunen-Loeve transform. Finally, the fused image is reconstructed through linear superposition of the base and detail layers. Compared with six other typical fusion methods, the proposed approach shows better fusion performance in both objective and subjective evaluations.
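Anisotropic diffusion in the Perona-Malik sense smooths regions of small gradient while preserving strong edges via a gradient-dependent conduction coefficient. A toy 1-D sketch (the method itself operates on 2-D images, and the parameter values here are illustrative only):

```python
import math

def perona_malik_1d(u, iterations=10, kappa=0.5, lam=0.2):
    """Toy 1-D Perona-Malik diffusion: each step moves a sample toward its
    neighbors, scaled by exp(-(gradient/kappa)^2) so large edges diffuse little."""
    u = list(u)
    for _ in range(iterations):
        nxt = []
        for i in range(len(u)):
            g_left = u[i - 1] - u[i] if i > 0 else 0.0
            g_right = u[i + 1] - u[i] if i < len(u) - 1 else 0.0
            c_left = math.exp(-(g_left / kappa) ** 2)
            c_right = math.exp(-(g_right / kappa) ** 2)
            nxt.append(u[i] + lam * (c_left * g_left + c_right * g_right))
        u = nxt
    return u
```

The diffused output serves as the base layer; subtracting it from the input yields the detail layer used in the fusion pipeline above.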

