An infrared and visible image fusion algorithm based on MAP

Author(s):  
Kai Kang ◽  
Tingting Liu ◽  
Tianyun Wang ◽  
Fuchun Nian ◽  
Xianchun Xu
2019 ◽  
Vol 48 (6) ◽  
pp. 610001

Author(s):  
JIANG Ze-tao ◽  
HE Yu-ting ◽  
ZHANG Shao-qin

2014 ◽  
Vol 67 ◽  
pp. 397-407 ◽  
Author(s):  
Xiaoqi Lu ◽  
Baohua Zhang ◽  
Ying Zhao ◽  
He Liu ◽  
Haiquan Pei

Author(s):  
Yumei Wang ◽  
Mingyi Zhang ◽  
Congyong Li ◽  
Tao Wang ◽  
Keming Huang ◽  
...  

2020 ◽  
Vol 12 (5) ◽  
pp. 781 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yang Chen ◽  
Wenhai Xu

Infrared and visible image fusion technology benefits both human vision and computer image processing tasks by enriching useful information and enhancing surveillance capability. However, existing fusion algorithms struggle to effectively integrate visual features from complex source images. In this paper, we design a novel infrared and visible image fusion algorithm based on visual attention technology, in which a special visual attention system and a saliency-map-based feature fusion strategy are proposed. The visual attention system first uses the co-occurrence matrix to calculate image texture complexity, which determines the modality used to compute a saliency map. Moreover, we improve the iterative operator of the original visual attention model (VAM) and design a fair competition mechanism to ensure that visual features in detail regions are extracted accurately. For the feature fusion strategy, we use the obtained saliency map to combine the visual attention features and appropriately enhance tiny features so that weak targets remain observable. Unlike general fusion algorithms, the proposed algorithm not only preserves the regions of interest but also retains rich fine details, improving perception for both human observers and computer vision systems. Experimental results under complicated ambient conditions show that the proposed algorithm outperforms state-of-the-art algorithms in both qualitative and quantitative evaluations, and the approach can be extended to other types of image fusion.
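The saliency-map-driven fusion step described above can be sketched in a few lines. This is an illustrative simplification, not the paper's exact formulation: it assumes the saliency maps have already been produced by the visual attention system, and the per-pixel weight normalization is a common generic choice.

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-6):
    """Fuse IR and visible images with per-pixel saliency weights.

    ir, vis        : float arrays in [0, 1], same shape
    sal_ir, sal_vis: non-negative saliency maps (hypothetical inputs;
                     the paper derives them from its attention system)
    """
    # Normalize the two saliency maps into a single weight map in [0, 1].
    w = sal_ir / (sal_ir + sal_vis + eps)
    # Pixels where the IR map is more salient draw more from the IR image.
    return w * ir + (1.0 - w) * vis
```

Where both saliency maps are weak, the weight tends toward an even blend, which is one simple way to keep tiny features from either modality visible in the fused result.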


2020 ◽  
Vol 8 (6) ◽  
pp. 1525-1529

Image fusion is the process of combining two or more images of the same scene, taken from different sensors, into a composite image with rich details. The progress of infrared (IR) and visible (VI) image fusion and its ever-growing demands have driven extensive algorithmic development over the last several years. The two modalities must be integrated so that the necessary information from each appears in a single image. In this article, a novel image fusion algorithm is introduced that combines bilateral and Roberts filters as Method I, and moving-average and bilateral filters as Method II, to fuse infrared and visible images. The proposed algorithm follows a two-scale decomposition: an average filter produces the base layer, and the detail layer is obtained by subtracting it from the source image. Smooth and detail weights of the source images are obtained using the two methods mentioned above, and a weight-based fusion rule then merges the source image information into a single image. Both methods are compared qualitatively and quantitatively; experimental results show that Method I outperforms Method II.
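The two-scale decomposition and weight-based fusion rule can be sketched as follows. This is a simplified illustration under stated assumptions: it uses a plain mean filter for the base layer and local detail energy for the weights, standing in for the paper's bilateral/Roberts and moving-average/bilateral weight constructions, which are not reproduced here.

```python
import numpy as np

def mean_filter(img, r):
    """Average filter over a (2r+1) x (2r+1) window with edge padding."""
    pad = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_fusion(ir, vis, r=3, eps=1e-6):
    """Two-scale fusion: base = mean filter, detail = source - base."""
    base_ir, base_vis = mean_filter(ir, r), mean_filter(vis, r)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Illustrative weight: favor whichever source has larger local detail.
    w = np.abs(det_ir) / (np.abs(det_ir) + np.abs(det_vis) + eps)
    base = 0.5 * (base_ir + base_vis)       # simple average of base layers
    detail = w * det_ir + (1.0 - w) * det_vis
    return base + detail
```

The decomposition is exactly invertible per source (base + detail reconstructs the input), so the fusion rule only redistributes which source contributes at each pixel and scale.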


2021 ◽  
Vol 9 (3A) ◽  
Author(s):  
Alex Noel Joseph Raj ◽  
M. Murugappan ◽  
Arunachalam V ◽  
...  

Several applications using paired red-green-blue (RGB) and near-infrared (NIR) images have emerged in recent years. The present work proposes a technique for enhancing an image by combining color (RGB) and near-infrared (NIR) information. To fuse the two image types, the NIR channel is treated as a luminance counterpart to the visible image. The standard RGB-NIR Scene Dataset is used in this work for image fusion. The objective of the paper is to present a simple and hardware-efficient fusion method in which the original RGB image is converted into two different color spaces, HSV and YCbCr, and the luminance channel is then replaced with the near-infrared channel, yielding a fused, enhanced image. This procedure is implemented efficiently on an FPGA using the Xilinx HLS tool, and the quality of the fused image is measured by peak signal-to-noise ratio (PSNR). The experimental results indicate that the HSV color space is more effective for this fusion than YCbCr, with average PSNR values of approximately 29 dB for HSV versus 25 dB for YCbCr across various images. Finally, the complete fusion algorithm is implemented on a Xilinx Nexys4 FPGA board to obtain real-time outputs in the form of vivid, well-contrasted images that are pleasing to observers. The implementation uses only about 50% of the available hardware resources and consumes approximately 5.3 W.
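The HSV luminance-replacement step can be sketched in software as follows. This illustrates only the color-space substitution (the paper's contribution is the hardware implementation); it uses the standard-library `colorsys` conversions per pixel and assumes both images are floats in [0, 1].

```python
import colorsys
import numpy as np

def fuse_rgb_nir_hsv(rgb, nir):
    """Replace the HSV value (V) channel of an RGB image with NIR.

    rgb : (H, W, 3) float array in [0, 1]
    nir : (H, W) float array in [0, 1]
    """
    out = np.empty_like(rgb)
    rows, cols, _ = rgb.shape
    for i in range(rows):
        for j in range(cols):
            # Keep hue and saturation; substitute NIR as the luminance.
            h, s, _v = colorsys.rgb_to_hsv(*rgb[i, j])
            out[i, j] = colorsys.hsv_to_rgb(h, s, nir[i, j])
    return out
```

A vectorized or hardware version would perform the same per-pixel substitution; the per-pixel loop here is only for clarity.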

