Enhanced Image Fusion Methodology for Low Light Visible and Infrared Images

Image fusion is the process by which two or more images are combined into a single image that retains the important features of each source image. The resulting image is enhanced in overall content and is therefore preferable to any of the base images. Certain situations in image processing require both high spatial and high spectral information in a single image, which is crucial in remote sensing. The image fusion procedure involves intensifying, filtering, and shaping the images for better results. Efficient and essential approaches for image fusion are applied here. The method operates on two distinct types of images: the visible image and the infrared image. Single Scale Retinex (SSR) is applied to the visible image to obtain an enhanced image, while Principal Component Analysis (PCA) is applied to the infrared image to obtain an image with superior contrast and colour. These processed images are then decomposed into multilayer representations using the Laplacian Pyramid algorithm. Finally, a weighted-average fusion rule combines the layers to produce the augmented fused image.
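
As an illustration of the decomposition-and-fusion stage described above, the following Python/OpenCV sketch builds a Laplacian pyramid for each input and fuses the layers by weighted averaging. It is a minimal sketch, not the authors' implementation: the number of levels, the equal 0.5/0.5 weights, and the grayscale inputs are assumptions, and the SSR and PCA pre-processing steps are omitted.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into a Laplacian pyramid (levels-1 detail layers + 1 base)."""
    img = img.astype(np.float32)
    pyr = []
    for _ in range(levels - 1):
        down = cv2.pyrDown(img)
        up = cv2.pyrUp(down, dstsize=(img.shape[1], img.shape[0]))
        pyr.append(img - up)           # band-pass (detail) layer
        img = down
    pyr.append(img)                    # low-pass base layer
    return pyr

def reconstruct(pyr):
    """Collapse a Laplacian pyramid back into an image."""
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return img

def fuse_weighted(vis, ir, w_vis=0.5, levels=4):
    """Fuse two grayscale images by weighted averaging of their pyramid layers."""
    pv, pi = laplacian_pyramid(vis, levels), laplacian_pyramid(ir, levels)
    fused = [w_vis * a + (1.0 - w_vis) * b for a, b in zip(pv, pi)]
    return np.clip(reconstruct(fused), 0, 255).astype(np.uint8)

# Usage (hypothetical file names):
# vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
# ir  = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("fused.png", fuse_weighted(vis, ir))
```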

2021, Vol 3 (3)
Author(s): Javad Abbasi Aghamaleki, Alireza Ghorbani

Image fusion is the process of combining complementary information from multiple images of the same scene into a single output image. The resulting output image, called the fused image, provides a more precise description of the scene than any of the individual input images. In this paper, we propose a novel, simple, and fast fusion strategy for infrared (IR) and visible images based on locally important areas of the IR image. The fusion method is completed in three steps. First, only the segmented regions of the infrared image are extracted. Next, image fusion is applied to the segmented areas, and finally, contour lines are used to improve the quality of the results of the second step. Using a publicly available database, the proposed method is evaluated and compared to other fusion methods. The experimental results show the effectiveness of the proposed method compared to state-of-the-art methods.
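
A minimal sketch of the region-driven idea, with assumptions made explicit: a simple intensity threshold stands in for the paper's segmentation of the IR image, and a Gaussian-feathered mask plus drawn contours stand in for its contour-based refinement.

```python
import cv2
import numpy as np

def region_based_fusion(vis, ir, thresh_percentile=95):
    """Fuse IR and visible images only inside bright (salient) IR regions."""
    # 1. Segment salient IR regions (hot targets tend to be bright).
    t = np.percentile(ir, thresh_percentile)
    mask = (ir >= t).astype(np.uint8)
    # Clean up the mask and soften its border so the blend has no seams.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    soft = cv2.GaussianBlur(mask.astype(np.float32), (15, 15), 0)

    # 2. Blend: keep the visible image everywhere, inject IR inside the mask.
    fused = soft * ir.astype(np.float32) + (1.0 - soft) * vis.astype(np.float32)

    # 3. Draw region contours on the result for inspection.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = np.clip(fused, 0, 255).astype(np.uint8)
    cv2.drawContours(out, contours, -1, color=255, thickness=1)
    return out
```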


Sensors, 2019, Vol 19 (20), pp. 4556
Author(s): Yaochen Liu, Lili Dong, Yuanyuan Ji, Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so that rich fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, the proposed method offers (i) better subjective visual quality of the fused image and (ii) better scores on objective assessments.
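
The sketch below illustrates only the overall structure (base/detail decomposition, detail fusion in a transform domain, weighted base fusion, and re-addition), not the paper's method: a Gaussian filter stands in for the edge-preserving guided-filter decomposition, and a per-coefficient max-magnitude DCT rule stands in for the CNN-based feature extraction and multi-layer DCT fusion.

```python
import cv2
import numpy as np
from scipy.fft import dctn, idctn

def split_base_detail(img, ksize=31):
    """Two-scale decomposition: a heavily smoothed base plus the residual detail.
    (A Gaussian filter stands in for the paper's guided-filter decomposition.)"""
    base = cv2.GaussianBlur(img.astype(np.float32), (ksize, ksize), 0)
    return base, img.astype(np.float32) - base

def fuse_details_dct(d1, d2):
    """Fuse two detail layers in the DCT domain by keeping, per coefficient,
    the one with larger magnitude (a stand-in for the learned fusion)."""
    c1, c2 = dctn(d1, norm="ortho"), dctn(d2, norm="ortho")
    fused = np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return idctn(fused, norm="ortho")

def fuse(vis, ir, w_base=0.5):
    bv, dv = split_base_detail(vis)
    bi, di = split_base_detail(ir)
    base = w_base * bv + (1.0 - w_base) * bi     # weighted base fusion
    detail = fuse_details_dct(dv, di)            # detail fusion in DCT domain
    return np.clip(base + detail, 0, 255).astype(np.uint8)
```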


2012, Vol 424-425, pp. 223-226
Author(s): Zheng Hong Cao, Yu Dong Guan, Peng Wang, Chun Li Ti

This paper focuses on the fusion of visible and infrared images, discussing existing algorithms in depth and proposing a novel set of fusion rules. The images are first decomposed into low-frequency and high-frequency coefficients by the nonsubsampled contourlet transform (NSCT), and the characteristics of the visible and infrared images are then taken into account to complete the fusion. Finally, the quality of the images fused by different algorithms is compared against several existing criteria. MATLAB is used for the simulation, and the results demonstrate that the algorithm can effectively improve the quality of the fused image without losing image features.
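
Since an NSCT implementation is not readily available in common Python packages, the sketch below uses a discrete wavelet transform (PyWavelets) as a stand-in for the decomposition; the two fusion rules (averaging for low-frequency coefficients, max-magnitude selection for high-frequency coefficients) are illustrative defaults, not the rules proposed in the paper.

```python
import numpy as np
import pywt

def fuse_multiscale(vis, ir, wavelet="db2", level=3):
    """Low-frequency: average; high-frequency: keep coefficient of larger magnitude."""
    cv = pywt.wavedec2(vis.astype(np.float32), wavelet, level=level)
    ci = pywt.wavedec2(ir.astype(np.float32), wavelet, level=level)

    fused = [(cv[0] + ci[0]) / 2.0]                       # low-frequency rule
    for (hv, vv, dv), (hi, vi, di) in zip(cv[1:], ci[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)        # high-frequency rule
            for a, b in ((hv, hi), (vv, vi), (dv, di))))
    out = pywt.waverec2(fused, wavelet)
    out = out[:vis.shape[0], :vis.shape[1]]               # crop any padding
    return np.clip(out, 0, 255).astype(np.uint8)
```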


Electronics, 2019, Vol 8 (3), pp. 303
Author(s): Xiaole Ma, Shaohai Hu, Shuaiqi Liu, Jing Fang, Shuwen Xu

In this paper, a remote sensing image fusion method is presented, since sparse representation (SR) has been widely used in image processing and especially for image fusion. Firstly, the source images are used to learn an adaptive dictionary, and sparse coefficients are obtained by sparsely coding the source images over this dictionary. Then, with the help of an improved hyperbolic tangent function (tanh) and the l0-max rule, these sparse coefficients are fused. An initial fused image can then be obtained by the SR-based fusion method. To take full advantage of the spatial information of the source images, a fused image based on the spatial domain (SF) is obtained at the same time. Lastly, the final fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms some state-of-the-art methods in visual and quantitative evaluations.
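
A small numpy sketch of an l0-max selection rule on patch-wise sparse codes; the dictionary learning, sparse coding (e.g. via OMP), the tanh weighting, and the guided-filter combination with the spatial-domain result are all assumed to happen elsewhere, and the l1 tie-break is an added assumption.

```python
import numpy as np

def l0_max_fuse(coeffs_a, coeffs_b):
    """Per-patch l0-max rule: for each column of sparse codes, keep the code
    with more nonzero entries (greater l0 'activity'); ties fall back to the
    larger l1 energy."""
    l0_a = np.count_nonzero(coeffs_a, axis=0)
    l0_b = np.count_nonzero(coeffs_b, axis=0)
    l1_a = np.abs(coeffs_a).sum(axis=0)
    l1_b = np.abs(coeffs_b).sum(axis=0)
    take_a = (l0_a > l0_b) | ((l0_a == l0_b) & (l1_a >= l1_b))
    return np.where(take_a, coeffs_a, coeffs_b)

# coeffs_a, coeffs_b: (n_atoms, n_patches) sparse codes of the two source images
```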


2014, Vol 519-520, pp. 590-593
Author(s): Ming Jing Li, Yu Bing Dong, Jie Li

Pixel-level image fusion algorithms are among the basic algorithms in image fusion and are mainly divided into spatial-domain and frequency-domain algorithms. The weighted average algorithm and PCA (principal component analysis) are popular spatial-domain algorithms, while pyramid and wavelet algorithms are usually used to fuse two or more images in the frequency domain. In this paper, pixel-level image fusion algorithms are summarized, including their operation, characteristics, and applications. MATLAB simulations show that the frequency-domain algorithms perform better than the spatial-domain ones. Evaluation criteria mainly include entropy, cross entropy, mean, and standard deviation. These criteria serve as references for judging fusion quality, and different criteria can be selected according to the fused image and the fusion purpose.
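
For illustration, here is a sketch of the classic pixel-level PCA fusion rule together with the entropy criterion mentioned above; the exact weighting and evaluation choices in the surveyed algorithms may differ.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Classic pixel-level PCA fusion: the weights are the components of the
    leading eigenvector of the 2x2 covariance matrix of the two images,
    normalized to sum to 1."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    cov = np.cov(np.stack([a, b]))              # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    v = np.abs(eigvecs[:, -1])                  # leading eigenvector
    w = v / v.sum()
    fused = w[0] * img_a.astype(np.float64) + w[1] * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

def entropy(img):
    """Shannon entropy of an 8-bit image, one of the evaluation criteria above."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```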


Author(s): Yuqing Wang, Yong Wang

A biologically inspired image fusion mechanism is analyzed in this paper, and a pseudo-color image fusion method is proposed based on the improvement of a traditional method. The proposed model describes the fusion process using several abstract definitions that correspond to the detailed behaviors of neurons. Firstly, ON-antagonism-enhanced and OFF-antagonism-enhanced versions of the infrared and visible images are produced. Secondly, the enhanced visible image given by the ON-antagonism system is fed back to the active cells in the center-surround antagonistic receptive field, and the fused visible-enhanced-infrared signal is obtained by feeding the OFF-enhanced infrared image back to the corresponding surround-depressing neurons. Then the enhanced visible signal from the OFF-antagonism system is fed back to the depressing cells in the center-surround antagonistic receptive field, while the ON-enhanced infrared image is taken as the input of the corresponding active cells; the cell response produced in this process is the infrared-enhanced-visible signal. These three kinds of signals are used as the R, G, and B components of the output composite image. Finally, experiments are performed to evaluate the performance of the proposed method. Information entropy, average gradient, and an objective image fusion measure are used to assess the method objectively, and some traditional digital-signal-processing-based fusion methods are also evaluated for comparison. The quantitative assessment indices show that the proposed fusion model is superior to the classical Waxman model, and some of its performance is better than that of the other image fusion methods.
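
A very loose sketch of the ON/OFF center-surround idea, using a difference-of-Gaussians as a crude receptive-field model; the sigmas and the RGB channel assignment are illustrative assumptions and do not reproduce the neuron model described above.

```python
import cv2
import numpy as np

def center_surround(img, sigma_c=1.0, sigma_s=4.0):
    """Difference-of-Gaussians as a crude center-surround receptive field:
    ON response = center minus surround, OFF = surround minus center
    (both half-wave rectified)."""
    f = img.astype(np.float32)
    center = cv2.GaussianBlur(f, (0, 0), sigma_c)
    surround = cv2.GaussianBlur(f, (0, 0), sigma_s)
    on = np.maximum(center - surround, 0)
    off = np.maximum(surround - center, 0)
    return on, off

def pseudo_color_fuse(vis, ir):
    """Build an RGB composite from ON/OFF-enhanced visible and infrared signals.
    The channel assignment here is illustrative, not the paper's mapping."""
    on_vis, off_vis = center_surround(vis)
    on_ir, off_ir = center_surround(ir)
    r = cv2.normalize(on_vis + off_ir, None, 0, 255, cv2.NORM_MINMAX)
    g = cv2.normalize(vis.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
    b = cv2.normalize(on_ir + off_vis, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.merge([b, g, r]).astype(np.uint8)   # OpenCV stores channels as BGR
```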


2013, Vol 2013, pp. 1-7
Author(s): Zi-Jun Feng, Xiao-Ling Zhang, Li-Yong Yuan, Jia-Nan Wang

The main goal of image fusion is to combine substantial information from different images of the same scene into a single image that is suitable for human and machine perception or for further image-processing tasks. In this study, a simple and efficient image fusion approach based on the histogram of infrared images is proposed. A fusion scheme is presented that adaptively selects weighting coefficients so as to preserve salient infrared targets from the infrared image and to retain most of the spatial detail from the visible image. Moving and static infrared targets in the fused image are labeled with different colors, which enhances perception of the image for the human visual system. In view of the characteristics of infrared images, namely low resolution and low signal-to-noise ratio, an anisotropic diffusion model is adopted to remove noise and effectively preserve edge information before the fusion stage. With the proposed method, relevant spatial information is preserved and infrared targets are clearly identified in the resulting fused images.
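
The pre-filtering step named above, Perona-Malik anisotropic diffusion, can be sketched in a few lines of numpy; the iteration count, kappa, and step size are illustrative defaults, and boundaries are handled by wrap-around for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths noise while preserving
    edges by weighting each directional gradient with an exponential
    conductance term. kappa controls edge sensitivity, lam the step size."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences toward the four nearest neighbours (boundaries wrap).
        d1 = np.roll(u, -1, axis=0) - u
        d2 = np.roll(u,  1, axis=0) - u
        d3 = np.roll(u, -1, axis=1) - u
        d4 = np.roll(u,  1, axis=1) - u
        # Conductance: small across strong edges, near 1 in flat regions.
        c1 = np.exp(-(d1 / kappa) ** 2)
        c2 = np.exp(-(d2 / kappa) ** 2)
        c3 = np.exp(-(d3 / kappa) ** 2)
        c4 = np.exp(-(d4 / kappa) ** 2)
        u += lam * (c1 * d1 + c2 * d2 + c3 * d3 + c4 * d4)
    return u
```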


2018, Vol 2018, pp. 1-15
Author(s): Kangjian He, Dongming Zhou, Xuejie Zhang, Rencan Nie

The most fundamental purpose of infrared (IR) and visible (VI) image fusion is to integrate the useful information and produce a new image that has higher reliability and understandability for human or computer vision. In order to better preserve the regions of interest and their corresponding detail information, a novel multiscale fusion scheme based on interesting-region detection is proposed in this paper. Firstly, MeanShift is used to detect the interesting region containing the salient objects and the background region of the IR and VI images. The interesting regions are then processed by a guided filter. Next, the nonsubsampled contourlet transform (NSCT) is used to decompose the background regions of the IR and VI images into a low-frequency layer and a series of high-frequency layers. An improved per-pixel weighted average method is used to fuse the low-frequency layer, and a pulse-coupled neural network (PCNN) is used to fuse each high-frequency layer. Finally, the fused image is obtained by combining the fused interesting region and the fused background region. Experimental results demonstrate that the proposed algorithm can integrate more background details and highlight the interesting region with the salient objects, outperforming conventional methods in objective quality evaluation and visual inspection.
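
A rough sketch of the region/background split: mean-shift smoothing plus an intensity threshold stands in for the paper's interesting-region detection, and any multiscale rule (for example the wavelet sketch given earlier) can be plugged in as the background fuser; the NSCT/PCNN background fusion and the guided-filter refinement are not reproduced here.

```python
import cv2
import numpy as np

def detect_interesting_region(ir, sp=15, sr=30, percentile=90):
    """Stand-in for interesting-region detection: mean-shift smoothing of the
    IR image followed by an intensity threshold to pick out bright, salient
    objects (IR is assumed to be an 8-bit grayscale image)."""
    bgr = cv2.cvtColor(ir, cv2.COLOR_GRAY2BGR)          # mean shift needs 3 channels
    smooth = cv2.pyrMeanShiftFiltering(bgr, sp, sr)
    gray = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)
    t = np.percentile(gray, percentile)
    return (gray >= t).astype(np.float32)                # 1 = interesting, 0 = background

def fuse_regionwise(vis, ir, background_fuser):
    """Keep IR inside the interesting region; fuse the background with any
    multiscale rule passed in as background_fuser(vis, ir)."""
    mask = detect_interesting_region(ir)
    mask = cv2.GaussianBlur(mask, (11, 11), 0)           # feather the boundary
    bg = background_fuser(vis, ir).astype(np.float32)
    fused = mask * ir.astype(np.float32) + (1.0 - mask) * bg
    return np.clip(fused, 0, 255).astype(np.uint8)
```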

