Color Night Vision for Navigation and Surveillance

Author(s):  
Sanjoy Das ◽  
Yunlong Zhang

Fusion of registered images of night scenery obtained from cameras tuned to different bandwidths will be a significant component of future night-vision devices. A new algorithm for such multispectral image fusion is described. The algorithm performs gray-scale image fusion using a method based on principal components. The monochrome fused image is then colored by means of a suitable pseudocoloring technique to produce the fused color output image. The approach extends easily to any number of bandwidths. Examples illustrate the algorithm's use in fusing an intensified low-light visible image with an image obtained from a single forward-looking infrared camera. The algorithm may be implemented readily in hardware for use in night-vision devices as an important aid to surveillance and navigation in total darkness. The applicability of the technology to transportation is also discussed.
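
As a rough illustration of the approach, the sketch below fuses two registered grayscale images with weights derived from the principal component of their joint covariance, then pseudocolors the result with a color lookup table. The file names, weight normalization, and choice of OpenCV colormap are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

def pca_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two registered grayscale images with PCA-derived weights."""
    stack = np.stack([img_a.ravel(), img_b.ravel()]).astype(np.float64)
    cov = np.cov(stack)                          # 2x2 covariance of the two bands
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant principal component
    w = pc / pc.sum()                            # normalize to fusion weights
    fused = w[0] * img_a.astype(np.float64) + w[1] * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

visible = cv2.imread("lowlight_visible.png", cv2.IMREAD_GRAYSCALE)
infrared = cv2.imread("flir.png", cv2.IMREAD_GRAYSCALE)
fused_gray = pca_fuse(visible, infrared)
# One possible pseudocoloring step: map intensities through a color LUT.
fused_color = cv2.applyColorMap(fused_gray, cv2.COLORMAP_JET)
```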

Author(s):  
Zixiang Zhao ◽  
Shuang Xu ◽  
Chunxia Zhang ◽  
Junmin Liu ◽  
Jiangshe Zhang ◽  
...  

Infrared and visible image fusion, a hot topic in image processing, aims to obtain fused images that retain the advantages of the source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps carrying low- and high-frequency information, respectively, and the decoder recovers the original image. To this end, the loss function drives the background feature maps of the source images to be similar and the detail feature maps to be dissimilar. In the test phase, the background and detail feature maps are merged by a fusion module, and the fused image is recovered by the decoder. Qualitative and quantitative results illustrate that our method generates fused images containing highlighted targets and abundant textural detail with strong reproducibility, and that it surpasses state-of-the-art (SOTA) approaches.
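
The following PyTorch fragment is a hedged sketch of the decomposition loss described above: background feature maps of the two sources are pushed to be similar and detail feature maps dissimilar, alongside a reconstruction term. The encoder/decoder interfaces, the tanh bounding of the dissimilarity term, and the weights alpha and beta are placeholders, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def decomposition_loss(enc, dec, ir, vis, alpha=1.0, beta=1.0):
    bg_ir, dt_ir = enc(ir)       # background / detail feature maps (infrared)
    bg_vis, dt_vis = enc(vis)    # background / detail feature maps (visible)
    # Reconstruction: the decoder must recover each source image.
    rec = (F.mse_loss(dec(bg_ir, dt_ir), ir)
           + F.mse_loss(dec(bg_vis, dt_vis), vis))
    # Backgrounds similar, details dissimilar (gap bounded by tanh).
    sim = F.mse_loss(bg_ir, bg_vis)
    dis = torch.tanh(F.mse_loss(dt_ir, dt_vis))
    return rec + alpha * sim - beta * dis
```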


2019 ◽  
Vol 48 (6) ◽  
pp. 610001
Author(s):  
江泽涛 JIANG Ze-tao ◽  
何玉婷 HE Yu-ting ◽  
张少钦 ZHANG Shao-qin

2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Yifeng Niu ◽  
Shengtao Xu ◽  
Lizhen Wu ◽  
Weidong Hu

Infrared and visible image fusion is an important precondition for target perception by unmanned aerial vehicles (UAVs), which in turn enables UAVs to perform their assigned missions. Visible images contain abundant texture and color information, while infrared images make targets stand out more clearly. Conventional fusion methods are mostly based on region segmentation and therefore cannot actually produce fused images suited to target recognition. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which gains more target information while preserving more background information. Fusion experiments cover three conditions: a stationary target observable in both the visible and infrared images, moving targets observable in both images, and a target observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
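
A minimal sketch of the segmentation-guided DWT fusion idea, using PyWavelets: the approximation band favors the infrared image inside an assumed binary target mask (produced by a separate segmentation step not shown), and the detail bands are fused by absolute maximum. The wavelet choice and fusion rules are illustrative.

```python
import numpy as np
import cv2
import pywt

def dwt_target_fuse(ir, vis, target_mask, wavelet="db2"):
    ir, vis = ir.astype(np.float32), vis.astype(np.float32)
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(ir, wavelet)
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(vis, wavelet)
    # Resize the 0/1 target mask to the coefficient resolution.
    m = cv2.resize(target_mask.astype(np.float32),
                   (cA_i.shape[1], cA_i.shape[0]))
    # Approximation: infrared inside the target region, average elsewhere.
    cA_f = m * cA_i + (1.0 - m) * 0.5 * (cA_i + cA_v)
    absmax = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    details_f = (absmax(cH_i, cH_v), absmax(cV_i, cV_v), absmax(cD_i, cD_v))
    return pywt.idwt2((cA_f, details_f), wavelet)
```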


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods lose details because errors accumulate across sequential processing stages. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method, and the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused images and (ii) better scores in objective assessment.
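
As a simplified stand-in for the multi-layer DCT feature fusion described above, the sketch below fuses two detail layers in the DCT domain by keeping the larger-magnitude coefficient at each frequency. The function names and the choice of a global 2-D DCT are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(detail_ir, detail_vis):
    """Fuse two detail layers by magnitude selection in the DCT domain."""
    D_ir = dctn(detail_ir, norm="ortho")
    D_vis = dctn(detail_vis, norm="ortho")
    # Keep the coefficient with the larger magnitude at each frequency,
    # which favours significant features while preserving fine detail.
    D_f = np.where(np.abs(D_ir) >= np.abs(D_vis), D_ir, D_vis)
    return idctn(D_f, norm="ortho")
```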


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Chaowei Duan ◽  
Yiliu Liu ◽  
Changda Xing ◽  
Zhisheng Wang

An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual-saliency-based threshold optimization. The method merges complementary information from multimodal source images into a more informative composite image in a two-scale domain, in which significant objects and regions are highlighted and rich feature information is preserved. First, the source images are decomposed into two-scale representations, namely approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Second, a visual-saliency-based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in infrared images and retain the high-intensity regions in visible images. A sparse-representation-based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with more natural visual effects. Extensive experimental results demonstrate that the proposed method achieves comparable or superior performance to several state-of-the-art fusion methods in visual results and objective assessments.
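
The following sketch conveys the two-scale idea in simplified form: a bilateral filter stands in for truncated Huber penalty smoothing, a contrast-based saliency map replaces the threshold-optimization rule for the approximate layers, and an absolute-maximum rule replaces the sparse-representation fusion of the residual layers. All filter settings are illustrative.

```python
import numpy as np
import cv2

def two_scale_fuse(ir, vis):
    ir, vis = ir.astype(np.float32), vis.astype(np.float32)
    # Approximate (smoothed) and residual layers per source image.
    approx_ir = cv2.bilateralFilter(ir, 9, 75, 75)
    approx_vis = cv2.bilateralFilter(vis, 9, 75, 75)
    res_ir, res_vis = ir - approx_ir, vis - approx_vis
    # Crude saliency: deviation from the local mean highlights bright targets.
    sal_ir = np.abs(ir - cv2.blur(ir, (31, 31)))
    sal_vis = np.abs(vis - cv2.blur(vis, (31, 31)))
    w = sal_ir / (sal_ir + sal_vis + 1e-6)
    approx_f = w * approx_ir + (1 - w) * approx_vis
    # Residual layers fused by absolute maximum (sparse coding omitted here).
    res_f = np.where(np.abs(res_ir) >= np.abs(res_vis), res_ir, res_vis)
    return approx_f + res_f
```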


2020 ◽  
Vol 2020 ◽  
pp. 1-8 ◽  
Author(s):  
Hui Zhang ◽  
Xu Ma ◽  
Yanshan Tian

To improve the clarity of fused images and address the sensitivity of visible-light imagery to illumination and weather, a fusion method for infrared and visible images aimed at night-vision context enhancement is proposed. First, a guided filter is used to enhance the details of the visible image. Then, the enhanced visible image and the infrared image are decomposed by the curvelet transform. An improved sparse representation fuses the low-frequency part, while the high-frequency parts are fused with parameter-adaptive pulse-coupled neural networks. Finally, the fusion result is obtained by the inverse curvelet transform. Experimental results show that the proposed method performs well in detail processing, edge preservation, and retention of source-image information.
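
A small sketch of the guided-filter detail-enhancement step for the visible image is shown below. It assumes the opencv-contrib build (for cv2.ximgproc.guidedFilter), and the filter radius, regularization, and gain factor are illustrative choices rather than the paper's settings.

```python
import numpy as np
import cv2

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
# Self-guided filtering yields an edge-preserving base layer.
base = cv2.ximgproc.guidedFilter(guide=vis, src=vis,
                                 radius=8, eps=0.04 * 255 ** 2)
detail = vis - base                       # high-frequency detail layer
# Boost the detail layer before recombining to enhance fine structure.
enhanced = np.clip(base + 2.0 * detail, 0, 255).astype(np.uint8)
```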


2019 ◽  
Vol 64 (2) ◽  
pp. 211-220
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Infrared and visible image fusion is now used in important applications such as military, surveillance, remote sensing, and medical imaging. A discrete wavelet transform (DWT) based image fusion method using unsharp masking is presented. The DWT decomposes the input infrared and visible images into approximation and detail coefficients. To improve contrast, unsharp masking is applied to the approximation coefficients, which are then merged using an average fusion rule; the detail coefficients are merged using a max fusion rule. Finally, the inverse DWT (IDWT) generates the fused image. The proposed fusion method provides good contrast and gives better performance in terms of mean, entropy, and standard deviation than existing techniques.
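
A hedged end-to-end sketch of this pipeline using PyWavelets appears below: DWT decomposition, unsharp masking of the approximation coefficients, the average rule for approximations, the max rule for details, and IDWT reconstruction. The wavelet, blur scale, and sharpening amount are assumptions.

```python
import numpy as np
import cv2
import pywt

def unsharp(x, sigma=2.0, amount=1.0):
    """Classic unsharp mask: add back the high-pass residual."""
    blurred = cv2.GaussianBlur(x, (0, 0), sigma)
    return x + amount * (x - blurred)

def dwt_unsharp_fuse(ir, vis, wavelet="haar"):
    ir, vis = ir.astype(np.float32), vis.astype(np.float32)
    cA_i, details_i = pywt.dwt2(ir, wavelet)
    cA_v, details_v = pywt.dwt2(vis, wavelet)
    cA_f = 0.5 * (unsharp(cA_i) + unsharp(cA_v))             # average rule
    details_f = tuple(np.where(np.abs(a) >= np.abs(b), a, b)  # max rule
                      for a, b in zip(details_i, details_v))
    return pywt.idwt2((cA_f, details_f), wavelet)
```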


Image fusion is the process of consolidating two or more images into a single image that retains the important features of each original image. The resulting image is enhanced in overall content and is preferable to the source images. Certain image processing tasks, crucial in remote sensing, require both high spatial and high spectral information in a single image. The fusion procedure incorporates enhancing, filtering, and shaping the images for better results; efficient and essential approaches to image fusion are applied here. The method fuses two distinct types of images: a visible image and an infrared image. Single-Scale Retinex (SSR) is applied to the visible image to obtain an enhanced image, while Principal Component Analysis (PCA) is applied to the infrared image to obtain an image with superior contrast and colour. These processed images are then decomposed into multilayer representations using the Laplacian pyramid algorithm. Finally, a weighted-average fusion method combines the images to produce the enhanced fused image.
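
The Laplacian-pyramid decomposition and weighted-average fusion stages might look like the sketch below. The SSR and PCA preprocessing are assumed to have already produced the two enhanced grayscale inputs, and the fixed per-level weight is an illustrative simplification.

```python
import numpy as np
import cv2

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: detail levels plus the residual top."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
           for i in range(levels)]
    return lap + [gauss[-1]]

def pyramid_fuse(a, b, weight=0.5, levels=4):
    # Weighted average of corresponding pyramid levels.
    fused = [weight * la + (1 - weight) * lb
             for la, lb in zip(laplacian_pyramid(a, levels),
                               laplacian_pyramid(b, levels))]
    # Collapse the fused pyramid back into a single image.
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```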


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zongping Li ◽  
Wenxin Lei ◽  
Xudong Li ◽  
Tingting Liao ◽  
Jianming Zhang

Image fusion aims to effectively enhance the accuracy, stability, and comprehensiveness of information. In general, infrared images lack the background detail needed to describe the target scene accurately, while visible images struggle under adverse conditions such as low light. Effective fusion algorithms are expected to improve the richness of image detail. In this paper, we propose an infrared and visible image fusion algorithm that aims to overcome common defects in the fusion process. First, we use a fast approximate bilateral filter to decompose the infrared and visible images into small-scale layers, a large-scale layer, and a base layer. Then, the fused base layer is obtained from local energy characteristics, which avoids the information loss of traditional fusion rules. The fused small-scale layers are acquired by an absolute-maximum rule, and the fused large-scale layer by a summation rule. Finally, the fused small-scale layers, large-scale layer, and base layer are merged to reconstruct the final fused image. Experimental results show that our method retains more detailed appearance information in the fused image and achieves good results in both qualitative and quantitative evaluations.
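
A minimal sketch of the local-energy rule for the base layers is given below: each pixel is weighted by the squared-intensity energy of its neighborhood in each source, giving a soft selection rather than a hard switch. The window size is an illustrative choice.

```python
import numpy as np
import cv2

def local_energy_fuse(base_ir, base_vis, win=7):
    """Fuse two base layers with weights from local squared-intensity energy."""
    kernel = np.ones((win, win), np.float32)
    e_ir = cv2.filter2D(base_ir.astype(np.float32) ** 2, -1, kernel)
    e_vis = cv2.filter2D(base_vis.astype(np.float32) ** 2, -1, kernel)
    w = e_ir / (e_ir + e_vis + 1e-6)      # per-pixel energy weight
    return w * base_ir + (1 - w) * base_vis
```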

