Infrared and Visible Image Registration Based on SIFT Features

2012 ◽  
Vol 500 ◽  
pp. 383-389 ◽  
Author(s):  
Kai Wei Yang ◽  
Tian Hua Chen ◽  
Su Xia Xing ◽  
Jing Xian Li

In target tracking and recognition systems, infrared sensors and visible-light sensors are two of the most commonly used sensor types; fusing these two kinds of images effectively can greatly enhance the accuracy and reliability of identification. This paper improves the accuracy of infrared-to-visible image registration by modifying the SIFT algorithm, allowing infrared and visible images to be registered more quickly and accurately. The method achieves good registration results by applying histogram equalization to the infrared image, reasonably reducing the number of Gaussian blur levels when the SIFT pyramid is built, adjusting the thresholds appropriately, and limiting the range of gradient directions in the descriptor. The resulting features are invariant to rotation, image scale, and changes in illumination.
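
For illustration, a minimal registration sketch along these lines, using OpenCV's standard SIFT (not the authors' modified pyramid) with purely illustrative thresholds, might look as follows:

```python
# Minimal sketch of IR/visible registration with SIFT (OpenCV); thresholds
# and the ratio-test/RANSAC pipeline are illustrative assumptions, not the
# authors' modified algorithm.
import cv2
import numpy as np

def register_ir_to_visible(ir_gray, vis_gray):
    # Histogram equalization of the infrared image, as suggested above
    ir_eq = cv2.equalizeHist(ir_gray)

    # SIFT with a relaxed contrast threshold (illustrative value)
    sift = cv2.SIFT_create(contrastThreshold=0.02)
    kp1, des1 = sift.detectAndCompute(ir_eq, None)
    kp2, des2 = sift.detectAndCompute(vis_gray, None)

    # Ratio-test matching followed by RANSAC homography estimation
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the infrared image into the visible image's coordinate frame
    h, w = vis_gray.shape
    return cv2.warpPerspective(ir_eq, H, (w, h))
```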

Author(s):  
Han Xu ◽  
Pengwei Liang ◽  
Wei Yu ◽  
Junjun Jiang ◽  
Jiayi Ma

In this paper, we propose a new end-to-end model, called dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through the adversarial process between a generator and two discriminators, in addition to the specially designed content loss. The generator is trained to generate real-like fused images to fool the two discriminators. The two discriminators are trained to calculate the Jensen-Shannon (JS) divergence between the probability distribution of downsampled fused images and infrared images, and the JS divergence between the probability distribution of gradients of fused images and gradients of visible images, respectively. Thus, the fused images can compensate for the features that are not constrained by the single content loss. Consequently, the prominence of thermal targets in the infrared image and the texture details in the visible image can be preserved or even enhanced in the fused image simultaneously. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN can be preferably applied to the fusion of images of different resolutions. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state of the art.
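
As a rough illustration of this loss wiring (not the paper's architecture), a schematic PyTorch sketch with placeholder networks and weights could look like this:

```python
# Schematic PyTorch sketch of the dual-discriminator setup described above;
# the tiny networks, loss weights and image sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))          # fuses IR + visible
D_ir = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
D_vis = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

def gradients(x):
    # Simple finite-difference gradient magnitude as a texture proxy
    gx = x[..., :, 1:] - x[..., :, :-1]
    gy = x[..., 1:, :] - x[..., :-1, :]
    return F.pad(gx, (0, 1, 0, 0)).abs() + F.pad(gy, (0, 0, 0, 1)).abs()

vis = torch.rand(4, 1, 64, 64)                   # high-resolution visible
ir = torch.rand(4, 1, 16, 16)                    # low-resolution infrared
ir_up = F.interpolate(ir, size=(64, 64), mode='bilinear', align_corners=False)

fused = G(torch.cat([ir_up, vis], dim=1))
fused_down = F.avg_pool2d(fused, kernel_size=4)  # back to IR resolution

# D_ir compares downsampled fused images with infrared images;
# D_vis compares gradient maps of fused and visible images.
adv_ir = F.binary_cross_entropy_with_logits(D_ir(fused_down), torch.ones(4, 1))
adv_vis = F.binary_cross_entropy_with_logits(D_vis(gradients(fused)), torch.ones(4, 1))
content = F.l1_loss(fused_down, ir) + F.l1_loss(gradients(fused), gradients(vis))
g_loss = content + 0.1 * (adv_ir + adv_vis)      # illustrative weighting
g_loss.backward()
```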


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3827 ◽  
Author(s):  
Qinglei Du ◽  
Han Xu ◽  
Yong Ma ◽  
Jun Huang ◽  
Fan Fan

In infrared and visible image fusion, existing methods typically have a prerequisite that the source images share the same resolution. However, due to limitations of hardware devices and application environments, infrared images often have markedly lower resolution than the corresponding visible images. In this case, current fusion methods inevitably cause texture information loss in visible images or blur thermal radiation information in infrared images. Moreover, existing fusion rules typically focus on preserving texture details in the source images, which may be inappropriate for fusing infrared thermal radiation information because it is characterized by pixel intensities, possibly neglecting the prominence of targets in fused images. Faced with such difficulties and challenges, we propose a novel method to fuse infrared and visible images of different resolutions and generate high-resolution fused images that are clear and accurate. Specifically, the fusion problem is formulated as a total variation (TV) minimization problem. The data fidelity term constrains the pixel intensity similarity of the downsampled fused image with respect to the infrared image, and the regularization term compels the gradient similarity of the fused image with respect to the visible image. The fast iterative shrinkage-thresholding algorithm (FISTA) framework is applied to improve the convergence rate. The resulting fused images resemble super-resolved infrared images sharpened by the texture information from the visible images. The advantages and innovations of our method are demonstrated by qualitative and quantitative comparisons with six state-of-the-art methods on publicly available datasets.
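
A toy NumPy sketch of this formulation, with plain gradient descent standing in for FISTA and smooth L2 penalties in place of the TV/L1 terms, might look as follows:

```python
# Toy variational sketch: keep the downsampled fused image close to the
# infrared image while pulling the fused gradients toward the visible
# gradients. Gradient descent replaces FISTA; L2 penalties replace TV.
import numpy as np

def downsample(x, s):
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(x, s):
    return np.kron(x, np.ones((s, s)))

def grad(x):
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    return gx, gy

def fuse(ir_lr, vis, scale=4, lam=2.0, step=0.1, iters=200):
    # Shapes: vis is (H, W), ir_lr is (H/scale, W/scale), H and W divisible by scale.
    vis = vis.astype(float)
    f = upsample(ir_lr.astype(float), scale)      # initialize from upsampled IR
    vx, vy = grad(vis)
    for _ in range(iters):
        # Data term: the downsampled fused image should match the infrared image
        g_data = upsample(downsample(f, scale) - ir_lr, scale) / scale ** 2
        # Regularizer: fused gradients should follow the visible gradients
        fx, fy = grad(f)
        rx, ry = fx - vx, fy - vy
        g_reg = -(np.diff(rx, axis=1, prepend=rx[:, :1])
                  + np.diff(ry, axis=0, prepend=ry[:1, :]))
        f -= step * (g_data + lam * g_reg)
    return f
```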


2021 ◽  
Vol 3 (3) ◽  
Author(s):  
Javad Abbasi Aghamaleki ◽  
Alireza Ghorbani

Image fusion is the process of combining complementary information from multiple images of the same scene into a single output image. The resulting output image, called the fused image, gives a more precise description of the scene than any of the individual input images. In this paper, we propose a simple and fast fusion strategy for infrared (IR) and visible images based on locally important areas of the IR image. The fusion method is completed in three steps. First, only the salient regions of the infrared image are segmented and extracted. Next, image fusion is applied within the segmented areas, and finally, contour lines are used to improve the quality of the results of the second step. Using a publicly available database, the proposed method is evaluated and compared with other fusion methods. The experimental results show the effectiveness of the proposed method compared with state-of-the-art methods.
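
A rough sketch of such a region-based strategy, with simple intensity thresholding standing in for the paper's segmentation and a blurred contour band in place of its contour-line refinement, could look like this (all parameters are illustrative):

```python
# Region-based IR/visible fusion sketch: threshold the IR image to find
# salient regions, copy IR pixels inside them, and smooth a band around
# the region contours to hide seams. Inputs are same-size grayscale uint8.
import cv2
import numpy as np

def fuse_by_ir_regions(ir, vis, thresh=200, band=7):
    # Step 1: segment salient (hot) regions of the infrared image
    _, mask = cv2.threshold(ir, thresh, 255, cv2.THRESH_BINARY)

    # Step 2: fuse only inside the segmented area (IR inside, visible outside)
    fused = vis.copy()
    fused[mask > 0] = ir[mask > 0]

    # Step 3: blend a narrow band around the region contours
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    edge = np.zeros_like(mask)
    cv2.drawContours(edge, contours, -1, 255, thickness=band)
    blurred = cv2.GaussianBlur(fused, (band, band), 0)
    fused[edge > 0] = blurred[edge > 0]
    return fused
```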


2014 ◽  
Vol 599-601 ◽  
pp. 1523-1526
Author(s):  
Yan Hai Wu ◽  
Hao Zhang ◽  
Fang Ni Zhang ◽  
Yue Hua Han

This paper presents a method for fusing visible and infrared images that combines the nonsubsampled contourlet transform (NSCT) with the wavelet transform. The method first applies contrast enhancement to the infrared image. Next, it performs NSCT decomposition on the visible image and the enhanced infrared image, and then decomposes the resulting low-frequency components with a wavelet transform. Third, different fusion rules are used for the high-frequency subbands of the NSCT decomposition and for the high- and low-frequency subbands of the wavelet decomposition. Finally, the fused image is obtained by wavelet and NSCT reconstruction. Experiments show that the method not only retains the texture details of the visible image but also highlights the targets in the infrared image, achieving a better fusion effect.
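
A simplified sketch in the spirit of this scheme, with a single PyWavelets transform standing in for the NSCT stage (which has no standard Python implementation), might look as follows:

```python
# Simplified multi-scale fusion sketch: contrast-enhance the IR image,
# decompose both images with a wavelet transform (stand-in for NSCT), and
# apply different rules to low- and high-frequency subbands.
import cv2
import numpy as np
import pywt

def fuse_nsct_like(ir, vis, wavelet='db2', levels=3):
    # Inputs assumed to be same-size grayscale uint8 images
    ir_enh = cv2.equalizeHist(ir)                        # contrast enhancement

    c_ir = pywt.wavedec2(ir_enh.astype(float), wavelet, level=levels)
    c_vis = pywt.wavedec2(vis.astype(float), wavelet, level=levels)

    # Low-frequency rule: average the approximation coefficients
    fused = [(c_ir[0] + c_vis[0]) / 2.0]
    # High-frequency rule: keep the coefficient with the larger absolute value
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))

    out = pywt.waverec2(fused, wavelet)[:ir.shape[0], :ir.shape[1]]
    return np.clip(out, 0, 255).astype(np.uint8)
```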


Author(s):  
Zhuo Chen ◽  
Ming Fang ◽  
Xu Chai ◽  
Feiran Fu ◽  
Lihong Yuan

Infrared and visible image fusion is an effective way to compensate for the limitations of single-sensor imaging. The goal is to produce fused images that are suitable for human viewing and useful for subsequent processing. To address incomplete feature extraction, loss of detail, and the small size of common datasets, which hinders training, an end-to-end network architecture for image fusion is proposed. U-Net is introduced into image fusion, and the final fusion result is obtained with a generative adversarial network. Through its particular convolutional structure, the important feature information is extracted as fully as possible, and samples do not need to be cropped, which avoids reduced fusion accuracy and also improves training speed. The features extracted by the U-Net are then set against a discriminator fed with infrared images, yielding the generator model. The experimental results show that the proposed algorithm obtains fused images with clear outlines, prominent texture, and obvious targets, and that indicators such as SD, SF, SSIM, and AG are clearly improved.
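
As a minimal illustration, a tiny U-Net-style generator with one skip connection and a discriminator fed with infrared images could be wired up in PyTorch as follows (layer sizes and loss weights are placeholders, not the paper's design):

```python
# Minimal sketch: a small U-Net-style generator with a skip connection fuses
# IR and visible inputs, and a discriminator judges the result against the
# infrared distribution; an L1 term keeps visible detail (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(2, 16, 3, padding=1)      # encode IR + visible
        self.down = nn.Conv2d(16, 32, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.out = nn.Conv2d(32, 1, 3, padding=1)      # after skip concatenation

    def forward(self, ir, vis):
        e = F.relu(self.enc(torch.cat([ir, vis], dim=1)))
        d = F.relu(self.up(F.relu(self.down(e))))
        return torch.sigmoid(self.out(torch.cat([e, d], dim=1)))  # skip link

D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

G = TinyUNet()
ir, vis = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
fused = G(ir, vis)
g_loss = F.binary_cross_entropy_with_logits(D(fused), torch.ones(2, 1)) \
         + 10 * F.l1_loss(fused, vis)
g_loss.backward()
```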


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yubin Yuan ◽  
Yu Shen ◽  
Jing Peng ◽  
Lin Wang ◽  
Hongguo Zhang

Because single-image defogging is complicated and the defogged images can suffer from detail loss and color distortion, this paper proposes a defogging method based on near-infrared and visible image fusion. The algorithm uses the detail-rich near-infrared image as an additional data source and adopts image fusion to obtain a defogged image with rich detail and good color recovery. First, the color visible image is converted into HSI color space to obtain an intensity channel image, a hue channel image, and a saturation channel image. The intensity channel image is fused with the near-infrared image and defogged, and then decomposed by the Nonsubsampled Shearlet Transform (NSST). The resulting high-frequency coefficients are filtered with an edge-preserving double-exponential edge-smoothing filter, while unsharp masking is applied to the low-frequency coefficients. The new intensity channel image is obtained by applying the fusion rule and the inverse transform. Then, for the color treatment of the visible image, a degradation model of the saturation image is established, whose parameters are estimated using the dark channel prior to obtain the estimated saturation image. Finally, the new intensity channel image, the estimated saturation image, and the hue channel image are mapped back to RGB space to obtain the fused image, which is further enhanced by color and sharpness correction. To demonstrate the effectiveness of the algorithm, dense-fog and thin-fog images are compared against popular single-image and multi-image defogging algorithms and against a deep-learning-based visible/near-infrared fusion defogging algorithm. The experimental results show that the proposed algorithm improves the edge contrast and visual sharpness of the image better than existing efficient defogging methods.
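
A highly simplified sketch of this pipeline, using OpenCV's HSV space as a stand-in for HSI and omitting the NSST decomposition, edge-preserving filtering, and dark-channel saturation estimation, might look like this:

```python
# Highly simplified NIR/visible defogging sketch: split the visible image
# into hue/saturation/value channels, fuse the value channel with the NIR
# image, apply a crude saturation boost, and map back to BGR. All weights
# are illustrative stand-ins for the steps described above.
import cv2
import numpy as np

def defog_by_nir_fusion(vis_bgr, nir_gray, nir_weight=0.5):
    hsv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Fuse the intensity-like channel with the detail-rich NIR image
    v_f = cv2.addWeighted(v, 1.0 - nir_weight, nir_gray, nir_weight, 0)

    # Crude stand-in for the saturation restoration step: mild boost
    s_f = np.clip(s.astype(np.float32) * 1.2, 0, 255).astype(np.uint8)

    return cv2.cvtColor(cv2.merge([h, s_f, v_f]), cv2.COLOR_HSV2BGR)
```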


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yuqing Zhao ◽  
Guangyuan Fu ◽  
Hongqiao Wang ◽  
Shaolei Zhang

Visible images contain clear texture information and high spatial resolution but are unreliable at night or under occlusion. Infrared images can capture target thermal radiation information by day and night, in adverse weather, and under occlusion, but they often lack good contour and texture information. Therefore, an increasing number of researchers fuse visible and infrared images to obtain more information, which normally requires two perfectly matched images. In practice, however, it is difficult to obtain perfectly matched visible and infrared images. In view of these issues, we propose a new network model based on generative adversarial networks (GANs) to fuse unmatched infrared and visible images. Our method generates the corresponding infrared image from a visible image and fuses the two images to obtain more information. The effectiveness of the proposed method is verified qualitatively and quantitatively through experiments on public datasets. In addition, the fused images generated by the proposed method contain more abundant texture and thermal radiation information than those of other methods.


1998 ◽  
Vol 535 ◽  
Author(s):  
M. Yoshimoto ◽  
J. Saraie ◽  
T. Yasui ◽  
S. HA ◽  
H. Matsunami

GaAs1-xPx (0.2 < x < 0.7) was grown by metalorganic molecular beam epitaxy with a GaP buffer layer on Si for visible light-emitting devices. Insertion of the GaP buffer layer resulted in bright photoluminescence of the GaAsP epilayer. Pre-treatment of the Si substrate to avoid SiC formation was also critical for obtaining good crystallinity of GaAsP. Dislocation formation, microstructure, and photoluminescence in the grown GaAsP layers are described. A GaAsP pn junction fabricated on GaP emitted visible light (~1.86 eV); an initial GaAsP pn diode fabricated on Si emitted infrared light.


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Hai Wang ◽  
Yingfeng Cai ◽  
Xiaobo Chen ◽  
Long Chen

The use of night vision systems in vehicles is becoming increasingly common. Several approaches using infrared sensors have been proposed in the literature to detect vehicles in far infrared (FIR) images, but these systems still have low vehicle detection rates and their performance could be improved. This paper presents a novel method to detect vehicles using a far infrared automotive sensor. First, vehicle candidates are generated using a constant threshold on the infrared frame. Contours are then generated using a local adaptive threshold based on maximum distance, which decreases the number of regions to be classified and reduces the false positive rate. Finally, vehicle candidates are verified using a deep belief network (DBN) based classifier. A detection rate of 93.9% is achieved on a database of 5000 images and video streams, approximately a 2.5% improvement over previously reported methods, and the false detection rate is also the lowest among them.
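
A sketch of the candidate-generation stage, with a placeholder callback standing in for the DBN classifier and illustrative threshold values, might look as follows:

```python
# Candidate-generation sketch: a global threshold proposes hot regions in the
# FIR frame, a local adaptive threshold refines their contours, and a
# placeholder callback stands in for the DBN verification stage.
import cv2

def detect_vehicle_candidates(fir_gray, global_thresh=180, classify=None):
    # Constant threshold on the FIR frame to pick bright (warm) candidates
    _, seed = cv2.threshold(fir_gray, global_thresh, 255, cv2.THRESH_BINARY)

    # Local adaptive threshold to recover tighter contours around candidates
    local = cv2.adaptiveThreshold(fir_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 31, -5)
    refined = cv2.bitwise_and(seed, local)

    boxes = []
    contours, _ = cv2.findContours(refined, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 100:                        # drop tiny blobs (illustrative)
            continue
        roi = fir_gray[y:y + h, x:x + w]
        if classify is None or classify(roi):  # DBN verification goes here
            boxes.append((x, y, w, h))
    return boxes
```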

