Defogging Technology Based on Dual-Channel Sensor Information Fusion of Near-Infrared and Visible Light

2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yubin Yuan ◽  
Yu Shen ◽  
Jing Peng ◽  
Lin Wang ◽  
Hongguo Zhang

Since methods for removing fog from images are complicated and the defogged images can suffer detail loss and color distortion, a defogging method based on near-infrared and visible image fusion is put forward in this paper. The algorithm uses the near-infrared image, with its rich details, as a new data source and adopts image fusion to obtain a defogged image with rich detail and good color recovery. First, the color visible image is converted into HSI color space to obtain an intensity channel image, a hue channel image, and a saturation channel image. The intensity channel image is fused with the near-infrared image and defogged, and it is then decomposed by the Nonsubsampled Shearlet Transform. The resulting high-frequency coefficients are filtered with an edge-preserving double-exponential smoothing filter, while anti-sharpening masking is applied to the low-frequency coefficients. The new intensity channel image is obtained by applying the fusion rule and the inverse transform. Then, for the color treatment of the visible image, a degradation model of the saturation image is established, whose parameters are estimated using the dark channel prior to obtain the estimated saturation image. Finally, the new intensity channel image, the estimated saturation image, and the hue channel image are mapped back to RGB space to obtain the fusion image, which is further enhanced by color and sharpness correction. To prove the effectiveness of the algorithm, dense-fog and thin-fog images are compared against popular single-image and multiple-image defogging algorithms and a deep-learning-based visible/near-infrared fusion defogging algorithm. The experimental results show that the proposed algorithm improves edge contrast and visual sharpness better than existing high-efficiency defogging methods.
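
As a rough illustration of the channel-splitting and intensity/NIR fusion flow described in this abstract, the following Python sketch uses HSV as a stand-in for HSI and a simple detail-weighted blend in place of the NSST-based fusion rule and saturation degradation model; the file names and parameters are placeholders.

```python
# Minimal sketch of the visible/NIR defogging flow, not the paper's method:
# HSV stands in for HSI, and a detail-weighted blend replaces the NSST rule.
import cv2
import numpy as np

vis = cv2.imread("foggy_visible.png")          # BGR visible image (placeholder)
nir = cv2.imread("near_infrared.png", 0)       # single-channel NIR image (placeholder)

hsv = cv2.cvtColor(vis, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Fuse the intensity channel with the NIR image: keep the base layer of the
# visible intensity and inject the stronger local detail of the two inputs.
v_f = v.astype(np.float32)
nir_f = cv2.resize(nir, (v.shape[1], v.shape[0])).astype(np.float32)
base = cv2.GaussianBlur(v_f, (0, 0), 5)
detail_vis = v_f - base
detail_nir = nir_f - cv2.GaussianBlur(nir_f, (0, 0), 5)
detail = np.where(np.abs(detail_nir) > np.abs(detail_vis), detail_nir, detail_vis)
v_new = np.clip(base + detail, 0, 255).astype(np.uint8)

# Crude saturation compensation: fog lowers saturation, so stretch it slightly.
s_new = np.clip(s.astype(np.float32) * 1.3, 0, 255).astype(np.uint8)

fused = cv2.cvtColor(cv2.merge([h, s_new, v_new]), cv2.COLOR_HSV2BGR)
cv2.imwrite("defogged.png", fused)
```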

2013 ◽  
Vol 756-759 ◽  
pp. 2850-2856 ◽  
Author(s):  
Ze Hua Zhou ◽  
Min Tan

For the same scene, fusing the infrared image and the visible image can exploit the information of both original images at once and overcome the limitations and differences of a single sensor in terms of geometric, spectral, and spatial resolution, thereby improving image quality and helping to locate, identify, and explain physical phenomena and events. This paper puts forward an image fusion method based on the wavelet transform. For the wavelet-decomposed frequency bands, the principles for selecting the high-frequency and low-frequency coefficients are discussed separately, highlighting contours while attenuating fine detail. After fusion, the image combines the characteristics of the two (or more) source images and better matches human or machine visual perception, facilitating further analysis and understanding of the image as well as detection, identification, or tracking of targets.
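
A minimal sketch of this kind of wavelet-based infrared/visible fusion is shown below, assuming equally sized grayscale inputs: low-frequency (approximation) coefficients are averaged and high-frequency (detail) coefficients are chosen by maximum magnitude. The 'db2' wavelet, decomposition level, and file names are illustrative choices, not the authors' settings.

```python
# Wavelet-domain fusion sketch: average the approximation band, take the
# larger-magnitude detail coefficients, then reconstruct.
import cv2
import numpy as np
import pywt

ir = cv2.imread("infrared.png", 0).astype(np.float32)
vis = cv2.imread("visible.png", 0).astype(np.float32)

c_ir = pywt.wavedec2(ir, "db2", level=3)
c_vis = pywt.wavedec2(vis, "db2", level=3)

fused = [(c_ir[0] + c_vis[0]) / 2.0]                     # low-frequency: average
for dir_a, dir_b in zip(c_ir[1:], c_vis[1:]):            # high-frequency: max-abs
    fused.append(tuple(np.where(np.abs(a) > np.abs(b), a, b)
                       for a, b in zip(dir_a, dir_b)))

result = pywt.waverec2(fused, "db2")
cv2.imwrite("fused.png", np.clip(result, 0, 255).astype(np.uint8))
```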


Chemosensors ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 75
Author(s):  
Hyuk-Ju Kwon ◽  
Sung-Hak Lee

Image fusion combines images with different information to create a single, information-rich image. The process may either involve synthesizing images using multiple exposures of the same scene, such as exposure fusion, or synthesizing images of different wavelength bands, such as visible and near-infrared (NIR) image fusion. NIR images are frequently used in surveillance systems because they are beyond the narrow perceptual range of human vision. In this paper, we propose an infrared image fusion method that combines high and low intensities for use in surveillance systems under low-light conditions. The proposed method utilizes a depth-weighted radiance map based on intensities and details to enhance local contrast and reduce noise and color distortion. The proposed method involves luminance blending, local tone mapping, and color scaling and correction. Each of these stages is processed in the LAB color space to preserve the color attributes of a visible image. The results confirm that the proposed method outperforms conventional methods.
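
A rough sketch of the LAB-space blending idea is given below, assuming placeholder file names: the visible chrominance channels are kept unchanged while the luminance is blended with the NIR image using local contrast weights. This stands in for, and greatly simplifies, the paper's radiance map, local tone mapping, and color correction stages.

```python
# LAB-space visible/NIR blending sketch: chrominance (a, b) preserved,
# luminance blended by per-pixel local contrast (Laplacian energy).
import cv2
import numpy as np

vis = cv2.imread("visible.png")
nir = cv2.imread("nir.png", 0)

lab = cv2.cvtColor(vis, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)

L_f = L.astype(np.float32)
N_f = cv2.resize(nir, (L.shape[1], L.shape[0])).astype(np.float32)

# Per-pixel weights from local contrast of each luminance source.
w_vis = np.abs(cv2.Laplacian(L_f, cv2.CV_32F)) + 1e-3
w_nir = np.abs(cv2.Laplacian(N_f, cv2.CV_32F)) + 1e-3
L_new = (w_vis * L_f + w_nir * N_f) / (w_vis + w_nir)

out = cv2.merge([np.clip(L_new, 0, 255).astype(np.uint8), a, b])
cv2.imwrite("fused_lab.png", cv2.cvtColor(out, cv2.COLOR_LAB2BGR))
```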


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Lei Yan ◽  
Qun Hao ◽  
Jie Cao ◽  
Rizvi Saad ◽  
Kun Li ◽  
...  

Abstract Image fusion integrates information from multiple images (of the same scene) to generate a (more informative) composite image suitable for human and computer vision perception. Methods based on multiscale decomposition are among the most commonly used fusion methods. In this study, a new fusion framework based on the octave Gaussian pyramid principle is proposed. In comparison with conventional multiscale decomposition, the proposed octave Gaussian pyramid framework retrieves more information by decomposing an image into two scale spaces (octave and interval spaces). Unlike traditional multiscale decomposition with one set of detail and base layers, the proposed method decomposes an image into multiple sets of detail and base layers, and it efficiently retains the high- and low-frequency information of the original image. The qualitative and quantitative comparison with five existing methods (on publicly available image databases) demonstrates that the proposed method has better visual effects and scores the highest in objective evaluation.
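
The sketch below illustrates one plausible reading of the octave/interval decomposition, assuming illustrative parameters: each octave is blurred at several interval scales, the differences between adjacent blur levels form multiple detail layers, and the most-blurred level forms the base layer passed to the next octave.

```python
# Octave/interval Gaussian scale-space decomposition sketch (parameters are
# illustrative, not the paper's): several detail layers plus a base per octave.
import cv2
import numpy as np

def octave_decompose(img, n_octaves=3, n_intervals=3, sigma0=1.6):
    img = img.astype(np.float32)
    octaves = []
    for _ in range(n_octaves):
        blurred = [img]
        for i in range(1, n_intervals + 1):
            sigma = sigma0 * (2.0 ** (i / n_intervals))
            blurred.append(cv2.GaussianBlur(img, (0, 0), sigma))
        details = [blurred[i] - blurred[i + 1] for i in range(n_intervals)]
        base = blurred[-1]
        octaves.append((details, base))
        # Downsample the base to seed the next octave.
        img = cv2.resize(base, (img.shape[1] // 2, img.shape[0] // 2))
    return octaves

gray = cv2.imread("input.png", 0)
layers = octave_decompose(gray)
print(len(layers), "octaves,", len(layers[0][0]), "detail layers each")
```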


Author(s):  
Yusuke Arashida ◽  
Atsushi Taninaka ◽  
Takayuki Ochiai ◽  
Hiroyuki Mogi ◽  
Shoji YOSHIDA ◽  
...  

Abstract We have developed a multiplex coherent anti-Stokes Raman scattering (CARS) microscope effective for low-wavenumber measurement by combining a high-repetition-rate 1064 nm supercontinuum light source with a high-sensitivity infrared InGaAs diode array. The system can observe the low-wavenumber region down to 55 cm⁻¹ with high sensitivity. In addition, by using spectrum-shaping and spectrum-modulation techniques, we simultaneously realized a wide bandwidth (<1800 cm⁻¹), high wavenumber resolution (9 cm⁻¹), high efficiency, and an improved signal-to-noise ratio by reducing the effect of the background shape in the low-wavenumber region. The spatial variation of a phase transition in a sulfur crystal, including metastable states, was visualized.
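
As a small worked example (not taken from the paper), the quoted Raman-shift range can be related to anti-Stokes wavelengths for a 1064 nm pump, which is consistent with detecting the whole band on a near-infrared InGaAs array:

```python
# Anti-Stokes wavelength for a Raman shift relative to a 1064 nm pump:
# lambda_aS = 1e7 / (1e7/lambda_pump + shift), with wavelengths in nm
# and shifts in cm^-1.
pump_nm = 1064.0
pump_wavenumber = 1.0e7 / pump_nm          # ~9398 cm^-1

for shift in (55.0, 1800.0):               # Raman shifts in cm^-1
    anti_stokes_nm = 1.0e7 / (pump_wavenumber + shift)
    print(f"shift {shift:6.0f} cm^-1 -> anti-Stokes line at {anti_stokes_nm:.1f} nm")
# shift     55 cm^-1 -> anti-Stokes line at 1057.8 nm
# shift   1800 cm^-1 -> anti-Stokes line at 893.0 nm
```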


2012 ◽  
Vol 500 ◽  
pp. 383-389 ◽  
Author(s):  
Kai Wei Yang ◽  
Tian Hua Chen ◽  
Su Xia Xing ◽  
Jing Xian Li

In target tracking and recognition systems, infrared sensors and visible-light sensors are two of the most commonly used sensors; fusing these two kinds of images effectively can greatly enhance the accuracy and reliability of identification. This work improves the registration accuracy of infrared and visible-light images by modifying the SIFT algorithm, so that infrared and visible images can be registered more quickly and accurately. Good registration results are obtained by histogram-equalizing the infrared image, reasonably reducing the number of Gaussian blur levels when building the pyramid in the SIFT algorithm, adjusting the thresholds appropriately, and limiting the range of gradient directions used in the descriptor. The resulting features are invariant to rotation, image scale, and changes in illumination.
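
A minimal sketch in the spirit of this description is given below: the infrared image is histogram-equalized before SIFT detection, the detector thresholds are relaxed, and a ratio test plus RANSAC homography completes the registration. The specific parameter values are placeholders, not the authors' modified settings.

```python
# SIFT-based infrared/visible registration sketch with histogram equalization
# of the IR image and relaxed detector thresholds (placeholder values).
import cv2
import numpy as np

ir = cv2.equalizeHist(cv2.imread("infrared.png", 0))
vis = cv2.imread("visible.png", 0)

sift = cv2.SIFT_create(contrastThreshold=0.02, edgeThreshold=12, sigma=1.2)
kp1, des1 = sift.detectAndCompute(ir, None)
kp2, des2 = sift.detectAndCompute(vis, None)

# Ratio-test matching followed by a RANSAC homography.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

registered_ir = cv2.warpPerspective(ir, H, (vis.shape[1], vis.shape[0]))
cv2.imwrite("registered_ir.png", registered_ir)
```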


2019 ◽  
Vol 7 (17) ◽  
pp. 10225-10230 ◽  
Author(s):  
Ali Imran Channa ◽  
Xin Tong ◽  
Jing-Yin Xu ◽  
Yongchen Liu ◽  
Changmeng Wang ◽  
...  

Near-infrared-emitting CuGaS2/CdS QDs with enhanced visible-light absorption were developed to achieve high-efficiency photoelectrochemical cells.


2020 ◽  
Vol 10 (23) ◽  
pp. 8702
Author(s):  
Dong-Min Son ◽  
Hyuk-Ju Kwon ◽  
Sung-Hak Lee

This study proposes a method of blending visible and near-infrared (NIR) images to enhance their edge details and local contrast based on the Laplacian pyramid and principal component analysis (PCA). In the proposed method, both the Laplacian pyramid and PCA are used to generate a radiance map, and a soft-mixing method and a mask-skipping filter are applied with the PCA algorithm when the images are fused. The color compensation method preserves the chrominance of the visible image by using the ratio between the radiance map, fused via the Laplacian pyramid and PCA, and the luminance channel of the visible image. The results show that the proposed method improves edge details and local contrast effectively.
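
The sketch below outlines one way the Laplacian-pyramid/PCA combination could look, with PCA supplying global blending weights and the luminance ratio used for color compensation; it omits the paper's soft-mixing and mask-skipping refinements, and the file names are placeholders.

```python
# Laplacian-pyramid + PCA fusion sketch: PCA on the two luminance channels
# gives blending weights, the fused pyramid forms a radiance map, and the
# visible chrominance is preserved by scaling BGR with the luminance ratio.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    lp.append(gp[-1])
    return lp

def reconstruct(lp):
    img = lp[-1]
    for layer in reversed(lp[:-1]):
        img = cv2.pyrUp(img, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return img

vis = cv2.imread("visible.png").astype(np.float32)
nir = cv2.imread("nir.png", 0).astype(np.float32)
lum = cv2.cvtColor(vis, cv2.COLOR_BGR2GRAY)

# PCA weights: dominant eigenvector of the 2x2 covariance of the luminances.
data = np.stack([lum.ravel(), nir.ravel()])
eigvals, eigvecs = np.linalg.eigh(np.cov(data))
w = np.abs(eigvecs[:, np.argmax(eigvals)])
w = w / w.sum()

lp_vis = laplacian_pyramid(lum)
lp_nir = laplacian_pyramid(nir)
radiance = reconstruct([w[0] * a + w[1] * b for a, b in zip(lp_vis, lp_nir)])

# Color compensation: scale BGR by the fused-to-visible luminance ratio.
ratio = (radiance + 1e-3) / (lum + 1e-3)
out = np.clip(vis * ratio[..., None], 0, 255).astype(np.uint8)
cv2.imwrite("fused.png", out)
```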


Author(s):  
Peilin Li ◽  
Sang-Heon Lee ◽  
Hung-Yao Hsu

In this paper, an image fusion method is presented to improve citrus identification by filtering the incoming data from two cameras. The citrus images were captured with a portable bi-camera cold-mirror acquisition system; a customized fixture prototype was manufactured to position and align a classical cold mirror with the two CCD cameras in a fixed relative kinematic position. Algorithmic registration of the image pairs is bypassed through the spatial alignment of the two cameras, aided by software calibration, and through synchronized triggering during capture. The paired frames are fused using Daubechies wavelet decomposition filters. A pixel-level fusion rule is proposed that combines the low-pass coefficients of the visible image with the low-pass coefficients of the near-infrared image, weighted by the complement of an entropy filter computed from the visible low-pass coefficients. In the study, the fused color image and the non-fused color image are processed and compared using several classification methods, including low-dimensional projection, self-organizing maps, and support vector machines.
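
A simplified sketch of an entropy-weighted Daubechies fusion in this spirit is shown below, assuming equally sized registered frames: the visible and NIR low-pass bands are blended with a weight derived from the local entropy of the visible low-pass band, and detail bands are taken by maximum magnitude. The filter and window choices are placeholders, not the authors' exact rule.

```python
# Entropy-weighted Daubechies wavelet fusion sketch for registered
# visible/NIR frames (filter, level, and window are placeholder choices).
import cv2
import numpy as np
import pywt
from skimage.filters.rank import entropy
from skimage.morphology import disk

vis = cv2.imread("visible_frame.png", 0).astype(np.float32)
nir = cv2.imread("nir_frame.png", 0).astype(np.float32)

cv_vis = pywt.wavedec2(vis, "db4", level=2)
cv_nir = pywt.wavedec2(nir, "db4", level=2)

# Local entropy of the visible approximation band, normalized to [0, 1].
lp_vis, lp_nir = cv_vis[0], cv_nir[0]
lp_u8 = cv2.normalize(lp_vis, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
w = entropy(lp_u8, disk(5))
w = w / (w.max() + 1e-6)

fused = [w * lp_vis + (1.0 - w) * lp_nir]
for da, db in zip(cv_vis[1:], cv_nir[1:]):
    fused.append(tuple(np.where(np.abs(a) > np.abs(b), a, b) for a, b in zip(da, db)))

out = pywt.waverec2(fused, "db4")
cv2.imwrite("fused_citrus.png", np.clip(out, 0, 255).astype(np.uint8))
```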

