Spectrum Characteristics Preserved Visible and Near-Infrared Image Fusion Algorithm

2021 ◽  
Vol 23 ◽  
pp. 306-319 ◽  
Author(s):  
Zhuo Li ◽  
Hai-Miao Hu ◽  
Wei Zhang ◽  
Shiliang Pu ◽  
Bo Li

2021 ◽
Vol 50 (4) ◽  
pp. 228-240
Author(s):  
吉琳娜 Linna JI ◽  
郭小铭 Xiaoming GUO ◽  
杨风暴 Fengbao YANG ◽  
张雅玲 Yaling ZHANG

2018 ◽  
Vol 55 (10) ◽  
pp. 102804
Author(s):  
余越 Yu Yue ◽  
胡秀清 Hu Xiuqing ◽  
闵敏 Min Min ◽  
许廷发 Xu Tingfa ◽  
何玉青 He Yuqing ◽  
...  

2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740043 ◽  
Author(s):  
Jinling Zhao ◽  
Junjie Guo ◽  
Wenjie Cheng ◽  
Chao Xu ◽  
Linsheng Huang

A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect, and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curves of SPOT-6 are closer to rectangular than those of GF-1 in the blue, green, red and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse-fused image has entropy values extremely close to those of the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify greenlands in order to compare the self-fused SPOT-6 image with the SPOT-6/GF-1 inter-fused image, both produced with the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
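The entropy comparison used above as an information-conservation indicator can be illustrated with a minimal sketch. The abstract does not specify the histogram bin count or logarithm base, so the choices below, and the band variable names, are assumptions for illustration only.

```python
# A minimal sketch, assuming the original and fused bands are already loaded
# as 8-bit NumPy arrays; bin count and log base are illustrative choices.
import numpy as np

def shannon_entropy(band, bins=256):
    """Shannon entropy (bits) of one image band, a common
    information-conservation indicator for fused vs. original imagery."""
    hist, _ = np.histogram(band, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# Hypothetical usage with pre-loaded bands:
# print(shannon_entropy(original_band), shannon_entropy(nndiffuse_fused_band))
```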


Chemosensors ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 75
Author(s):  
Hyuk-Ju Kwon ◽  
Sung-Hak Lee

Image fusion combines images carrying different information into a single, information-rich image. The process may involve synthesizing multiple exposures of the same scene, as in exposure fusion, or synthesizing images from different wavelength bands, as in visible and near-infrared (NIR) image fusion. NIR imaging is frequently used in surveillance systems because it captures wavelengths beyond the narrow perceptual range of human vision. In this paper, we propose a visible and NIR image fusion method that combines high- and low-intensity information for use in surveillance systems under low-light conditions. The proposed method uses a depth-weighted radiance map based on intensities and details to enhance local contrast and reduce noise and color distortion. It consists of luminance blending, local tone mapping, and color scaling and correction, each processed in the LAB color space to preserve the color attributes of the visible image. The results confirm that the proposed method outperforms conventional methods.
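As an illustration of the LAB-space processing chain described above, the following is a minimal sketch of the luminance-blending stage only. The fixed weight `alpha`, the OpenCV calls, and the function name are assumptions standing in for the paper's depth-weighted radiance map, local tone mapping, and color correction, which are not reproduced here.

```python
# A minimal sketch, assuming an aligned visible (BGR, uint8) image and a
# single-channel NIR (uint8) image; a fixed blending weight replaces the
# paper's intensity/detail-based radiance map.
import cv2
import numpy as np

def blend_luminance_lab(visible_bgr, nir, alpha=0.5):
    """Blend NIR intensity into the L channel while keeping the a/b chroma,
    so the visible image's color attributes are preserved."""
    lab = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    nir = cv2.resize(nir, (L.shape[1], L.shape[0]))
    L_blend = cv2.addWeighted(L, 1.0 - alpha, nir, alpha, 0.0)
    return cv2.cvtColor(cv2.merge([L_blend, a, b]), cv2.COLOR_LAB2BGR)
```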


2021 ◽  
Vol 9 (3A) ◽  
Author(s):  
Alex Noel Joseph Raj ◽  
M. Murugappan ◽  
Arunachalam V ◽  
...  

Several applications using paired red-green-blue (RGB) and near-infrared (NIR) images have emerged over recent years. The present work proposes a technique for enhancing an image by combining color (RGB) and near-infrared (NIR) information. To fuse the two types of images, the NIR channel is treated as a luminance counterpart of the visible image. The international standard RGB-NIR Scene Dataset is used in this work for image fusion. The objective of the paper is to present a simple and hardware-efficient fusion method in which the original RGB image is converted into two different color spaces, namely HSV and YCbCr. The luminance channel of the RGB image is then replaced with the near-infrared channel, yielding a fused, enhanced image. This procedure is implemented on an FPGA using the Xilinx HLS tool. The RGB-NIR dataset is used to test the proposed image fusion algorithm, and the quality of the fused image is measured by peak signal-to-noise ratio (PSNR). The experimental results indicate that the HSV color space is more effective for this fusion than YCbCr, with average PSNR values of approximately 29 dB for HSV and 25 dB for YCbCr across various images. Finally, the complete fusion algorithm is implemented on a Xilinx Nexys4 FPGA board to obtain real-time outputs in the form of vivid, well-contrasted images. The implementation uses only about 50% of the available hardware resources and consumes approximately 5.3 W.
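The channel-replacement fusion described above can be sketched in a few lines. The HSV variant is shown below, with PSNR computed against the original RGB image; the function names and OpenCV-based software implementation are assumptions for illustration, not the paper's FPGA/HLS design, and the YCbCr variant would replace the Y channel instead.

```python
# A minimal sketch, assuming aligned RGB and NIR inputs as 8-bit NumPy arrays.
import cv2
import numpy as np

def fuse_hsv(rgb, nir):
    """Replace the V (luminance-like) channel of the RGB image with NIR."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    hsv[:, :, 2] = cv2.resize(nir, (rgb.shape[1], rgb.shape[0]))
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

# Hypothetical usage:
# fused = fuse_hsv(rgb_image, nir_image)
# print(f"PSNR = {psnr(rgb_image, fused):.2f} dB")
```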


2021 ◽  
Author(s):  
Hongzhi Zhang ◽  
Yifan Shen ◽  
Yangyan Ou ◽  
Bo Ji ◽  
Jia He
