Novel image fusion method based on adaptive pulse coupled neural network and discrete multi-parameter fractional random transform
2014, Vol. 52, pp. 91-98
Author(s): Jun Lang, Zhengchao Hao

2021, Vol. 92, pp. 107174
Author(s): Yang Zhou, Xiaomin Yang, Rongzhu Zhang, Kai Liu, Marco Anisetti, ...

2020, Vol. 176, pp. 107681
Author(s): Di Gai, Xuanjing Shen, Haipeng Chen, Pengxiang Su

Entropy, 2019, Vol. 21 (6), pp. 570
Author(s): Jingchun Piao, Yunfan Chen, Hyunchul Shin

In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method based on a deep neural network. In our method, a Siamese convolutional neural network (CNN) automatically generates a weight map that represents the saliency of each pixel for a pair of source images. The CNN encodes an image into a feature domain for classification, so the two key problems in image fusion, activity-level measurement and fusion-rule design, are solved in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more consistent with human visual perception. In addition, the visual effectiveness of the proposed fusion method is evaluated by comparing its pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that the proposed method is competitive in terms of both quantitative assessment and visual quality.
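The abstract does not include code, but the overall scheme, a CNN-derived per-pixel weight map driving a wavelet-domain blend, can be sketched. The following is a minimal illustration, not the authors' implementation: the SiameseBranch architecture, its untrained weights, and the softmax weighting are all assumptions made for illustration. It uses PyTorch and PyWavelets.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

class SiameseBranch(nn.Module):
    """Toy feature extractor shared by both source images (hypothetical, untrained)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # one saliency score per pixel
        )
    def forward(self, x):
        return self.features(x)

def saliency_weight_map(net, ir, vis):
    """Softmax over the two branches' scores gives a per-pixel weight in [0, 1]."""
    with torch.no_grad():
        scores = torch.cat([net(ir), net(vis)], dim=1)  # (1, 2, H, W)
    return torch.softmax(scores, dim=1)[:, 0].squeeze(0).numpy()  # IR weight

def _resize(w, shape):
    """Nearest-neighbour resize of the weight map to a subband's shape."""
    ys = (np.arange(shape[0]) * w.shape[0] / shape[0]).astype(int)
    xs = (np.arange(shape[1]) * w.shape[1] / shape[1]).astype(int)
    return w[np.ix_(ys, xs)]

def wavelet_fusion(ir, vis, weight, wavelet="db2", level=2):
    """Blend the two images per subband in the wavelet domain, then reconstruct."""
    c_ir = pywt.wavedec2(ir, wavelet, level=level)
    c_vis = pywt.wavedec2(vis, wavelet, level=level)
    fused = []
    for a, b in zip(c_ir, c_vis):
        if isinstance(a, tuple):   # detail subbands (cH, cV, cD)
            fused.append(tuple(
                da * _resize(weight, da.shape) + db * (1 - _resize(weight, db.shape))
                for da, db in zip(a, b)))
        else:                      # approximation subband
            fused.append(a * _resize(weight, a.shape) + b * (1 - _resize(weight, b.shape)))
    return pywt.waverec2(fused, wavelet)

# Usage on random stand-ins for a registered IR/VIS pair.
ir = np.random.rand(128, 128).astype(np.float32)
vis = np.random.rand(128, 128).astype(np.float32)
net = SiameseBranch()
w = saliency_weight_map(net, torch.from_numpy(ir)[None, None],
                        torch.from_numpy(vis)[None, None])
fused = wavelet_fusion(ir, vis, w)
print(fused.shape)
```

In the actual method the Siamese branches are trained so that the softmax output reflects pixel saliency; the random inputs here merely exercise the pipeline.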


2021, pp. 1-20
Author(s): Yun Wang, Xin Jin, Jie Yang, Qian Jiang, Yue Tang, ...

Multi-focus image fusion is a technique that integrates the focused areas of a pair or set of source images of the same scene into a single fully focused image. Inspired by transfer learning, this paper proposes a novel color multi-focus image fusion method based on deep learning. First, the color multi-focus source images are fed into a VGG-19 network, and the parameters of its convolutional layers are transferred to a neural network containing multiple convolutional layers and skip connections for feature extraction. Second, initial decision maps are generated from the feature maps reconstructed by a deconvolution module. Third, the initial decision maps are refined to obtain the second decision maps, according to which the source images are fused into initial fused images. Finally, the final fused image is selected by comparing the Q^AB/F metrics of the initial fused images. The experimental results show that the proposed method effectively improves the segmentation of the focused and unfocused areas in the source images, and the resulting fused images are superior in both subjective and objective metrics to those of most comparison methods.
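A rough sketch of this kind of pipeline is given below; it is not the authors' implementation. The L1-norm activity measure, the box-filter refinement, the layer cut-off (layer_idx), and gradient_score (a crude stand-in for the Q^AB/F gradient-preservation metric) are all illustrative assumptions; only VGG-19 feature extraction and decision-map-driven fusion come from the abstract.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def feature_activity(model, img, layer_idx=8):
    """L1-norm of VGG-19 features up to `layer_idx` as a per-pixel activity measure.
    ImageNet normalization is omitted for brevity."""
    x = torch.from_numpy(img).permute(2, 0, 1)[None].float()
    with torch.no_grad():
        feats = model.features[:layer_idx](x)          # (1, C, h, w)
    act = feats.abs().sum(dim=1, keepdim=True)         # (1, 1, h, w)
    act = F.interpolate(act, size=img.shape[:2], mode="bilinear",
                        align_corners=False)
    return act.squeeze().numpy()

def decision_map(model, a, b, ksize=9):
    """Initial map: pick the source with higher activity; refine with a box filter
    and re-binarize to get the second decision map."""
    d = (feature_activity(model, a) > feature_activity(model, b)).astype(np.float32)
    d = F.avg_pool2d(torch.from_numpy(d)[None, None],
                     ksize, stride=1, padding=ksize // 2).squeeze().numpy()
    return (d > 0.5).astype(np.float32)

def fuse(a, b, d):
    """Pixel-wise selection of the two color sources by the binary decision map."""
    return d[..., None] * a + (1.0 - d[..., None]) * b

def gradient_score(src_a, src_b, fused):
    """Hypothetical gradient-preservation proxy standing in for Q^AB/F."""
    def grad_mag(x):
        gy, gx = np.gradient(x.mean(axis=2))
        return np.hypot(gy, gx)
    gf = grad_mag(fused)
    return np.minimum(gf, np.maximum(grad_mag(src_a), grad_mag(src_b))).mean()

# Usage on random stand-ins for a registered color multi-focus pair.
model = vgg19(weights="DEFAULT").eval()
a = np.random.rand(224, 224, 3).astype(np.float32)
b = np.random.rand(224, 224, 3).astype(np.float32)
d = decision_map(model, a, b)
fused = fuse(a, b, d)
print(fused.shape, gradient_score(a, b, fused))
```

In the paper the candidate fused images are scored with the actual Q^AB/F metric and the highest-scoring one is kept; gradient_score above only mimics that selection step.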

