Multi-focus image fusion method based on a two-stage convolutional neural network

2020, Vol. 176, pp. 107681
Author(s): Di Gai, Xuanjing Shen, Haipeng Chen, Pengxiang Su
2021, Vol. 92, pp. 107174
Author(s): Yang Zhou, Xiaomin Yang, Rongzhu Zhang, Kai Liu, Marco Anisetti, et al.

2021, pp. 1-20
Author(s): Yun Wang, Xin Jin, Jie Yang, Qian Jiang, Yue Tang, et al.

Multi-focus image fusion is a technique that integrates the focused areas of a pair or set of source images of the same scene into a single, fully focused image. Inspired by transfer learning, this paper proposes a novel color multi-focus image fusion method based on deep learning. First, the color multi-focus source images are fed into a VGG-19 network, and the parameters of its convolutional layers are transferred to a neural network with multiple convolutional layers and skip connections for feature extraction. Second, initial decision maps are generated from the feature maps reconstructed by a deconvolution module. Third, the initial decision maps are refined to obtain second-stage decision maps, based on which the source images are fused into initial fused images. Finally, the final fused image is produced by comparing the Q^ABF metrics of the initial fused images. The experimental results show that the proposed method effectively improves the segmentation of focused and unfocused areas in the source images, and that the generated fused images are superior in both subjective and objective terms to most comparison methods.
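The decision-map steps described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the VGG-19 feature extraction and deconvolution stages are assumed to have already produced a binary decision map, and the refinement step is approximated here by a simple majority (box) filter.

```python
import numpy as np

def refine_decision_map(decision, k=3):
    """Approximate the refinement step with a k x k majority (box) filter.

    A stand-in for the paper's refinement of the initial decision map:
    each pixel is set to the majority value of its k x k neighborhood.
    """
    pad = k // 2
    padded = np.pad(decision.astype(float), pad, mode="edge")
    h, w = decision.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return (out >= 0.5).astype(decision.dtype)

def fuse_with_decision_map(src_a, src_b, decision):
    """Take each pixel from src_a where decision == 1, else from src_b."""
    # Broadcast the 2-D map over the channel axis for color images.
    d = decision[..., None] if src_a.ndim == 3 else decision
    return d * src_a + (1 - d) * src_b
```

For color inputs the binary map is simply broadcast across the three channels, matching the abstract's description of fusing the source images according to the (refined) decision maps.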


2021, Vol. 2021, pp. 1-8
Author(s): Lei Wang, Chunhong Chang, Zhouqi Liu, Jin Huang, Cong Liu, et al.

Traditional medical image fusion methods, such as the well-known multi-scale decomposition-based methods, usually suffer from poor sparse representation of salient features and from fusion rules with limited ability to transfer the captured feature information. To address this problem, a medical image fusion method based on the scale-invariant feature transform (SIFT) descriptor and a deep convolutional neural network (CNN) in the shift-invariant shearlet transform (SIST) domain is proposed. First, the images to be fused are decomposed into high-pass and low-pass coefficients. Then, the high-pass components are fused under a rule based on a pre-trained CNN model, which consists of four steps: feature detection, initial segmentation, consistency verification, and final fusion; the low-pass subbands are fused according to a matching degree computed by the SIFT descriptor, which captures the features of the low-frequency components. Finally, the fusion result is obtained by the inverse SIST. Taking standard deviation, Q^AB/F, entropy, and mutual information as objective measurements, the experimental results demonstrate that the proposed method preserves detailed information well, without artifacts or distortions, and also achieves better quantitative performance.
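The transform-domain pipeline above (decompose into low-pass and high-pass bands, fuse each band under its own rule, then invert) can be shown structurally with a toy NumPy sketch. These are stand-ins only: the SIST is replaced by an assumed box-blur low/high split, the CNN-based high-pass rule by a max-absolute-coefficient rule, and the SIFT matching-degree rule for the low-pass band by plain averaging.

```python
import numpy as np

def decompose(img, k=5):
    """Box-blur split into low-pass and high-pass bands (toy SIST stand-in)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    low = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    return low, img - low  # low-pass band, high-pass residual

def fuse(img_a, img_b):
    la, ha = decompose(img_a)
    lb, hb = decompose(img_b)
    low_f = 0.5 * (la + lb)                               # stand-in for the SIFT matching-degree rule
    high_f = np.where(np.abs(ha) >= np.abs(hb), ha, hb)   # stand-in for the CNN-based rule
    return low_f + high_f                                 # "inverse transform" is just low + high here
```

Because the toy decomposition is exactly invertible (low + high reconstructs the input), fusing an image with itself returns the image unchanged, which is a useful sanity check for any transform-domain fusion rule.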


Optik, 2015, Vol. 126 (20), pp. 2508-2511
Author(s): Jingjing Wang, Qian Li, Zhenhong Jia, Nikola Kasabov, Jie Yang
