Multi-Focus Image Fusion: Algorithms, Evaluation, and a Library

2020 · Vol 6 (7) · pp. 60
Author(s): Rabia Zafar, Muhammad Shahid Farid, Muhammad Hassan Khan

Image fusion integrates images of the same scene captured by heterogeneous sources into a single image in which the information is more complete and reliable. The resulting image is therefore expected to be more informative for both human and machine perception. Numerous fusion methods have been proposed to consolidate the significant content of a set of images into one image. Given its applications and advantages in a variety of fields such as remote sensing, surveillance, and medical imaging, it is important to understand image fusion algorithms and to compare them systematically. This paper reviews the current state-of-the-art and other well-known image fusion techniques. The performance of each algorithm is assessed qualitatively and quantitatively on two benchmark multi-focus image datasets. We also assemble a multi-focus image fusion dataset by collecting the test images most widely used across different studies. The quantitative evaluation of fusion results uses a set of image fusion quality assessment metrics, and performance is further analysed with several statistical measures. A final contribution is a multi-focus image fusion library; to the best of our knowledge, no such library existed before. The library provides implementations of numerous state-of-the-art fusion algorithms and is publicly available at the project website.
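As an illustration of the kind of quality assessment metric mentioned above, the sketch below computes the classic mutual-information fusion score MI(A,F) + MI(B,F) with NumPy. The function names and the 256-bin histogram are our own choices, not the library's API.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Mutual information (in bits) between two grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(source_a, source_b, fused):
    """MI-based fusion quality: MI(A,F) + MI(B,F); higher is better."""
    return mutual_information(source_a, fused) + mutual_information(source_b, fused)
```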

Sensors · 2019 · Vol 19 (6) · pp. 1409
Author(s): Hang Liu, Hengyu Li, Jun Luo, Shaorong Xie, Yu Sun

Multi-focus image fusion is a technique for obtaining an all-in-focus image, in which all objects are in focus, to extend the limited depth of field (DoF) of an imaging system. Unlike traditional RGB-based methods, the method presented in this paper is assisted by depth sensing: a depth sensor is used together with a colour camera to capture images of a scene. A graph-based algorithm segments the depth map from the depth sensor, and the resulting regions guide a focus algorithm to locate in-focus image blocks among the multi-focus source images and construct the reference all-in-focus image. Five test scenes and six evaluation metrics were used to compare the proposed method against representative state-of-the-art algorithms. Experimental results quantitatively demonstrate that the method outperforms existing ones in both speed and quality (in terms of comprehensive fusion metrics). The generated images can potentially serve as reference all-in-focus images.
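The region-wise selection described above can be sketched as follows, assuming the depth map has already been segmented into integer-labeled regions. The energy of the Laplacian stands in for the paper's focus algorithm, whose exact definition is not reproduced here.

```python
import numpy as np
import cv2

def fuse_by_depth_segments(sources, labels):
    """sources: list of HxW grayscale images focused at different depths.
    labels: HxW int array from segmenting the depth map (one label per region)."""
    # Laplacian response per source; its energy is a common sharpness proxy.
    laps = [cv2.Laplacian(s.astype(np.float64), cv2.CV_64F) for s in sources]
    fused = np.zeros_like(sources[0])
    for seg in np.unique(labels):
        mask = labels == seg
        # Pick the source that is sharpest inside this depth segment.
        scores = [np.sum(l[mask] ** 2) for l in laps]
        fused[mask] = sources[int(np.argmax(scores))][mask]
    return fused
```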


2020 · Vol 64 · pp. 71-91
Author(s): Yu Liu, Lei Wang, Juan Cheng, Chang Li, Xun Chen

Sensors · 2020 · Vol 20 (14) · pp. 3901
Author(s): Tao Pan, Jiaqin Jiang, Jian Yao, Bin Wang, Bin Tan

Multi-focus image fusion has become a very practical image processing task: it uses multiple images focused at different depth planes to create an all-in-focus image. Despite extensive study, the performance of existing methods is still limited by inaccurate detection of the focus regions to fuse. In this paper, we therefore propose a novel U-shape network that generates an accurate decision map for multi-focus image fusion. The Siamese encoder of the network separately preserves low-level cues with rich spatial detail and high-level semantic information from each source image. We introduce ResBlocks to expand the receptive field, strengthening the network's ability to distinguish focused from defocused regions. In the bridge stage between encoder and decoder, spatial pyramid pooling serves as a global perception fusion module that captures sufficient context for learning the decision map. Finally, supervision uses a hybrid loss combining binary cross-entropy and structural similarity. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance.
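The hybrid supervision is concrete enough to sketch. Below is a minimal PyTorch version combining binary cross-entropy with a simplified single-scale SSIM term; the 11×11 window and the equal weighting `alpha=0.5` are our assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified single-scale SSIM using 11x11 mean-pooling windows.
    mu_x = F.avg_pool2d(x, 11, 1, 5)
    mu_y = F.avg_pool2d(y, 11, 1, 5)
    var_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def hybrid_loss(pred_map, gt_map, alpha=0.5):
    """BCE + (1 - SSIM) on the decision map; pred_map is a sigmoid output in [0, 1].
    The weighting alpha is a placeholder, not the paper's setting."""
    bce = F.binary_cross_entropy(pred_map, gt_map)
    return alpha * bce + (1 - alpha) * (1 - ssim(pred_map, gt_map))
```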


2016
Author(s): Michael Giansiracusa, Adam Lutz, Neal Messer, Soundararajan Ezekiel, Mark Alford, ...

2020 · Vol 34 (07) · pp. 12797-12804
Author(s): Hao Zhang, Han Xu, Yang Xiao, Xiaojie Guo, Jiayi Ma

In this paper, we propose a fast unified image fusion network based on proportional maintenance of gradient and intensity (PMGI), which realizes a variety of image fusion tasks end-to-end, including infrared and visible image fusion, multi-exposure image fusion, medical image fusion, multi-focus image fusion, and pan-sharpening. We cast image fusion uniformly as the problem of proportionally maintaining the texture and intensity of the source images. On the one hand, the network is divided into a gradient path and an intensity path for information extraction, with feature reuse within each path to avoid information loss from convolution. A pathwise transfer block exchanges information between the paths, which not only pre-fuses gradient and intensity information but also enriches the features processed downstream. On the other hand, we define a uniform loss function based on these two kinds of information, which adapts to the different fusion tasks. Experiments on publicly available datasets demonstrate the superiority of PMGI over the state-of-the-art in both visual quality and quantitative metrics across a variety of fusion tasks. In addition, our method is faster than the state-of-the-art.
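A rough PyTorch sketch of the uniform loss idea: the fused image is pulled toward proportional blends of the sources' intensities and gradients. The equal blend weights and the Laplacian as the gradient operator are placeholders; the paper's task-specific settings are not reproduced.

```python
import torch
import torch.nn.functional as F

# Simple Laplacian kernel as a stand-in texture/gradient operator (assumption).
LAPLACIAN = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]).view(1, 1, 3, 3)

def grad(img):
    # img: (N, 1, H, W) single-channel batch.
    return F.conv2d(img, LAPLACIAN.to(img), padding=1)

def pmgi_style_loss(fused, src1, src2, w_int=(0.5, 0.5), w_grad=(0.5, 0.5)):
    """Proportional maintenance of intensity and gradient; weights are placeholders."""
    loss_int = F.mse_loss(fused, w_int[0] * src1 + w_int[1] * src2)
    loss_grad = F.mse_loss(grad(fused), w_grad[0] * grad(src1) + w_grad[1] * grad(src2))
    return loss_int + loss_grad
```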


2015 · Vol 109 (6) · pp. 5-9
Author(s): Rajvi Patel, Manali Rajput, Pramit Parekh

Author(s): Bin Yang, Jinying Zhong, Yuehua Li, Zhongze Chen

The aim of multi-focus image fusion is to create a synthetic all-in-focus image from several images, each obtained with a different focus setting. However, if the resolution of the source images is low, images fused with traditional methods will also be of low quality, which hinders further image analysis even when the fused image is all-in-focus. This paper presents a novel joint multi-focus image fusion and super-resolution method based on a convolutional neural network (CNN). The first-level network features of the different source images are fused under the guidance of local clarity computed from the source images, and the final high-resolution fused image is obtained with reconstruction network filters that act like averaging filters. Experimental results demonstrate that the proposed approach generates fused images of better visual quality with acceptable computational efficiency compared with other state-of-the-art works.
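A toy illustration of clarity-guided fusion, applied here directly to pixels rather than to CNN features as in the actual method: local clarity is approximated by windowed Laplacian energy, and each source is weighted by its relative clarity. The window size and the clarity measure are our assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def local_clarity(img, window=9):
    # Windowed energy of the Laplacian: larger where the image is in focus.
    return uniform_filter(laplace(img.astype(np.float64)) ** 2, size=window)

def clarity_guided_fuse(a, b, window=9):
    """Weight each source by its relative local clarity before blending."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    ca, cb = local_clarity(a, window), local_clarity(b, window)
    wa = ca / (ca + cb + 1e-12)  # epsilon guards flat regions
    return wa * a + (1 - wa) * b
```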

