Multi-focus image fusion: A survey of the state of the art

2020 · Vol 64 · pp. 71-91 · Author(s): Yu Liu, Lei Wang, Juan Cheng, Chang Li, Xun Chen

2020 · Vol 34 (07) · pp. 12797-12804 · Author(s): Hao Zhang, Han Xu, Yang Xiao, Xiaojie Guo, Jiayi Ma

In this paper, we propose a fast unified image fusion network based on proportional maintenance of gradient and intensity (PMGI), which realizes a variety of image fusion tasks end-to-end, including infrared and visible image fusion, multi-exposure image fusion, medical image fusion, multi-focus image fusion and pan-sharpening. We unify image fusion as the problem of proportionally maintaining the texture and intensity of the source images. On the one hand, the network is divided into a gradient path and an intensity path for information extraction. We perform feature reuse within each path to avoid loss of information due to convolution. At the same time, we introduce a pathwise transfer block to exchange information between the paths, which not only pre-fuses the gradient and intensity information but also enhances the information to be processed later. On the other hand, we define a uniform loss function based on these two kinds of information, which can adapt to different fusion tasks. Experiments on publicly available datasets demonstrate the superiority of our PMGI over the state of the art in terms of both visual effect and quantitative metrics across a variety of fusion tasks. In addition, our method is faster than the state-of-the-art methods.
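As a rough illustration of the "proportional maintenance" idea in this abstract, the unified loss can be sketched as an intensity term plus a gradient term, each measured against a weighted mix of the sources. The weights, gradient operator, and balance factor below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def grad_mag(img):
    # Finite-difference gradient magnitude (a stand-in for the paper's
    # gradient/texture operator).
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    return np.abs(gx) + np.abs(gy)

def pmgi_loss(fused, src1, src2, w_int=(0.5, 0.5), w_grad=(0.5, 0.5), lam=1.0):
    # Intensity term: the fused intensity stays close to a weighted mix
    # of the sources; gradient term: the same constraint on gradients.
    l_int = np.mean((fused - (w_int[0] * src1 + w_int[1] * src2)) ** 2)
    gf, g1, g2 = grad_mag(fused), grad_mag(src1), grad_mag(src2)
    l_grad = np.mean((gf - (w_grad[0] * g1 + w_grad[1] * g2)) ** 2)
    return l_int + lam * l_grad
```

Changing the per-task weights is what lets one loss form cover the different fusion tasks listed above.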


2021 · Vol 09 (06) · pp. 73-108 · Author(s): Bing Li, Yong Xian, Daqiao Zhang, Juan Su, Xiaoxiang Hu, ...

Sensors · 2020 · Vol 20 (3) · pp. 842 · Author(s): Yu Zheng, Zhuxian Zhang, Lu Feng, Peidong Zhu, Feng Zhou

A weak reflected signal is one of the main problems in a recently developed remote sensing tool, the passive GNSS-based radar (GNSS radar). To address this issue, an enhanced GNSS radar imaging scheme based on coherently integrating multiple satellites is proposed. In the proposed scheme, to avoid direct-signal interference at the surveillance antenna, the satellites used as transmitters of opportunity are placed in a backscattering geometry. To coherently accumulate the echo-signal magnitudes of the scene centre in the targeted sensing region illuminated by the selected satellites, a coordinate-alignment operator is applied to the respective range domains after the parallel range compressions, aligning the pseudorandom noise (PRN) code phases. Thereafter, the coordinate-aligned range-compressed signals of the selected satellites are coherently integrated along the azimuth domain, so that the imaging gain is improved and azimuth processing is accomplished in a single operation. Theoretical analysis and field proof-of-concept experiments indicate that, compared to both the conventional bistatic imaging scheme and the state-of-the-art multi-image fusion scheme, the proposed scheme provides a higher imaging gain; compared to the state-of-the-art multi-image fusion scheme, it also has lower computational complexity and a faster algorithm.
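The core alignment-then-integration step described above can be illustrated with a toy one-dimensional example: each satellite's range-compressed profile is circularly shifted so the scene-centre peak (set by its PRN code phase) lands in a common reference bin, then the profiles are summed coherently. The signal model and peak-based shift estimation here are simplifications for illustration only:

```python
import numpy as np

def align_and_integrate(range_profiles, ref_bin):
    # Each satellite's range-compressed profile peaks at a different PRN
    # code phase; circularly shift each profile so the scene-centre peak
    # lands in a common reference bin, then sum coherently.
    aligned = []
    for prof in range_profiles:
        peak = int(np.argmax(np.abs(prof)))
        aligned.append(np.roll(prof, ref_bin - peak))
    return np.sum(aligned, axis=0)

# Toy demo: three satellites observe the same scene-centre echo, but the
# echo appears at a different range bin for each before alignment.
rng = np.random.default_rng(0)
n, ref_bin = 64, 10
profiles = []
for peak in (5, 20, 33):
    p = rng.normal(scale=0.1, size=n) + 1j * rng.normal(scale=0.1, size=n)
    p[peak] += 3.0  # scene-centre echo (same complex phase on all satellites)
    profiles.append(p)

integrated = align_and_integrate(profiles, ref_bin)
# After alignment the three echoes add in phase, so the peak magnitude
# approaches three times that of a single profile (the imaging gain),
# while the noise adds incoherently.
```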


Sensors · 2019 · Vol 19 (6) · pp. 1409 · Author(s): Hang Liu, Hengyu Li, Jun Luo, Shaorong Xie, Yu Sun

Multi-focus image fusion is a technique for obtaining an all-in-focus image, in which all objects are in focus, thereby extending the limited depth of field (DoF) of an imaging system. Unlike traditional RGB-based methods, this paper presents a new multi-focus image fusion method assisted by depth sensing. In this work, a depth sensor is used together with a colour camera to capture images of a scene. A graph-based segmentation algorithm segments the depth map from the depth sensor, and the segmented regions guide a focus algorithm that locates in-focus image blocks among the multi-focus source images to construct the reference all-in-focus image. Five test scenes and six evaluation metrics were used to compare the proposed method with representative state-of-the-art algorithms. Experimental results quantitatively demonstrate that this method outperforms existing methods in both speed and quality (in terms of comprehensive fusion metrics). The generated images can potentially be used as reference all-in-focus images.
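The region-guided selection step can be sketched as follows: given a segmentation (in the paper, derived from the depth map), each region of the fused image is filled from the source whose content in that region scores highest under a focus measure. The Laplacian-energy focus measure and the toy scene below are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def laplacian_energy(img):
    # Variance of a 4-neighbour Laplacian: in-focus (sharp) content has a
    # stronger high-frequency response than defocused (smooth) content.
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.var(lap)

def fuse_by_regions(sources, labels):
    # For each segmented region (in the paper, regions come from a
    # graph-based segmentation of the depth map), copy pixels from the
    # source image whose content in that region is most in focus.
    fused = np.zeros_like(sources[0])
    for lab in np.unique(labels):
        mask = labels == lab
        best = max(sources, key=lambda s: laplacian_energy(s * mask))
        fused[mask] = best[mask]
    return fused

# Toy scene: a checkerboard texture, sharp on the left in src1 and on
# the right in src2 (flat grey stands in for defocus blur).
pattern = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
src1, src2 = pattern.copy(), pattern.copy()
src1[:, 4:] = 0.5
src2[:, :4] = 0.5
labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1  # two "depth" regions: left and right halves

fused = fuse_by_regions([src1, src2], labels)  # recovers the full texture
```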


Sensors · 2020 · Vol 20 (14) · pp. 3901 · Author(s): Tao Pan, Jiaqin Jiang, Jian Yao, Bin Wang, Bin Tan

Multi-focus image fusion has become a very practical image processing task. It uses multiple images focused on various depth planes to create an all-in-focus image. Although extensive studies have been conducted, the performance of existing methods is still limited by inaccurate detection of the focus regions to fuse. Therefore, in this paper, we propose a novel U-shape network that generates an accurate decision map for multi-focus image fusion. The Siamese encoder of our U-shape network preserves the low-level cues with rich spatial details and the high-level semantic information from the source images separately. Moreover, we introduce ResBlocks to expand the receptive field, which enhances the ability of our network to distinguish between focused and defocused regions. In addition, in the bridge stage between the encoder and decoder, spatial pyramid pooling is adopted as a global perception fusion module to capture sufficient context information for learning the decision map. Finally, we use a hybrid loss that combines the binary cross-entropy loss and the structural similarity loss for supervision. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance.
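The hybrid supervision described in the last step can be sketched numerically; the single-window SSIM and the equal 0.5/0.5 weighting below are simplifying assumptions (in practice SSIM is computed over sliding local windows and the weights are tuned):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy on the decision map.
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # SSIM computed over the whole map in a single window (a
    # simplification of the usual windowed SSIM).
    mx, my = x.mean(), y.mean()
    cxy = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cxy + c2)
    den = (mx**2 + my**2 + c1) * (x.var() + y.var() + c2)
    return num / den

def hybrid_loss(pred, target, alpha=0.5):
    # Weighted sum of BCE and an SSIM dissimilarity term: BCE drives
    # per-pixel correctness, SSIM drives structural agreement.
    return alpha * bce(pred, target) + (1 - alpha) * (1 - ssim_global(pred, target))
```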


2020 · Vol 6 (7) · pp. 60 · Author(s): Rabia Zafar, Muhammad Shahid Farid, Muhammad Hassan Khan

Image fusion is a process that integrates similar types of images collected from heterogeneous sources into one image in which the information is more definite and certain. The resultant image is therefore expected to be more explanatory and informative for both human and machine perception. Different image combination methods have been presented to consolidate significant data from a collection of images into one image. Given its applications and advantages in a variety of fields such as remote sensing, surveillance, and medical imaging, it is important to understand image fusion algorithms and to study them comparatively. This paper presents a review of the present state-of-the-art and well-known image fusion techniques. The performance of each algorithm is assessed qualitatively and quantitatively on two benchmark multi-focus image datasets. We also produce a multi-focus image fusion dataset by collecting the test images widely used in different studies. The quantitative evaluation of fusion results is performed using a set of image fusion quality assessment metrics, and performance is also evaluated using different statistical measures. Another contribution of this paper is a multi-focus image fusion library; to the best of our knowledge, no such library exists so far. The library provides implementations of numerous state-of-the-art image fusion algorithms and is made publicly available on the project website.
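One widely used family of fusion quality assessment metrics of the kind mentioned above is based on mutual information between the fused image and each source (the exact metric set used in the paper may differ); a histogram-based sketch:

```python
import numpy as np

def mutual_info(a, b, bins=32):
    # Histogram-based mutual information (in nats) between two images.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused, src1, src2):
    # MI-based fusion quality: how much information the fused image
    # carries about each of the two sources (higher is better).
    return mutual_info(fused, src1) + mutual_info(fused, src2)
```

Note that histogram-based MI estimates carry a positive bias for small images, so scores are only comparable at a fixed image size and bin count.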


2007 · Vol 8 (2) · pp. 114-118 · Author(s): A. Ardeshir Goshtasby, Stavri Nikolov

2014 · Vol 19 · pp. 4-19 · Author(s): Alex Pappachen James, Belur V. Dasarathy

2017 · Vol 33 · pp. 100-112 · Author(s): Shutao Li, Xudong Kang, Leyuan Fang, Jianwen Hu, Haitao Yin
