Multi-focus Image Fusion Based on Multi-scale Gradients and Image Matting

2021 ◽  
pp. 1-1
Author(s):  
Jun Chen ◽  
Xuejiao Li ◽  
Linbo Luo ◽  
Jiayi Ma

2021 ◽  
Vol 38 (2) ◽  
pp. 247-259
Author(s):  
Asan Ihsan Abas ◽  
Nurdan Akhan Baykan

Many image capture devices have a limited, single focal plane; as a result, objects at different distances appear with different degrees of focus in a single captured image. Image fusion can be defined as the acquisition of multiple focused objects in a single image by combining the important information from two or more images into one image. In this paper, a new multi-focus image fusion method based on the Bat Algorithm (BA) is presented within a Multi-Scale Transform (MST) framework to overcome the limitations of the standard MST approach. Firstly, a specific MST (Laplacian Pyramid or Curvelet Transform) is applied to the two source images to obtain their low-pass and high-pass bands. Secondly, the optimization algorithm is used to find optimal weights for the coefficients in the low-pass bands to improve the accuracy of the fused image. Finally, the fused multi-focus image is reconstructed by the inverse MST. The experimental results are compared with those of different methods using reference and no-reference evaluation metrics to evaluate the performance of the image fusion methods.
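The pipeline above can be sketched with a minimal two-band stand-in for the MST: a blur produces the low-pass band, the residual is the high-pass band, the low-pass bands are combined with a weight (which the Bat Algorithm would optimize; a fixed `w` is used here as a placeholder), and the high-pass bands are fused by an absolute-maximum rule. The function names and the box-blur kernel are illustrative assumptions, not the paper's exact transform.

```python
import numpy as np

def box_blur(img, k=5):
    # Box blur as a simple stand-in for the MST's low-pass analysis filter.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(a, b, w=0.5):
    """Two-band MST-style fusion: weighted low-pass, max-abs high-pass."""
    la, lb = box_blur(a), box_blur(b)
    ha, hb = a - la, b - lb                      # high-pass residuals
    low = w * la + (1.0 - w) * lb                # weight w: BA's search variable
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low + high                            # inverse transform: sum of bands
```

With identical inputs the decomposition/recombination is exact, which is a quick sanity check that the "inverse MST" step is consistent.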


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion is an important method used to combine the focused parts from source multi-focus images into a single full-focus image. Currently, the key to multi-focus image fusion is how to accurately detect the focused regions, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. Firstly, this method uses two groups of large-scale and small-scale decomposition schemes that are structurally complementary to perform two-scale double-layer singular value decomposition of each image separately and obtain low-frequency and high-frequency components. Then, the low-frequency components are fused by a rule that integrates image local energy with edge energy. The high-frequency components are fused by the parameter-adaptive pulse-coupled neural network model (PA-PCNN); according to the feature information contained in each decomposition layer of the high-frequency components, different detailed features are selected as the external stimulus input of the PA-PCNN. Finally, according to the structurally complementary two-scale decomposition of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained. By refining the initial decision maps, the final fusion decision map is obtained to complete the image fusion. In addition, the proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that the proposed method can more accurately distinguish the focused and non-focused areas for both registered and unregistered images, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
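The low-frequency rule described above, which "integrates image local energy with edge energy", can be sketched as a per-pixel salience comparison. The windowed-energy form, the gradient approximation, and the blending weight `alpha` are assumptions for illustration; the paper's exact combination may differ.

```python
import numpy as np

def local_energy(img, k=3):
    # Windowed sum of squared intensities: local energy around each pixel.
    pad = k // 2
    p = np.pad(img ** 2, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def edge_energy(img):
    # Squared first differences as a simple gradient (edge) energy.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:] = np.diff(img, axis=1)
    gy[1:, :] = np.diff(img, axis=0)
    return gx ** 2 + gy ** 2

def fuse_low(a, b, alpha=0.5):
    # Pick, per pixel, the source whose combined salience is larger.
    sa = local_energy(a) + alpha * edge_energy(a)
    sb = local_energy(b) + alpha * edge_energy(b)
    return np.where(sa >= sb, a, b)
```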


Multi-focus image fusion has established itself as a useful tool for reducing the amount of raw data, and it aims to overcome the finite depth of field of imaging cameras by combining information from multiple images of the same scene. Most existing fusion algorithms use multi-scale decomposition (MSD) to fuse the source images. MSD-based fusion algorithms provide much better performance than conventional fusion methods. In image fusion algorithms based on multi-scale decomposition, how to make full use of the characteristics of the coefficients to fuse images is a key problem. This paper proposes a modified contourlet transform (MCT) based on wavelets and nonsubsampled directional filter banks (NSDFB). The image is decomposed in the wavelet domain, and each highpass subband of the wavelet decomposition is further decomposed into multiple directional subbands using NSDFB. The MCT has the important features of directionality and translation invariance. Furthermore, the MCT and a novel region energy strategy are exploited to perform the image fusion. Simulation results show that the proposed method improves the fusion results both visually and in terms of objective evaluation parameters.
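The wavelet stage of the MCT pipeline (decompose into an approximation subband and highpass subbands, fuse the subbands, invert) can be illustrated with a one-level 2-D Haar transform. The NSDFB directional split is omitted, and a per-coefficient magnitude rule stands in for the paper's region-energy strategy; both simplifications are assumptions for brevity.

```python
import numpy as np

def haar2d(img):
    """One analysis level: approximation (LL) plus detail subbands (LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4,   # LL: lowpass approximation
            (a + b - c - d) / 4,   # LH
            (a - b + c - d) / 4,   # HL
            (a - b - c + d) / 4)   # HH

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d (perfect reconstruction).
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_wavelet(img1, img2):
    s1, s2 = haar2d(img1), haar2d(img2)
    ll = (s1[0] + s2[0]) / 2                           # average approximations
    highs = [np.where(np.abs(u) >= np.abs(v), u, v)    # magnitude rule as a
             for u, v in zip(s1[1:], s2[1:])]          # stand-in for region energy
    return ihaar2d(ll, *highs)
```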


Author(s):  
Rajesh Dharmaraj ◽  
Christopher Durairaj Daniel Dharmaraj

Image fusion is used to improve the quality of images by combining two images of the same scene obtained by different techniques. The present work deals with the effective extraction of pixel information from the source images, which holds the key to multi-focus image fusion. A purely vicinity-based image matting algorithm, which relies on the close pixel clusters in the input images and their trimap, is presented in this article. The pixel cluster size N plays a significant role in deciding the identity of an unknown pixel. The distance between each unknown pixel and the foreground and background pixel clusters is computed as the minimum quasi-Euclidean distance. The ratio of these minimum distances gives the alpha value of each unknown pixel in the image. Finally, the focus regions are blended together to obtain the resultant fused image. On assessing the results visually and objectively, it is concluded that the proposed method works better at extracting the focused pixels and improving fusion quality, compared with other existing fusion methods.
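The distance-ratio step above can be sketched as follows. Plain Euclidean distance is used as a stand-in for the quasi-Euclidean metric, and the exact ratio form `dB / (dF + dB)` (alpha near 1 for pixels close to the foreground cluster) is an assumed formulation, not necessarily the article's.

```python
import numpy as np

def dist(p, q):
    # Euclidean distance in pixel-value space (stand-in for quasi-Euclidean).
    return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))

def alpha_for_pixel(px, fg_cluster, bg_cluster):
    """Assumed rule: alpha = dB / (dF + dB), from minimum cluster distances."""
    dF = min(dist(px, f) for f in fg_cluster)  # nearest foreground sample
    dB = min(dist(px, b) for b in bg_cluster)  # nearest background sample
    return dB / (dF + dB + 1e-12)              # epsilon guards dF = dB = 0
```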


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 472 ◽  
Author(s):  
Sarmad Maqsood ◽  
Umer Javed ◽  
Muhammad Mohsin Riaz ◽  
Muhammad Muzammil ◽  
Fazal Muhammad ◽  
...  

Multi-focus image fusion is an essential method for obtaining an all-in-focus image from multiple source images. The fused image eliminates the out-of-focus regions, and the resultant image contains only sharp, focused regions. A novel multiscale image fusion system based on contrast enhancement, spatial gradient information, and multiscale image matting is proposed to extract the focused-region information from multiple source images. In the proposed image fusion approach, the multi-focus source images are first refined by an image enhancement algorithm so that the intensity distribution is enhanced for superior visualization. An edge detection method based on the spatial gradient is employed to obtain edge information from the contrast-stretched images. This improved edge information is further utilized by a multiscale window technique to produce local and global activity maps. Furthermore, a trimap and decision maps are obtained based upon the information provided by these near- and far-focus activity maps. Finally, the fused image is obtained by using the enhanced decision maps and a fusion rule. The proposed multiscale image matting (MSIM) makes full use of the spatial consistency and the correlation among source images and therefore obtains superior performance at object boundaries compared with region-based methods. The performance of the proposed method is compared with some of the latest techniques through qualitative and quantitative evaluation.
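The spatial-gradient activity-map step can be sketched as below: gradient magnitudes are accumulated over a window to give a per-pixel activity score, and a binary decision map marks which source is better focused at each pixel. The window size, the first-difference gradient, and the hard comparison are illustrative assumptions; the paper additionally uses contrast enhancement, multiple window scales, and a trimap, which are omitted here.

```python
import numpy as np

def gradient_activity(img, k=7):
    # Gradient magnitude from first differences.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:] = np.diff(img, axis=1)
    gy[1:, :] = np.diff(img, axis=0)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Windowed sum of gradient magnitude = local focus activity.
    pad = k // 2
    p = np.pad(mag, pad, mode="edge")
    act = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            act += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return act

def decision_map(a, b):
    # 1.0 where source a is more active (better focused), else 0.0.
    return (gradient_activity(a) >= gradient_activity(b)).astype(float)
```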


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Lei Wang ◽  
ZhouQi Liu ◽  
Jin Huang ◽  
Cong Liu ◽  
LongBo Zhang ◽  
...  

Traditional methods for multi-focus image fusion, such as typical methods based on multi-scale geometric analysis, are usually restricted by their sparse representation ability and by the efficiency with which the fusion rules transfer the captured features. Aiming to integrate partially focused images into a fully focused image of high quality, a complex-shearlet-features-motivated generative adversarial network is constructed for multi-focus image fusion in this paper. Different from the popularly used wavelet, contourlet, and shearlet, the complex shearlet provides more flexible multiple scales, anisotropy, and directional sub-bands with approximate shift invariance. Therefore, features in the complex shearlet domain are more effective. With the help of the generative adversarial network, the whole procedure of multi-focus fusion is modeled as a process of adversarial learning. Finally, several experiments are implemented, and the results show that the proposed method outperforms popular fusion algorithms in terms of four typical objective metrics and in a comparison of visual appearance.


2019 ◽  
Vol 85 ◽  
pp. 26-35 ◽  
Author(s):  
Hafiz Tayyab Mustafa ◽  
Jie Yang ◽  
Masoumeh Zareapoor
