Multi-Focus Fused Image Using Inception-ResNet V2

Multi-focus image fusion is the process of integrating pictures of the same scene, each focused on different targets, into a single image. Because directly capturing an all-in-focus image of a 3D scene is difficult, many multi-focus image fusion techniques generate it from several images focused at different depths. The two key components of image fusion are activity-level measurement and the fusion rule. Activity-level measurement is usually implemented by designing local filters to extract high-frequency detail, and the clarity information of the different source images is then combined through elaborately designed rules to obtain a clarity/focus map. Earlier fusion algorithms therefore extract high-frequency detail with neighborhood filters and adopt various fusion conventions to produce the fused image, but the performance of these existing techniques is rarely adequate. Convolutional neural networks have recently been used to address the multi-focus image fusion problem. In this paper, a two-stage boundary-aware approach based on deep neural networks is proposed: (1) a deep network is used to extract the full defocus information of the two source images, and (2) Inception-ResNet v2 is used to handle patches both far from and close to the focused/defocused boundary. The results illustrate that the proposed approach yields a satisfactory fused image that is superior to several state-of-the-art fusion algorithms in both visual and objective evaluations.
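The two-stage idea, estimating a focus map for the whole image and then treating patches near the focused/defocused boundary differently, can be sketched roughly as follows. This is only a minimal illustration, not the paper's network: the Inception-ResNet v2 classifier is replaced by a simple variance-of-Laplacian clarity score, and the boundary band, window size, and blending sigma are illustrative assumptions.

```python
# Minimal sketch of two-stage, boundary-aware multi-focus fusion.
# The deep clarity classifier from the paper is replaced here by a
# variance-of-Laplacian focus measure; everything else is illustrative.
import numpy as np
from scipy.ndimage import laplace, uniform_filter, gaussian_filter

def focus_map(img, win=15):
    """Per-pixel clarity score: local variance of the Laplacian response."""
    lap = laplace(img.astype(np.float64))
    mean = uniform_filter(lap, win)
    return uniform_filter(lap * lap, win) - mean * mean

def fuse_boundary_aware(src_a, src_b, win=15, blend_sigma=4.0):
    a, b = src_a.astype(np.float64), src_b.astype(np.float64)
    # Stage 1: hard decision map from the clarity scores.
    decision = (focus_map(a, win) >= focus_map(b, win)).astype(np.float64)
    # Stage 2: near the focused/defocused boundary, fall back to a soft
    # blend so patches straddling the boundary are not assigned abruptly.
    soft = gaussian_filter(decision, blend_sigma)
    boundary = np.abs(soft - 0.5) < 0.4          # rough boundary band
    weight = np.where(boundary, soft, decision)  # soft only at the boundary
    return weight * a + (1.0 - weight) * b

# Usage: fused = fuse_boundary_aware(img_near_focus, img_far_focus)
```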

2014 · Vol 14 (2) · pp. 102-108 · Author(s): Yong Yang, Shuying Huang, Junfeng Gao, Zhongsheng Qian

Abstract In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are employed separately to combine the low-frequency and high-frequency coefficients. In the low-frequency domain, the coefficients with the maximum sharpness focus measure are selected as coefficients of the fused image, and a maximum neighboring energy based fusion scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resultant fused image, a consistency verification procedure is applied to the combined coefficients. The performance assessment of the proposed method was conducted on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indexes than several existing fusion methods, making it an effective multi-focus image fusion method.
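A compact sketch of this kind of DWT fusion pipeline is given below, using PyWavelets. The local-variance sharpness measure, the 3x3 windows, and the wavelet choice stand in for the paper's specific sharpness focus measure and settings, and the consistency verification is approximated by a majority (median) filter on the decision mask.

```python
# Sketch of DWT-based fusion with window-based coefficient selection
# (PyWavelets). Sharpness and energy measures are simple stand-ins.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, median_filter

def local_variance(x, win=3):
    m = uniform_filter(x, win)
    return uniform_filter(x * x, win) - m * m

def local_energy(x, win=3):
    return uniform_filter(x * x, win)

def dwt_fuse(img_a, img_b, wavelet="db2", win=3):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b.astype(np.float64), wavelet)

    # Low-frequency band: pick the coefficient whose window looks sharper,
    # then apply consistency verification (majority filter on the mask).
    mask = local_variance(cA1, win) >= local_variance(cA2, win)
    mask = median_filter(mask.astype(np.uint8), 3).astype(bool)
    cA = np.where(mask, cA1, cA2)

    # High-frequency bands: maximum neighboring energy rule.
    def fuse_high(h1, h2):
        m = local_energy(h1, win) >= local_energy(h2, win)
        m = median_filter(m.astype(np.uint8), 3).astype(bool)
        return np.where(m, h1, h2)

    details = tuple(fuse_high(x, y)
                    for x, y in ((cH1, cH2), (cV1, cV2), (cD1, cD2)))
    return pywt.idwt2((cA, details), wavelet)
```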


2014 · Vol 687-691 · pp. 3656-3661 · Author(s): Min Fen Shen, Zhi Fei Su, Jin Yao Yang, Li Sha Sun

Because of the limited depth of field of optical lenses, objects at different distances usually cannot all be in focus in a single picture. Multi-focus image fusion can produce a fused image in which all targets are clear, improving the utilization of image information and aiding further computer processing. According to the imaging characteristics of multi-focus images, a multi-focus image fusion algorithm based on the redundant wavelet transform is proposed in this paper. The selection principles for the high-frequency and low-frequency coefficients of the redundant wavelet decomposition are discussed separately. The fusion rule is that the low-frequency coefficients are selected based on local area energy, while the high-frequency coefficients are selected based on local variance combined with a matching threshold. As can be seen from the simulation results, the proposed method retains more useful information from the source images and yields a fused image in which all targets are clear.
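A rough sketch of these rules is given below, using the stationary (undecimated) wavelet transform in PyWavelets as the redundant decomposition. The matching measure, the 0.75 threshold, and the window size are illustrative assumptions, and the image sides are assumed to be even, as required by swt2 at level 1.

```python
# Sketch: redundant-wavelet fusion with local-energy (low band) and
# local-variance-plus-matching-threshold (high band) rules.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

WIN, MATCH_T = 3, 0.75  # illustrative window size and matching threshold

def _var(x):  # local variance
    m = uniform_filter(x, WIN)
    return uniform_filter(x * x, WIN) - m * m

def rwt_fuse(img_a, img_b, wavelet="db1"):
    a, b = img_a.astype(np.float64), img_b.astype(np.float64)
    (cA1, (cH1, cV1, cD1)), = pywt.swt2(a, wavelet, level=1)
    (cA2, (cH2, cV2, cD2)), = pywt.swt2(b, wavelet, level=1)

    # Low band: keep the coefficient with the larger local area energy.
    eA1 = uniform_filter(cA1 * cA1, WIN)
    eA2 = uniform_filter(cA2 * cA2, WIN)
    cA = np.where(eA1 >= eA2, cA1, cA2)

    # High bands: compare local variance, but where the two windows match
    # closely (normalized match above the threshold), average instead.
    def fuse_high(h1, h2):
        v1, v2 = _var(h1), _var(h2)
        num = 2.0 * uniform_filter(h1 * h2, WIN)
        den = uniform_filter(h1 * h1, WIN) + uniform_filter(h2 * h2, WIN) + 1e-12
        match = num / den
        select = np.where(v1 >= v2, h1, h2)
        return np.where(match > MATCH_T, 0.5 * (h1 + h2), select)

    details = (fuse_high(cH1, cH2), fuse_high(cV1, cV2), fuse_high(cD1, cD2))
    return pywt.iswt2([(cA, details)], wavelet)
```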


2013 · Vol 401-403 · pp. 1381-1384 · Author(s): Zi Juan Luo, Shuai Ding

It is often difficult to obtain an image in which all relevant objects are in focus, because of the limited depth of focus of optical lenses. Multi-focus image fusion can solve this problem effectively. The nonsubsampled contourlet transform (NSCT) offers multiple scales and varying directions; when it is introduced into image fusion, the characteristics of the original images are captured better and more information is available for fusion. A new multi-focus image fusion method based on the NSCT with a region-statistics fusion rule is proposed in this paper. Firstly, the differently focused images are decomposed using the NSCT. Then the low-frequency bands are fused using a weighted average, and the high-frequency bands are fused using the region-statistics rule. Next, the fused image is obtained by the inverse NSCT. Finally, the experimental results are presented and compared with those of a contourlet-transform-based method. Experiments show that the approach achieves better results than the method based on the contourlet transform.
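No NSCT implementation ships with the common Python scientific libraries, so the sketch below only illustrates the two fusion rules named in the abstract, weighted averaging for the low-frequency band and region-statistics (local variance) selection for the directional high-frequency bands, applied to subbands that are assumed to come from an NSCT decomposition obtained elsewhere; the `nsct_reconstruct` call in the usage note is hypothetical.

```python
# Sketch of the fusion rules only: the NSCT decomposition/reconstruction
# is assumed to be provided by an external implementation.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(low_a, low_b, w=0.5):
    """Weighted average of the two low-frequency subbands."""
    return w * low_a + (1.0 - w) * low_b

def fuse_high(high_a, high_b, win=5):
    """Region-statistics rule: keep the coefficient whose local
    neighborhood has the larger variance (more salient detail)."""
    def var(x):
        m = uniform_filter(x, win)
        return uniform_filter(x * x, win) - m * m
    return np.where(var(high_a) >= var(high_b), high_a, high_b)

# Usage with a hypothetical NSCT front end:
#   low_f = fuse_low(low_a, low_b)
#   highs_f = [[fuse_high(a, b) for a, b in zip(sa, sb)]
#              for sa, sb in zip(highs_a, highs_b)]
#   fused = nsct_reconstruct(low_f, highs_f)   # external, not shown
```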


Electronics · 2020 · Vol 9 (3) · pp. 472 · Author(s): Sarmad Maqsood, Umer Javed, Muhammad Mohsin Riaz, Muhammad Muzammil, Fazal Muhammad, ...

Multi-focus image fusion is an essential method for obtaining an all-in-focus image from multiple source images. The fused image eliminates the out-of-focus regions, so the result contains only sharp, focused regions. A novel multiscale image fusion system based on contrast enhancement, spatial gradient information, and multiscale image matting is proposed to extract the focused-region information from multiple source images. In the proposed approach, the multi-focus source images are first refined by an image enhancement algorithm so that the intensity distribution is improved for better visualization. An edge detection method based on the spatial gradient is then applied to the contrast-stretched images to obtain edge information. This improved edge information is further processed with a multiscale window technique to produce local and global activity maps. A trimap and decision maps are then derived from these near-focus and far-focus activity maps. Finally, the fused image is obtained by applying the enhanced decision maps and the fusion rule. The proposed multiscale image matting (MSIM) makes full use of spatial consistency and the correlation among source images and therefore achieves superior performance at object boundaries compared with region-based methods. The performance of the proposed method is compared with several recent techniques through qualitative and quantitative evaluation.
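A heavily reduced sketch of the front end of such a pipeline, covering contrast stretching, spatial-gradient edge information, multiscale window activity maps, and a trimap/decision map, is shown below. The image-matting refinement from the paper is omitted (a Gaussian-smoothed decision map stands in for it), and all window sizes and thresholds are guesses.

```python
# Reduced sketch: contrast stretch -> spatial gradient -> multiscale
# activity maps -> trimap/decision map -> fusion. Matting refinement
# from the paper is omitted; thresholds and window sizes are guesses.
import numpy as np
from scipy.ndimage import sobel, uniform_filter, gaussian_filter

def stretch(img):
    lo, hi = np.percentile(img, (1, 99))
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def gradient_mag(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def activity(img, sizes=(7, 21)):
    g = gradient_mag(stretch(img.astype(np.float64)))
    # local (small window) and global (large window) activity maps
    return [uniform_filter(g, s) for s in sizes]

def fuse(src_a, src_b, band=0.1):
    act_a = sum(activity(src_a))
    act_b = sum(activity(src_b))
    diff = act_a - act_b
    # Trimap: definitely A, definitely B, and an unknown band in between.
    decision = (diff > 0).astype(np.float64)
    unknown = np.abs(diff) < band * (np.abs(diff).max() + 1e-12)
    soft = gaussian_filter(decision, 3.0)     # stand-in for matting
    w = np.where(unknown, soft, decision)
    return w * src_a + (1.0 - w) * src_b
```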


2011 · Vol 187 · pp. 775-779 · Author(s): Dong Yan Fan

This paper focuses on multi-focus image fusion based on the wavelet transform, discussing and improving existing algorithms from the aspect of the fusion rules. In particular, the regional contrast parameter is selected in the algorithm, as it better reflects the distinct features of the image's frequency domain during high-frequency coefficient fusion. According to the fusion rule that the wavelet high-frequency coefficients with larger regional contrast correspond to the clear part of the image, an improved multi-focus image fusion algorithm based on the regional contrast of wavelet coefficients is proposed. Finally, Matlab is used to simulate the algorithm.
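One common way to write a regional-contrast rule is to normalize the local energy of a high-frequency coefficient by the local mean of the corresponding low-frequency band and keep the coefficient with the larger contrast. The sketch below uses that formulation; the exact contrast definition in the paper may differ.

```python
# Sketch of a regional-contrast selection rule for wavelet
# high-frequency coefficients (one common formulation; the paper's
# exact definition of regional contrast may differ).
import numpy as np
from scipy.ndimage import uniform_filter

def regional_contrast(high, low, win=3, eps=1e-12):
    """Local std. of the detail band divided by the local mean of the
    approximation band, as a contrast-like saliency measure."""
    m = uniform_filter(high, win)
    std = np.sqrt(np.maximum(uniform_filter(high * high, win) - m * m, 0.0))
    return std / (np.abs(uniform_filter(low, win)) + eps)

def select_by_contrast(high_a, high_b, low_a, low_b, win=3):
    """Keep, pixel by pixel, the high-frequency coefficient whose
    regional contrast is larger (assumed to come from the clear image)."""
    c_a = regional_contrast(high_a, low_a, win)
    c_b = regional_contrast(high_b, low_b, win)
    return np.where(c_a >= c_b, high_a, high_b)
```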


2014 · Vol 530-531 · pp. 390-393 · Author(s): Yong Wang

Image processing is the basis of computer vision. To address some problems of traditional image fusion algorithms, a novel algorithm based on the shearlet transform and a multi-decision rule is proposed. Multi-focus image fusion is first discussed, and then the shearlet transform and the multi-decision rule are used to fuse the high-frequency coefficients of the decomposition. Finally, the fused image is obtained through the inverse shearlet transform. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach retains more image detail and produces a clearer result.


2019 · Vol 2019 · pp. 1-23 · Author(s): Jinjiang Li, Genji Yuan, Hui Fan

Multi-focus image fusion is the merging of images of the same scene, taken with multiple different foci, into one all-focus image. Most existing fusion algorithms extract high-frequency information by designing local filters and then adopt different fusion rules to obtain the fused image. In this paper, a wavelet transform is used for multiscale decomposition of the source and fused images to obtain their high-frequency and low-frequency components. To obtain clearer and more complete fusion results, a deep convolutional neural network is used to learn the direct mapping between the high-frequency and low-frequency images of the source and fused images; two convolutional networks are trained, one on the high-frequency images and one on the low-frequency images. The experimental results show that the proposed method can obtain a satisfactory fused image, which is superior to those produced by some advanced image fusion algorithms in terms of both visual and objective evaluations.
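The learning component described here, a small convolutional network that maps a pair of source subbands to the corresponding subband of the fused image, might look roughly like the PyTorch skeleton below. The layer sizes, the MSE loss, and the random tensors standing in for real training pairs are all placeholder assumptions, not the paper's architecture.

```python
# Skeleton of a band-wise fusion CNN in PyTorch: two source subbands in,
# one fused subband out. Architecture, loss, and data are placeholders.
import torch
import torch.nn as nn

class BandFusionCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, band_a, band_b):
        # Stack the two source subbands as a 2-channel input.
        return self.net(torch.cat([band_a, band_b], dim=1))

model = BandFusionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder training step with random tensors standing in for
# (source subband A, source subband B, reference fused subband) patches.
a = torch.randn(8, 1, 64, 64)
b = torch.randn(8, 1, 64, 64)
target = torch.randn(8, 1, 64, 64)

pred = model(a, b)
loss = loss_fn(pred, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```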


Entropy · 2021 · Vol 23 (2) · pp. 247 · Author(s): Areeba Ilyas, Muhammad Shahid Farid, Muhammad Hassan Khan, Marcin Grzegorzek

Multi-focus image fusion is the process of combining focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of high quality and contains more details than the source images. This makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that proposes to group the local connected pixels with similar colors and patterns, usually referred to as superpixels, and use them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels, and they carry more distinctive statistical properties when compared with other superpixels. The statistical properties of superpixels are analyzed to categorize the pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is ensured on the initial focus map to obtain a refined map, which is used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations are performed to assess the performance of the proposed method on a benchmark multi-focus image fusion dataset. The results show that our method produces better quality fused images than existing image fusion techniques.
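The superpixel-level grouping can be prototyped with SLIC from scikit-image: segment a reference image into superpixels, score each superpixel's focus with a simple statistic, and build a focus map. The variance-of-Laplacian score and the segment count below are stand-ins for the statistical analysis and consistency refinement described in the paper, and the sketch assumes grayscale inputs and a recent scikit-image (channel_axis keyword).

```python
# Sketch: superpixel-level focus classification with SLIC (scikit-image).
# The per-superpixel focus statistic is a simple stand-in measure.
import numpy as np
from scipy.ndimage import laplace
from skimage.segmentation import slic

def superpixel_focus_map(src_a, src_b, n_segments=400):
    a, b = src_a.astype(np.float64), src_b.astype(np.float64)
    # Segment the averaged image so both sources share the same superpixels.
    avg = (a + b) / 2.0
    avg = (avg - avg.min()) / (avg.max() - avg.min() + 1e-12)
    labels = slic(avg, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    sharp_a, sharp_b = laplace(a) ** 2, laplace(b) ** 2

    focus = np.zeros_like(a)
    for lab in np.unique(labels):
        region = labels == lab
        # A superpixel is marked "focused in A" when its mean Laplacian
        # energy in A exceeds that in B.
        focus[region] = float(sharp_a[region].mean() >= sharp_b[region].mean())
    return focus  # 1 -> take the pixel from A, 0 -> take it from B

# Usage: fused = focus * src_a + (1 - focus) * src_b
```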


2021 · Author(s): Gebeyehu Belay Gebremeskel

Abstract This paper addresses the challenges of image fusion processing and the lack of reliable image information, and proposes multi-focus image fusion using the discrete wavelet transform together with computer vision techniques for selecting the fused-image coefficients. I make an in-depth analysis and improvement of existing algorithms, covering the wavelet transform and the rules for extracting object features in multi-focus image fusion. The wavelet transform provides good localization properties, while the computer vision techniques offer efficient processing time and a powerful way to analyze object focus in the high-frequency subbands. The wavelet basis function and the decomposition level are chosen through iterative experiments to enhance the information in the fused image. The fusion rules operate on the high-frequency wavelet coefficients, which improves the reliability of the fused image features in the frequency domain and the regional contrast of the objects.
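The idea of choosing the wavelet basis and decomposition level through iterative experiments can be expressed as a simple search loop: fuse with each candidate setting and keep the one that maximizes a no-reference sharpness score. The candidate lists, the average-gradient score, and the generic `wavelet_fuse` helper below are illustrative assumptions, not the paper's procedure.

```python
# Sketch: pick the wavelet basis and decomposition level by fusing with
# each candidate and keeping the sharpest result (average gradient).
# `wavelet_fuse` is a hypothetical helper, e.g. any DWT fusion routine.
import numpy as np

def average_gradient(img):
    gy, gx = np.gradient(img.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def search_settings(img_a, img_b, wavelet_fuse,
                    wavelets=("db1", "db2", "sym4"), levels=(1, 2, 3)):
    best = None
    for w in wavelets:
        for lv in levels:
            fused = wavelet_fuse(img_a, img_b, wavelet=w, level=lv)
            score = average_gradient(fused)
            if best is None or score > best[0]:
                best = (score, w, lv, fused)
    return best  # (score, wavelet, level, fused image)
```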


2019 · Vol 28 (4) · pp. 505-516 · Author(s): Wei-bin Chen, Mingxiao Hu, Lai Zhou, Hongbin Gu, Xin Zhang

Abstract Multi-focus image fusion means fusing a set of images of the same scene, captured under the same imaging conditions but with different focus points, into a single completely clear image. To obtain a clear image that contains all relevant objects in an area, a multi-focus image fusion algorithm based on the wavelet transform is proposed. Firstly, the multi-focus images are decomposed by the wavelet transform. Secondly, the wavelet coefficients of the approximation and detail sub-images are fused according to the fusion rule. Finally, the fused image is obtained by the inverse wavelet transform. For the low-frequency and high-frequency coefficients, we present a fusion rule based on weighted ratios and on the weighted gradient with an improved edge detection operator. The experimental results illustrate that the proposed algorithm is effective in retaining image detail.
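A soft (weighted) version of these rules can be sketched as follows: the low-frequency coefficients are combined with weights proportional to a local-energy ratio, and the high-frequency coefficients with weights derived from gradient magnitudes computed by an edge operator. Sobel is used here as a stand-in for the paper's improved edge detection operator, and the window size is an assumption.

```python
# Sketch: weighted-ratio rule for low-frequency coefficients and a
# weighted-gradient rule (Sobel as the edge operator) for detail bands.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def weighted_ratio_low(low_a, low_b, win=3, eps=1e-12):
    ea = uniform_filter(low_a * low_a, win)
    eb = uniform_filter(low_b * low_b, win)
    w = ea / (ea + eb + eps)          # weight from the local-energy ratio
    return w * low_a + (1.0 - w) * low_b

def weighted_gradient_high(high_a, high_b, eps=1e-12):
    ga = np.hypot(sobel(high_a, 0), sobel(high_a, 1))
    gb = np.hypot(sobel(high_b, 0), sobel(high_b, 1))
    w = ga / (ga + gb + eps)          # stronger edges get the larger weight
    return w * high_a + (1.0 - w) * high_b
```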

