A Critical Analysis of Multi-Focus Image Fusion Based on Discrete Wavelet Transform and Computer Vision

Author(s):  
Gebeyehu Belay Gebremeskel

Abstract This paper addresses the challenges of image fusion processing and the lack of reliable image information, and proposes multi-focus image fusion using the discrete wavelet transform and computer vision techniques for selecting the coefficients of the fused image. I analyze and improve existing algorithms with respect to the wavelet transform and the rules for extracting object features in multi-focus image fusion. The wavelet transform offers genuine localization properties, while computer vision provides efficient processing and a powerful means of analyzing object focus with high-frequency precision. Image fusion with the wavelet transform iterates over the choice of wavelet basis function and decomposition level to enhance the information in the fused image. The fusion rules operate on the high-frequency wavelet coefficients of object features, improving the reliability of the fused image in the frequency domain and the regional contrast of the object.
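To make the coefficient-selection step concrete, here is a minimal sketch of single-level wavelet-domain fusion in the spirit the abstract describes. Haar filters, the averaging rule for the low-frequency band, and the absolute-maximum rule for the detail bands are illustrative assumptions, not the paper's exact algorithm; all function names are invented for the example.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT; image sides must be even."""
    tl, tr = img[0::2, 0::2], img[0::2, 1::2]
    bl, br = img[1::2, 0::2], img[1::2, 1::2]
    a = (tl + tr + bl + br) / 4.0   # low-frequency (approximation) band
    h = (tl + tr - bl - br) / 4.0   # horizontal detail
    v = (tl - tr + bl - br) / 4.0   # vertical detail
    d = (tl - tr - bl + br) / 4.0   # diagonal detail
    return a, (h, v, d)

def haar_idwt2(a, details):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    h, v, d = details
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def dwt_fuse(img_a, img_b):
    """Average the approximation bands; keep the larger-magnitude detail coefficient."""
    aA, (hA, vA, dA) = haar_dwt2(img_a)
    aB, (hB, vB, dB) = haar_dwt2(img_b)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return haar_idwt2((aA + aB) / 2.0, (pick(hA, hB), pick(vA, vB), pick(dA, dB)))
```

Because the Haar pair reconstructs exactly, fusing an image with itself returns the image unchanged, which is a useful sanity check for any coefficient-selection rule.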

2014, Vol. 14 (2), pp. 102-108
Author(s):
Yong Yang, Shuying Huang, Junfeng Gao, Zhongsheng Qian

Abstract In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient-selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are separately employed to combine the low-frequency and high-frequency coefficients. In this method, the low-frequency coefficients with the maximum sharpness focus measure are selected as coefficients of the fused image, and a fusion scheme based on maximum neighboring energy is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resultant fused image, a consistency verification procedure is applied to the combined coefficients. The performance of the proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indices than several existing fusion methods, making it an effective multi-focus image fusion method.
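The high-frequency rule described above, maximum neighboring energy followed by consistency verification, can be sketched as follows. The 3×3 window, the edge-replicated border handling, and the majority-vote form of the consistency check are my assumptions for the example, not the paper's exact parameters.

```python
import numpy as np

def window_sum(x, radius=1):
    """Sum over a (2*radius+1)^2 neighborhood, with edge replication at borders."""
    padded = np.pad(x, radius, mode="edge")
    win = 2 * radius + 1
    out = np.zeros(x.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def fuse_highband(cA, cB, radius=1):
    """Pick, per position, the coefficient whose neighborhood energy is larger,
    then apply consistency verification: a majority filter on the decision map."""
    mask = window_sum(cA ** 2, radius) >= window_sum(cB ** 2, radius)
    votes = window_sum(mask.astype(float), radius)
    mask = votes > ((2 * radius + 1) ** 2) / 2.0
    return np.where(mask, cA, cB)
```

The consistency step flips isolated decisions that disagree with their neighborhood, which is what keeps the fused sub-band homogeneous.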


2011, Vol. 187, pp. 775-779
Author(s):
Dong Yan Fan

This paper focuses on multi-focus image fusion based on the wavelet transform, offering an in-depth discussion and improvement of existing algorithms with respect to the fusion rules. In particular, the regional contrast parameter is selected in the algorithm, as it better reflects the distinct frequency-domain features of the image in high-frequency coefficient fusion. Based on the fusion rule that "the part with large regional contrast in the wavelet high-frequency coefficients corresponds to the clear part of the image," an improved multi-focus image fusion algorithm based on the regional contrast of the wavelet transform is proposed. Finally, Matlab is employed to simulate the algorithm.


2019, Vol. 8 (4), pp. 3765-3769

The objective of multifocal image fusion in visual sensor networks is to combine multi-focused images of the same scene into a single focused image with improved reliability and interpretability. However, existing discrete wavelet-based fusion algorithms introduce artifacts into the fused image because of their shift variance, whereas shift invariance is essential for reconstructing the fused image without loss. The Stationary Wavelet Transform is one of the most effective remedies, eliminating the shift variance of the discrete wavelet transform. Focus measures are also essential for selecting the focused objects in multi-focused images, so that every object in the fused image is in focus. This paper therefore combines the advantages of the Stationary Wavelet Transform and focus measures for fusion. The proposed method not only produces a focused fused image without artifacts but also performs well compared with other fusion methods.
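The shift-variance point is easy to demonstrate in one dimension: the undecimated (stationary) transform commutes with a shift of the input, while the decimated DWT does not. This toy uses Haar filters with periodic extension; it is a generic illustration of the property, not code from the paper.

```python
import numpy as np

def swt_haar_level1(x):
    """One undecimated (stationary) Haar level on a 1-D signal, periodic extension."""
    nxt = np.roll(x, -1)
    return (x + nxt) / 2.0, (x - nxt) / 2.0   # approximation, detail

def dwt_haar_level1(x):
    """The same filters followed by downsampling by 2: the shift-variant DWT."""
    a, d = swt_haar_level1(x)
    return a[0::2], d[0::2]
```

Shifting the input by one sample shifts the stationary detail band by exactly one sample, whereas the decimated detail band changes entirely; in a fused image those changes surface as artifacts.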


2016, Vol. 2016, pp. 1-12
Author(s):
Yingzhong Tian, Jie Luo, Wenjun Zhang, Tinggang Jia, Aiguo Wang, ...

Multifocus image fusion is a process that integrates a partially focused image sequence into a single image that is in focus everywhere; multiple methods have been proposed over the past decades. The Dual-Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT). The Q-shift DTCWT was later proposed to simplify the construction of the DTCWT filters and produces better fusion effects. This work presents a different image fusion strategy based on the Q-shift DTCWT. In this strategy, each image is first decomposed into low- and high-frequency coefficients, which are fused using different rules; various fusion rules are then innovatively combined within the Q-shift DTCWT, such as Neighborhood Variant Maximum Selectivity (NVMS) and the Sum-Modified-Laplacian (SML). Finally, the fused coefficients are extracted from the source images and reconstructed to produce one fully focused image. The strategy is verified visually and quantitatively against several existing fusion methods through extensive experiments and yields good results on both standard and microscopic images. We therefore conclude that the NVMS rule outperforms the others when used with the Q-shift DTCWT.
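Of the rules named above, the Sum-Modified-Laplacian is the most self-contained to sketch: it scores local sharpness by summing the modified Laplacian (absolute second differences taken separately in x and y) over a window. Window size and border handling below are my choices for the example; the paper applies SML inside the Q-shift DTCWT pipeline, which this standalone measure does not reproduce.

```python
import numpy as np

def sum_modified_laplacian(img, radius=1):
    """Sum-Modified-Laplacian focus measure, accumulated over a local window."""
    p = np.pad(img, 1, mode="edge")
    # modified Laplacian: |2*I - I_left - I_right| + |2*I - I_up - I_down|
    ml = (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
          + np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))
    padded = np.pad(ml, radius, mode="edge")
    win = 2 * radius + 1
    out = np.zeros(ml.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out
```

A sharp edge yields a large SML response while a defocused (locally constant) region yields zero, which is why the measure discriminates in-focus from out-of-focus sources.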


2011, Vol. 1 (3)
Author(s):
T. Sumathi, M. Hemalatha

Abstract Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, the Laplacian pyramid-based transform, and the curvelet-based transform. These methods achieve the best performance in the spatial and spectral quality of the fused image compared with other spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic does not extend easily to two or more dimensions, since separable wavelets built from one-dimensional wavelets have limited directionality. This paper introduces the second-generation curvelet transform and uses it to fuse images, comparing the result against the methods described above to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.


2011, Vol. 145, pp. 119-123
Author(s):
Ko Chin Chang

For a typical image capture device, it is difficult to obtain an image with every object in focus. To fuse multiple images taken from the same viewpoint with different focal settings, a novel image fusion algorithm based on the local energy pattern (LGP) is proposed in this paper. First, each focus image is decomposed separately using the discrete wavelet transform (DWT). Second, the LGP is calculated from each pixel and its surrounding pixels, and is then used to compute the new coefficient at that pixel from the transformed images under the proposed weighted fusion rules, which apply different operations to the low-band and high-band coefficients. Finally, the fused image is reconstructed from the new sub-band coefficients and represents the captured scene in more detail. Experimental results demonstrate that the proposed scheme performs better than the traditional discrete cosine transform (DCT) and discrete wavelet transform (DWT) methods in both visual perception and quantitative analysis.
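A weighted rule of the kind described, where each source's contribution is proportional to a local energy score rather than a hard winner-take-all pick, can be sketched as below. The energy-share weighting and 3×3 window are assumptions for illustration; the paper's LGP-based weights are more elaborate.

```python
import numpy as np

def local_energy(band, radius=1):
    """Sum of squared coefficients over a (2*radius+1)^2 window, edge-replicated."""
    padded = np.pad(band ** 2, radius, mode="edge")
    win = 2 * radius + 1
    out = np.zeros(band.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out

def weighted_fuse(bandA, bandB, radius=1, eps=1e-12):
    """Soft combination: weight each source by its share of the local energy."""
    eA = local_energy(bandA, radius)
    eB = local_energy(bandB, radius)
    wA = eA / (eA + eB + eps)
    return wA * bandA + (1.0 - wA) * bandB
```

Soft weighting avoids the blocking artifacts that hard selection can produce near the boundary between in-focus regions, at the cost of slightly blending the two sources there.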


2012, Vol. 2012, pp. 1-10
Author(s):
Yifeng Niu, Shengtao Xu, Lizhen Wu, Weidong Hu

Infrared and visible image fusion is an important precondition for target perception on unmanned aerial vehicles (UAVs), enabling a UAV to perform various missions. Texture and color information is abundant in visible images, while target information is more prominent in infrared images. Conventional fusion methods are mostly based on region segmentation, so a fused image suitable for target recognition cannot actually be acquired. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which gains more target information while preserving more background information. Fusion experiments are conducted for three cases: the target is stationary and observable in both visible and infrared images, the targets are moving and observable in both, and the target is observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
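The core idea, segment hot targets in the infrared image and fuse the two modalities differently inside and outside the target region, can be illustrated with a deliberately crude sketch. The mean-plus-k-sigma threshold and the plain region copy stand in for the paper's segmentation and DWT-based background fusion; both are assumptions for the example.

```python
import numpy as np

def segment_targets(ir, k=1.0):
    """Crude target mask: pixels hotter than mean + k * std of the IR image."""
    return ir > ir.mean() + k * ir.std()

def region_fuse(visible, ir, k=1.0):
    """Keep IR intensities inside the target region, visible content elsewhere.
    (A DWT-based rule for the background would replace the plain copy here.)"""
    mask = segment_targets(ir, k)
    return np.where(mask, ir, visible)
```

This preserves the infrared target signature while the (richer) visible image supplies the background, which is the trade-off the abstract describes.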


2014, Vol. 687-691, pp. 3656-3661
Author(s):
Min Fen Shen, Zhi Fei Su, Jin Yao Yang, Li Sha Sun

Because of the limited depth of field of optical lenses, objects at different distances usually cannot all be in focus in the same picture, but multi-focus image fusion can produce a fused image with all objects clear, improving the utilization of image information and aiding further computer processing. Based on the imaging characteristics of multi-focus images, a multi-focus image fusion algorithm based on the redundant wavelet transform is proposed in this paper. For the different frequency sub-bands of the redundant wavelet decomposition, selection principles for the high-frequency and low-frequency coefficients are discussed separately. The fusion rule is that low-frequency coefficients are selected based on local area energy, while high-frequency coefficients are selected based on local variance combined with a matching threshold. The simulation results show that the proposed method retains more useful information from the source images, yielding a fused image with all objects clear.
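The high-frequency rule, local variance combined with a matching threshold, typically means: when the two sources disagree locally (low match), pick the higher-variance one; when they agree (high match), average them with variance-based weights. The match measure and threshold below are my assumptions for the sketch, not the paper's exact definitions.

```python
import numpy as np

def local_variance(band, radius=1):
    """Local variance over a (2*radius+1)^2 window, edge-replicated at borders."""
    padded = np.pad(band, radius, mode="edge")
    win = 2 * radius + 1
    mean = np.zeros(band.shape, dtype=float)
    sq = np.zeros(band.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            patch = padded[dy:dy + band.shape[0], dx:dx + band.shape[1]]
            mean += patch
            sq += patch ** 2
    n = win * win
    mean /= n
    return sq / n - mean ** 2

def fuse_high_variance(cA, cB, threshold=0.75, radius=1, eps=1e-12):
    """Selection below the matching threshold, weighted averaging above it."""
    vA = local_variance(cA, radius)
    vB = local_variance(cB, radius)
    match = 2 * np.sqrt(vA * vB) / (vA + vB + eps)   # similarity of salience, in [0, 1]
    hard = np.where(vA >= vB, cA, cB)                # low match: pick the sharper source
    wA = vA / (vA + vB + eps)
    soft = wA * cA + (1 - wA) * cB                   # high match: variance-weighted average
    return np.where(match < threshold, hard, soft)
```

The threshold controls how readily the rule falls back to averaging; near 1 it almost always selects, near 0 it almost always blends.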


2020, Vol. 14, pp. 174830262093129
Author(s):
Zhang Zhancheng, Luo Xiaoqing, Xiong Mengyu, Wang Zhiwen, Li Kai

Medical image fusion can combine multi-modal images into an integrated higher-quality image that provides more comprehensive and accurate pathological information than any individual image does. Traditional transform-domain image fusion methods usually ignore the dependencies between coefficients and may represent the source image inaccurately. To improve the quality of the fused image, a medical image fusion method based on the dependencies of quaternion wavelet transform coefficients is proposed. First, the source images are decomposed into low-frequency and high-frequency components by the quaternion wavelet transform. Then, a clarity evaluation index based on quaternion wavelet transform amplitude and phase is constructed and a contextual activity measure is designed; these measures are used to fuse the high-frequency coefficients, while the choose-max rule is applied to the low-frequency components. Finally, the fused image is obtained by the inverse quaternion wavelet transform. Experimental results on multi-modal brain medical images demonstrate that the proposed method achieves superior fusion results.

