Image Fusion Algorithm Based on Wavelet Transform and Laplacian Pyramid

2013 ◽  
Vol 860-863 ◽  
pp. 2846-2849
Author(s):  
Ming Jing Li ◽  
Yu Bing Dong ◽  
Xiao Li Wang

Image fusion is the process of combining relevant information from two or more images into a single image. The aim of fusion is to extract the relevant information for further analysis. Depending on the application and the characteristics of the algorithm, image fusion can be used to improve image quality. This paper presents a comparative analysis of image fusion algorithms based on the wavelet transform and the Laplacian pyramid. The principle, operation, steps and characteristics of each fusion algorithm are summarized, and the advantages and disadvantages of the different algorithms are compared. The fusion results of the different algorithms are obtained in MATLAB. Experimental results show that the quality of the fused image is improved noticeably.
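
As a hedged illustration of the Laplacian-pyramid branch of this comparison, the sketch below fuses two registered grayscale images with OpenCV. The level count, the max-absolute-value rule for detail layers and the averaging of the top level are assumptions for illustration, not the paper's exact settings.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: detail layers plus the low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # band-pass detail layer
        cur = down
    pyr.append(cur)                   # low-pass residual
    return pyr

def fuse_laplacian(img_a, img_b, levels=4):
    """Fuse two registered grayscale images of equal size."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)   # keep stronger detail
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))                  # average the residual
    # Collapse the pyramid back into a single image.
    out = fused[-1]
    for layer in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return np.clip(out, 0, 255).astype(np.uint8)
```

Both inputs are assumed to be single-channel 8-bit images of the same size; colour images can be fused per channel.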

2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Abstract Image fusion is the method of combining relevant information from two or more images into a single image, resulting in an image that is more informative than any of the initial inputs. Methods for fusion include the discrete wavelet transform, the Laplacian pyramid transform and the curvelet transform. These methods demonstrate better performance in the spatial and spectral quality of the fused image than purely spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic cannot be extended easily to two or more dimensions, since a separable wavelet spanned by one-dimensional wavelets has limited directivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against those described above to show that useful information can be extracted from the source and fused images, resulting in fused images that offer clear, detailed information.
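
A curvelet implementation is beyond a short sketch, so the example below covers only the discrete wavelet transform baseline named in the abstract, using PyWavelets; the wavelet family, decomposition depth and fusion rules are assumptions, not the paper's configuration.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
    """Baseline DWT fusion: average the approximation, max-abs select the details."""
    ca = pywt.wavedec2(img_a.astype(np.float32), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float32), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                     # approximation coefficients
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    # waverec2 can return a slightly larger array for odd sizes; crop if needed.
    return pywt.waverec2(fused, wavelet)
```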


2013 ◽  
Vol 373-375 ◽  
pp. 530-535 ◽  
Author(s):  
Chuan Zhu Liao ◽  
Yu Shu Liu ◽  
Ming Yan Jiang

In order to obtain an image with every object in focus, an image fusion process is required to fuse images captured under different focal settings. In this paper, a new multi-focus image fusion algorithm based on the Laplacian pyramid and Gabor filters is proposed. The source images are decomposed by the Laplacian pyramid, and the directional edge features and detail information are then obtained with Gabor filters. Different fusion rules are applied to the low-frequency and high-frequency coefficients. The experimental results show that the algorithm is simple and effective.
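
A minimal sketch of the Gabor part of such a scheme is shown below: a small filter bank scores the directional edge energy of two corresponding pyramid detail layers, and the stronger response wins per pixel. The kernel size, orientations and the max-energy selection rule are assumptions, not the authors' exact parameters.

```python
import cv2
import numpy as np

def gabor_bank(ksize=11, sigma=3.0, lambd=6.0, gamma=0.5, n_orient=4):
    """Build a small bank of Gabor kernels at evenly spaced orientations."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0,
                               ktype=cv2.CV_32F)
            for theta in np.linspace(0, np.pi, n_orient, endpoint=False)]

def gabor_energy(layer, bank):
    """Directional edge energy: maximum response magnitude over all orientations."""
    responses = [np.abs(cv2.filter2D(layer.astype(np.float32), cv2.CV_32F, k))
                 for k in bank]
    return np.max(responses, axis=0)

def fuse_detail_layers(layer_a, layer_b, bank):
    """Keep, per pixel, the detail layer with the stronger directional response."""
    ea, eb = gabor_energy(layer_a, bank), gabor_energy(layer_b, bank)
    return np.where(ea >= eb, layer_a, layer_b)
```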


2015 ◽  
Vol 15 (1) ◽  
pp. 116-125 ◽  
Author(s):  
Zheng Yu ◽  
Lei Yan ◽  
Ning Han ◽  
Jinhao Liu

Abstract In this paper, an image fusion algorithm based on the Contourlet transform and a Pulse Coupled Neural Network (PCNN) is proposed to improve the performance of image fusion for the detection of obstacles in forests. For comparison, the wavelet transform and Principal Component Analysis (PCA) were simulated alongside the proposed algorithm. Visible and infrared thermal images were then collected in a forest. The experimental results show that the fused images produced by the proposed method provide a better understanding of the scene, enhance image clarity and suppress the factors that conceal the targets.
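
The Contourlet decomposition is omitted here, but a simplified PCNN of the kind often used as a fusion rule can be sketched in a few lines: the firing-count map it produces for each source's coefficients can serve as a selection weight. All constants, the linking kernel and the iteration count below are assumptions, not the paper's trained or tuned values.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_firing_map(stimulus, iterations=30, alpha_l=0.7, alpha_e=0.3,
                    v_l=1.0, v_e=20.0, beta=0.2):
    """Simplified PCNN: returns how often each neuron fired over the iterations."""
    s = stimulus.astype(np.float32)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)     # normalise the stimulus
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]], dtype=np.float32)
    l = np.zeros_like(s); e = np.ones_like(s); y = np.zeros_like(s)
    fire_count = np.zeros_like(s)
    for _ in range(iterations):
        l = np.exp(-alpha_l) * l + v_l * convolve2d(y, kernel, mode="same")
        u = s * (1.0 + beta * l)                       # feeding modulated by linking
        y = (u > e).astype(np.float32)                 # pulse output
        e = np.exp(-alpha_e) * e + v_e * y             # dynamic threshold decay/boost
        fire_count += y
    return fire_count

# One possible rule: per coefficient, keep the source whose firing count is larger.
```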


In today's research, image fusion is a step-by-step procedure for improving the visualization of an image. It integrates the essential features of two or more images into a single fused image without introducing artifacts. Multi-focus image fusion plays a key role in this process: it aims to increase the depth of field by extracting the focused parts from multiple images captured with different focus settings. In this paper, a multi-focus image fusion algorithm is proposed in which a non-local means technique is applied within the stationary wavelet transform (SWT) to obtain a sharp, smooth image. The non-local means function analyses the pixels belonging to the blurred regions and improves image quality. The proposed method is compared with several existing methods, and the results are analyzed both visually and with performance metrics.
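
A hedged sketch of the combination is given below: OpenCV's non-local-means denoiser pre-filters each 8-bit source, and a one-level stationary wavelet transform from PyWavelets carries a simple average/max-abs fusion. Where exactly the paper applies non-local means, and every parameter value, are assumptions.

```python
import cv2
import numpy as np
import pywt

def swt_nlm_fuse(img_a, img_b, wavelet="haar"):
    """One-level SWT fusion with a non-local-means pre-filter on each 8-bit source."""
    # SWT at level 1 needs even dimensions, so crop to an even height and width.
    h, w = (img_a.shape[0] // 2) * 2, (img_a.shape[1] // 2) * 2
    a = cv2.fastNlMeansDenoising(img_a[:h, :w].astype(np.uint8), None, 7, 7, 21)
    b = cv2.fastNlMeansDenoising(img_b[:h, :w].astype(np.uint8), None, 7, 7, 21)
    ca, (cha, cva, cda) = pywt.swt2(a.astype(np.float32), wavelet, level=1)[0]
    cb, (chb, cvb, cdb) = pywt.swt2(b.astype(np.float32), wavelet, level=1)[0]
    fused = [(0.5 * (ca + cb),
              tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                    for x, y in ((cha, chb), (cva, cvb), (cda, cdb))))]
    return pywt.iswt2(fused, wavelet)
```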


Author(s):  
Lei Zhang ◽  
Linna Ji ◽  
Hualong Jiang ◽  
Fengbao Yang ◽  
Xiaoxia Wang

Multi-modal image fusion can describe the features of a scene more accurately than a single image. Because of the different imaging mechanisms, the differences between multi-modal images are large, which leads to poor contrast in the fused images. Therefore, a simple and effective spatial-domain fusion algorithm based on variable-parameter fractional difference enhancement is proposed. Based on the characteristics of fractional difference enhancement, a variable-parameter fractional difference is introduced, the multi-modal images are repeatedly enhanced, and multiple enhanced images are obtained. A correlation coefficient is applied to constrain the number of enhancement cycles. In addition, an energy contrast is used to extract the contrast features of the images, and the tangent function is used to obtain the fusion weights, yielding multiple contrast-enhanced initial fusion images. Finally, a weighted average is applied to obtain the final fused image. Experimental results demonstrate that the proposed fusion algorithm effectively preserves the contrast features between images and improves the quality of the fused images.
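
To make the fractional difference idea concrete, the sketch below applies a three-term Grünwald-Letnikov fractional difference of order v along the two image axes and adds the result back as detail. The truncation to three terms, the two directions and the additive enhancement are assumptions and do not reproduce the paper's variable-parameter iteration.

```python
import numpy as np

def fractional_difference(img, v=0.5):
    """Three-term Grunwald-Letnikov fractional difference along x and y."""
    f = img.astype(np.float32)
    c0, c1, c2 = 1.0, -v, v * (v - 1.0) / 2.0     # first three G-L coefficients
    dx = c0 * f + c1 * np.roll(f, 1, axis=1) + c2 * np.roll(f, 2, axis=1)
    dy = c0 * f + c1 * np.roll(f, 1, axis=0) + c2 * np.roll(f, 2, axis=0)
    return np.abs(dx) + np.abs(dy)

def enhance(img, v=0.5, gain=1.0):
    """Add the fractional-difference detail back onto the 8-bit image."""
    out = img.astype(np.float32) + gain * fractional_difference(img, v)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Repeating `enhance` with different orders v would yield the multiple enhanced images the abstract refers to.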


Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 879 ◽  
Author(s):  
Bicao Li ◽  
Runchuan Li ◽  
Zhoufeng Liu ◽  
Chunlei Li ◽  
Zongmin Wang

Increasing attention is being paid to image fusion technologies; nevertheless, how to objectively assess the quality of fused images and the performance of different fusion algorithms remains an important question. In this paper, we propose a novel objective non-reference measure for evaluating image fusion. This metric employs the properties of the Arimoto entropy, a generalization of the Shannon entropy, to measure the amount of information that the fused image contains about the two input images. Preliminary experiments on multi-focus and multi-modal images have been carried out using the average fusion algorithm, the contrast pyramid, principal component analysis, the Laplacian pyramid, guided filtering and the discrete cosine transform. In addition, a comparison has been conducted with other relevant image fusion quality metrics such as mutual information, normalized mutual information, the Tsallis divergence and the Petrovic measure. The experimental results illustrate that the presented metric correlates better with subjective assessments of the fused images.
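
A minimal sketch of such a measure is shown below, assuming the order-alpha Arimoto entropy H_a(P) = a/(1-a) * ((sum_i p_i^a)^(1/a) - 1) and a mutual-information-style combination summed over both sources; the paper's exact definition and normalisation may differ, and the bin count and alpha are assumptions.

```python
import numpy as np

def arimoto_entropy(p, alpha=1.5):
    """Arimoto entropy of order alpha for a (possibly joint) probability array."""
    p = p[p > 0].ravel()
    return alpha / (1.0 - alpha) * (np.sum(p ** alpha) ** (1.0 / alpha) - 1.0)

def arimoto_mutual_info(x, y, alpha=1.5, bins=64):
    """Mutual-information-style quantity H(X) + H(Y) - H(X, Y) with Arimoto entropy."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return (arimoto_entropy(px, alpha) + arimoto_entropy(py, alpha)
            - arimoto_entropy(joint, alpha))

def fusion_quality(src_a, src_b, fused, alpha=1.5):
    """Non-reference score: information the fused image shares with both sources."""
    return (arimoto_mutual_info(src_a, fused, alpha)
            + arimoto_mutual_info(src_b, fused, alpha))
```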


2016 ◽  
Vol 15 (4) ◽  
pp. 6698-6701
Author(s):  
Navjot Kaur ◽  
Navneet Kaur

Image fusion is a process that combines two or more images so that the fused image is more informative than the inputs. The fusion process preserves both the spectral and spatial information of the images, but computational time becomes the main problem when high-resolution images are fused. This paper therefore describes a new wavelet-based algorithm in which the transform is applied after the image has been partitioned into blocks. The algorithm divides the complete image into blocks and compares corresponding blocks using the mean square error; the wavelet transform is applied only to the blocks selected by a threshold on this error. The transformed blocks are fused using different fusion rules such as averaging and maximum or minimum pixel replacement. The fused image is then reconstructed by the inverse wavelet transform and is more informative than the input images. Its quality is assessed by comparing the fused image with the original image using the mean square error and the peak signal-to-noise ratio. Fusion is applied both to the complete image and with the block-based method; a comparison of the timings shows that the proposed algorithm reduces the computational time by a factor of ten relative to the existing method.
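
The block-selection idea can be sketched as follows: per-block mean square error between the two sources decides whether a block is fused through a one-level DWT or simply averaged. The block size, wavelet, threshold and the averaging fallback are assumptions for illustration.

```python
import numpy as np
import pywt

def fuse_block(a, b, wavelet="haar"):
    """One-level DWT fusion of a single block (average low-pass, max-abs details)."""
    ca, (ha, va, da) = pywt.dwt2(a, wavelet)
    cb, (hb, vb, db) = pywt.dwt2(b, wavelet)
    details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                    for x, y in ((ha, hb), (va, vb), (da, db)))
    return pywt.idwt2((0.5 * (ca + cb), details), wavelet)

def blockwise_fuse(img_a, img_b, block=32, threshold=25.0):
    """Apply the DWT fusion only to blocks whose mean square error exceeds the threshold."""
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    out = a.copy()
    for i in range(0, a.shape[0] - block + 1, block):
        for j in range(0, a.shape[1] - block + 1, block):
            pa = a[i:i + block, j:j + block]
            pb = b[i:i + block, j:j + block]
            if np.mean((pa - pb) ** 2) > threshold:        # blocks differ: fuse them
                out[i:i + block, j:j + block] = fuse_block(pa, pb)
            else:                                          # blocks agree: cheap average
                out[i:i + block, j:j + block] = 0.5 * (pa + pb)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Skipping the transform on blocks that already agree is what saves the computation time the abstract reports.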


2021 ◽  
pp. 3228-3236
Author(s):  
Nada Jasim Habeeb

Combining multiple images of the same scene that have different focus distances can produce clearer and sharper images with a larger depth of field. Most available image fusion algorithms produce good results; however, they do not take the focus of the image into account. In this paper, a fusion method is proposed to increase the focus of the fused image and to achieve the highest image quality using a suggested focusing filter and the Dual Tree-Complex Wavelet Transform. The focusing filter consists of a combination of two filters, a Wiener filter and a sharpening filter, and is applied before the fusion operation with the Dual Tree-Complex Wavelet Transform. The common fusion rules, the average-fusion rule and the maximum-fusion rule, were used to obtain the fused image. In the experiments, the performance of the proposed fusion algorithm was compared with existing algorithms using focus operators. The results show that the proposed method outperforms these fusion methods in terms of focus and quality.
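
The focusing-filter stage alone is easy to sketch: a Wiener filter followed by a sharpening kernel, applied to each source before the DT-CWT fusion (which a dedicated library would handle). The window size and the 3x3 sharpening kernel are assumptions, not the paper's exact filter design.

```python
import numpy as np
import cv2
from scipy.signal import wiener

def focusing_filter(img, wiener_size=5):
    """Wiener filter to suppress noise, then a sharpening kernel to restore edges."""
    smooth = wiener(img.astype(np.float64), mysize=wiener_size)
    sharpen = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=np.float32)
    sharp = cv2.filter2D(smooth.astype(np.float32), cv2.CV_32F, sharpen)
    return np.clip(sharp, 0, 255).astype(np.uint8)

# Each source image would be passed through focusing_filter() before the
# DT-CWT decomposition and the average/maximum fusion rules described above.
```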


2019 ◽  
Vol 28 (4) ◽  
pp. 505-516
Author(s):  
Wei-bin Chen ◽  
Mingxiao Hu ◽  
Lai Zhou ◽  
Hongbin Gu ◽  
Xin Zhang

Abstract Multi-focus image fusion means producing one completely clear image from a set of images of the same scene captured under the same imaging conditions but with different focus points. In order to obtain a clear image that contains all relevant objects in an area, a multi-focus image fusion algorithm based on the wavelet transform is proposed. Firstly, the multi-focus images are decomposed by the wavelet transform. Secondly, the wavelet coefficients of the approximation and detail sub-images are fused according to the fusion rule. Finally, the fused image is obtained by the inverse wavelet transform. For the low-frequency and high-frequency coefficients, we present a fusion rule based on weighted ratios and a weighted gradient with an improved edge detection operator. The experimental results illustrate that the proposed algorithm is effective at retaining image detail.
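
A hedged sketch of a gradient-driven rule for the detail sub-bands is given below, using a plain Sobel operator in place of the paper's improved edge detection operator; the soft weighting by relative gradient strength is likewise an assumption.

```python
import cv2
import numpy as np

def gradient_magnitude(band):
    """Sobel gradient magnitude of a wavelet detail sub-band."""
    gx = cv2.Sobel(band.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(band.astype(np.float32), cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

def fuse_detail_band(band_a, band_b):
    """Weight each sub-band by its relative gradient strength (soft selection)."""
    ga, gb = gradient_magnitude(band_a), gradient_magnitude(band_b)
    wa = ga / (ga + gb + 1e-8)
    return wa * band_a + (1.0 - wa) * band_b
```

The same weighting can be applied level by level to the coefficients produced by a standard wavelet decomposition before the inverse transform.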


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to differences in cell size and shape, incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large amount of image data of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in the paper, which reduces the amount of data by up to seven times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, compared with the results achieved by two well-known techniques, the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: The image fusion time is substantially improved for different image resolutions, whilst ensuring the high quality of the fused image.
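
To indicate what a U-Net-style fusion network looks like, the sketch below maps a stack of focal planes to a single fused frame. The depth, channel counts and the seven-plane input are assumptions; the network actually used in the paper is deeper and trained on embryo data.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyFusionUNet(nn.Module):
    """Minimal U-Net-style network: a stack of focal planes in, one fused image out."""
    def __init__(self, n_planes=7):
        super().__init__()
        self.enc1, self.enc2 = block(n_planes, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                  # full-resolution features
        e2 = self.enc2(self.pool(e1))                      # half resolution
        b = self.bottleneck(self.pool(e2))                 # quarter resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Example: one 7-plane focal stack at 256x256 yields one fused frame (1, 1, 256, 256).
fused = TinyFusionUNet(n_planes=7)(torch.randn(1, 7, 256, 256))
```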

