Discrete Wavelet Transform Based Image Fusion Using Unsharp Masking

2019 ◽  
Vol 64 (2) ◽  
pp. 211-220
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Infrared and visible image fusion is now used in important applications such as military, surveillance, remote sensing, and medical imaging applications. A discrete wavelet transform (DWT) based image fusion method using unsharp masking is presented. The DWT is used to decompose the input images (infrared and visible) into approximation and detail coefficients. To improve contrast, unsharp masking is applied to the approximation coefficients. The approximation coefficients produced after unsharp masking are then merged with the average fusion rule, while the detail coefficients are merged with the max fusion rule. Finally, the inverse DWT (IDWT) is used to generate the fused image. The proposed fusion method provides good contrast and gives better performance in terms of mean, entropy, and standard deviation when compared with existing techniques.
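A minimal sketch of this pipeline is given below, assuming a single-level db1 DWT, a Gaussian-blur unsharp mask, and a max-magnitude rule for the detail bands (the filter, strength, and window choices are illustrative, not taken from the paper); PyWavelets and SciPy provide the transform and filtering.

```python
# Sketch of the described pipeline (single-level 'db1' DWT assumed;
# the unsharp-mask strength and Gaussian sigma are illustrative choices).
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def unsharp_mask(band, sigma=2.0, amount=1.0):
    """Boost contrast of an approximation band: band + amount * (band - blur)."""
    blurred = gaussian_filter(band, sigma=sigma)
    return band + amount * (band - blurred)

def fuse_dwt_unsharp(ir, vis, wavelet="db1"):
    # 1) Decompose both inputs with a single-level 2-D DWT.
    cA1, details1 = pywt.dwt2(ir.astype(float), wavelet)
    cA2, details2 = pywt.dwt2(vis.astype(float), wavelet)
    # 2) Unsharp-mask the approximation coefficients, then average them.
    cA = 0.5 * (unsharp_mask(cA1) + unsharp_mask(cA2))
    # 3) Max(-magnitude) rule for the detail coefficients.
    fused_details = tuple(
        np.where(np.abs(d1) >= np.abs(d2), d1, d2)
        for d1, d2 in zip(details1, details2)
    )
    # 4) Reconstruct the fused image with the inverse DWT.
    return pywt.idwt2((cA, fused_details), wavelet)
```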

2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Yifeng Niu ◽  
Shengtao Xu ◽  
Lizhen Wu ◽  
Weidong Hu

Infrared and visible image fusion is an important precondition for realizing target perception on unmanned aerial vehicles (UAVs), which can then carry out a variety of missions. Visible images are rich in texture and color information, while infrared images make target information more prominent. Conventional fusion methods are mostly based on region segmentation, so a fused image suitable for target recognition cannot actually be obtained. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which gains more target information while preserving more background information. Fusion experiments are carried out for three cases: the target is stationary and observable in both the visible and infrared images, the targets are moving and observable in both images, and the target is observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
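The sketch below illustrates how a target mask could guide fusion in the DWT domain; the mask itself (from any segmentation step), the db1 wavelet, and the background rules are assumptions, since the paper's own segmentation and fusion rules are not reproduced here.

```python
# Hedged sketch: given a binary target mask from some prior segmentation,
# keep infrared coefficients inside the target region and fuse the
# background with simple average / max-magnitude rules.
import numpy as np
import pywt

def fuse_with_target_mask(ir, vis, target_mask, wavelet="db1"):
    cA_ir, det_ir = pywt.dwt2(ir.astype(float), wavelet)
    cA_vis, det_vis = pywt.dwt2(vis.astype(float), wavelet)
    # Approximate downsampling of the mask to the single-level coefficient grid.
    mask = target_mask[::2, ::2][: cA_ir.shape[0], : cA_ir.shape[1]].astype(bool)
    # Target region: take infrared coefficients; background: average approximations.
    cA = np.where(mask, cA_ir, 0.5 * (cA_ir + cA_vis))
    # Background detail bands: max-magnitude rule.
    details = tuple(
        np.where(mask, d_ir, np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis))
        for d_ir, d_vis in zip(det_ir, det_vis)
    )
    return pywt.idwt2((cA, details), wavelet)
```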


2014 ◽  
Vol 14 (2) ◽  
pp. 102-108 ◽  
Author(s):  
Yong Yang ◽  
Shuying Huang ◽  
Junfeng Gao ◽  
Zhongsheng Qian

In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient-selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are employed separately to combine the low-frequency and high-frequency coefficients. In the method, the low-frequency coefficients with the maximum sharpness focus measure are selected as coefficients of the fused image, and a maximum-neighboring-energy based fusion scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resulting fused image, a consistency verification procedure is applied to the combined coefficients. The performance of the proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indexes than several existing fusion methods, making it an effective multi-focus image fusion method.
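A rough sketch of this kind of window-based selection is shown below; the 3x3 window, the db1 wavelet, the use of the local energy of the Laplacian as the sharpness focus measure, and the median filter used for consistency verification are illustrative assumptions, not the paper's exact choices.

```python
# Illustrative sketch of the coefficient-selection rules (window size,
# wavelet, and the particular sharpness measure are assumptions here).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, laplace, median_filter

def _select(c1, c2, score1, score2):
    decision = score1 >= score2
    # Consistency verification: a majority (median) filter removes
    # isolated wrong choices in the binary decision map.
    decision = median_filter(decision.astype(np.uint8), size=3).astype(bool)
    return np.where(decision, c1, c2)

def fuse_multifocus(img1, img2, wavelet="db1", win=3):
    cA1, det1 = pywt.dwt2(img1.astype(float), wavelet)
    cA2, det2 = pywt.dwt2(img2.astype(float), wavelet)
    # Low-frequency rule: window-based sharpness (here: local energy of the Laplacian).
    sharp1 = uniform_filter(laplace(cA1) ** 2, size=win)
    sharp2 = uniform_filter(laplace(cA2) ** 2, size=win)
    cA = _select(cA1, cA2, sharp1, sharp2)
    # High-frequency rule: maximum neighboring energy in each sub-band.
    details = tuple(
        _select(d1, d2, uniform_filter(d1 ** 2, size=win), uniform_filter(d2 ** 2, size=win))
        for d1, d2 in zip(det1, det2)
    )
    return pywt.idwt2((cA, details), wavelet)
```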


Author(s):  
Jianhua Liu ◽  
Peng Geng ◽  
Hongtao Ma

Purpose: This study aims to obtain a more precise decision map for fusing the source images using a coefficient-significance method. In multi-focus image fusion, a good decision map is critical to the fusion result. When distinguishing the well-focused part of an image from the blurred part, the edge between the two parts is the most difficult region to handle. Coefficient significance is very effective in generating a better decision map for fusing multi-focus images.
Design/methodology/approach: The energy of Laplacian is applied to the approximation coefficients of the redundant discrete wavelet transform. In addition, a coefficient significance based on the statistical property of covariance is proposed to merge the detail coefficients.
Findings: Owing to the shift invariance of the redundant discrete wavelet transform and the effectiveness of the fusion rule, the presented fusion method is superior to the region-energy method in the harmonic cosine wavelet domain, pixel significance with the cross bilateral filter, and the multiscale geometry analysis method based on the Ripplet transform.
Originality/value: In the redundant discrete wavelet domain, a coefficient significance based on the statistical property of covariance is proposed to merge the detail coefficients of the source images.
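The sketch below uses PyWavelets' stationary wavelet transform as the redundant DWT; the energy-of-Laplacian rule is applied to the approximation band, while a simple local-variance score stands in for the paper's covariance-based coefficient significance, which is not reproduced exactly.

```python
# Sketch in the redundant (shift-invariant) wavelet domain via pywt's SWT.
# Energy of Laplacian drives the approximation rule; a local-variance score
# is used here as a stand-in for the covariance-based significance measure.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, laplace

def _local_variance(band, win=3):
    mean = uniform_filter(band, size=win)
    return uniform_filter(band ** 2, size=win) - mean ** 2

def fuse_rdwt(img1, img2, wavelet="db1", win=3):
    # Single-level SWT; image sides must be even for level=1.
    (cA1, (cH1, cV1, cD1)), = pywt.swt2(img1.astype(float), wavelet, level=1)
    (cA2, (cH2, cV2, cD2)), = pywt.swt2(img2.astype(float), wavelet, level=1)
    # Approximation band: pick the coefficient with larger local energy of Laplacian.
    eol1 = uniform_filter(laplace(cA1) ** 2, size=win)
    eol2 = uniform_filter(laplace(cA2) ** 2, size=win)
    cA = np.where(eol1 >= eol2, cA1, cA2)
    # Detail bands: pick the coefficient with the larger significance score.
    fused = []
    for d1, d2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2)):
        fused.append(np.where(_local_variance(d1, win) >= _local_variance(d2, win), d1, d2))
    return pywt.iswt2([(cA, tuple(fused))], wavelet)
```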


2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, the Laplacian pyramid based transform, and the curvelet based transform. These methods demonstrate the best performance in the spatial and spectral quality of the fused image compared with other spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic cannot easily be extended to two or more dimensions, since separable wavelets built by spanning one-dimensional wavelets have limited directivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against those previously described to show that useful information can be extracted from the source and fused images, resulting in fused images that offer clear, detailed information.


Author(s):  
Girraj Prasad Rathor ◽  
Sanjeev Kumar Gupta

Image fusion based on the wavelet transform is the most commonly used image fusion approach; it fuses the source image data in the wavelet domain according to fusion rules. However, because the contributions of the source images to the fused image are uncertain, designing a good fusion rule that incorporates as much information as possible into the fused image becomes the most important issue. Adaptive fuzzy logic is an ideal approach for resolving such uncertainty, yet it has not been used in the design of fusion rules. A new fusion technique based on the wavelet transform and adaptive fuzzy logic is introduced in this chapter. After applying the wavelet transform to the source images, the method computes a weight for each source image's coefficients through adaptive fuzzy logic and then fuses the coefficients by weighted averaging with these weights to obtain the combined image. Mutual information, peak signal-to-noise ratio, and mean squared error are used as evaluation criteria.
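As a rough illustration, the sketch below replaces the chapter's adaptive fuzzy system with a simple sigmoid membership over local coefficient activity; the resulting weights drive the weighted averaging of wavelet coefficients. The wavelet, decomposition level, and membership function are assumptions, not the chapter's exact design.

```python
# Hedged sketch: a sigmoid membership over local activity stands in for the
# adaptive fuzzy weighting; coefficients are fused by weighted averaging.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuzzy_weight(band, win=3):
    activity = uniform_filter(np.abs(band), size=win)
    # Membership in (0, 1): higher local activity -> weight closer to 1.
    return 1.0 / (1.0 + np.exp(-(activity - activity.mean()) / (activity.std() + 1e-8)))

def fuse_fuzzy_weighted(img1, img2, wavelet="db1"):
    c1 = pywt.wavedec2(img1.astype(float), wavelet, level=2)
    c2 = pywt.wavedec2(img2.astype(float), wavelet, level=2)
    fused = []
    for lvl1, lvl2 in zip(c1, c2):
        bands1 = lvl1 if isinstance(lvl1, tuple) else (lvl1,)
        bands2 = lvl2 if isinstance(lvl2, tuple) else (lvl2,)
        out = []
        for b1, b2 in zip(bands1, bands2):
            w1, w2 = fuzzy_weight(b1), fuzzy_weight(b2)
            out.append((w1 * b1 + w2 * b2) / (w1 + w2))
        fused.append(tuple(out) if isinstance(lvl1, tuple) else out[0])
    return pywt.waverec2(fused, wavelet)
```

Evaluation against a reference image can then use standard MSE/PSNR implementations, for example skimage.metrics.peak_signal_noise_ratio.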


2011 ◽  
Vol 145 ◽  
pp. 119-123
Author(s):  
Ko Chin Chang

For a general image capture device, it is difficult to obtain an image with every object in focus. To solve the problem of fusing multiple images of the same viewpoint taken with different focal settings, a novel image fusion algorithm based on the local energy pattern (LGP) is proposed in this paper. First, each source image is decomposed separately using the discrete wavelet transform (DWT). Second, the LGP is calculated from each coefficient and its surrounding pixels and is then used to compute the new coefficient at each position from the transformed images with the proposed weighted fusion rules. The rules apply different operations to the low-band and high-band coefficients. Finally, the fused image is reconstructed from the new sub-band coefficients. Moreover, the reconstructed image represents the captured scene in more detail. Experimental results demonstrate that the scheme performs better than the traditional discrete cosine transform (DCT) and discrete wavelet transform (DWT) methods in both visual perception and quantitative analysis.
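A hedged sketch of this scheme follows, where the local energy pattern of a coefficient is taken as the sum of squared values in its 3x3 neighbourhood (an assumed definition): low-band coefficients are fused by LGP-weighted averaging, and high-band coefficients are selected where the LGP is largest.

```python
# Illustrative sketch of LGP-weighted fusion over any number of source images
# (the 3x3 energy window and the db1 wavelet are assumptions).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_energy(band, win=3):
    return uniform_filter(band ** 2, size=win)

def fuse_lgp(images, wavelet="db1"):
    coeffs = [pywt.dwt2(im.astype(float), wavelet) for im in images]
    approx = [cA for cA, _ in coeffs]
    energies = [local_energy(cA) for cA in approx]
    total = np.sum(energies, axis=0) + 1e-12
    # Low band: weighted average, weights proportional to local energy.
    cA = np.sum([e / total * a for e, a in zip(energies, approx)], axis=0)
    # High bands: pick, per position, the coefficient with the largest local energy.
    fused_details = []
    for b in range(3):  # cH, cV, cD
        bands = np.stack([c[1][b] for c in coeffs])
        idx = np.argmax(np.stack([local_energy(d) for d in bands]), axis=0)
        fused_details.append(np.take_along_axis(bands, idx[None], axis=0)[0])
    return pywt.idwt2((cA, tuple(fused_details)), wavelet)
```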


2006 ◽  
Vol 2 (4) ◽  
pp. 411-417 ◽  
Author(s):  
Bahram Javidi ◽  
Cuong Manh Do ◽  
Seung-Hyun Hong ◽  
Takanori Nomura
