A Novel Image Fusion Approach Based on Rough Set and Wavelet Analysis

2010, Vol. 439-440, pp. 1069-1074
Author(s): Zhi Yong Zhu

The goal of image fusion is to combine multiple images of the same object into a single high-quality image. This paper presents an image fusion scheme based on the wavelet transform and rough set theory. First, the two source images are decomposed with an orthogonal wavelet to obtain their wavelet coefficients. The two coefficient matrices are then compared, and at each position the coefficient with the maximum absolute value is retained; the fused image is obtained by the inverse wavelet transform. The last section of the paper verifies the method experimentally and reports good results.
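The abstract spells out only the wavelet side of the scheme (the rough-set component is not detailed), so the following minimal Python sketch, using the PyWavelets package, covers just that part: decompose both registered sources, keep the larger-magnitude coefficient at each position, and invert the transform. The wavelet name and decomposition depth are illustrative choices, not the paper's.

```python
import numpy as np
import pywt

def fuse_max_abs(img_a, img_b, wavelet="db4", levels=3):
    """Fuse two registered grayscale images by keeping, at each position,
    the wavelet coefficient with the larger absolute value."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    # Approximation band and every detail sub-band use the same max-abs rule.
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```

Applying the max-abs rule to the approximation band as well follows the abstract literally; in practice the low-frequency band is often averaged instead.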

2013, Vol. 427-429, pp. 1807-1812
Author(s): Ming Wei Sheng, Yong Jie Pang, Hai Huang, Tie Dong Zhang

The main purpose of underwater image fusion is to combine multiple images of the same object into a single high-quality image with abundant information. A new underwater image fusion scheme based on the biorthogonal wavelet transform is presented, suited to the underwater computer vision system of an autonomous underwater vehicle (AUV). First, a median filter is applied to improve the quality and contrast of the two blurred underwater source images. Second, the images, focused at different positions, are decomposed by the biorthogonal wavelet, and the resulting coefficients are used to reconstruct the fused image. Finally, the fused image is constructed according to separate low-frequency and high-frequency fusion rules. A series of underwater fusion experiments yielded an integrated image with a visible outline and distinguishable inner details.
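As a rough illustration of that pipeline (median filtering, biorthogonal decomposition, band-wise rules), here is a Python sketch using SciPy and PyWavelets. The abstract does not state the concrete fusion rules, so averaging the approximation band and taking the larger-magnitude detail coefficient are assumptions, as are the 'bior2.2' wavelet, the two decomposition levels, and the 3x3 filter size.

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter

def fuse_underwater(img_a, img_b, wavelet="bior2.2", levels=2):
    # Step 1: median filtering to clean up the blurred underwater sources.
    a = median_filter(img_a.astype(float), size=3)
    b = median_filter(img_b.astype(float), size=3)

    # Step 2: biorthogonal wavelet decomposition of both pre-filtered images.
    ca = pywt.wavedec2(a, wavelet, level=levels)
    cb = pywt.wavedec2(b, wavelet, level=levels)

    # Step 3: band-wise fusion rules (assumed here): average the low-frequency
    # approximation, keep the larger-magnitude coefficient in each detail band.
    fused = [0.5 * (ca[0] + cb[0])]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```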


2014, Vol. 14 (2), pp. 102-108
Author(s): Yong Yang, Shuying Huang, Junfeng Gao, Zhongsheng Qian

In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are employed separately to combine the low-frequency and high-frequency coefficients. In the low-frequency domain, the coefficients with the maximum sharpness focus measure are selected as the coefficients of the fused image, while a maximum neighboring-energy scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resulting fused image, a consistency verification procedure is applied to the combined coefficients. The performance of the proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indexes than several existing fusion methods, and is thus an effective multi-focus image fusion method.
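A compressed Python sketch of that flow is given below. Local variance over a 3x3 window stands in for the paper's sharpness focus measure, and a median filter on the binary decision map stands in for the consistency verification procedure; the window size, wavelet, and level count are likewise assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, median_filter

def _window_energy(x, win=3):
    return uniform_filter(x * x, size=win)

def fuse_multifocus(img_a, img_b, wavelet="db2", levels=2, win=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    # Low frequency: pick the coefficient whose local window is "sharper".
    # Local variance is a simple stand-in for the paper's sharpness focus measure.
    var_a = uniform_filter(ca[0] ** 2, win) - uniform_filter(ca[0], win) ** 2
    var_b = uniform_filter(cb[0] ** 2, win) - uniform_filter(cb[0], win) ** 2
    fused = [np.where(var_a >= var_b, ca[0], cb[0])]

    # High frequency: per sub-band, choose by maximum neighbouring energy, then
    # smooth the binary decision map (a crude consistency verification).
    for da, db in zip(ca[1:], cb[1:]):
        bands = []
        for a, b in zip(da, db):
            decision = _window_energy(a, win) >= _window_energy(b, win)
            decision = median_filter(decision.astype(np.uint8), size=win).astype(bool)
            bands.append(np.where(decision, a, b))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```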


2011, Vol. 1 (3)
Author(s): T. Sumathi, M. Hemalatha

Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the inputs. Fusion methods include the discrete wavelet transform, the Laplacian pyramid transform, and the curvelet transform. These transform-based methods achieve better spatial and spectral quality in the fused image than purely spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this property does not extend easily to two or more dimensions, because separable wavelets built from one-dimensional wavelets have limited directional selectivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against those listed above to show that it extracts useful information from the source images and produces fused images offering clear, detailed information.


Oncology, 2017, pp. 519-541
Author(s): Satishkumar S. Chavan, Sanjay N. Talbar

Multimodality medical image fusion is the process of combining images from different modalities into a single image that retains the important details of each. It aids physicians through better visualization, more accurate diagnosis, and more appropriate treatment planning for cancer patients. The fused image merges anatomical and physiological information, allowing accurate localization of cancerous tissue and more reliable estimation of the target volume for radiation therapy. Details from both modalities (CT and MRI) are extracted in the frequency domain by applying various transforms and are combined using a variety of fusion rules to achieve the best image quality. The performance and effectiveness of each transform are evaluated both subjectively and objectively. According to both analyses, the algorithms whose feature extraction uses the M-band wavelet transform and the Daubechies complex wavelet transform produce fused images superior to those of the other frequency-domain algorithms.


Author(s): Girraj Prasad Rathor, Sanjeev Kumar Gupta

Wavelet-based image fusion is the most commonly used fusion approach: the source image data are combined in the wavelet domain according to a set of fusion rules. Because the contributions of the source images to the fused image are uncertain, the central problem is designing a fusion rule that carries as much information as possible into the fused image. Adaptive fuzzy logic is well suited to resolving such uncertainty, yet it has rarely been used in the design of fusion rules. This chapter introduces a new fusion technique based on the wavelet transform and adaptive fuzzy logic. After the wavelet transform of the source images, the weight of each source image's coefficients is computed through adaptive fuzzy logic, and the coefficients are fused by weighted averaging with these weights to obtain the combined image. Mutual information, peak signal-to-noise ratio, and mean square error are used as evaluation criteria.
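The sketch below illustrates the weighted-averaging idea in Python with PyWavelets. A sigmoid membership over relative local coefficient energy is a deliberately simplified stand-in for the chapter's adaptive fuzzy logic, and all parameter values are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuzzy_weight(act_a, act_b, steepness=4.0):
    """Sigmoid membership over relative local activity -- a simplified
    stand-in for the chapter's adaptive fuzzy inference of weights."""
    diff = (act_a - act_b) / (act_a + act_b + 1e-9)
    return 1.0 / (1.0 + np.exp(-steepness * diff))

def fuse_fuzzy_wavelet(img_a, img_b, wavelet="db2", levels=2, win=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    def blend(a, b):
        # Weight each coefficient by the fuzzy membership of its local energy,
        # then fuse by weighted averaging.
        w = fuzzy_weight(uniform_filter(a * a, win), uniform_filter(b * b, win))
        return w * a + (1.0 - w) * b

    fused = [blend(ca[0], cb[0])]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(blend(a, b) for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```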


2012, Vol. 2012, pp. 1-10
Author(s): Yifeng Niu, Shengtao Xu, Lizhen Wu, Weidong Hu

Infrared and visible image fusion is an important precondition for target perception by unmanned aerial vehicles (UAVs), enabling a UAV to perform its assigned missions. Visible images are rich in texture and color information, while infrared images highlight target information. Conventional fusion methods are mostly based on general region segmentation, so a fused image well suited to target recognition is often not obtained. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which retains more target information while preserving more background information. Fusion experiments cover three conditions: a stationary target observable in both the visible and infrared images, moving targets observable in both images, and a target observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
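A minimal Python sketch of the region-aware idea follows. A percentile threshold on the infrared image is a hypothetical stand-in for the paper's target region segmentation, and the low-frequency/high-frequency rules are assumptions chosen to retain the target from the infrared image and background detail from both sensors.

```python
import numpy as np
import pywt

def fuse_ir_visible(ir, vis, wavelet="db2", levels=2, pct=97):
    # Hypothetical target segmentation: treat the brightest IR pixels as targets.
    mask = (ir > np.percentile(ir, pct)).astype(float)

    c_ir = pywt.wavedec2(ir.astype(float), wavelet, level=levels)
    c_vis = pywt.wavedec2(vis.astype(float), wavelet, level=levels)
    c_msk = pywt.wavedec2(mask, wavelet, level=levels)  # approximation tracks the mask

    # Low frequency: infrared coefficients inside the target region,
    # an average of both sensors elsewhere.
    m = np.clip(c_msk[0] / (np.abs(c_msk[0]).max() + 1e-9), 0.0, 1.0)
    fused = [m * c_ir[0] + (1.0 - m) * 0.5 * (c_ir[0] + c_vis[0])]

    # High frequency: maximum absolute value keeps edges from either sensor.
    for di, dv in zip(c_ir[1:], c_vis[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(di, dv)))
    return pywt.waverec2(fused, wavelet)
```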


2013, Vol. 722, pp. 478-481
Author(s): Wei Dong Zhu, Wei Shen, Xin Ru Tu

To obtain a new image with high spatial resolution and abundant spectral content by fusing panchromatic and multispectral images, a novel fusion algorithm based on the Bandelet transform is presented, in which the fusion rule for the Bandelet coefficients selects the coefficient with the maximum absolute value. Fusion experiments using the new method, IHS, and the wavelet transform are carried out on panchromatic and multispectral Landsat-7 images. The results show that the image fused by the new method is superior: its edges are more distinct, demonstrating that the Bandelet transform adaptively tracks image edges.


2016, Vol. 16 (04), pp. 1650022
Author(s): Deepak Gambhir, Meenu Manchanda

Medical image fusion is widely used by clinical professionals for improved diagnosis and treatment of diseases. The main aim of the image fusion process is to combine the complete information from all input images into a single fused image. To this end, a novel fusion rule based on the Daubechies complex wavelet transform (DCxWT) is proposed for fusing medical images. The input images are first decomposed using the DCxWT. The resulting complex coefficients are then fused using a normalized-correlation-based fusion rule. Finally, the fused image is obtained by applying the inverse DCxWT to the combined complex coefficients. The performance of the proposed method has been evaluated and compared, both visually and objectively, with DCxWT-based fusion methods using state-of-the-art fusion rules as well as with existing fusion techniques. Experimental results and the comparative study demonstrate that the proposed fusion technique generates better results than the existing fusion rules and other fusion techniques.
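Because the Daubechies complex wavelet transform has no widely available Python implementation, the sketch below substitutes an ordinary real-valued Daubechies DWT (PyWavelets) and a simplified correlation-driven rule in place of the paper's normalized-correlation fusion rule; both substitutions and all parameter values are assumptions made for illustration only.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def _norm_corr(a, b, win=3, eps=1e-9):
    # Windowed normalized correlation between two coefficient maps.
    num = uniform_filter(a * b, win)
    den = np.sqrt(uniform_filter(a * a, win) * uniform_filter(b * b, win)) + eps
    return np.clip(num / den, -1.0, 1.0)

def fuse_corr_rule(img_a, img_b, wavelet="db4", levels=3, tau=0.6, win=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    def combine(a, b):
        # Where the sub-bands agree (high correlation) average them;
        # where they disagree, keep the stronger coefficient.
        r = _norm_corr(a, b, win)
        return np.where(r >= tau, 0.5 * (a + b),
                        np.where(np.abs(a) >= np.abs(b), a, b))

    fused = [combine(ca[0], cb[0])]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(combine(a, b) for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```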


2011, Vol. 204-210, pp. 1419-1422
Author(s): Yong Yang

Image fusion combines several different source images into a new image by means of a defined method. Recent studies show that, among the variety of image fusion algorithms, wavelet-based methods are particularly effective. In a wavelet-based method, the key element is the fusion scheme, which determines the final fused result. This paper presents a novel fusion scheme that handles the decomposed wavelet coefficients in the low-frequency and high-frequency bands separately, based on the different physical meanings of the coefficients in those bands. The fused results were compared with several existing fusion methods and evaluated by three performance measures. The experimental results demonstrate that the proposed method achieves better performance than conventional image fusion methods.

