Multifocus Image Fusion in Q-Shift DTCWT Domain Using Various Fusion Rules

2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Yingzhong Tian ◽  
Jie Luo ◽  
Wenjun Zhang ◽  
Tinggang Jia ◽  
Aiguo Wang ◽  
...  

Multifocus image fusion is a process that integrates a partially focused image sequence into a single fused image that is in focus everywhere; many methods have been proposed over the past decades. The Dual Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT): shift variance and poor directional selectivity. The Q-shift DTCWT was proposed afterwards to simplify the construction of the DTCWT filters, producing better fusion results. This work presents a different image fusion strategy based on the Q-shift DTCWT. First, each image is decomposed into low- and high-frequency coefficients, which are fused using different rules; various fusion rules, such as Neighborhood Variant Maximum Selectivity (NVMS) and the Sum-Modified-Laplacian (SML), are then combined within the Q-shift DTCWT. Finally, the fused coefficients extracted from the source images are reconstructed to produce one fully focused image. The strategy is compared visually and quantitatively with several existing fusion methods over a large set of experiments and yields good results on both standard and microscopic images. We therefore conclude that the NVMS rule performs best after the Q-shift DTCWT.
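The abstract names the Sum-Modified-Laplacian as one of its high-frequency fusion rules but does not give its parameters here, so the following is only a minimal numpy sketch (3x3 accumulation window, step 1; function names are my own):

```python
import numpy as np

def sum_modified_laplacian(img):
    """Sum-Modified-Laplacian (SML) focus measure, one value per pixel.

    The modified Laplacian takes absolute values of the horizontal and
    vertical second differences so they cannot cancel each other; SML
    then accumulates the result over a small (here 3x3) neighbourhood.
    """
    x = img.astype(np.float64)
    p = np.pad(x, 1, mode="edge")
    c = p[1:-1, 1:-1]                                    # center pixels
    ml = (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +   # horizontal term
          np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))    # vertical term
    q = np.pad(ml, 1, mode="edge")
    H, W = ml.shape
    # 3x3 box sum of the modified Laplacian
    return sum(q[i:i + H, j:j + W] for i in range(3) for j in range(3))

def sml_fuse(a, b):
    """Per-coefficient selection: keep the value whose source is sharper."""
    return np.where(sum_modified_laplacian(a) >= sum_modified_laplacian(b), a, b)
```

A corresponding NVMS-style rule would replace the modified Laplacian with the local neighborhood variance and likewise select the coefficient with the larger measure.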

2014 ◽  
Vol 14 (2) ◽  
pp. 102-108 ◽  
Author(s):  
Yong Yang ◽  
Shuying Huang ◽  
Junfeng Gao ◽  
Zhongsheng Qian

Abstract In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are employed separately to combine the low-frequency and high-frequency coefficients. The low-frequency coefficients with the maximum sharpness focus measure are selected as coefficients of the fused image, and a maximum-neighboring-energy scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resulting fused image, a consistency verification procedure is applied to the combined coefficients. The proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that it achieves better visual quality and objective evaluation indexes than several existing fusion methods, making it an effective multi-focus image fusion method.
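The consistency verification step mentioned above is commonly realized as a majority filter on the binary decision map (1 = take the coefficient from source A): an isolated pixel whose neighbours mostly come from the other source is flipped. The paper's exact window and voting rule are not stated, so this is a sketch under that assumption:

```python
import numpy as np

def consistency_verification(decision, iters=1):
    """Majority filter on a binary decision map.

    If at least 5 of a pixel's 8 neighbours (3x3 window, edge padding)
    chose source A, the pixel is set to A; if at most 3 did, to B;
    otherwise it is left unchanged.
    """
    d = decision.astype(np.int32)
    H, W = d.shape
    for _ in range(iters):
        p = np.pad(d, 1, mode="edge")
        # count of the 8 neighbours that voted for source A
        votes = sum(p[i:i + H, j:j + W]
                    for i in range(3) for j in range(3)) - d
        d = np.where(votes >= 5, 1, np.where(votes <= 3, 0, d))
    return d.astype(bool)
```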


2016 ◽  
Vol 16 (04) ◽  
pp. 1650022 ◽  
Author(s):  
Deepak Gambhir ◽  
Meenu Manchanda

Medical image fusion is widely used by clinical professionals for improved diagnosis and treatment of diseases. The main aim of the image fusion process is to combine the complete information from all input images into a single fused image. To this end, a novel fusion rule is proposed for fusing medical images based on the Daubechies complex wavelet transform (DCxWT). The input images are first decomposed using the DCxWT. The complex coefficients so obtained are then fused using a normalized-correlation-based fusion rule. Finally, the fused image is obtained by the inverse DCxWT of all combined complex coefficients. The performance of the proposed method has been evaluated and compared, both visually and objectively, with DCxWT-based fusion methods using state-of-the-art fusion rules as well as with existing fusion techniques. Experimental results and a comparative study demonstrate that the proposed fusion rule generates better results than both the existing fusion rules and the other fusion techniques.
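The abstract does not spell the normalized-correlation rule out; a common match-measure variant averages corresponding coefficient windows when the sources agree and keeps the higher-energy window when they do not. The threshold below is illustrative, not the paper's value:

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized correlation between two coefficient windows."""
    num = np.sum(a * b)
    den = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return num / den if den > 0 else 1.0

def ncc_fuse_window(a, b, threshold=0.75):
    """If the windows agree (high correlation), average them;
    otherwise keep the window with the higher energy."""
    if normalized_correlation(a, b) >= threshold:
        return 0.5 * (a + b)
    return a if np.sum(a * a) >= np.sum(b * b) else b
```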


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Yong Yang ◽  
Song Tong ◽  
Shuying Huang ◽  
Pan Lin

Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are first transformed by the NSCT, and the low- and high-frequency components are then fused. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently identify coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed method has been compared with fusion methods based on the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT), as well as with other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated on a clinical example: images of a woman with a recurrent tumor.
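The Log-Gabor energy used for the high-frequency bands can be illustrated with a single radial (orientation-free) log-Gabor filter applied in the frequency domain; the center frequency and bandwidth ratio below are illustrative, not the paper's values:

```python
import numpy as np

def log_gabor_energy(img, f0=0.1, sigma_ratio=0.55):
    """Energy of the response to one radial log-Gabor filter.

    The filter is Gaussian on a log-frequency axis, centred at f0
    (cycles/sample); its DC response is exactly zero, so flat regions
    yield zero energy while textured/detailed regions yield positive
    energy.
    """
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)
    r[0, 0] = 1.0                       # avoid log(0); DC is zeroed below
    G = np.exp(-(np.log(r / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                       # log-Gabor has no DC component
    resp = np.fft.ifft2(np.fft.fft2(img.astype(np.float64)) * G)
    return np.abs(resp) ** 2
```

A high-frequency fusion rule would then select, per coefficient, the source with the larger Log-Gabor energy.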


The principal goal of image fusion is to merge information from different images, such as a CT (Computed Tomography) scan and an MRI (Magnetic Resonance Imaging) scan, to obtain a more informative image. In this paper, several transform-based fusion methods are implemented and compared in terms of image information: the discrete wavelet transform (DWT), two variants of the discrete cosine transform (DCT), namely DCT variance and DCT variance with consistency verification (DCT variance with CV), and the stationary wavelet transform (SWT). The fused results obtained from these techniques are evaluated with several distinct evaluation metrics. The result obtained from DCT variance with CV, followed by DCT variance, outperforms the DWT- and SWT-based fusion methods: the DCT features yield more information in the output fused image, trailed by the results of the DWT- and SWT-based methods. DCT-based image fusion methods are also more accurate and performance-oriented in real-time applications, owing to the computational efficiency of DCT-based processing of still images. In this work, a systematic procedure for the fusion of multi-focus images based on the DCT and its variants is presented, and it is demonstrated that the DCT-based fused results surpass the other fusion methodologies.
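The DCT-variance rule rests on Parseval's relation for the orthonormal DCT: the variance of an 8x8 block equals the sum of its squared AC coefficients divided by the block size, so the sharper block can be chosen entirely in the DCT domain. A sketch (function names are my own):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def block_dct_variance(block):
    """Block variance computed from the DCT AC coefficients.

    For an orthonormal 2-D DCT, sum(d**2) equals sum(block**2)
    (Parseval) and d[0, 0]**2 / N equals the squared mean term,
    so (sum(d**2) - d[0, 0]**2) / N is exactly the spatial variance.
    """
    C = dct_matrix(block.shape[0])
    d = C @ block @ C.T
    return (np.sum(d * d) - d[0, 0] ** 2) / block.size

def dct_variance_fuse(blk_a, blk_b):
    """Pick the source block with the higher DCT-domain variance."""
    return blk_a if block_dct_variance(blk_a) >= block_dct_variance(blk_b) else blk_b
```

The consistency-verification variant (DCT variance with CV) would then majority-filter the resulting block-wise decision map before assembling the fused image.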


Author(s):  
Radha N. ◽  
T.Ranga Babu

<p>In this paper, multifocus image fusion using the quarter-shift dual tree complex wavelet transform is proposed. Multifocus image fusion is a technique that combines the partially focused regions of multiple images of the same scene into a fully focused fused image. Directional selectivity and shift invariance are essential properties for producing a high-quality fused image. However, conventional wavelet-based fusion algorithms introduce ringing artifacts into the fused image owing to their lack of shift invariance and poor directionality. The quarter-shift dual tree complex wavelet transform has proven to be an effective multi-resolution transform for image fusion thanks to its directional and shift-invariant properties. Experimentation with this transform led to the conclusion that the proposed method not only produces sharp details (focused regions) in the fused image, due to its good directionality, but also removes artifacts through its shift invariance, yielding a high-quality fused image. The performance of the proposed method is compared with traditional fusion methods in terms of objective measures. </p>


2019 ◽  
Vol 8 (4) ◽  
pp. 3765-3769

The objective of multifocus image fusion in visual sensor networks is to combine multi-focused images of the same scene into a single focused image with improved reliability and interpretability. However, the existing discrete-wavelet-based fusion algorithms introduce artifacts into the fused image because of their shift variance, whereas shift invariance is essential for reconstructing the fused image without loss. The Stationary Wavelet Transform is one of the most precise transforms, eliminating the shift variance of the discrete wavelet transform. Focus measures are also essential for selecting the focused objects in multi-focused images, so as to obtain a fused image with every object in focus. This paper therefore combines the advantages of the stationary wavelet transform and focus measures for fusion. The proposed fusion method not only produces a focused fused image without artifacts but also performs well compared to other fusion methods.
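A one-level undecimated (stationary) Haar decomposition keeps every band at full image size, which is what removes the shift variance; with circular boundary handling, reconstruction is simply the sum of the four bands. A minimal sketch, averaging the approximation band and taking the larger-magnitude detail coefficients (the paper's focus measure may differ):

```python
import numpy as np

def swt_haar_level1(img):
    """One undecimated Haar level: returns (LL, LH, HL, HH), all full size."""
    x = img.astype(np.float64)
    # averaging / differencing with the next sample, no downsampling
    def lo(a, axis): return 0.5 * (a + np.roll(a, -1, axis))
    def hi(a, axis): return 0.5 * (a - np.roll(a, -1, axis))
    L, H = lo(x, 0), hi(x, 0)
    return lo(L, 1), hi(L, 1), lo(H, 1), hi(H, 1)

def swt_fuse(a, b):
    """Average the approximation band, keep max-|.| detail coefficients.

    For this transform, lo + hi reproduces the input along each axis,
    so the inverse is simply LL + LH + HL + HH.
    """
    A, B = swt_haar_level1(a), swt_haar_level1(b)
    bands = [0.5 * (A[0] + B[0])]                      # fused approximation
    for ca, cb in zip(A[1:], B[1:]):                   # fused details
        bands.append(np.where(np.abs(ca) >= np.abs(cb), ca, cb))
    return sum(bands)
```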


2021 ◽  
Author(s):  
Gebeyehu Belay Gebremeskel

Abstract This paper addresses the challenges of image fusion processing and the lack of reliable image information, and proposes multi-focus image fusion using discrete wavelet transforms together with computer vision techniques for selecting the fused-image coefficients. An in-depth analysis and improvement of existing algorithms is carried out, covering the wavelet transform and the rules for extracting object features in multi-focus image fusion. The wavelet transform offers good localization properties, while computer vision provides efficient processing time and a powerful means of analyzing object focus in the high-frequency bands. Image fusion with the wavelet transform iterates over the choice of wavelet basis function and decomposition level to enhance the information in the fused image. The fusion rules apply the wavelet transform to the features of the high-frequency coefficients, which improves the reliability of the fused image features in the frequency domain and the regional contrast of the objects.


2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Abstract Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, the Laplacian-pyramid-based transform, and curvelet-based transforms; these achieve better spatial and spectral quality in the fused image than purely spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic does not extend easily to two or more dimensions, since separable wavelets built by spanning one-dimensional wavelets have limited directivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against those described above to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.


2021 ◽  
Vol 12 (4) ◽  
pp. 78-97
Author(s):  
Hassiba Talbi ◽  
Mohamed-Khireddine Kholladi

In this paper, the authors propose a hybrid of particle swarm optimization with a differential evolution (DE) operator, termed DEPSO, combined with a multi-resolution transform, the dual tree complex wavelet transform (DTCWT), to solve the problem of multimodal medical image fusion. This hybridization combines the algorithms judiciously so that the result retains the positive features of both. The algorithm decomposes the source images into high-frequency and low-frequency coefficients with the DTCWT, then adopts the absolute-maximum method to fuse the high-frequency coefficients; the low-frequency coefficients are fused by a weighted-average method, with the weights estimated and refined by the optimization method to obtain optimal results. The authors demonstrate experimentally that this algorithm, besides its simplicity, provides a robust and efficient way to fuse multimodal medical images compared to existing wavelet-transform-based image fusion algorithms.
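The low-frequency weight estimation can be sketched with a plain differential-evolution loop (DE/rand/1/bin) over a scalar weight; this stands in for the authors' DEPSO hybrid, and the quality function is left as a parameter since the paper's objective is not given here:

```python
import numpy as np

def de_optimize_weight(A, B, quality, pop=10, gens=30, F=0.6, CR=0.9, seed=0):
    """Search the averaging weight w in [0, 1] maximizing
    quality(w * A + (1 - w) * B) with a minimal DE/rand/1/bin loop."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 1.0, pop)
    fit = np.array([quality(wi * A + (1 - wi) * B) for wi in w])
    for _ in range(gens):
        for i in range(pop):
            # three distinct donors, none equal to i
            r1, r2, r3 = rng.choice([k for k in range(pop) if k != i],
                                    3, replace=False)
            # mutation + binomial crossover on the (scalar) parameter
            trial = w[r1] + F * (w[r2] - w[r3]) if rng.random() < CR else w[i]
            trial = min(max(trial, 0.0), 1.0)       # keep inside [0, 1]
            f = quality(trial * A + (1 - trial) * B)
            if f > fit[i]:                          # greedy selection
                w[i], fit[i] = trial, f
    return w[np.argmax(fit)]
```

With quality set to, e.g., the variance of the fused band, the loop drives the weight toward the more informative source.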


Oncology ◽  
2017 ◽  
pp. 519-541
Author(s):  
Satishkumar S. Chavan ◽  
Sanjay N. Talbar

The process of enriching the important details from various modality medical images by combining them into a single image is called multimodality medical image fusion. It aids physicians with better visualization, more accurate diagnosis, and an appropriate treatment plan for the cancer patient. The fused image is the result of merging anatomical and physiological variations: it allows accurate localization of cancer tissues and is more helpful for estimating the target volume for radiation. The details from both modalities (CT and MRI) are extracted in the frequency domain by applying various transforms and are combined using a variety of fusion rules to achieve the best image quality. The performance and effectiveness of each transform on the fusion results are evaluated both subjectively and objectively. The algorithms in which feature extraction is performed with the M-Band Wavelet Transform and the Daubechies Complex Wavelet Transform produce fused images superior to those of the other frequency-domain algorithms in both subjective and objective analysis.

