Image Focus Enhancement Using Focusing Filter and DT-CWT Based Image Fusion

2021 ◽  
pp. 3228-3236
Author(s):  
Nada Jasim Habeeb

Combining multiple images of the same scene captured at different focus distances can produce a clearer, sharper image with a larger depth of field. Most available image fusion algorithms produce good results; however, they do not take the focus of the image into account. In this paper, a fusion method is proposed to increase the focus of the fused image and achieve the highest image quality using the suggested focusing filter and the Dual Tree-Complex Wavelet Transform. The focusing filter consists of a combination of two filters: a Wiener filter and a sharpening filter. This filter is applied before the fusion operation performed with the Dual Tree-Complex Wavelet Transform. The common fusion rules, the average-fusion rule and the maximum-fusion rule, were used to obtain the fused image. In the experiment, the performance of the proposed fusion algorithm was compared with that of existing algorithms using focus operators. The results showed that the proposed method is better than these fusion methods in terms of focus and quality.
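A minimal sketch of this pipeline is given below, assuming the scipy and dtcwt Python packages: the focusing filter is modelled as a Wiener filter followed by a 3x3 sharpening kernel, and the DT-CWT bands are merged with the average rule (lowpass) and the maximum-magnitude rule (highpass). The kernel, decomposition depth and filter settings are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import convolve
import dtcwt

# Simple sharpening kernel (assumed; the paper's exact sharpening filter is not specified here).
SHARPEN = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], dtype=float)

def focusing_filter(img):
    """Wiener denoising followed by a sharpening convolution."""
    img = np.asarray(img, dtype=float)
    smoothed = wiener(img, mysize=5)
    return convolve(smoothed, SHARPEN, mode='nearest')

def fuse_dtcwt(img_a, img_b, nlevels=4):
    t = dtcwt.Transform2d()
    pa = t.forward(focusing_filter(img_a), nlevels=nlevels)
    pb = t.forward(focusing_filter(img_b), nlevels=nlevels)
    # Average rule for the lowpass (approximation) band.
    lowpass = 0.5 * (pa.lowpass + pb.lowpass)
    # Maximum-magnitude rule for each complex highpass band.
    highpasses = tuple(
        np.where(np.abs(ha) >= np.abs(hb), ha, hb)
        for ha, hb in zip(pa.highpasses, pb.highpasses)
    )
    return t.inverse(dtcwt.Pyramid(lowpass, highpasses))
```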

2016 ◽  
Vol 16 (04) ◽  
pp. 1650022 ◽  
Author(s):  
Deepak Gambhir ◽  
Meenu Manchanda

Medical image fusion is widely used by clinical professionals for improved diagnosis and treatment of diseases. The main aim of the image fusion process is to combine the complete information from all input images into a single fused image. Therefore, a novel fusion rule is proposed for fusing medical images based on the Daubechies complex wavelet transform (DCxWT). Input images are first decomposed using the DCxWT. The complex coefficients so obtained are then fused using a normalized-correlation-based fusion rule. Finally, the fused image is obtained by the inverse DCxWT of all combined complex coefficients. The performance of the proposed method has been evaluated and compared, both visually and objectively, with DCxWT-based fusion methods using state-of-the-art fusion rules as well as with existing fusion techniques. Experimental results and a comparative study demonstrate that the proposed fusion technique generates better results than existing fusion rules as well as other fusion techniques.
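The DCxWT itself is not available in common Python libraries, so the sketch below assumes the complex sub-band coefficients of the two inputs are already given. It illustrates one plausible normalized-correlation rule: where the local windows of the two sub-bands are strongly correlated, the coefficients are averaged; otherwise the coefficient with higher local energy is selected. The window size and threshold are assumptions, not the paper's values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_by_normalized_correlation(ca, cb, win=7, threshold=0.6):
    """Fuse two complex sub-bands with a correlation/energy rule (assumed variant)."""
    ea = uniform_filter(np.abs(ca) ** 2, win)           # local energy of sub-band A
    eb = uniform_filter(np.abs(cb) ** 2, win)           # local energy of sub-band B
    cross = uniform_filter((ca * np.conj(cb)).real, win)
    corr = 2.0 * cross / (ea + eb + 1e-12)              # normalized correlation in [-1, 1]
    averaged = 0.5 * (ca + cb)                          # used where the bands agree
    selected = np.where(ea >= eb, ca, cb)               # used where they differ
    return np.where(corr >= threshold, averaged, selected)
```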


2017 ◽  
Vol 10 (03) ◽  
pp. 1750001 ◽  
Author(s):  
Abdallah Bengueddoudj ◽  
Zoubeida Messali ◽  
Volodymyr Mosorov

In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach by considering a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results demonstrate the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. The robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.
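As a rough illustration of the PCA-based rule for the approximation coefficients, the sketch below weights the two approximation bands by the leading eigenvector of their covariance matrix; this is the standard PCA fusion rule, and the Bayesian MAP fusion of the detail coefficients is not reproduced here.

```python
import numpy as np

def pca_fuse_approximation(a, b):
    """Weight two approximation bands by the principal eigenvector of their covariance."""
    data = np.vstack([a.ravel(), b.ravel()])
    cov = np.cov(data)                              # 2x2 covariance of the two bands
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = principal / principal.sum()                 # normalized fusion weights
    return w[0] * a + w[1] * b
```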


Oncology ◽  
2017 ◽  
pp. 519-541
Author(s):  
Satishkumar S. Chavan ◽  
Sanjay N. Talbar

The process of enriching important details from various modality medical images by combining them into a single image is called multimodality medical image fusion. It aids physicians through better visualization, more accurate diagnosis and an appropriate treatment plan for the cancer patient. The combined fused image is the result of merging anatomical and physiological variations. It allows accurate localization of cancer tissues and is helpful for estimating the target volume for radiation. The details from both modalities (CT and MRI) are extracted in the frequency domain by applying various transforms and are combined using a variety of fusion rules to achieve the best image quality. The performance and effectiveness of each transform on the fusion results are evaluated subjectively as well as objectively. According to both subjective and objective analysis, the fused images produced by the algorithms in which feature extraction is achieved by the M-Band Wavelet Transform and the Daubechies Complex Wavelet Transform are superior to those of the other frequency-domain algorithms.
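A generic sketch of such a frequency-domain CT/MRI fusion is shown below, using an ordinary discrete wavelet transform from PyWavelets as a stand-in for the M-Band and Daubechies complex wavelet transforms discussed in the chapter; the average/max-absolute fusion rules and decomposition depth are assumptions.

```python
import numpy as np
import pywt

def fuse_ct_mri(ct, mri, wavelet='db4', level=3):
    """Fuse two registered, same-size CT and MRI slices in the wavelet domain."""
    ca = pywt.wavedec2(ct, wavelet, level=level)
    cb = pywt.wavedec2(mri, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                    # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):                 # per-level (cH, cV, cD) tuples
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))   # keep the stronger detail
    return pywt.waverec2(fused, wavelet)
```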


2017 ◽  
Vol 11 (2) ◽  
pp. 163-169 ◽  
Author(s):  
Yan Sun ◽  
Ling Jiang

This paper puts forward a new color multi-focus image fusion algorithm based on fuzzy theory and the dual-tree complex wavelet transform, with the purpose of removing uncertainty when choosing sub-band coefficients in smooth regions. The luminance component is the weighted average of the three color channels in the IHS color space and is not sensitive to noise. Because of these characteristics, the luminance component was chosen as the measure for calculating the degree of focus. After separating the luminance and spectral components, Fisher classification and fuzzy theory were chosen as the fusion rules governing the choice of coefficients after the dual-tree complex wavelet transform, so that the fused color image keeps as much of the natural color information as possible. This method can solve the problem of color distortion found in traditional algorithms. According to the simulation results, the proposed algorithm obtained better visual effects and objective quantitative indicators.
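A small sketch of the focus measurement described above follows: the luminance (intensity) component is taken as the mean of the three color channels, as in the IHS model, and the degree of focus is estimated from its local variance. The window size is an assumption, and the Fisher/fuzzy coefficient selection after the DT-CWT is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminance(rgb):
    """Intensity component of the IHS model: mean of the R, G and B channels."""
    return np.asarray(rgb, dtype=float).mean(axis=2)

def focus_degree(rgb, win=9):
    """Local variance of the luminance as a simple focus measure."""
    lum = luminance(rgb)
    mean = uniform_filter(lum, win)
    return uniform_filter(lum ** 2, win) - mean ** 2
```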


In today's research era, image fusion is a practical step-by-step procedure for improving the visualization of an image. It integrates the essential features of two or more images into a single fused image without introducing artifacts. Multi-focus image fusion is a vital part of the fusion process; it aims to increase the depth of field by extracting the focused parts from multiple differently focused images. In this paper, a multi-focus image fusion algorithm is proposed in which a non-local means technique is used within the stationary wavelet transform (SWT) to obtain a sharp and smooth image. The non-local means function analyses the pixels belonging to the blurred regions and improves the image quality. The proposed work is compared with some existing methods. The results are analyzed visually as well as using performance metrics.
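A minimal sketch of this combination is shown below, assuming PyWavelets and scikit-image: each source is smoothed with non-local means, decomposed with the SWT, and the sub-bands are merged with average (approximation) and max-absolute (detail) rules. Parameter values and the exact placement of the non-local means step are assumptions.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_nl_means

def fuse_swt_nlm(img_a, img_b, wavelet='haar', level=2):
    """Fuse two same-size float images; sides must be divisible by 2**level for the SWT."""
    a = denoise_nl_means(img_a, patch_size=5, patch_distance=6, h=0.05)
    b = denoise_nl_means(img_b, patch_size=5, patch_distance=6, h=0.05)
    ca = pywt.swt2(a, wavelet, level=level)
    cb = pywt.swt2(b, wavelet, level=level)
    fused = []
    for (aa, da), (ab, db) in zip(ca, cb):
        approx = 0.5 * (aa + ab)                        # average rule for approximations
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in zip(da, db))        # max-absolute rule for details
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)
```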


Author(s):  
Radha N. ◽  
T.Ranga Babu

In this paper, multi-focus image fusion using the quarter-shift dual-tree complex wavelet transform is proposed. Multi-focus image fusion is a technique that combines the partially focused regions of multiple images of the same scene into a fully focused fused image. Directional selectivity and shift invariance are essential properties for producing a high-quality fused image. However, conventional wavelet-based fusion algorithms introduce ringing artifacts into the fused image due to their lack of shift invariance and poor directionality. The quarter-shift dual-tree complex wavelet transform has proven to be an effective multi-resolution transform for image fusion owing to its directional and shift-invariant properties. Experimentation with this transform led to the conclusion that the proposed method not only produces sharp details (focused regions) in the fused image due to its good directionality but also removes artifacts through its shift invariance, yielding a high-quality fused image. The performance of the proposed method is compared with that of traditional fusion methods in terms of objective measures.
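The comparison is reported in terms of objective measures; the sketch below shows two reference-free measures commonly used for multi-focus fusion, entropy and spatial frequency, as an assumed example of such measures rather than the paper's exact metric set.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """Overall activity level from row and column differences."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```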


2019 ◽  
Vol 28 (4) ◽  
pp. 505-516
Author(s):  
Wei-bin Chen ◽  
Mingxiao Hu ◽  
Lai Zhou ◽  
Hongbin Gu ◽  
Xin Zhang

Multi-focus image fusion means fusing a set of images of the same scene, taken under the same imaging conditions but with different focus points, into a single completely clear image. In order to obtain a clear image that contains all relevant objects in an area, a multi-focus image fusion algorithm based on the wavelet transform is proposed. Firstly, the multi-focus images are decomposed by the wavelet transform. Secondly, the wavelet coefficients of the approximation and detail sub-images are fused based on the fusion rule. Finally, the fused image is obtained by the inverse wavelet transform. For the low-frequency and high-frequency coefficients, we present a fusion rule based on weighted ratios and the weighted gradient with an improved edge detection operator. The experimental results illustrate that the proposed algorithm is effective at retaining image details.
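A sketch of a wavelet fusion with a gradient-driven detail rule is given below: at each position the detail coefficient whose source image has the larger local gradient magnitude is kept (a Sobel operator stands in for the improved edge detection operator), and the approximation coefficients are averaged. The wavelet choice and weights are assumptions.

```python
import numpy as np
import pywt
from scipy import ndimage

def fuse_wavelet_gradient(img_a, img_b, wavelet='db2'):
    """Single-level wavelet fusion of two same-size images with a gradient-based detail rule."""
    ca_a, details_a = pywt.dwt2(img_a, wavelet)
    ca_b, details_b = pywt.dwt2(img_b, wavelet)

    def grad_map(img, shape):
        # Gradient magnitude of a source image, resized to sub-band resolution.
        g = np.hypot(ndimage.sobel(np.asarray(img, dtype=float), 0),
                     ndimage.sobel(np.asarray(img, dtype=float), 1))
        return ndimage.zoom(g, (shape[0] / g.shape[0], shape[1] / g.shape[1]))

    ga = grad_map(img_a, ca_a.shape)
    gb = grad_map(img_b, ca_b.shape)

    approx = 0.5 * (ca_a + ca_b)                       # weighted-average rule (equal weights assumed)
    details = tuple(np.where(ga >= gb, da, db)
                    for da, db in zip(details_a, details_b))
    return pywt.idwt2((approx, details), wavelet)
```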

