Oil exploration oriented multi-sensor image fusion algorithm

Open Physics ◽  
2017 ◽  
Vol 15 (1) ◽  
pp. 188-196
Author(s):  
Zhang Xiaobing ◽  
Zhou Wei ◽  
Song Mengfei

Abstract: In order to accurately forecast fractures and their dominant direction in oil exploration, this paper proposes a novel multi-sensor image fusion algorithm. The main innovations are the introduction of the dual-tree complex wavelet transform (DTCWT) into data fusion and the division of an image into several regions before fusion. The DTCWT is a wavelet transform that performs signal decomposition and reconstruction using two parallel real wavelet transforms. We use the DTCWT to segment the features of the input images and generate a region map, then use the normalized Shannon entropy of each region to design the priority function. To test the effectiveness of the proposed multi-sensor image fusion algorithm, four standard pairs of images are used to construct the dataset. Experimental results demonstrate that the proposed algorithm achieves high accuracy in multi-sensor image fusion, especially for oil-exploration images.
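The abstract does not spell out the priority function; a minimal numpy sketch of one plausible reading — score each region by its normalized Shannon entropy and keep the region with the higher score — might look like this (the 16-bin histogram and the selection rule are assumptions, not the paper's exact construction):

```python
import numpy as np

def normalized_entropy(region, bins=16):
    """Shannon entropy of a region's intensity histogram, normalized to [0, 1]."""
    hist, _ = np.histogram(region, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

def fuse_regions(region_a, region_b):
    """Priority rule: pick the source region with the higher normalized entropy."""
    if normalized_entropy(region_a) >= normalized_entropy(region_b):
        return region_a
    return region_b

# A flat region carries less information than a textured one, so the
# textured source wins the priority comparison.
flat = np.full((8, 8), 0.5)
textured = np.random.default_rng(0).random((8, 8))
fused = fuse_regions(flat, textured)
```

In the full method this comparison would run per region of the DTCWT-derived region map rather than on whole images.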

2017 ◽  
Vol 11 (2) ◽  
pp. 163-169 ◽  
Author(s):  
Yan Sun ◽  
Ling Jiang

This paper proposes a new color multi-focus image fusion algorithm based on fuzzy theory and the dual-tree complex wavelet transform, aimed at removing the uncertainty in choosing sub-band coefficients in smooth regions. In the IHS color space, the luminance component is the weighted average of the three color channels and is not sensitive to noise; for these reasons it is chosen as the measurement for calculating the focus degree. After separating the luminance and spectrum components, Fisher classification and fuzzy theory are used as the fusion rules to select coefficients after the dual-tree complex wavelet transform, so that the fused color image preserves the natural color information as much as possible. This method addresses the color distortion seen in traditional algorithms. Simulation results show that the proposed algorithm obtains better visual effects and objective quantitative indicators.
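As a hedged illustration of the focus measurement described above — luminance taken as the average of the three color channels, scored by a focus operator — the following numpy sketch uses Laplacian energy as a stand-in focus degree (the paper's exact operator is not specified in the abstract):

```python
import numpy as np

def luminance(rgb):
    """IHS-style intensity: the equal-weight average of the R, G, B channels."""
    return rgb.mean(axis=-1)

def focus_measure(channel):
    """Stand-in focus degree: mean energy of a discrete Laplacian response."""
    lap = (-4 * channel
           + np.roll(channel, 1, 0) + np.roll(channel, -1, 0)
           + np.roll(channel, 1, 1) + np.roll(channel, -1, 1))
    return float((lap ** 2).mean())

rng = np.random.default_rng(1)
sharp = rng.random((16, 16, 3))

# Crude defocus simulation: repeated neighbor averaging.
blurred = sharp.copy()
for _ in range(4):
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5

sharp_score = focus_measure(luminance(sharp))
blurred_score = focus_measure(luminance(blurred))
```

The sharper source scores higher, which is what lets the fusion rule prefer its sub-band coefficients.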


2017 ◽  
Vol 10 (03) ◽  
pp. 1750001 ◽  
Author(s):  
Abdallah Bengueddoudj ◽  
Zoubeida Messali ◽  
Volodymyr Mosorov

In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach, using a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results show the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. The robustness of the proposed method is further tested against different types of noise. The plots of the fusion metrics establish the accuracy of the proposed fusion method.
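PCA-based fusion of approximation coefficients is a standard construction; a sketch, assuming the usual form in which the weights come from the principal eigenvector of the 2x2 covariance of the two coefficient sets (the paper's exact rule may differ):

```python
import numpy as np

def pca_weights(a, b):
    """Fusion weights from the principal eigenvector of the 2x2 covariance
    matrix of the two approximation-coefficient sets."""
    c = np.cov(np.vstack([a.ravel(), b.ravel()]))
    _, vecs = np.linalg.eigh(c)      # eigh returns ascending eigenvalues
    v = np.abs(vecs[:, -1])          # principal component, made nonnegative
    w = v / v.sum()                  # normalize so the weights sum to 1
    return w[0], w[1]

def pca_fuse(a, b):
    """Weighted average of the approximation bands using the PCA weights."""
    wa, wb = pca_weights(a, b)
    return wa * a + wb * b

rng = np.random.default_rng(2)
a = rng.random((8, 8))
b = 0.9 * a + 0.1 * rng.random((8, 8))   # correlated second modality
fused = pca_fuse(a, b)
```

The source that carries more of the shared variance receives the larger weight, which is the rationale for using PCA on the low-frequency bands.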


2021 ◽  
pp. 3228-3236
Author(s):  
Nada Jasim Habeeb

Combining multi-modal images of the same scene taken at different focus distances can produce clearer and sharper images with a larger depth of field. Most available image fusion algorithms produce good results; however, they do not take the focus of the image into account. In this paper, a fusion method is proposed to increase the focus of the fused image and to achieve the highest image quality, using a suggested focusing filter together with the Dual Tree-Complex Wavelet Transform. The focusing filter consists of a combination of two filters: a Wiener filter and a sharpening filter. This filter is applied before the fusion operation with the Dual Tree-Complex Wavelet Transform. The common fusion rules, the average-fusion rule and the maximum-fusion rule, were used to obtain the fused image. In the experiments, using focus operators, the performance of the proposed fusion algorithm was compared with existing algorithms. The results showed that the proposed method outperforms these fusion methods in terms of focus and quality.
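The focusing filter — a Wiener filter followed by a sharpening filter — can be sketched with SciPy as below. The 3x3 Wiener window and the Laplacian-based sharpening kernel are assumptions; the abstract does not give the filter parameters:

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import convolve

# Standard Laplacian sharpening kernel (an assumed choice).
SHARPEN = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], dtype=float)

def focusing_filter(image, window=3):
    """Pre-fusion step: Wiener denoising, then sharpening by convolution."""
    denoised = wiener(image, mysize=window)
    return convolve(denoised, SHARPEN, mode='nearest')

rng = np.random.default_rng(3)
noisy = np.clip(0.5 + 0.1 * rng.random((32, 32)), 0.0, 1.0)
prepped = focusing_filter(noisy)
```

Each source image would be passed through this filter before the DTCWT decomposition and the average/maximum fusion rules are applied.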


2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Abstract: Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, Laplacian pyramid based transforms, and curvelet based transforms. These methods demonstrate the best performance in the spatial and spectral quality of the fused image compared to other spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic cannot be extended easily to two or more dimensions, because separable wavelets built from one-dimensional wavelets have limited directional selectivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against those described above to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.
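Comparisons like the one above need an objective score of fused-image detail. Spatial frequency is one commonly used indicator (the abstract does not name the specific metrics, so this choice is an assumption):

```python
import numpy as np

def spatial_frequency(image):
    """Spatial frequency of an image: the RMS of row-wise and column-wise
    first differences, a common objective measure of fused-image detail."""
    rf = np.diff(image, axis=1)   # row frequency (horizontal differences)
    cf = np.diff(image, axis=0)   # column frequency (vertical differences)
    return float(np.sqrt((rf ** 2).mean() + (cf ** 2).mean()))

flat = np.zeros((8, 8))
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
```

A constant image scores zero, while a detailed image scores higher; a fusion method that preserves more detail from its sources raises this score.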


2021 ◽  
Vol 12 (4) ◽  
pp. 78-97
Author(s):  
Hassiba Talbi ◽  
Mohamed-Khireddine Kholladi

In this paper, the authors propose a hybrid particle swarm algorithm with a differential evolution (DE) operator, termed DEPSO, combined with a multi-resolution transform, the dual tree complex wavelet transform (DTCWT), to solve the problem of multimodal medical image fusion. This hybridization aims to combine the algorithms judiciously so that the result retains the positive features of each. The new algorithm decomposes the source images into high-frequency and low-frequency coefficients with the DTCWT, then adopts the absolute-maximum method to fuse the high-frequency coefficients; the low-frequency coefficients are fused by a weighted-average method, whose weights are estimated and refined by an optimization method to obtain optimal results. The authors demonstrate by experiment that this algorithm, besides its simplicity, provides a robust and efficient way to fuse multimodal medical images compared to existing wavelet transform-based image fusion algorithms.
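A minimal sketch of the low-frequency fusion step: a weighted average whose scalar weight is tuned by a small particle swarm. Plain PSO stands in here for the paper's DEPSO hybrid, and the variance objective is an assumption (the paper's fitness function is not given in the abstract):

```python
import numpy as np

def fuse_low(a, b, w):
    """Weighted average of two low-frequency sub-bands."""
    return w * a + (1.0 - w) * b

def fitness(fused):
    """Stand-in objective: variance of the fused band."""
    return fused.var()

def pso_weight(a, b, particles=8, iters=30, seed=4):
    """Tiny particle swarm searching the scalar weight w in [0, 1]."""
    rng = np.random.default_rng(seed)
    pos = rng.random(particles)
    vel = np.zeros(particles)
    pbest = pos.copy()
    pbest_f = np.array([fitness(fuse_low(a, b, w)) for w in pos])
    gbest = pbest[pbest_f.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(particles), rng.random(particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        f = np.array([fitness(fuse_low(a, b, w)) for w in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]
    return float(gbest)

rng = np.random.default_rng(5)
low_a = rng.random((8, 8))
low_b = np.full((8, 8), 0.5)   # flat band: the optimum pushes w toward low_a
w = pso_weight(low_a, low_b)
```

With one band flat, the variance objective drives the weight toward the textured source; the DE operator in DEPSO would additionally recombine particle positions to escape local optima.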

