WELLBORE IMAGES DIGITAL FUSION: BEYOND SINGLE-SENSOR PHYSICAL CONSTRAINTS

2021
Author(s):
Simone Di Santo
Nadege Bize-Forest
Isabelle Le Nir
Carlos Maeso
...

In the modern oilfield, borehole images can be considered the minimally representative element of any well-planned geological model or interpretation. In the same borehole it is common to acquire multiple images using different physics and/or resolutions. The challenge for any petro-technical expert is to extract detailed information from several images simultaneously without losing the petrophysical information of the formation. This work presents an innovative approach that combines several borehole images into one new multi-dimensional, fused, high-resolution image that allows, at a glance, a qualitative petrophysical and geological interpretation while maintaining quantitative measurement properties. The new image is created by applying color mathematics and advanced image fusion techniques. In the first stage, low-resolution LWD nuclear images are merged into one multichannel, or multiphysics, image that integrates the petrophysical measurement information of each input image. A dedicated transfer function was developed that normalizes each input measurement into a color intensity; combined in an RGB (red-green-blue) color space, these intensities are visualized as a full-color image. The strong, bidirectional connection between measurements and colors enables further processing to produce ad hoc secondary images. In the second stage, the resolution of the multiphysics image is increased by applying a specific type of image fusion: pansharpening. The goal is to inject the details and texture present in a high-resolution image into the low-resolution multiphysics image without compromising the petrophysical measurements. The pansharpening algorithm was developed specifically for the borehole-imaging application and compared with other established sharpening methods. The resulting high-resolution multiphysics image integrates all input measurements in the form of RGB colors and the texture from the high-resolution image. The image fusion workflow has been tested using LWD gamma ray (GR), density, and photoelectric factor images together with a high-resolution resistivity image. Image fusion is an innovative method that extends beyond the physical constraints of single sensors: the result is a unique image dataset that simultaneously contains geological and petrophysical information at the highest resolution. This work also gives examples of applications of the new fused image.
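
The abstract does not spell out the transfer function or the pansharpening algorithm, so the following is only a minimal sketch of the general idea: three low-resolution measurement images are linearly normalized to [0, 1] and stacked as RGB channels, and the fused color image is then sharpened with a high-resolution image using a generic Brovey-style ratio method (not the authors' custom algorithm). The array names, normalization ranges, and the choice of Brovey sharpening are illustrative assumptions.

```python
import numpy as np

def measurements_to_rgb(gr, density, pef, ranges):
    """Map three low-resolution borehole measurement images (depth x azimuth)
    onto the R, G, B channels of a single 'multiphysics' image.

    `ranges` holds one (lo, hi) normalization pair per measurement and stands
    in for the dedicated transfer function mentioned in the abstract, which
    is not specified there.
    """
    channels = []
    for img, (lo, hi) in zip((gr, density, pef), ranges):
        channels.append(np.clip((img - lo) / (hi - lo), 0.0, 1.0))
    return np.dstack(channels)                     # shape: (depth, azimuth, 3)

def brovey_pansharpen(rgb_lo, pan_hi, eps=1e-6):
    """Inject high-resolution texture into the fused multiphysics image.

    `rgb_lo` is the fused color image already resampled onto the grid of the
    high-resolution image `pan_hi` (e.g. a resistivity image scaled to [0, 1]).
    Brovey-style sharpening rescales every band by the ratio of the
    high-resolution intensity to the low-resolution intensity, so the band
    ratios -- the relative measurements encoded as color -- are preserved
    while the fine texture comes from the high-resolution image.
    """
    intensity = rgb_lo.mean(axis=2, keepdims=True)
    ratio = pan_hi[..., None] / (intensity + eps)
    return np.clip(rgb_lo * ratio, 0.0, 1.0)
```

Because the band ratios are preserved by this kind of sharpening, the colors of the output can in principle be mapped back to normalized measurement values, which is one way to read the bidirectional measurement-color link described in the abstract.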

Fractals
2011
Vol 19 (03)
pp. 347-354
Author(s):
CHING-JU CHEN
SHU-CHEN CHENG
Y. M. HUANG

This study discussed the application of a fractal interpolation method to satellite image data reconstruction. Low-resolution images were used as the source data for fractal interpolation reconstruction; with this approach, a high-resolution image can be reconstructed when only a low-resolution source image is available. The results showed that the high-resolution image data obtained from fractal interpolation can effectively enhance the sharpness of border contours. Applying fractal interpolation to an image of insufficient resolution avoids jagged edges and mosaic artifacts when the image is enlarged, and improves the visibility of object features in the region of interest. The proposed approach can thus be a useful tool for land classification from satellite images.
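
The abstract does not describe the 2-D construction used for the reconstruction; as a rough illustration of the underlying idea, the sketch below implements the standard one-dimensional fractal interpolation function (an iterated function system passing through the low-resolution samples) applied to a single scanline. The vertical scaling factor, upscale factor, and iteration count are assumptions; a 2-D version would apply such a scheme along rows and columns.

```python
import numpy as np

def fractal_interpolate_1d(y, upscale=4, d=0.3, n_iter=8):
    """Fractal interpolation of a 1-D signal (e.g. one image scanline).

    y       : low-resolution samples, treated as interpolation nodes
    upscale : output resolution multiplier
    d       : vertical scaling factor of each affine map (|d| < 1)
    n_iter  : fixed-point iterations of the IFS functional equation
    """
    y = np.asarray(y, dtype=float)
    n = len(y) - 1                      # number of affine maps
    x = np.arange(n + 1, dtype=float)   # node abscissae 0..n
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]

    # Affine map coefficients of the standard fractal interpolation function:
    # each map sends the whole graph onto the piece between two adjacent nodes.
    a = (x[1:] - x[:-1]) / (xN - x0)
    e = (xN * x[:-1] - x0 * x[1:]) / (xN - x0)
    c = (y[1:] - y[:-1] - d * (yN - y0)) / (xN - x0)
    f = (xN * y[:-1] - x0 * y[1:] - d * (xN * y0 - x0 * yN)) / (xN - x0)

    # Dense output grid and initial guess (plain linear interpolation).
    xs = np.linspace(x0, xN, upscale * n + 1)
    g = np.interp(xs, x, y)

    # Iterate the self-affinity relation g(a_i x + e_i) = c_i x + d g(x) + f_i;
    # the iteration is a contraction, so g converges to the fractal curve.
    for _ in range(n_iter):
        g_new = np.empty_like(g)
        for i in range(n):
            xi = a[i] * xs + e[i]                 # image of the domain under map i
            yi = c[i] * xs + d * g + f[i]
            g_new_i = np.interp(xs, xi, yi)       # resample onto the dense grid
            mask = (xs >= x[i]) & (xs <= x[i + 1])
            g_new[mask] = g_new_i[mask]
        g = g_new
    return g
```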


Author(s):  
Lung-Chun Chang
Yueh-Jyun Lee
Hui-Yun Hu
Yu-Ching Hsu
Yi-Syuan Wu

To obtain high-resolution images, low-resolution images must be processed and enhanced. In the literature, the mapping from a low-resolution image to a high-resolution image is a linear system, and the image can only be enlarged by an integer scale factor. This paper presents a real-scaling algorithm for image resolution enhancement: using a virtual magnifier, image resolution can be enhanced by a real-valued scale factor. Experimental results demonstrate that the proposed algorithm produces enlarged images of high quality with respect to the human visual system.
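
The "virtual magnifier" algorithm itself is not detailed in the abstract; the sketch below only illustrates what enlargement by an arbitrary real-valued scale factor looks like, using plain bilinear interpolation as a generic stand-in rather than the authors' method.

```python
import numpy as np

def enlarge_real_scale(img, scale):
    """Enlarge a grayscale image by an arbitrary real-valued scale factor
    (e.g. 1.7) using bilinear interpolation; a generic stand-in, not the
    paper's virtual-magnifier algorithm."""
    h, w = img.shape
    out_h, out_w = int(round(h * scale)), int(round(w * scale))
    # Coordinates of the output pixels mapped back into the source image.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Blend the four neighboring source pixels for every output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```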


Author(s):  
R. S. Hansen
D. W. Waldram
T. Q. Thai
R. B. Berke

Background: High-resolution Digital Image Correlation (DIC) measurements have previously been produced by stitching neighboring images, which often requires short working distances. Separately, the image processing community has developed super resolution (SR) imaging techniques, which improve resolution by combining multiple overlapping images.
Objective: This work investigates the novel pairing of super resolution with digital image correlation as an alternative method to produce high-resolution, full-field strain measurements.
Methods: First, an image reconstruction test is performed, comparing the ability of three previously published SR algorithms to replicate a high-resolution image. Second, an applied translation is compared against DIC measurements using both low- and super-resolution images. Third, a ring sample is mechanically deformed and DIC strain measurements from low- and super-resolution images are compared.
Results: SR measurements show improvements compared to low-resolution images, although they do not perfectly replicate the high-resolution image. SR-DIC demonstrates reduced error and improved confidence in measuring rigid body translation when compared to low-resolution alternatives, and it also shows improved spatial resolution for strain measurements of the ring deformation.
Conclusions: Super resolution imaging can be effectively paired with Digital Image Correlation, offering improved spatial resolution, reduced error, and increased measurement confidence.
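
The three published SR algorithms compared in the study are not named in the abstract; as a generic illustration of the kind of multi-frame reconstruction that would feed the DIC step, the sketch below performs simple shift-and-add super resolution from low-resolution frames with known sub-pixel shifts. The frame data, shifts, and upscaling factor are illustrative assumptions; practical SR pipelines add registration, deblurring, and regularization.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Very simple multi-frame super resolution by shift-and-add.

    frames : list of low-resolution 2-D arrays of the same scene
    shifts : list of (dy, dx) sub-pixel shifts of each frame, in low-res pixels
    scale  : integer upsampling factor of the reconstructed grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Deposit every low-res sample at its shifted location on the fine grid.
        ys = np.clip(np.rint((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.rint((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    # Average overlapping samples; cells never hit stay at 0 and would be
    # filled by interpolation in a real implementation.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```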


Author(s):  
Dong Seon Cheng
Marco Cristani
Vittorio Murino

Image super-resolution is one of the most appealing applications of image processing, capable of retrieving a high-resolution image by fusing several registered low-resolution images depicting an object of interest. However, employing super-resolution on video data is challenging: a video sequence generally contains a lot of scattered information about several objects of interest in cluttered scenes. Especially with hand-held cameras, the overall quality may be poor due to low resolution or unsteadiness. The objective of this chapter is to demonstrate why standard image super-resolution fails on video data, what problems arise, and how these problems can be overcome. As a first contribution, we propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences; we call this process Distillation. In the traditional formulation of the image super-resolution problem, the observed target is (1) always the same, (2) acquired using a camera making small movements, and (3) found in a number of low-resolution images sufficient to recover high-frequency information. These assumptions are usually unsatisfied in real-world video acquisitions and often beyond the control of the video operator. With Distillation, we aim to extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. In practice, the Distillation process: i) identifies, in a semi-supervised way, a set of objects of interest, clustering the related video frames and registering them with respect to global rigid transformations; and ii) for each object, produces a high-resolution image by weighting each pixel according to the information retrieved about that object. As a second contribution, we extend the Distillation process to deal with objects of interest whose appearance transformations are not (only) rigid. This process, built on top of Distillation, is hierarchical, in the sense that clustering is applied recursively, beginning with the analysis of whole frames and selectively focusing on smaller sub-regions whose isolated motion can reasonably be assumed rigid. The ultimate product of the overall process is a strip of images that describes the dynamics of the video at high resolution, switching between alternative local descriptions in response to visual changes. Our approach is first tested on synthetic data, obtaining encouraging results in comparison with known super-resolution techniques and good robustness against noise. Second, real data from different videos are considered, aiming to resolve the major details of the objects in motion.
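
The Bayesian weighting used by Distillation is not specified in the abstract; the sketch below only illustrates the two generic ingredients it builds on: rigid (here, translational) registration of the frames belonging to one object of interest, and per-pixel weighted fusion of the registered stack. Phase correlation and the deviation-from-median weights are stand-in assumptions, not the chapter's actual formulation.

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to `ref`
    by phase correlation (a minimal stand-in for the rigid registration step)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map wrap-around peak positions to signed shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def fuse_registered_frames(frames):
    """Align all frames to the first one and fuse them with per-pixel weights.

    The weight of a sample decreases with its deviation from the per-pixel
    median across frames, a crude proxy for 'how informative' it is.
    """
    ref = frames[0].astype(float)
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f.astype(float))
        aligned.append(np.roll(np.roll(f.astype(float), dy, axis=0), dx, axis=1))
    stack = np.stack(aligned)                      # (n_frames, H, W)
    med = np.median(stack, axis=0)
    weights = 1.0 / (1.0 + (stack - med) ** 2)     # down-weight outlier samples
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)
```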


2006
Vol 72 (5)
pp. 565-572
Author(s):
Andreja Švab
Krištof Oštir
