data conditioning
Recently Published Documents

TOTAL DOCUMENTS: 93 (five years: 20)
H-INDEX: 12 (five years: 1)

2021
Author(s): Khalid Obaid, Muhammad Aamir, Tarek Yehia Nafie, Omar Aly, Widad Krissat, ...

Abstract: Rock-physics/seismic inversion is a powerful tool that delivers information about inter-well rock elastic attributes and reservoir properties such as porosity, saturation, and lithology. In principle, inversion is like an engine that must be fueled by input of proper quality from both seismic and well data. As for the well data, sonic and density logs measure rock properties only a few inches from the borehole. The reliability of sonic transit-time and bulk-density logs can be degraded by large, rapid variations in the diameter and shape of the borehole cross-section, as well as by drilling-fluid invasion. The basic approach for editing and conditioning acoustic well logs is to use other recorded logs, not affected by bad-hole conditions, in a multivariate regression algorithm. In addition, fluid substitution was implemented to correct for the mud invasion that affects the acoustic and elastic properties, with fluid properties computed from PVT data. The logs were then quality checked by multiple cross-plot comparisons against standard rock-physics trend templates. As for the seismic data, several factors affect the quality of surface seismic data, including residual noise and multiple contamination, which cause improper amplitude balancing. Optimizing seismic data processing for inversion studies requires reviewing and conditioning the seismic gathers and pre-stack volumes, guided by a deterministic seismic-to-well-tie analysis after every major stage of the processing sequence. The applied processes mainly consist of Curvelet-domain noise attenuation to suppress residual noise, followed by high-resolution Radon anti-multiple to attenuate residual surface multiples and extended interbed multiple prediction to attenuate interbed multiples. In addition, offset-dependent amplitude and spectral balancing were applied to maintain amplitude fidelity. This paper illustrates a case from Abu Dhabi where data conditioning improved the discrimination of hydrocarbon-saturated from brine-saturated carbonates and the prediction of lithology, leading to better field development plans and drilling operations.
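
As a concrete illustration of the log-editing step described in this abstract, the sketch below reconstructs a bad-hole-affected sonic log by multivariate regression on logs that are less sensitive to borehole condition. The predictor curves (GR, NPHI, RT), the bad-hole flag, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def reconstruct_sonic(logs, predictors=("GR", "NPHI", "RT"), target="DT",
                      bad_hole=None):
    """Fit the sonic log against curves unaffected by washouts, then predict
    over the flagged bad-hole interval. `logs` maps curve names to equal-length
    arrays; `bad_hole` is a boolean mask (e.g. from a caliper/bit-size cutoff)."""
    X = np.column_stack([logs[p] for p in predictors])
    y = logs[target]
    model = LinearRegression().fit(X[~bad_hole], y[~bad_hole])
    dt_edited = y.copy()
    dt_edited[bad_hole] = model.predict(X[bad_hole])  # replace flagged samples only
    return dt_edited

# Synthetic demonstration: a 500-sample log with a washed-out interval.
rng = np.random.default_rng(0)
n = 500
logs = {"GR": rng.normal(60, 15, n), "NPHI": rng.normal(0.2, 0.05, n),
        "RT": rng.lognormal(1.0, 0.5, n)}
logs["DT"] = 200 - 0.5 * logs["GR"] + 100 * logs["NPHI"] + rng.normal(0, 2, n)
bad = np.zeros(n, dtype=bool)
bad[200:250] = True  # hypothetical bad-hole zone flagged from the caliper
dt_conditioned = reconstruct_sonic(logs, bad_hole=bad)
```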


2021, pp. 159-168
Author(s): Muneer Abdalla

The Paleocene reservoir formations of the Northwest Sirte Basin in north-central Libya contain chaotic and mound-shaped seismic geometries that may affect reservoir performance. Characterizing and interpreting these complex geometries is crucial for future field development. This study therefore utilized numerous seismic attributes to characterize the chaotic and mounded geometries and to enhance their interpretation. Data conditioning, in the form of spectral whitening and a median filter, was first applied to improve the quality of the seismic data and remove random noise resulting from data acquisition and processing. It provided higher-resolution seismic data and better-displayed edges and sedimentological features. Variance, root-mean-square (RMS) amplitude, curvature, and envelope attributes were then computed from the post-stack 3D seismic data to better visualize and interpret the chaotic and mound-like seismic geometries. Based on the seismic attribute analysis, the chaotic facies were interpreted as barrier reefs forming the margins of an isolated carbonate platform, whereas the small-scale mound-shaped facies were interpreted as patch reefs developed on the platform interior. The data conditioning methods and seismic attribute analysis applied to the 3D seismic data effectively improved the detection and interpretation of the chaotic and mounded facies in the study area.
Keywords: carbonate buildup, data conditioning, seismic attributes, Sirte Basin, Libya
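
A minimal sketch of the conditioning and attribute steps named in this abstract follows: a median filter to suppress random noise, and an RMS-amplitude attribute over a sliding vertical window. The volume shape and window sizes are assumptions for demonstration, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def condition_and_rms(volume, med_size=(3, 3, 3), rms_window=11):
    """volume: post-stack amplitudes, shape (inline, xline, time sample).
    Returns the median-filtered volume and its RMS-amplitude attribute."""
    cleaned = median_filter(volume, size=med_size)           # suppress random noise
    mean_power = uniform_filter1d(cleaned ** 2, size=rms_window, axis=-1)
    return cleaned, np.sqrt(mean_power)                      # RMS over a sliding window

seismic = np.random.randn(50, 60, 200).astype(np.float32)   # stand-in for real data
cleaned, rms_attr = condition_and_rms(seismic)
```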


2021, Vol 2 (2), pp. 1245-1256
Author(s): Viviana Bernal-Benítez, Juan Gómez-Malagón, Camilo Pardo-Beainy

This article presents techniques developed to treat images obtained from AERMOD modeling of PM10 particulate-matter immission concentrations. The data conditioning was carried out to generate, through the analysis of dispersion images, isolines that identify and quantify the areas of the Sogamoso Valley where PM10 concentrations arise from emissions of the limestone-firing process in Nobsa, Boyacá. It is a first approach to predicting the dispersion phenomenon and to determining, in space and time, the influence of this immission on the air quality of the region.
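
To make the isoline step concrete, the following sketch extracts concentration contours from a gridded PM10 field with matplotlib. The synthetic plume, grid extents, and contour levels are illustrative assumptions rather than the AERMOD output used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a gridded AERMOD concentration surface (ug/m3).
x = np.linspace(0, 20, 200)                  # km, east-west across the valley
y = np.linspace(0, 10, 100)                  # km, north-south
X, Y = np.meshgrid(x, y)
pm10 = 80 * np.exp(-((X - 12) ** 2 / 18 + (Y - 4) ** 2 / 6))

levels = [10, 25, 50, 75]                    # assumed concentration thresholds
cs = plt.contour(X, Y, pm10, levels=levels)  # isolines delimit affected areas
plt.clabel(cs, fmt="%d")
plt.xlabel("x (km)")
plt.ylabel("y (km)")
plt.savefig("pm10_isolines.png")
```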


2020, Vol 8 (4), pp. T927-T940
Author(s): Satinder Chopra, Ritesh Kumar Sharma, James Keay

The Delaware and Midland Basins are multistacked plays with production drawn from different zones. Of the various prospective zones in the Delaware Basin, the Bone Spring and Wolfcamp Formations are the most productive and thus the most drilled. To understand the reservoirs of interest and identify hydrocarbon sweet spots, a 3D seismic inversion project was undertaken in the northern part of the Delaware Basin in 2018. We examine the reservoir characterization exercise for this dataset in two parts. In addition to a brief description of the geology, we evaluate the challenges faced in performing seismic inversion for characterizing multistacked plays. The key elements that lend confidence to seismic inversion, and to the quantitative predictions made from it, are well-to-seismic ties, proper data conditioning, robust initial models, and adequate parameterization of the inversion analysis. We examine the limitations of a conventional approach to each of these steps and determine how to overcome them. Later work will first elaborate on the uncertainties associated with the input parameters required for rock-physics analysis and then evaluate the proposed robust statistical approach for defining the different lithofacies.
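
Of the confidence-building elements listed above, the well-to-seismic tie is the most mechanical, so a minimal sketch follows: convolve well-log reflectivity with a wavelet and cross-correlate the synthetic against the seismic trace at the well. The Ricker wavelet, sample interval, and length handling are illustrative assumptions, not the authors' workflow.

```python
import numpy as np

def ricker(f=30.0, dt=0.002, length=0.128):
    """Zero-phase Ricker wavelet of peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def well_tie(vp, rho, trace, dt=0.002):
    """vp, rho: logs resampled to the seismic sample rate; trace: the seismic
    trace at the well. Returns correlation after shift and the shift in seconds."""
    ai = vp * rho                              # acoustic impedance
    rc = np.diff(ai) / (ai[1:] + ai[:-1])      # normal-incidence reflectivity
    synthetic = np.convolve(rc, ricker(dt=dt), mode="same")
    n = min(len(synthetic), len(trace))
    s, t = synthetic[:n], trace[:n]
    lag = int(np.correlate(s, t, mode="full").argmax()) - (n - 1)
    r = np.corrcoef(s, np.roll(t, lag))[0, 1]  # tie quality after the shift
    return r, lag * dt
```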


2020, Vol 38 (6), pp. 2558-2578
Author(s): Honggeun Jo, Javier E Santos, Michael J Pyrcz

Rule-based reservoir modeling methods integrate concepts of geological depositional processes to generate reservoir models that capture realistic geologic features, improving subsurface predictions and the uncertainty models that support development decision making. However, robust, direct conditioning of these models to subsurface data, such as well logs, core descriptions, and seismic inversions and interpretations, remains an obstacle to their broad application as a standard subsurface modeling technology. We implement a machine learning-based method for fast and flexible data conditioning of rule-based models. This study builds on a rule-based modeling method for deep-water lobe reservoirs. The model has three geological inputs: (1) the depositional element geometry, (2) the compositional exponent for the element stacking pattern, and (3) the distribution of petrophysical properties with hierarchical trends conformable to the surfaces. A deep learning-based workflow is proposed for robust, non-iterative data conditioning. First, a generative adversarial network learns salient geometric features from an ensemble of training rule-based models. Then, a new rule-based model is generated and a mask is applied to remove the model near local data along the well trajectories. Last, semantic image inpainting restores the masked region with the optimum generative adversarial network realization, consistent with both the local data and the surrounding model. For the deep-water lobe example, the generative adversarial network learns the primary geological spatial features and generates reservoir realizations that reproduce the hierarchical trends as well as the surface geometries and stacking pattern. Moreover, the trained generative adversarial network explores the latent reservoir manifold and identifies an ensemble of models to represent an uncertainty model. Semantic image inpainting determines the optimum replacement for the near-data mask, consistent with the local data and the rest of the model. This work results in subsurface models that accurately reproduce reservoir heterogeneity, continuity, and the spatial distribution of petrophysical parameters while honoring the local well-data constraints.
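
The masking and inpainting step can be sketched as a latent-space search: given a trained generator, optimize the latent vector so the generated model matches the retained near-well data, then composite the generated values into the masked region. The toy generator, loss, and sizes below are stand-ins under that assumption; the paper's GAN architecture and training are not reproduced.

```python
import torch

latent_dim, model_size = 64, 32 * 32
G = torch.nn.Sequential(                     # toy stand-in for the trained generator
    torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, model_size), torch.nn.Tanh())

def condition_by_inpainting(observed, keep_mask, steps=500, lr=0.05):
    """observed: flattened property model; keep_mask: 1 where local well data
    is retained, 0 where the rule-based model was masked out and is restored."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        gen = G(z).squeeze(0)
        loss = (((gen - observed) ** 2) * keep_mask).mean()  # context loss on kept cells
        loss.backward()
        opt.step()
    gen = G(z).detach().squeeze(0)
    # Composite: honor the retained data, fill the mask with the GAN realization.
    return keep_mask * observed + (1 - keep_mask) * gen

observed = torch.randn(model_size)                  # stand-in for near-well values
keep_mask = (torch.rand(model_size) < 0.1).float()  # ~10% of cells lie near wells
conditioned = condition_by_inpainting(observed, keep_mask)
```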


First Break, 2020, Vol 38 (6), pp. 71-77
Author(s): P.C.H. Veeken, A. Kashubin, D. Curia, Y. Davydenko, I.I. Priezzhev

Information, 2020, Vol 11 (4), pp. 204
Author(s): Roberta Galici, Laura Ordile, Michele Marchesi, Andrea Pinna, Roberto Tonelli

We present a novel strategy, based on the Extract, Transform and Load (ETL) process, to collect data from a blockchain, process it, and make it available for further analysis. The study aims to satisfy the need for increasingly efficient data extraction strategies and effective representation methods for blockchain data. For this reason, we conceived a system that makes the process of blockchain data extraction and clustering scalable and provides a SQL database that preserves the distinction between transactions and addresses. The proposed system satisfies the need to cluster addresses into entities and to store the extracted data in a conventional database, making data analysis possible by querying the database. In general, ETL processes automate the operations of data selection, data collection, and data conditioning from a data warehouse, and produce output data in the best format for subsequent processing or for business. We focus on Bitcoin blockchain transactions, which we organized in a relational database that distinguishes between the input section and the output section of each transaction. We describe the implementation of address clustering algorithms specific to the Bitcoin blockchain and the process to collect, transform, and load the data into the database. To balance the input data rate with the elaboration time, we manage blockchain data according to the lambda architecture. To evaluate our process, we first analyzed its performance in terms of scalability, and then checked its usability by analyzing the loaded data. Finally, we present the results of a toy analysis, which provides some findings about blockchain data, focusing on a comparison between statistics from the last year of transactions and previously published results on historical blockchain data. The ETL process we realized for analyzing blockchain data proved able to perform reliable and scalable data acquisition, making the stored data available for further analysis and business.
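
The address clustering referred to above can be illustrated with the common multi-input (common-input-ownership) heuristic, in which all input addresses of a transaction are assumed to belong to one entity. The sketch below implements it with a union-find structure over an assumed minimal transaction format, not the paper's exact schema.

```python
class UnionFind:
    """Disjoint-set structure over address strings."""
    def __init__(self):
        self.parent = {}
    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_addresses(transactions):
    """transactions: iterable of dicts with an 'inputs' list of addresses.
    All input addresses of one transaction are assumed to share an owner."""
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = {}
    for addr in uf.parent:
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return clusters

txs = [{"inputs": ["A", "B"]}, {"inputs": ["B", "C"]}, {"inputs": ["D"]}]
print(cluster_addresses(txs))  # two entities: {A, B, C} and {D}
```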


2020
Author(s): Maryam Bagheri, Haoran Zhao, Manyang Sun, Li Huang, Srinath Madasu, ...
