Uncertainty in reservoir modeling

2015 ◽  
Vol 3 (2) ◽  
pp. SQ7-SQ19 ◽  
Author(s):  
Michael J. Pyrcz ◽  
Christopher D. White

Uncertainty is due to incomplete and imprecise knowledge resulting from limited sampling of subsurface heterogeneities. Well data and seismic data have incomplete coverage and finite resolution, and interpretations of them are uncertain. Reservoirs are heterogeneous and difficult to predict away from wells. Ignoring uncertainty and locking in important model parameters and choices amounts to an assumption of perfect knowledge and is generally an unacceptable approach; uncertainty must be explicitly modeled. Understanding (1) the sources of uncertainty, (2) methods to represent uncertainty, (3) the formalisms of uncertainty, and (4) uncertainty modeling methods and workflows is essential for integrating all reservoir information sources and providing good models for decision making in the presence of uncertainty.

Solid Earth ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 1597-1619 ◽  
Author(s):  
Carla Patricia Bárbara ◽  
Patricia Cabello ◽  
Alexandre Bouche ◽  
Ingrid Aarnes ◽  
Carlos Gordillo ◽  
...  

Abstract. Structural uncertainty is a key parameter affecting the accuracy of the information contained in static and dynamic reservoir models. However, quantifying and assessing its real impact on reservoir property distribution, in-place volume estimates and dynamic simulation has always been a challenge. Due to the limitation of the existing workflows and time constraints, the exploration of all potential geological configurations matching the interpreted data has been limited to a small number of scenarios, making the future field development decisions uncertain. We present a case study in the Lubina and Montanazo mature oil fields (Western Mediterranean) in which the structural uncertainty in the seismic interpretation of faults and horizons has been captured using modern reservoir modeling workflows. We model the fault and horizon uncertainty by means of two workflows: the manually interpreted and the constant uncertainty cases. In the manually interpreted case, the zones of ambiguity in the position of horizons and faults are defined as locally varying envelopes around the best interpretation, whose dimensions mainly vary according to the frequency content of the seismic data, lateral variations of amplitudes along reflectors, and how the reflectors terminate around faults when fault reflections are not present in the seismic image. In the constant case, the envelope dimensions are kept constant for each horizon and each fault. Both faults and horizons are simulated within their respective uncertainty envelopes as provided to the user. In all simulations, conditioning to available well data is ensured. Stochastic simulation was used to obtain 200 realizations for each uncertainty modeling workflow. The realizations were compared in terms of gross rock volumes above the oil–water contact considering three scenarios at the depths of the contact. 
The results show that capturing the structural uncertainty in the picking of horizons and faults in seismic data has a relevant impact on the volume estimation. The models predict percentage differences in the mean gross rock volume with respect to best-estimate interpretation up to 7 % higher and 12 % lower (P10 and P90). The manually interpreted uncertainty workflow reports narrower gross rock volume predictions and more consistent results from the simulated structural models than the constant case. This work has also revealed that, for the Lubina and Montanazo fields, the fault uncertainty associated with the major faults that bound the reservoir laterally strongly affects the gross rock volume predicted. The multiple realizations obtained are geologically consistent with the available data, and their differences in geometry and dimensions of the reservoir allow us to improve the understanding of the reservoir structure. The uncertainty modeling workflows applied are easy to design and allow us to update the models when required. This work demonstrates that knowledge of the data and the sources of uncertainty is important to set up the workflows correctly. Further studies can combine other sources of uncertainty in the modeling process to improve the risk assessment.
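The percentile bookkeeping behind statements like "up to 7 % higher and 12 % lower (P10 and P90)" can be illustrated with a small sketch; the best-estimate volume, spread, and random seed below are invented for illustration, not values from the study, and only the ensemble size (200 realizations) follows the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble: 200 gross-rock-volume realizations (in 10^6 m^3)
# scattered around an assumed best-estimate interpretation of 100.
grv_best = 100.0
grv_realizations = rng.normal(loc=grv_best, scale=5.0, size=200)

# Percentage difference of each realization from the best estimate.
pct_diff = 100.0 * (grv_realizations - grv_best) / grv_best

# Oil-industry convention: P10 is the optimistic case exceeded by only
# 10% of outcomes; P90 is the pessimistic case exceeded by 90%.
p10 = np.percentile(pct_diff, 90)
p90 = np.percentile(pct_diff, 10)
print(f"P10: {p10:+.1f} %, P90: {p90:+.1f} %")
```

The spread between P10 and P90 is the headline uncertainty range; a narrower spread, as reported for the manually interpreted workflow, means more consistent structural models.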


2019 ◽  
Author(s):  
Carla Patricia Bárbara ◽  
Patricia Cabello ◽  
Alexandre Bouche ◽  
Ingrid Aarnes ◽  
Carlos Gordillo ◽  
...  

Abstract. Structural uncertainty is a key parameter affecting the accuracy of the information contained in static and dynamic reservoir models. However, quantifying and assessing its real impact on reservoir property distribution, in-place volume estimates and dynamic simulation has always been a challenge. Due to the limitation of the existing workflows and time constraints, the exploration of all potential geological configurations matching the interpreted data has been limited to a small number of scenarios, making the future field-development decisions uncertain. We present a case study in the Lubina and Montanazo mature oil fields (Western Mediterranean) in which the structural uncertainty in the seismic interpretation of faults and horizons has been captured using modern reservoir modeling workflows. We model the fault and horizon uncertainty by means of two workflows, the manually interpreted and the constant uncertainty cases. In the manually interpreted case, the zones of ambiguity in the position of horizons and faults are defined as locally varying envelopes around the best interpretation, whose dimensions vary according to the diffractions and amplitudes of the seismic data throughout the surface interpretation. In the constant case, the envelope dimensions are kept constant for each horizon and each fault. Both faults and horizons are simulated within their respective uncertainty envelopes as provided to the user. In all simulations, conditioning to available well data is ensured. Stochastic simulation was used to obtain 200 realizations for each uncertainty modeling workflow. The realizations were compared in terms of gross rock volumes above the oil-water contact considering three scenarios in the depths of the contact. The results show that capturing the structural uncertainty in the picking of horizons and faults in seismic data has a relevant impact on the volume estimation. 
The models predict percentage differences in the mean gross rock volume with respect to the best-estimate interpretation up to 16 % higher and 22 % lower. The manually interpreted uncertainty workflow reports narrower gross rock volume predictions and more consistent results from the simulated structural models than the constant case. This work has also revealed that, for the Lubina and Montanazo fields, the fault uncertainty associated with the major faults that bound the reservoir laterally strongly affects the predicted gross rock volume (GRV). The multiple realizations obtained are geologically consistent with the available data, and their differences in the geometry and dimensions of the reservoir allow us to improve the understanding of the reservoir structure. The uncertainty modeling workflows applied are easy to design and allow us to update the models when required. This work demonstrates that knowledge of the data and the sources of uncertainty is important to set up the workflows correctly. Further studies can combine other sources of uncertainty in the modeling process to improve the risk assessment.


2015 ◽  
Vol 3 (4) ◽  
pp. SAC91-SAC98 ◽  
Author(s):  
Adrian Pelham

Interpreters need to screen and select the most geologically robust inversion products from increasingly larger data volumes, particularly in the absence of significant well control. Seismic processing and inversion routines are devised to provide reliable elastic parameters ([Formula: see text] and [Formula: see text]) from which the interpreter can predict fluid and lithology properties. Seismic data modeling, for example, the Shuey approximations and the convolution inversion models, greatly assists in the parameterization of the processing flows within acceptable uncertainty limits and in establishing a measure of the reliability of the processing. Joint impedance facies inversion (Ji-Fi®) is a new inversion methodology that jointly inverts for acoustic impedance and seismic facies. Seismic facies are separately defined in elastic space ([Formula: see text] and [Formula: see text]), and a dedicated low-frequency model per facies is used. Because Ji-Fi does not need well data from within the area to define the facies or depth trends, wells from outside the area or theoretical constraints may be used. More accurate analysis of the reliability of the inversion products is a key advance because the results of the Ji-Fi lithology prediction may then be quantitatively and independently assessed at well locations. We used a novel visual representation of a confusion matrix to quantitatively assess the sensitivity and uncertainty in the results when compared with the facies predicted from the depth trends and well elastic parameters, and with the observed well-log lithologies. Thus, using simple models and the Ji-Fi inversion technique, we had an improved, quantified understanding of our data, the processes that had been applied, the parameterization, and the inversion results.
Rock physics could further transform the elastic properties to more reservoir-focused parameters: volume of shale and porosity, volumes of facies, reservoir property uncertainties — all information required for interpretation and reservoir modeling.
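The confusion-matrix assessment of facies prediction at well locations can be sketched as follows; the facies names, sample counts, and labels are hypothetical and not taken from the study:

```python
import numpy as np

# Hypothetical facies labels at well locations: observed from logs vs
# predicted by an inversion (names and data are illustrative only).
facies = ["shale", "brine sand", "gas sand"]
observed  = ["shale", "shale", "brine sand", "gas sand", "gas sand", "shale"]
predicted = ["shale", "brine sand", "brine sand", "gas sand", "shale", "shale"]

# Build the confusion matrix: rows = observed facies, columns = predicted.
idx = {f: i for i, f in enumerate(facies)}
cm = np.zeros((len(facies), len(facies)), dtype=int)
for o, p in zip(observed, predicted):
    cm[idx[o], idx[p]] += 1

# Per-facies sensitivity (recall): correct predictions / total observed.
sensitivity = cm.diagonal() / cm.sum(axis=1)
print(cm)
print(dict(zip(facies, sensitivity.round(2))))
```

Off-diagonal counts reveal which facies pairs the inversion confuses, which is the quantitative, well-independent check the abstract describes.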


2019 ◽  
Vol 38 (6) ◽  
pp. 474-479
Author(s):  
Mohamed G. El-Behiry ◽  
Said M. Dahroug ◽  
Mohamed Elattar

Seismic reservoir characterization becomes challenging when reservoir thickness goes beyond the limits of seismic resolution. Geostatistical inversion techniques are being considered to overcome the resolution limitations of conventional inversion methods and to provide an intuitive understanding of subsurface uncertainty. Geostatistical inversion was applied to a highly compartmentalized area of the Sapphire gas field, offshore Nile Delta, Egypt, with the aim of understanding the distribution of thin sands and their impact on reservoir connectivity. The integration of high-resolution well data with seismic partial-angle-stack volumes in geostatistical inversion has resulted in multiple elastic property realizations at the desired resolution. The multiple inverted elastic property realizations are analyzed to improve reservoir characterization and to reflect the inversion nonuniqueness. These property realizations are then classified into facies probability cubes and ranked based on pay sand volumes to quantify the volumetric uncertainty in static reservoir modeling. Stochastic connectivity analysis was also applied to the facies models to assess the possible connected volumes. Sand connectivity analysis showed that the connected pay sand volume derived from the posterior mean of the property realizations, which is analogous to deterministic inversion, is much smaller than the volumes generated by any high-frequency realization. This observation supports the role of thin interbedded reservoirs in facilitating connectivity between the main sand units.
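Ranking facies realizations by pay sand volume, as described above, can be sketched roughly as follows; the grid size, sand fraction, cell dimensions, and percentile picks are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of facies realizations: 50 realizations on a small
# grid, where 1 marks pay sand and 0 non-pay (illustrative only).
n_real, nx, ny, nz = 50, 10, 10, 5
realizations = (rng.random((n_real, nx, ny, nz)) < 0.3).astype(int)

cell_volume = 25.0 * 25.0 * 2.0  # assumed m^3 per grid cell

# Pay sand volume per realization, then rank the ensemble so that
# low/median/high cases can be carried into static modeling.
pay_volumes = realizations.sum(axis=(1, 2, 3)) * cell_volume
order = np.argsort(pay_volumes)
p90_case, p50_case, p10_case = order[4], order[24], order[44]
print(pay_volumes[[p90_case, p50_case, p10_case]])
```

Ranking selects representative realizations instead of averaging them, which matters because, as the abstract notes, the posterior-mean model underestimates connected volumes.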


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. We quantify the uncertainty in estimations to utilize information about pressure- and saturation-related changes in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework. Here, the solution of the problem will be represented by a probability density function (PDF), providing estimations of uncertainties as well as direct estimations of the properties. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock physical relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model for linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each of the variables. In addition, information about possible bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
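The Bayesian idea of representing the solution as a PDF, yielding both an estimate and its uncertainty, can be illustrated with a toy linear-Gaussian update; the sensitivity, prior, and noise values below are invented and far simpler than the paper's stochastic model with spatial correlation:

```python
# Toy 1D linear-Gaussian Bayesian update (illustrative only):
# prior on a reservoir change m (e.g. a saturation change), an assumed
# linearized forward operator G mapping m to an AVO attribute d,
# and Gaussian observation noise.
m_prior, var_prior = 0.0, 0.04      # prior mean and variance (assumed)
G = 0.8                             # assumed linearized sensitivity
d_obs, var_noise = 0.10, 0.01      # observed attribute and noise variance

# Conjugate Gaussian posterior: the data sharpen both the estimate
# and its uncertainty relative to the prior.
var_post = 1.0 / (1.0 / var_prior + G**2 / var_noise)
m_post = var_post * (m_prior / var_prior + G * d_obs / var_noise)
print(f"posterior mean {m_post:.3f}, std {var_post**0.5:.3f}")
```

The shrinkage of the posterior variance relative to the prior is exactly the kind of quantity the sensitivity analysis in the paper inspects to locate bottlenecks: where the data barely reduce the prior variance, estimates stay uncertain.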


2018 ◽  
Vol 10 (1) ◽  
pp. 174-191 ◽  
Author(s):  
Majid Khan ◽  
Yike Liu ◽  
Asam Farid ◽  
Muhammad Owais

Abstract Regional seismic reflection profiles and deep exploratory wells have been used to characterize the subsurface structural trends and seismo-stratigraphic architecture of the sedimentary successions in offshore Indus, Pakistan. To improve the data quality, we reprocessed the seismic data, applying a signal-processing scheme to enhance reflection continuity. Synthetic seismograms were used to identify the seismic reflections and tie them to the well data. The seismic data revealed tectonically controlled, distinct episodes of normal faulting, representing rifting during the Mesozoic and transpression in the Late Eocene. A SW-NE-oriented anticlinal push-up structure is observed, resulting from basement reactivation and recent transpression along the Indian Plate margin. The structural growth of this push-up geometry was computed. Six mappable seismic sequences have been identified on the seismic records. In general, geological formations lie at shallower depths towards the northwest due to uplift of basement blocks. A paleoshelf overlain by Cretaceous sediments is also identified on the seismic records, indicative of Indian-African plate rifting in the Jurassic. The seismic interpretation reveals that the structural styles and stratigraphy of the region were significantly affected by the northward drift of the Indian Plate, post-rifting, and sedimentation along its western margin during the Middle Cenozoic. Considerable structural growth along the push-up geometry indicates present-day transpression in the margin sediments. This comprehensive interpretation can help in understanding complex structures in passive continental margins worldwide that display similar characteristics but are considered to be dominated by rifting and drifting tectonics.


2021 ◽  
Author(s):  
Eduardo Emilio Sanchez-Leon ◽  
Natascha Brandhorst ◽  
Bastian Waldowski ◽  
Ching Pui Hung ◽  
Insa Neuweiler ◽  
...  

<p>The success of data assimilation systems strongly depends on the suitability of the generated ensembles. While in theory data assimilation should correct the states of an ensemble of models, especially if model parameters are included in the update, its effectiveness will depend on many factors, such as ensemble size, ensemble spread, and the proximity of the prior ensemble simulations to the data. In a previous study, we generated an ensemble-based data-assimilation framework to update model states and parameters of a coupled land surface-subsurface model. As the simulation system we used the Terrestrial Systems Modeling Platform TerrSysMP, with the Community Land Model (CLM) coupled to the subsurface model ParFlow. In this work, we used the previously generated ensemble to assess the effect of uncertain input forcings (i.e. precipitation), unknown subsurface parameterization, and/or plant physiology in data assimilation. The model domain covers a rectangular area of 1 × 5 km<sup>2</sup>, with a uniform depth of 50 m. The subsurface material is divided into four units, and the top soil layers consist of three different soil types with different vegetation. Streams are defined along three of the four boundaries of the domain. For data assimilation, we used the TerrSysMP-PDAF framework. We defined a series of data assimilation experiments in which sources of uncertainty were considered individually, and all additional settings of the ensemble members matched those of the reference. To evaluate the effect of all sources of uncertainty combined, we designed an additional test in which the input forcings, subsurface parameters, and the leaf area index of the ensemble were all perturbed. In all these tests, the reference model had homogeneous subsurface units and the same grid resolution as all models of the ensemble. We used point measurements of soil moisture in all data assimilation experiments. 
We concluded that precipitation dominates the dynamics of the simulations, and perturbing the precipitation fields of the ensemble has a major impact on the performance of the assimilation. Still, considerable improvements are observed compared to open-loop simulations. In contrast, the effect of variable plant physiology was minimal, with no visible improvement in relevant fluxes such as evapotranspiration. As expected, improved ensemble predictions persist longer in time when parameters are included in the update.</p>
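The analysis step of an ensemble-based assimilation scheme like the one described can be sketched for a single soil-moisture observation; the ensemble size, prior statistics, and observation error below are illustrative choices, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal ensemble Kalman filter analysis step (illustrative only):
# update an ensemble of soil-moisture states with one point observation.
n_ens = 32
states = rng.normal(0.25, 0.05, size=n_ens)   # prior ensemble (m^3/m^3)
obs, obs_std = 0.30, 0.01                      # observation and its error std

# Kalman gain from the ensemble variance; perturbed-observation update
# so the analysis ensemble keeps a meaningful spread.
var_f = states.var(ddof=1)
gain = var_f / (var_f + obs_std**2)
perturbed_obs = obs + rng.normal(0.0, obs_std, size=n_ens)
analysis = states + gain * (perturbed_obs - states)

print(f"prior mean {states.mean():.3f} -> analysis mean {analysis.mean():.3f}")
```

The gain shows why a well-spread prior ensemble matters: if the ensemble variance collapses, the gain goes to zero and the observations can no longer correct the states.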


2021 ◽  
Author(s):  
Matteo Berti ◽  
Alessandro Simoni

<p>Rainfall is the most significant triggering factor for debris flows. Water is needed to saturate the soil, initiate sediment motion (regardless of the mobilization mechanism) and transform the solid debris into a fluid mass that can move rapidly downslope. This water is commonly provided by rainfall, or by rainfall and snowmelt. Consequently, most warning systems rely on rainfall thresholds to predict debris flow occurrence. Debris flow thresholds are usually derived empirically from the rainfall records that caused past debris flows in a certain area, using a combination of selected precipitation measurements (such as event rainfall P, duration D, or average intensity I) that describe critical rainfall conditions. Recent years have also seen a growing interest in the use of coupled hydrological and slope stability models to derive physically based thresholds for shallow landslide initiation.</p><p>In both cases, rainfall thresholds are affected by significant uncertainty. Sources of uncertainty include: measurement errors; spatial variability of the rainfall field; incomplete or uncertain debris flow inventories; subjective definition of the “rainfall event”; use of subjective criteria to define the critical conditions; and uncertainty in model parameters (for physically based approaches). Rainfall measurement is widely recognized as a main source of uncertainty, due to the extreme time-space variability that characterizes intense rainfall events in mountain areas. However, significant errors can also arise from inaccurate information on the timing of debris flows reported in landslide inventories, or from the criterion used to define triggering intensities.</p><p>This study analyzes the common sources of uncertainty associated with rainfall thresholds for debris flow occurrence and discusses different methods to quantify them. 
First, we give an overview of the various approaches used in the literature to measure the uncertainty caused by random errors or procedural defects. These approaches are then applied to debris flows using real data collected in the Dolomites (Northern Alps, Italy), in order to estimate the variability of each individual factor (precipitation, triggering timing, triggering intensity, etc.). Individual uncertainties are then combined to obtain the overall uncertainty of the rainfall threshold, which can be calculated using the classical method of “summation in quadrature” or a more effective approach based on Monte Carlo simulations. The uncertainty budget allows us to identify the largest contributors to the final variability, and it is also useful for understanding whether this variability can be reduced to make our thresholds more precise.</p>
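The two combination methods mentioned, summation in quadrature and Monte Carlo, can be compared on invented relative uncertainties; the component values and the multiplicative error model below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical independent relative standard uncertainties affecting a
# rainfall threshold: rainfall measurement, event timing, and the
# definition of triggering intensity (values are illustrative).
u_rain, u_timing, u_intensity = 0.15, 0.10, 0.08

# 1) Classical "summation in quadrature" for independent error sources.
u_quad = np.sqrt(u_rain**2 + u_timing**2 + u_intensity**2)

# 2) Monte Carlo: propagate each source through a multiplicative model
# and read the combined uncertainty off the sample spread.
n = 100_000
factors = (rng.normal(1.0, u_rain, n)
           * rng.normal(1.0, u_timing, n)
           * rng.normal(1.0, u_intensity, n))
u_mc = factors.std()

print(f"quadrature: {u_quad:.3f}, Monte Carlo: {u_mc:.3f}")
```

For small, independent, near-Gaussian errors the two agree closely; Monte Carlo becomes the more effective approach once the error model is nonlinear or the components are correlated, which quadrature cannot capture.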


2021 ◽  
Vol 19 (3) ◽  
pp. 125-138
Author(s):  
S. Inichinbia ◽  
A.L. Ahmed

This paper presents a rigorous but pragmatic and data-driven approach to the science of making seismic-to-well ties. This pragmatic approach is consistent with the interpreter’s desire to correlate geology to seismic information by the use of the convolution model, together with least-squares matching techniques and statistical measures of fit and accuracy, to match the seismic data to the well data. Three wells available on the field provided a chance to estimate the wavelet (both in terms of shape and timing) directly from the seismic data and to ascertain the level of confidence that should be placed in the wavelet. The reflections were interpreted clearly as hard sand at H1000 and soft sand at H4000. A synthetic seismogram was constructed and matched to a real seismic trace, and features from the well were correlated to the seismic data. The prime concept in constructing the synthetic is the convolution model, which represents a seismic reflection signal as a sequence of interfering reflection pulses of different amplitudes and polarity but all of the same shape. This pulse shape is the seismic wavelet, which is, formally, the reflection waveform returned by an isolated reflector of unit strength at the target depth. The wavelets are near zero phase. The goal of these seismic-to-well ties was to obtain information on the sediments, calibrate seismic processing parameters, correlate formation tops and seismic reflectors, and derive a wavelet for seismic inversion, among others. Three seismic-to-well ties were done using three partial angle stacks, and basically two formation tops were correlated. Keywords: seismic, well logs, tie, synthetics, angle stacks, correlation
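The convolution model at the heart of such a tie can be sketched in a few lines; the impedance profile, sample rate, and Ricker wavelet frequency below are illustrative choices, not values from the study:

```python
import numpy as np

# Convolutional model of a seismic trace (illustrative): reflectivity
# derived from acoustic impedance, convolved with a zero-phase wavelet.
def ricker(f, dt, length=0.128):
    """Zero-phase Ricker wavelet of peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002  # 2 ms sample interval (assumed)

# Three-layer acoustic impedance model: a hard interface then a soft one.
impedance = np.array([5000.0] * 50 + [7000.0] * 50 + [6000.0] * 50)

# Reflection coefficients at the layer interfaces.
rc = np.zeros_like(impedance)
rc[1:] = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

# Synthetic trace = reflectivity convolved with the wavelet.
trace = np.convolve(rc, ricker(30.0, dt), mode="same")
print(f"peak amplitude {trace.max():.3f}")
```

Each interface returns a scaled, polarity-signed copy of the same pulse shape, which is exactly the "interfering reflection pulses" picture described in the abstract; matching such a synthetic to a real trace is the tie itself.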


2021 ◽  
Author(s):  
Daniel Asante Otchere ◽  
David Hodgetts ◽  
Tarek Arbi Omar Ganat ◽  
Najeeb Ullah ◽  
Alidu Rashid

Abstract Understanding and characterizing the behaviour of the subsurface, and combining that understanding with a suitable statistical method, gives a higher level of confidence in the reservoir model produced. Interpolation of porosity and permeability data with minimum error and high accuracy is, therefore, essential in reservoir modeling. The most widely used interpolation algorithm, kriging, is the best linear unbiased estimator when enough well data are available. This research sought to compare the applicability and competitiveness of the inverse distance weighting (IDW) method, using power indices of 1, 2 and 4, with kriging when data are sparse due to time and budget constraints, in order to calculate hydrocarbon volumes in a fluvial-deltaic reservoir. Differences in interpolation results, estimated from descriptive statistics, were insignificant, showing similar prediction accuracy and consistency, but IDW with a power index of 1 gave the lowest estimation error and highest accuracy. The assessment of hydrocarbon volume calculations also showed a marginal difference, below 0.08, between IDW with a power index of 1 and kriging in the reservoir zones. Reservoir segment cross-validation and correlation analysis results indicate that IDW has no significant difference from kriging, with absolute errors of 3% for recoverable oil and 0.7% for recoverable gas. Grid upscaling, which usually causes a loss of geological features and extreme porosity values, did not impact the results but rather confirmed the robustness of IDW in both fine and coarse grid upscaling. With IDW exhibiting the lowest errors and highest accuracy, the volumetric and statistical results confirm that when there are fewer well data in a fluvial-deltaic reservoir, the suitable spatial interpolation choice is the IDW method with a power index of 1.
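A minimal sketch of the IDW estimator with the power indices compared in the study; the well locations and porosity values below are invented for illustration:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=1):
    """Inverse distance weighting with power index 1, 2, or 4."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                     # query coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d ** power                   # weights decay with distance
    return np.sum(w * values) / np.sum(w)

# Hypothetical porosity samples (fractions) at four well locations (km).
wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
phi = np.array([0.20, 0.25, 0.15, 0.22])

query = np.array([0.3, 0.4])
for p in (1, 2, 4):
    print(f"power {p}: porosity {idw(wells, phi, query, power=p):.4f}")
```

A lower power index spreads the weight more evenly across samples, giving smoother maps; higher powers localize the estimate around the nearest well, which is one reason the power index matters when wells are sparse.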

