Formulating the history matching problem with consistent error statistics

Author(s):  
Geir Evensen

Abstract It is common to formulate the history-matching problem using Bayes' theorem. From Bayes' theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed perfect, and there are no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model by the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes' theorem that takes the data dependency of the simulation model into account. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better covers the observations with more substantial and realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the magnitude of the update. The paper also shows the consistency of the subspace inversion scheme by Evensen (Ocean Dyn. 54, 539–560, 2004) in the case with correlated measurement errors and demonstrates its accuracy when using a "larger" ensemble of perturbations to represent the measurement error covariance matrix.
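A minimal sketch of the contrast the abstract draws, in notation of our own choosing (θ for static parameters, d for observed rates, e for rate-data errors, with θ and e assumed a priori independent; none of this is the paper's notation):

```latex
% Standard formulation: a perfect model forced by error-free rates
p(\theta \mid d) \propto p(d \mid \theta)\, p(\theta)

% Revised formulation sketched in the abstract: the simulator is forced
% by observed rates d = d_true + e, so the rate errors e become extra
% unknowns updated jointly with the static parameters
p(\theta, e \mid d) \propto p(d \mid \theta, e)\, p(\theta)\, p(e)
```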

Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3289
Author(s):  
Emil N. Musakaev ◽  
Sergey P. Rodionov ◽  
Nail G. Musakaev

A three-dimensional numerical hydrodynamic model describes the processes of developing oil and gas fields fairly accurately, and has good predictive properties, only if there are high-quality input data and comprehensive information about the reservoir. However, under high uncertainty in the input data, measurement errors, and significant time and resource costs for processing and analyzing large amounts of data, the use of such models may be unjustified and can lead to ill-posed problems: either the uniqueness of the solution or its stability is violated. A well-known method for dealing with these problems is regularization, i.e., the addition of some a priori information. In contrast to full-scale modeling, reduced-physics models are currently under active development; they are used, first of all, when an operational decision must be made and computational resources are limited. One of the most popular simplified models is the material balance model, which makes it possible to directly capture the relationship between reservoir pressure, flow rates, and the integral reservoir characteristics. In this paper, a hierarchical approach is proposed for the problem of oil-field waterflooding control, using material balance models in successive approximations: first for the field as a whole, then for hydrodynamically connected blocks of the field, and then for individual wells. When moving from one level of model detail to the next, the modeling results from the previous levels of the hierarchy are used as additional regularizing information, which ultimately makes it possible to correctly solve the history matching problem (identification of the filtration model) under incomplete input information.
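As an illustration of the material-balance idea, here is a single-tank sketch in Python; the function name, the simple compressibility closure, and all numbers are our assumptions, not the paper's block- and well-level models:

```python
# Minimal single-tank material-balance sketch (illustrative only).

def tank_pressure(p_init, ct, pore_volume, inj_water, prod_oil, prod_water,
                  bo=1.2, bw=1.0):
    """Reservoir pressure after net withdrawal, via total compressibility.

    p = p_init + (injected - produced reservoir volumes) / (ct * Vp)
    """
    net_voidage = inj_water * bw - (prod_oil * bo + prod_water * bw)
    return p_init + net_voidage / (ct * pore_volume)

# Example: pressure declines when voidage exceeds injection support
p = tank_pressure(p_init=250.0, ct=1e-4, pore_volume=5e7,
                  inj_water=1.0e6, prod_oil=0.9e6, prod_water=0.3e6)
print(f"estimated reservoir pressure: {p:.1f} bar")  # ~174 bar
```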


2021 ◽  
Author(s):  
Xindan Wang ◽  
Yin Zhang ◽  
Abhijit Dandekar ◽  
Yudou Wang

Abstract Chemical flooding has been widely used to enhance oil recovery after conventional waterflooding. However, it is always a challenge to model chemical flooding accurately, since many of its model parameters cannot be measured accurately in the lab, and some cannot be obtained from the lab at all. Recently, ensemble-based assisted history matching techniques have been proven efficient and effective in simultaneously estimating multiple model parameters. Therefore, this study validates the effectiveness of the ensemble-based method in estimating model parameters for chemical flooding simulation, employing the half-iteration EnKF (HIEnKF) method to conduct the assisted history matching. In this work, five surfactant-polymer (SP) coreflooding experiments were first conducted, and the corresponding core-scale simulation models were built to simulate the coreflooding experiments. The HIEnKF method was then applied to calibrate the core-scale simulation models by assimilating the observed data, including cumulative oil production and pressure drop, from the corresponding coreflooding experiments. The HIEnKF method has been successfully applied to simultaneously estimate multiple model parameters, including the porosity and permeability fields, relative permeabilities, the polymer viscosity curve, the polymer adsorption curve, the surfactant interfacial tension (IFT) curve, and the miscibility function curve, for the SP flooding simulation model. There is good agreement between the updated simulation results and the observed data, indicating that the updated model parameters are appropriate to characterize the properties of the corresponding porous media and of the fluid flow in them. At the same time, the effectiveness of the ensemble-based assisted history matching method in chemical enhanced oil recovery (EOR) simulation has been validated. Based on the validated simulation model, numerical simulation tests have been conducted to investigate the influence of injection schemes and operating parameters of SP flooding on the ultimate oil recovery performance. It has been found that the polymer concentration, surfactant concentration, and slug size of SP flooding have a significant impact on oil recovery, and these parameters need to be optimized to achieve the maximum economic benefit.
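For readers unfamiliar with ensemble-based assisted history matching, the analysis step common to EnKF-type methods can be sketched as follows. This is a generic perturbed-observations update, not the HIEnKF itself (whose half-iteration re-run of the simulator is not shown); all shapes and names are our assumptions:

```python
import numpy as np

def enkf_update(M, D, d_obs, R, seed=0):
    """Generic ensemble Kalman analysis step.

    M: (n_param, n_ens) parameter ensemble
    D: (n_obs, n_ens) simulated data, one column per ensemble member
    d_obs: (n_obs,) observed data
    R: (n_obs, n_obs) observation-error covariance
    """
    rng = np.random.default_rng(seed)
    n_ens = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)      # parameter anomalies
    Y = D - D.mean(axis=1, keepdims=True)      # predicted-data anomalies
    C_md = A @ Y.T / (n_ens - 1)               # parameter-data cross-covariance
    C_dd = Y @ Y.T / (n_ens - 1)               # predicted-data covariance
    # Perturb observations so posterior spread is statistically consistent
    D_obs = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T
    # Kalman-type update: M + C_md (C_dd + R)^{-1} (D_obs - D)
    return M + C_md @ np.linalg.solve(C_dd + R, D_obs - D)
```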


2021 ◽  
Author(s):  
Bjørn Egil Ludvigsen ◽  
Mohan Sharma

Abstract Well performance calibration after history matching a reservoir simulation model ensures that the wells give realistic rates during the prediction phase. The calibration involves adjusting well model parameters to match observed production rates at specified backpressure(s). This process is usually very time consuming: the traditional approach, using one reservoir model with hundreds of high-productivity wells, would take months to calibrate. The application of uncertainty-centric workflows for reservoir modeling and history matching results in many acceptable matches for phase rates and flowing bottom-hole pressure (BHP). This makes well calibration even more challenging for an ensemble of a large number of simulation models, as the existing approaches are not scalable. It is known that the productivity index (PI) integrates reservoir and well performance, with most of the pressure drop occurring in the one to two grid blocks around the well, depending on the model resolution. A workflow has been set up to fix the history-to-prediction transition by calibrating the PI of each well in a history-matched simulation model. The simulation PI can be modified by changing the permeability-thickness product (Kh), the skin, or by applying a PI multiplier as a correction. For a history-matched ensemble with a range in water cut and gas-oil ratio, the proposed workflow involves running flowing-gradient calculations for each well, at the observed THP and the simulated phase rates, to calculate a target BHP. A PI multiplier is then calculated for that well and model to shift the simulation BHP to the target BHP, as a local update that reduces the extent of the jump. An ensemble of history-matched models with a range in water cut and gas-oil ratio has a variation in required BHPs unique to each case. With the well calibration performed correctly, the jump observed in rates when switching from history to prediction can be eliminated or significantly reduced. The prediction thus yields reliable rates if the wells are run on pressure control, and a reliable plateau if the wells are run on group control. This reduces the risk of under- or over-predicting the ultimate hydrocarbon recovery from the field and the project's cash flow. It also allows running sensitivities to backpressure, tubing design, and other equipment constraints to optimize reservoir performance and facilities design. The proposed workflow, which dynamically couples reservoir simulation and well-performance modeling, takes a few seconds to run per well, making it fit for purpose for a large ensemble of simulation models with a large number of wells.
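The PI correction itself reduces to a simple rescaling. A hypothetical sketch, where the target BHP is assumed to come from the flowing-gradient calculation described above (all names are ours, not the authors' implementation):

```python
def pi_multiplier(q_liquid, p_res, bhp_sim, bhp_target):
    """Productivity index: PI = q / (p_res - BHP).

    Returns the multiplier that rescales the simulation PI so the well
    delivers the same rate at the lift-calculated target BHP.
    """
    pi_sim = q_liquid / (p_res - bhp_sim)
    pi_target = q_liquid / (p_res - bhp_target)
    return pi_target / pi_sim  # == (p_res - bhp_sim) / (p_res - bhp_target)

# Example: simulation over-predicts drawdown, so PI is scaled up
mult = pi_multiplier(q_liquid=2000.0, p_res=300.0,
                     bhp_sim=180.0, bhp_target=220.0)
print(f"PI multiplier: {mult:.2f}")  # 1.50
```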


SPE Journal ◽  
2020 ◽  
Vol 25 (06) ◽  
pp. 3317-3331
Author(s):  
Pipat Likanapaisal ◽  
Hamdi A. Tchelepi

Summary In general, a probabilistic framework for a modeling process involves two uncertainty spaces: model parameters and state variables (or predictions). The two uncertainty spaces in reservoir simulation are connected by the governing equations of flow and transport in porous media in the form of a reservoir simulator. In a forward problem (or a predictive run), the reservoir simulator directly maps the uncertainty space of the model parameters to the uncertainty space of the state variables. Conversely, an inverse problem (or history matching) aims to improve the descriptions of the model parameters by using the measurements of state variables. However, we cannot solve the inverse problem directly in practice. Numerous algorithms, including Kriging-based inversion as well as the ensemble Kalman filter (EnKF) and its many variants, simplify the system by invoking a linearity assumption. The purpose of this paper is to improve the integration of measurement errors in the history-matching algorithms that rely on the linearity assumption. The statistical moment equation (SME) approach with the Kriging-based inversion algorithm is used to illustrate several practical examples. In the Motivation section, an example of pressure conditioning presents a measurement that carries no additional information because of its large measurement error. This example highlights the inadequacy of the current method, which underestimates the conditional uncertainty for both model parameters and predictions. Accordingly, we derive a new formula that recognizes the absence of additional information and preserves the unconditional uncertainty. We believe this to be the consistent behavior when integrating measurement errors. Other examples are used to validate the new formula with both linear and nonlinear (i.e., the saturation equation) problems, with single and multiple measurements, and with different configurations of measurement errors. For broader applications, we also develop an equivalent formula for algorithms in the Monte Carlo simulation (MCS) approach, such as EnKF and the ensemble smoother (ES).
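The behavior argued for can be seen in the scalar linear update; in our notation (not the paper's), with measurement-error variance \(\sigma_d^2\):

```latex
% Scalar linear (Kriging/Kalman-type) conditioning, notation ours:
m^{a} = m^{f} + \frac{c_{md}}{\sigma_{dd}^{2} + \sigma_{d}^{2}}
        \left( d_{\mathrm{obs}} - d^{f} \right),
\qquad
\sigma_{m,a}^{2} = \sigma_{m,f}^{2}
                   - \frac{c_{md}^{2}}{\sigma_{dd}^{2} + \sigma_{d}^{2}}

% As \sigma_d^2 \to \infty, both the update and the variance reduction
% vanish, so the conditional statistics revert to the unconditional
% ones -- the consistent behavior the summary describes.
```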


1977 ◽  
Vol 17 (01) ◽  
pp. 42-56 ◽  
Author(s):  
A.H. Dogru ◽  
T.N. Dixon ◽  
T.F. Edgar

Abstract Methods of nonlinear regression theory were applied to the reservoir history-matching problem to determine the effect of erroneous parameter estimates obtained from well testing on the future prediction of reservoir pressures. Two examples were studied: well testing in a radial one-dimensional slightly compressible reservoir and in an undersaturated, two-dimensional, heterogeneous oil field. The reservoir parameters of permeability, porosity, external radius, and pore volume were considered, and the effects of measurement error, test time, and flow rate on the confidence limits were computed. Introduction The operation of a reservoir simulator requires accurate estimates of the reservoir properties. However, the simulation parameters, such as permeability, porosity, and reservoir geometry, are usually unknown unless coring and physical property analysis have been undertaken. Because of the cost of these procedures, it is more desirable to use the pressures measured at the well during a well test and indirectly compute the important parameters of the system. By using history matching of the test data to obtain the system parameters, the future pressure behavior of the reservoir can be predicted. Several studies on history matching have indicated that the well-test approach for determining the reservoir parameters often suffers from incorrect and nonunique parameter estimates. The factors that affect the parameter estimation can be classified as model errors, observability, measurement errors or noise, history time, test procedure, and optimization procedure. Model errors arise from the inaccuracy of the model and the numerical integration. For example, a reservoir simulator is only a reasonable approximation for flow through porous media. Solution of a model equation by numerical means also introduces roundoff and discretization errors. Observability of the system plays an important role in estimating the reservoir parameters. Depending on the location of the well and the number of data points, it may not be possible to determine uniquely all reservoir parameters from the measurements made at that well. Observability is strictly a function of the reservoir model used. At a given well, pressure measurements may only reflect the values of the parameters in specific zones of the reservoir. If a specific zone away from the well does not affect the measured pressure, then the system is not observable at that particular location. A rigorous definition of observability can be found in other papers. Measurement errors in the pressures and flow rates are another source of unrealistic parameter estimates. Longer history times always give more information about the reservoir as long as the system remains in a dynamic state. The nature of the system input (well flow rate) also affects the accuracy of the estimates and predictions. The final source of incorrect parameter estimates arises because the history-matching problem, posed mathematically, is usually a nonlinear programming problem that must be solved computationally. Such a problem yields multiple extrema that often can lead to a relative minimum (rather than a global minimum) in the numerical search for the smallest matching error. Also, the magnitude of the objective function can be quite insensitive to the parameters selected, thus causing the optimization procedure to terminate prematurely. The above factors control the history-matching process; with actual data, it is usually impossible to identify the exact contributions of each factor to the errors in the parameter estimates. Since a certain amount of error will be introduced into the estimated parameters from the history-matching process, it is useful to study the magnitude of this error resulting from various sources under controlled simulation conditions. Also, it is important to determine how the errors in the parameters are reflected in the future predictions of the pressures.
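The confidence limits discussed above can be computed, in the linearized classical-regression setting, from the sensitivity (Jacobian) matrix at the matched optimum. A generic sketch, not the paper's code, with all names our own:

```python
import numpy as np
from scipy import stats

def confidence_limits(J, residuals, alpha=0.05):
    """Linearized confidence half-widths for matched parameters.

    J: (n_data, n_param) Jacobian of simulated pressures w.r.t.
       parameters at the optimum; residuals: observed minus matched data.
    """
    n, p = J.shape
    s2 = residuals @ residuals / (n - p)   # noise-variance estimate
    cov = s2 * np.linalg.inv(J.T @ J)      # linearized parameter covariance
    t_crit = stats.t.ppf(1 - alpha / 2, n - p)
    return t_crit * np.sqrt(np.diag(cov))  # +/- bound per parameter
```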


Energies ◽  
2020 ◽  
Vol 13 (17) ◽  
pp. 4290
Author(s):  
Dongmei Zhang ◽  
Yuyang Zhang ◽  
Bohou Jiang ◽  
Xinwei Jiang ◽  
Zhijiang Kang

Reservoir history matching is a well-known inverse problem for production prediction, in which numerous uncertain parameters of a reservoir numerical model are optimized by minimizing the misfit between the simulated and historical production data. The Gaussian Process (GP) has shown promising performance for assisted history matching, being an efficient nonparametric and nonlinear model with few parameters to be tuned automatically. The recently introduced Gaussian Process proxy models with Variogram Analysis of Response Surfaces-based sensitivity analysis (GP-VARS) use forward and inverse GP-based proxy models together with VARS-based sensitivity analysis to optimize the high-dimensional reservoir parameters. However, the inverse GP solution (GPIS) in GP-VARS is unsatisfactory, especially for large numbers of reservoir parameters, where the mapping from low-dimensional misfits to high-dimensional uncertain reservoir parameters may be poorly modeled by a GP. To improve the performance of GP-VARS, in this paper we propose Gaussian Process proxy models with Latent Variable Models and VARS-based sensitivity analysis (GPLVM-VARS), in which a Gaussian Process Latent Variable Model (GPLVM)-based inverse solution (GPLVMIS) is provided instead of the GP-based GPIS, with the inputs and outputs of GPIS reversed. The experimental results demonstrate the effectiveness of the proposed GPLVM-VARS in terms of accuracy and complexity. The source code of the proposed GPLVM-VARS is available at https://github.com/XinweiJiang/GPLVM-VARS.
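A minimal forward GP proxy in the spirit of GP-VARS can be sketched with scikit-learn; the data arrays here are synthetic placeholders, not the paper's reservoir cases:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 8))          # 50 sampled parameter vectors
y = np.sum((X - 0.5) ** 2, axis=1)     # stand-in for the simulator misfit

# Forward proxy: parameters -> misfit, replacing expensive simulations
proxy = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                 normalize_y=True).fit(X, y)
misfit_mean, misfit_std = proxy.predict(rng.uniform(size=(1, 8)),
                                        return_std=True)
```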


Author(s):  
Mohammad-Reza Ashory ◽  
Farhad Talebi ◽  
Heydar R Ghadikolaei ◽  
Morad Karimpour

This study investigated the vibrational behaviour of a rotating two-blade propeller at different rotational speeds by using self-tracking laser Doppler vibrometry. Given that a self-tracking method necessitates the accurate adjustment of test setups to reduce measurement errors, a test table with sufficient rigidity was designed and built to enable the adjustment and repair of test components. The results of the self-tracking test on the rotating propeller indicated an increase in natural frequency and a decrease in the amplitude of normalized mode shapes as rotational speed increases. To assess the test results, a numerical model created in ABAQUS was used. The model parameters were tuned in such a way that the natural frequency and associated mode shapes were in good agreement with those derived using a hammer test on a stationary propeller. The mode shapes obtained from the hammer test and the numerical (ABAQUS) modelling were compared using the modal assurance criterion. The examination indicated a strong resemblance between the hammer test results and the numerical findings. Hence, the model can be employed to determine the other mechanical properties of two-blade propellers in test scenarios.
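The modal assurance criterion used for the comparison above is standard; a small NumPy sketch (function name ours):

```python
import numpy as np

def mac(phi_test, phi_fe):
    """Modal assurance criterion between two mode shapes.

    MAC = |phi_t^H phi_f|^2 / ((phi_t^H phi_t)(phi_f^H phi_f));
    1 means identical shapes (up to scaling), 0 means orthogonal.
    """
    num = np.abs(np.vdot(phi_test, phi_fe)) ** 2
    den = np.vdot(phi_test, phi_test).real * np.vdot(phi_fe, phi_fe).real
    return num / den
```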


2018 ◽  
Vol 108 (01-02) ◽  
pp. 41-46
Author(s):  
F. Vogel ◽  
M. Tiffe ◽  
M. Metzger ◽  
D. Prof. Biermann

Regarding the increased application of innovative material concepts in sequential process steps for manufacturing components with tailored properties, FE-based simulation systems can be used to reduce the effort of experimental investigations. In this article, the development of concepts for the adjustment of material-modeling parameters and for the conjunction of the single simulations of the process chain is described.


2021 ◽  
Author(s):  
Guohua Gao ◽  
Jeroen Vink ◽  
Fredrik Saaf ◽  
Terence Wells

Abstract When formulating history matching within the Bayesian framework, we may quantify the uncertainty of model parameters and production forecasts using conditional realizations sampled from the posterior probability density function (PDF). Sampling such a posterior PDF is quite challenging: some methods, e.g., Markov chain Monte Carlo (MCMC), are very expensive, while others are cheaper but may generate biased samples. In this paper, we propose an unconstrained Gaussian Mixture Model (GMM) fitting method to approximate the posterior PDF and investigate new strategies to further enhance its performance. To reduce the CPU time of handling bound constraints, we reformulate the GMM fitting problem such that an unconstrained optimization algorithm can be applied to find the optimal solution for the unknown GMM parameters. To obtain a sufficiently accurate GMM approximation with the lowest number of Gaussian components, we generate random initial guesses, remove components with very small or very large mixture weights after each GMM fitting iteration, and prevent their reappearance using a dedicated filter. To prevent overfitting, we only add a new Gaussian component if the quality of the GMM approximation on a (large) set of blind-test data improves sufficiently. The unconstrained GMM fitting method with the new strategies proposed in this paper is validated using nonlinear toy problems and then applied to a synthetic history matching example. It can construct a GMM approximation of the posterior PDF that is comparable to the MCMC method, and it is significantly more efficient than the constrained GMM fitting formulation, e.g., reducing the CPU time by a factor of 800 to 7300 for the problems we tested, which makes it quite attractive for large-scale history matching problems.
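The component-management strategy can be illustrated with scikit-learn's GaussianMixture standing in for the authors' custom unconstrained fitting; the pruning and blind-test thresholds below are our assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(samples, blind_test, max_components=10, tol=1e-3):
    """Grow a GMM one component at a time, keeping held-out quality."""
    best, best_ll = None, -np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, n_init=3,
                              random_state=0).fit(samples)
        if gmm.weights_.min() < 1e-3:   # prune: a component is barely used
            continue
        ll = gmm.score(blind_test)      # mean held-out log-likelihood
        if ll < best_ll + tol:          # stop growing: no real gain
            break
        best, best_ll = gmm, ll
    return best
```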


1983 ◽  
Vol 245 (5) ◽  
pp. R664-R672 ◽  
Author(s):  
S. Hurwitz ◽  
S. Fishman ◽  
A. Bar ◽  
M. Pines ◽  
G. Riesenfeld ◽  
...  

The system that regulates plasma calcium in the bird has been formalized into a model based on a series of differential equations and solved by computer simulation. Bone, kidney, and intestine have been considered as the control subsystems, with parathyroid hormone and 1,25-dihydroxycholecalciferol as the regulating hormones. The parameters used in the simulation model have been computed either from published results or by specifically designed experiments described here. For the estimation of parameters, an iterative procedure has been developed that was designed to minimize the sum of squared errors between observed and system-simulated values. Parameters of 1,25-dihydroxycholecalciferol metabolism were experimentally obtained from the kinetic behavior of the 3H-labeled hormone in rachitic birds after a single dose. Model parameters have been adjusted using the results of in vivo calcium loading and validated by an EDTA infusion experiment. The simulation model has been used to study the hierarchy of the activities of the three control subsystems and of the regulating hormones at different calcium intakes. Positive or negative errors in plasma calcium resulted in an asymmetry in the activities of the controlling systems, bone and kidney, whereas the intestine was characterized by its relatively long response time.
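The estimation loop described (simulate the model, compare with observations, iterate to minimize the sum of squared errors) can be sketched generically; the one-pool relaxation model and all numbers below are placeholders, not the paper's bone/kidney/intestine model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 10.0, 20)
ca_obs = 2.5 + 0.5 * np.exp(-0.8 * t_obs)   # synthetic plasma-Ca data

def simulate(params):
    """One-pool model: plasma Ca relaxes toward a set point."""
    k_out, ca_set = params
    rhs = lambda t, ca: -k_out * (ca - ca_set)
    sol = solve_ivp(rhs, (0.0, 10.0), [3.0], t_eval=t_obs)
    return sol.y[0]

# Least-squares fit: minimize sum of squared simulation-vs-data errors
residual = lambda p: simulate(p) - ca_obs
fit = least_squares(residual, x0=[0.5, 2.0], bounds=([0, 0], [10, 10]))
print(fit.x)   # recovers approximately [0.8, 2.5]
```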

