The Effect of Model Error Identification on the Fast Reservoir Simulation by Capacitance-Resistance Model

SPE Journal, 2020, Vol 25 (06), pp. 3349-3365
Author(s): Azadeh Mamghaderi, Babak Aminshahidy, Hamid Bazargan

Summary: Using fast and reliable proxies instead of sophisticated and time-consuming reservoir simulators is of great importance in reservoir management. The capacitance-resistance model (CRM) has been widely used as such a fast proxy. However, the inadequacy of this proxy in reducing complex reservoirs to a limited number of parameters has not been adequately addressed in the literature. In this study, potential uncertainties in modeling the waterflooding process with the producer-based version of CRM (CRMP) are formulated, leading to a new error-related term embedded in the original formulation of the proxy. Considering a general form of the model error that represents both white and colored noise, a system of CRMP-error equations is introduced analytically to deal with any type of intrinsic model imperfection. Two approaches are developed to solve the problem: tuning the additional error-related parameters as a complementary stage of a classical history-matching procedure, and updating these parameters simultaneously with the original model parameters in a data-assimilation approach over the model training time. To validate the model and show the effectiveness of both solution schemes, the injection and production data of a water-injection procedure in a three-layered reservoir model are used. Results show that the error-related parameters can be matched successfully along with the model's original variables, either in a routine model-calibration procedure or in a data-assimilation approach using the ensemble Kalman filter (EnKF). Comparing the average of the obtained liquid-rate range (the problem output) with the true data demonstrates the effectiveness of accounting for model error, which substantially improves the results relative to the original model without the error term.
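
For orientation, the following is a minimal sketch of the standard producer-based CRM (CRMP) material-balance recursion that this proxy family builds on, assuming constant bottomhole pressure (so the BHP term is dropped) and illustrative function and variable names; it does not reproduce the authors' error-augmented formulation.

```python
import numpy as np

def crmp_forward(q0, inj, f, tau, dt):
    """Producer-based CRM (CRMP) liquid-rate forecast for one producer.

    q0  : initial liquid rate of the producer
    inj : (n_steps, n_injectors) injection-rate history
    f   : (n_injectors,) connectivity (gain) factors toward this producer
    tau : time constant of the producer's drainage volume
    dt  : time-step size
    """
    decay = np.exp(-dt / tau)
    q = np.empty(inj.shape[0])
    q_prev = q0
    for k in range(inj.shape[0]):
        # Exponential decay of the previous rate plus the filtered
        # contribution of the connected injectors (material balance).
        q_prev = q_prev * decay + (1.0 - decay) * (inj[k] @ f)
        q[k] = q_prev
    return q
```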

2019, Vol 23 (6), pp. 1331-1347
Author(s): Miguel Alfonzo, Dean S. Oliver

Abstract: It is common in ensemble-based methods of history matching to evaluate the adequacy of the initial ensemble of models through visual comparison between actual observations and data predictions prior to data assimilation. If the model is appropriate, then the observed data should look plausible when compared to the distribution of realizations of simulated data. The principle of data coverage alone is, however, not an effective method for model criticism, as coverage can often be obtained by increasing the variability in a single model parameter. In this paper, we propose a methodology for determining the suitability of a model before data assimilation, aimed particularly at real cases with large numbers of model parameters, large amounts of data, and correlated observation errors. This model diagnostic is based on an approximation of the Mahalanobis distance between the observations and the ensemble of predictions in high-dimensional spaces. We applied our methodology to two examples: a Gaussian example, which shows that our shrinkage estimate of the covariance matrix is a better discriminator of outliers than the pseudoinverse and a diagonal approximation of this matrix; and an example using data from the Norne field. In this second test, we used actual production, repeat-formation-tester, and inverted seismic data to evaluate the suitability of the initial reservoir simulation model and seismic model. Despite the good data coverage, our model diagnostic suggested that model improvement was necessary. After modifying the model, it was validated against the observations and is now ready for history matching to production and seismic data. This shows that the proposed methodology for evaluating the adequacy of the model is suitable for large realistic problems.
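
As a rough illustration of this kind of diagnostic, the sketch below computes a squared Mahalanobis distance between observations and an ensemble of predictions using a shrinkage estimate of the covariance; the fixed shrinkage coefficient and the names are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def mahalanobis_diagnostic(d_obs, D_pred, shrinkage=0.1):
    """Squared Mahalanobis distance between observed data and an ensemble
    of simulated data, with a shrinkage covariance estimate.

    d_obs  : (n_d,) observation vector
    D_pred : (n_d, n_e) ensemble of predicted data
    """
    mu = D_pred.mean(axis=1)
    A = D_pred - mu[:, None]                    # ensemble anomalies
    C = A @ A.T / (D_pred.shape[1] - 1)         # rank-deficient sample covariance
    # Shrink toward a scaled identity so the estimate stays well
    # conditioned even when n_e << n_d.
    target = np.trace(C) / C.shape[0] * np.eye(C.shape[0])
    C_shrunk = (1.0 - shrinkage) * C + shrinkage * target
    r = d_obs - mu
    return r @ np.linalg.solve(C_shrunk, r)
```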


SPE Journal, 2020, Vol 25 (02), pp. 951-968
Author(s): Minjie Lu, Yan Chen

Summary: Owing to the complex nature of hydrocarbon reservoirs, the numerical model constructed by geoscientists is always a simplified version of reality: for example, it might lack resolution from discretization and lack accuracy in modeling some physical processes. This flaw in the model, which causes a mismatch between actual observations and simulated data even when "perfect" model parameters are used as model inputs, is known as "model error". Even when the model is a perfect representation of reality, the inputs to the model are never completely known. During a typical model-calibration procedure, only a subset of model inputs is adjusted to improve the agreement between model responses and historical data. The remaining model inputs, which are not calibrated and are likely fixed at incorrect values, produce model error in the same manner as the imperfect-model scenario. Assimilating data without accounting for model error can result in incorrect adjustments to model parameters, underestimation of prediction uncertainties, and bias in forecasts. In this paper, we investigate the benefit of recognizing and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated "total error" (a combination of model error and observation error) is estimated from the data residual after a standard history match using the Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the estimation of model parameters and the quantification of prediction uncertainty. We first illustrate the method using a synthetic 2D five-spot example, in which some model errors are deliberately introduced and the results are closely examined against the known "true" model. The Norne field case is then used to further evaluate the method. The Norne model has previously been history matched using the LM-EnRML (Chen and Oliver 2014), with cell-by-cell properties (permeability, porosity, net-to-gross, vertical transmissibility) and parameters related to fault transmissibility, depths of water/oil contacts, and the relative permeability function adjusted to honor historical data. In that study, the authors highlighted the importance of including large numbers of model parameters, the proper use of localization, and heuristic adjustment of data noise to account for modeling error. In this paper, we improve the last aspect by quantitatively estimating model error through residual analysis.
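
A minimal sketch of the residual-based idea, with invented names and a simple second-moment estimator: the covariance of the post-calibration residuals is reused as the "total error" in subsequent assimilations.

```python
import numpy as np

def estimate_total_error_cov(d_obs, D_post, reg=1e-6):
    """Estimate a correlated 'total error' covariance from the data
    residuals left after a standard history match.

    d_obs  : (n_d,) observed data
    D_post : (n_d, n_e) simulated data from the calibrated ensemble
    """
    R = d_obs[:, None] - D_post      # residual for each ensemble member
    # The second moment about zero keeps both the bias (model error) and
    # the scatter (observation error); a small ridge keeps the matrix
    # invertible for reuse in the next data assimilation.
    return (R @ R.T) / R.shape[1] + reg * np.eye(R.shape[0])
```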


Author(s): Matthew J. Hoffman, Elizabeth M. Cherry

Modelling of cardiac electrical behaviour has led to important mechanistic insights, but significant challenges, including uncertainty in model formulations and parameter values, make it difficult to obtain quantitatively accurate results. An alternative approach is to combine models with observations from experiments to produce a data-informed reconstruction of system states over time. Here, we extend our earlier data-assimilation studies using an ensemble Kalman filter to reconstruct a three-dimensional time series of states with complex spatio-temporal dynamics using only surface observations of voltage. We consider the effects of several algorithmic and model parameters on the accuracy of reconstructions of known scroll-wave truth states using synthetic observations. In particular, we study the algorithm's sensitivity to parameters governing different parts of the process and its robustness to several model-error conditions. We find that the algorithm can achieve an acceptable level of error in many cases, with the weakest performance occurring for model-error cases and for more extreme parameter regimes with more complex dynamics. Analysis of the poorest-performing cases indicates an initial decrease in error followed by an increase when the ensemble spread is reduced. Our results suggest avenues for further improvement, such as increasing ensemble spread by incorporating additive inflation or using a parameter or multi-model ensemble. This article is part of the theme issue 'Uncertainty quantification in cardiac and cardiovascular modelling and simulation'.
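
For reference, this is a generic stochastic-EnKF analysis step of the kind such reconstructions rely on, updating full (e.g., three-dimensional) states from partial surface observations; localization, inflation, and the paper's specific filter variant are omitted, and all names are illustrative.

```python
import numpy as np

def enkf_analysis(X, d_obs, H, sigma_obs, rng):
    """Stochastic EnKF analysis: update a state ensemble from partial
    observations (e.g., surface voltage only).

    X         : (n_x, n_e) forecast ensemble of full states
    d_obs     : (n_d,) observation vector
    H         : (n_d, n_x) observation operator (selects observed entries)
    sigma_obs : observation-error standard deviation
    """
    n_d, n_e = len(d_obs), X.shape[1]
    Y = H @ X                                    # predicted observations
    D = d_obs[:, None] + sigma_obs * rng.standard_normal((n_d, n_e))
    Xa = X - X.mean(axis=1, keepdims=True)       # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)       # predicted-obs anomalies
    C_yy = Ya @ Ya.T / (n_e - 1) + sigma_obs**2 * np.eye(n_d)
    C_xy = Xa @ Ya.T / (n_e - 1)
    K = C_xy @ np.linalg.inv(C_yy)               # Kalman gain
    return X + K @ (D - Y)                       # analysis ensemble
```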


Author(s): Clara Sophie Draper

Abstract: The ensembles used in the NOAA National Centers for Environmental Prediction (NCEP) global data assimilation and numerical weather prediction (NWP) system are under-dispersed at and near the land surface, preventing their use in ensemble-based land data assimilation. Comparison to offline (land-only) data-assimilation ensemble systems suggests that while the relevant atmospheric fields are under-dispersed in NCEP's system, this alone cannot explain the under-dispersed land component, and an additional scheme is required to explicitly account for land model error. This study then investigates several schemes for perturbing the soil (moisture and temperature) states in NCEP's system, qualitatively comparing the induced ensemble spread to independent estimates of the forecast-error standard deviation in soil moisture, soil temperature, 2-m temperature, and 2-m humidity. Directly adding perturbations to the soil states, as is commonly done in offline systems, generated unrealistic spatial patterns in the soil moisture ensemble spread. Applying a Stochastically Perturbed Physics Tendencies scheme to the soil states is inherently limited in the amount of soil moisture spread it can induce. Perturbing the land model parameters, in this case the vegetation fraction, generated a realistic distribution of ensemble spread, while also inducing perturbations in the land (soil states) and atmosphere (2-m states) that are consistent with errors in the land/atmosphere fluxes. The parameter-perturbation method is therefore recommended for NCEP's ensemble system, and it is currently being refined within the development of ensemble-based coupled land/atmosphere data assimilation for NCEP's NWP system.
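
A toy version of the recommended parameter-perturbation idea, under simplifying assumptions (white multiplicative noise rather than the spatially and temporally correlated perturbation patterns an operational scheme would use):

```python
import numpy as np

def perturb_vegetation_fraction(vegfrac, rng, sigma=0.1):
    """Give each ensemble member its own perturbed vegetation-fraction
    field; the spread then propagates into the soil states and 2-m fields
    through the land/atmosphere fluxes computed by the model physics.

    vegfrac : (n_lat, n_lon) gridded vegetation fraction in [0, 1]
    sigma   : standard deviation of the multiplicative perturbation
    """
    # Multiplicative noise keeps spread proportional to the local value;
    # clipping enforces the physical bounds of a fraction.
    pert = 1.0 + sigma * rng.standard_normal(vegfrac.shape)
    return np.clip(vegfrac * pert, 0.0, 1.0)
```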


SPE Journal, 2016, Vol 21 (06), pp. 2195-2207
Author(s): Duc H. Le, Alexandre A. Emerick, Albert C. Reynolds

Summary: Recently, Emerick and Reynolds (2012) introduced the ensemble smoother with multiple data assimilations (ES-MDA) for assisted history matching. With computational examples, they demonstrated that ES-MDA provides both a better data match and a better quantification of uncertainty than is obtained with the ensemble Kalman filter (EnKF). However, like EnKF, ES-MDA can experience near-collapse of the ensemble and produce too many extreme values of rock-property fields for complex problems. These negative effects can be avoided by a judicious choice of the ES-MDA inflation factors but, before this work, the optimal inflation factors could be determined only by trial and error. Here, we provide two automatic procedures for adaptively choosing the inflation factor for the next data-assimilation step as the history match proceeds. Both methods are motivated by regularization procedures: the first is intuitive and heuristic; the second draws on existing theory on the regularization of least-squares inverse problems. We show that the adaptive ES-MDA algorithms are superior to the original ES-MDA algorithm by history matching three-phase-flow production data for a complicated synthetic problem in which the reservoir-model parameters include the porosity, horizontal and vertical permeability fields, depths of the initial fluid contacts, and the parameters of power-law permeability curves.
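
The constraint behind these procedures is that the reciprocals of the ES-MDA inflation factors must sum to one, so the multiple damped updates are consistent with a single Bayesian update in the linear-Gaussian limit. The sketch below checks the constraint and shows one simple non-adaptive (geometric) choice; the paper's adaptive selection rules are not reproduced.

```python
import numpy as np

def esmda_constraint_ok(alphas):
    """ES-MDA inflation factors must satisfy sum(1 / alpha_i) = 1."""
    return np.isclose(np.sum(1.0 / np.asarray(alphas)), 1.0)

def geometric_alphas(n_steps, ratio=0.5):
    """A simple decreasing-geometric choice, rescaled so the reciprocals
    sum to one (heavy damping early, light damping late)."""
    raw = ratio ** np.arange(n_steps)
    return raw * np.sum(1.0 / raw)   # e.g., n_steps=4 -> [15, 7.5, 3.75, 1.875]

assert esmda_constraint_ok(geometric_alphas(4))
```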


SPE Journal, 2016, Vol 21 (04), pp. 1413-1424
Author(s): Yuqing Chang, Andreas S. Stordal, Randi Valestrand

Summary: Data assimilation with ensemble-based inversion methods has been successfully applied to parameter estimation in reservoir models. In certain complex reservoir models, however, it remains challenging to estimate the model parameters and preserve geological realism simultaneously. In particular, when handling special reservoir-model parameters such as facies types in fluvial channels, geological realism becomes a key concern. The main objective of this work is to address these issues for a complex field with a newly extended version of a recently proposed facies-parameterization approach coupled with an ensemble-based data-assimilation method. The proposed workflow combines the new facies parameterization and the adaptive Gaussian mixture (AGM) filter into a data-assimilation framework for channelized reservoirs. To handle discrete facies parameters, we combine probability maps and truncated Gaussian fields to obtain a continuous parameterization of the facies fields. For the data assimilation, we use the AGM filter, an efficient history-matching approach that incorporates a resampling routine allowing facies fields to be regenerated with information from the updated probability maps. This workflow is evaluated, for the first time, on a complex field case: the Brugge field. This reservoir model consists of layers with complex channelized structures and layers characterized by reservoir properties generated with variograms. With limited prior knowledge of the facies model, the workflow is shown to preserve channel continuity while reducing reservoir-model uncertainty with the AGM filter. When applied to a complex reservoir, the proposed workflow provides a geologically consistent and realistic reservoir model that improves the prediction of subsurface flow behavior.
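
A two-facies toy sketch of the truncation idea, assuming a scipy dependency and invented names: the data assimilation updates the continuous Gaussian field, while the probability map sets cell-wise truncation thresholds; the paper's coupling with AGM resampling is not shown.

```python
import numpy as np
from scipy.stats import norm

def facies_from_probability_map(prob_channel, gauss_field):
    """Continuous parameterization of a discrete facies field.

    prob_channel : (n,) prior probability of channel facies per cell
    gauss_field  : (n,) standard-Gaussian field (the continuous variable
                   that the ensemble method actually updates)
    """
    # Cell-wise threshold = Gaussian quantile of the channel probability,
    # so P(facies = channel) matches the probability map; smooth updates
    # to gauss_field move cells across the threshold.
    threshold = norm.ppf(prob_channel)
    return (gauss_field < threshold).astype(int)   # 1 = channel, 0 = background
```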


SPE Journal, 2020, Vol 25 (06), pp. 3300-3316
Author(s): Muzammil H. Rammay, Ahmed H. Elsheikh, Yan Chen

Summary: In this work, we evaluate different algorithms for accounting for model errors while estimating the model parameters, especially when the model discrepancy (used interchangeably with "model error") is large. In addition, we introduce two new algorithms that are closely related to some of the published approaches under consideration. Among these algorithms, the first calibration approach (base-case scenario) relies on Bayesian inversion using iterative ensemble smoothing with annealing schedules, without any special treatment of the model error. In the second approach, the residual obtained after calibration is used to iteratively update the total-error covariance, combining the effects of both model errors and measurement errors. In the third approach, a principal-component-analysis (PCA)-based error model is used to represent the model discrepancy during history matching. This leads to a joint inverse problem in which both the model parameters and the parameters of the PCA-based error model are estimated. For joint inversion within the Bayesian framework, prior distributions have to be defined for all the estimated parameters, and the prior distribution for the PCA-based error-model parameters is generally hard to define. In this study, the prior statistics of the model-discrepancy parameters are estimated using the outputs from pairs of high-fidelity and low-fidelity models generated from the prior realizations. The fourth approach is similar to the third, except that an additional covariance matrix, formed from the difference between the PCA-based error model and the corresponding actual realizations of the prior error, is added to the covariance matrix of the measurement error. The first newly introduced algorithm (fifth approach) relies on building an orthonormal basis for the misfit component of the error model, obtained from the difference between the PCA-based error model and the corresponding actual realizations of the prior error. The misfit component of the error model is subtracted from the data residual (the difference between observations and model outputs) to eliminate the incorrect relative contributions of the physical model and the error model to the prediction. In the second newly introduced algorithm (sixth approach), we use the PCA-based error model as a physically motivated bias-correction term together with an iterative update of the total-error covariance matrix during history matching. All the algorithms are evaluated using three forecasting measures, and the results show that a good parameterization of the error model is needed to obtain a good estimate of the physical model parameters and to provide better predictions. In this study, the last three approaches (fourth, fifth, and sixth) outperform the other methods in terms of the quality of the estimated model parameters and the prediction capability of the calibrated imperfect models.
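
A minimal sketch of how the prior statistics of a PCA-based error model might be built from paired runs, with assumed names and a plain SVD; the joint estimation of the PCA weights with the physical parameters is not shown.

```python
import numpy as np

def build_pca_error_model(D_hifi, D_lofi, n_components):
    """Build a PCA basis for the model discrepancy from pairs of
    high-fidelity and low-fidelity outputs on the prior realizations.

    D_hifi, D_lofi : (n_d, n_e) simulated data from the paired models
    Returns (mean, basis) so that error(w) ~= mean + basis @ w with
    standard-normal weights w.
    """
    E = D_hifi - D_lofi                  # prior realizations of model error
    e_mean = E.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(E - e_mean, full_matrices=False)
    # Scale the leading singular vectors so unit-variance weights reproduce
    # the sample covariance of the discrepancy.
    basis = U[:, :n_components] * (s[:n_components] / np.sqrt(E.shape[1] - 1))
    return e_mean.ravel(), basis
```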


SPE Journal, 2011, Vol 16 (02), pp. 331-342
Author(s): Hemant A. Phale, Dean S. Oliver

Summary: When the ensemble Kalman filter (EnKF) is used for history matching, the resulting updates to reservoir properties sometimes exceed physical bounds, especially when the problem is highly nonlinear. Problems of this type are often encountered when history matching compositional models with the EnKF. In this paper, we illustrate the problem with an example in which the updated molar density of CO2 takes negative values in some regions while the molar densities of the remaining components are increased. Standard truncation schemes avoid negative molar densities but do not address the increased molar densities of the other components; the results can include a spurious increase in reservoir pressure with a subsequent inability to maintain injection. In this paper, we present a constrained EnKF (CEnKF) method that takes into account the physical constraints on the plausible values of state variables during data assimilation. In the proposed method, inequality constraints are converted to a small number of equality constraints, which are used as virtual observations for calibrating the model parameters within plausible ranges. The CEnKF method is tested on a 2D compositional model and on a highly heterogeneous three-phase-flow reservoir model. The effect of the constraints on mass conservation is illustrated using a 1D Buckley-Leverett flow example. Results show that the CEnKF technique is able to enforce the nonnegativity constraints on molar densities and the bound constraints on saturations (all phase saturations must be between 0 and 1), achieving a better estimation of reservoir properties than is obtained using only truncation with the EnKF.
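
A sketch of the virtual-observation idea under assumed names: entries that violate an inequality bound after an update are re-observed at the bound, and a subsequent EnKF analysis with these equality constraints pulls the whole ensemble toward the feasible region while preserving cross-variable correlations, unlike plain truncation.

```python
import numpy as np

def virtual_observations(X, lower=0.0):
    """Convert active inequality constraints (e.g., molar density >= 0)
    into equality constraints to assimilate as virtual observations.

    X : (n_x, n_e) updated ensemble of state variables
    Returns an observation operator H and target values at the bound.
    """
    # Variables violated by at least one ensemble member become 'observed'.
    violated = np.where((X < lower).any(axis=1))[0]
    H = np.zeros((violated.size, X.shape[0]))
    H[np.arange(violated.size), violated] = 1.0
    d_virtual = np.full(violated.size, lower)   # pin them to the bound
    return H, d_virtual
```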


2019, Vol 147 (5), pp. 1429-1445
Author(s): Yuchu Zhao, Zhengyu Liu, Fei Zheng, Yishuai Jin

Abstract: We performed parameter estimation in the Zebiak-Cane model for the real-world scenario using ensemble Kalman filter (EnKF) data assimilation and observational data from sea surface temperature and wind-stress analyses. With real-world data assimilation in the coupled model, our study shows that the model parameters converge toward stable values. Furthermore, the new parameters improve the real-world ENSO prediction skill, with the largest improvement coming from the parameter with the highest climate sensitivity (gam2), which controls the strength of the anomalous-upwelling advection term in the SST equation. The improved prediction skill is contributed mainly by the improvement in the model dynamics and secondarily by the improvement in the initial field. Finally, geographically dependent parameter optimization further improves the prediction skill across all regions. Our study suggests that parameter optimization using ensemble data assimilation may provide an effective strategy for improving climate models and their real-world climate predictions in the future.
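
For completeness, the sketch below shows the standard state-augmentation device by which an EnKF estimates parameters such as gam2 alongside the physical states; names are illustrative, and the analysis step is assumed to be any generic EnKF update (such as the one sketched earlier).

```python
import numpy as np

def augment_state_with_parameters(X_state, theta):
    """Append uncertain model parameters to the state vector so the
    Kalman update adjusts them through their ensemble correlations
    with the observed SST and wind-stress data.

    X_state : (n_x, n_e) ensemble of model states
    theta   : (n_p, n_e) ensemble of uncertain parameters
    """
    return np.vstack([X_state, theta])

# Usage sketch: after the analysis step, the trailing rows hold the
# updated parameter estimates.
#   Z = augment_state_with_parameters(X, theta)      # (n_x + n_p, n_e)
#   Z = enkf_analysis(Z, d_obs, H, sigma_obs, rng)   # generic EnKF update
#   theta_new = Z[-theta.shape[0]:, :]
```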

