Improved Estimation and Forecasting Through Residual-Based Model Error Quantification

SPE Journal ◽  
2020 ◽  
Vol 25 (02) ◽  
pp. 951-968 ◽  
Author(s):  
Minjie Lu ◽  
Yan Chen

Summary: Owing to the complex nature of hydrocarbon reservoirs, the numerical model constructed by geoscientists is always a simplified version of reality: for example, it might lack resolution from discretization and lack accuracy in modeling some physical processes. This flaw in the model, which causes a mismatch between actual observations and simulated data even when "perfect" model parameters are used as model inputs, is known as "model error". Even when the model is a perfect representation of reality, the inputs to the model are never completely known. During a typical model-calibration procedure, only a subset of model inputs is adjusted to improve the agreement between model responses and historical data. The remaining model inputs, which are not calibrated and are likely fixed at incorrect values, result in model error in a similar manner to the imperfect-model scenario. Assimilating data without accounting for model error can result in incorrect adjustment of model parameters, underestimation of prediction uncertainties, and bias in forecasts. In this paper, we investigate the benefit of recognizing and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated "total error" (a combination of model error and observation error) is estimated from the data residual after a standard history match using the Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the estimation of model parameters and the quantification of prediction uncertainty. We first illustrate the method using a synthetic 2D five-spot example, where some model errors are deliberately introduced and the results are closely examined against the known "true" model. Then, the Norne field case is used to further evaluate the method.
The Norne model has previously been history matched using LM-EnRML (Chen and Oliver 2014), where cell-by-cell properties (permeability, porosity, net-to-gross, vertical transmissibility) and parameters related to fault transmissibility, depths of water/oil contacts, and relative permeability functions were adjusted to honor historical data. In that previous study, the authors highlighted the importance of including a large number of model parameters, the proper use of localization, and heuristic adjustment of data noise to account for modeling error. In this paper, we improve the last aspect by quantitatively estimating model error through residual analysis.
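The residual-based total-error idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact estimator: it assumes the posterior simulated data from the standard history match are stacked as an `(n_e, n_d)` array, and it estimates a correlated total-error covariance from the second moment of the residuals.

```python
import numpy as np

def estimate_total_error_covariance(d_obs, simulated_ensemble):
    """Estimate a correlated 'total error' covariance (model error plus
    observation error) from the data residuals left after a standard
    history match.

    d_obs: (n_d,) observed data
    simulated_ensemble: (n_e, n_d) simulated data of the posterior members
    Returns an (n_d, n_d) covariance matrix.
    """
    residuals = d_obs[None, :] - simulated_ensemble  # (n_e, n_d)
    # Second moment about zero: the residual mean carries the bias
    # contribution of model error, so it is deliberately not removed.
    return residuals.T @ residuals / residuals.shape[0]
```

The resulting matrix would then replace the diagonal observation-error covariance in further data assimilations.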

Author(s):  
Muzammil Hussain Rammay ◽  
Ahmed H. Elsheikh ◽  
Yan Chen

Abstract: Iterative ensemble smoothers have been widely used for calibrating simulators of various physical systems because of the relatively low computational cost and the parallel nature of the algorithm. However, iterative ensemble smoothers were designed for perfect models under the main assumption that the specified physical models, and the subsequent discretized mathematical models, can model reality accurately. While significant efforts are usually made to ensure the accuracy of the mathematical model, it is widely known that physical models are only an approximation of reality. These approximations commonly introduce some type of model error that is generally unknown, and when the models are calibrated, the effects of the model errors can be smeared out by adjusting the model parameters to match historical observations. This results in biased parameter estimates and, as a consequence, predictions of questionable quality. In this paper, we formulate a flexible iterative ensemble smoother that can be used to calibrate imperfect models in which model errors cannot be neglected. We base our method on the ensemble smoother with multiple data assimilation (ES-MDA), as it is one of the most widely used iterative ensemble-smoothing techniques. In the proposed algorithm, the residual (data mismatch) is split into two parts: one part is used to derive the parameter update, and the second part is used to represent the model error. The proposed method is quite general and relaxes many of the assumptions commonly introduced in the literature. We observe that the proposed algorithm can reduce the effect of model bias by capturing the unknown model errors, thus improving the quality of the estimated parameters and the prediction capacity of imperfect physical models.
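The residual split can be illustrated with one ES-MDA-style update step. This is a hedged sketch, not the paper's exact algorithm: the function name, interface, and the form in which the model-error estimate `error_est` is removed from the residual are illustrative assumptions.

```python
import numpy as np

def es_mda_step_with_residual_split(m_ens, d_ens, d_obs, C_d, alpha, error_est):
    """One ES-MDA-style update in which an estimated model-error term is
    subtracted from the data residual, so that only the remaining
    mismatch drives the parameter update.

    m_ens: (n_e, n_m) parameter ensemble, d_ens: (n_e, n_d) simulated data,
    d_obs: (n_d,) observations, C_d: (n_d, n_d) measurement-error covariance,
    alpha: ES-MDA inflation factor, error_est: (n_d,) model-error estimate.
    """
    n_e = m_ens.shape[0]
    dm = m_ens - m_ens.mean(0)
    dd = d_ens - d_ens.mean(0)
    C_md = dm.T @ dd / (n_e - 1)            # parameter-data cross-covariance
    C_dd = dd.T @ dd / (n_e - 1)            # data auto-covariance
    # Perturb observations with inflated noise (standard ES-MDA).
    rng = np.random.default_rng(0)
    d_pert = d_obs + rng.multivariate_normal(
        np.zeros(len(d_obs)), alpha * C_d, size=n_e)
    residual = d_pert - d_ens - error_est   # remove the model-error part
    K = C_md @ np.linalg.inv(C_dd + alpha * C_d)
    return m_ens + residual @ K.T
```

Setting `error_est` to zero recovers a plain ES-MDA step; the proposed method instead carries a nonzero estimate updated across iterations.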


SPE Journal ◽  
2020 ◽  
Vol 25 (06) ◽  
pp. 3300-3316 ◽  
Author(s):  
Muzammil H. Rammay ◽  
Ahmed H. Elsheikh ◽  
Yan Chen

Summary: In this work, we evaluate different algorithms to account for model errors while estimating the model parameters, especially when the model discrepancy (used interchangeably with "model error") is large. In addition, we introduce two new algorithms that are closely related to some of the published approaches under consideration. Considering all these algorithms, the first calibration approach (base-case scenario) relies on Bayesian inversion using iterative ensemble smoothing with annealing schedules, without any special treatment of the model error. In the second approach, the residual obtained after calibration is used to iteratively update the total-error covariance, combining the effects of both model errors and measurement errors. In the third approach, a principal-component-analysis (PCA)-based error model is used to represent the model discrepancy during history matching. This leads to a joint inverse problem in which both the model parameters and the parameters of the PCA-based error model are estimated. For the joint inversion within the Bayesian framework, prior distributions have to be defined for all the estimated parameters, and the prior distribution for the PCA-based error-model parameters is generally hard to define. In this study, the prior statistics of the model-discrepancy parameters are estimated using the outputs from pairs of high-fidelity and low-fidelity models generated from the prior realizations. The fourth approach is similar to the third; however, an additional covariance matrix of the difference between the PCA-based error model and the corresponding actual realizations of the prior error is added to the covariance matrix of the measurement error. The first newly introduced algorithm (fifth approach) relies on building an orthonormal basis for the misfit component of the error model, which is obtained from the difference between the PCA-based error model and the corresponding actual realizations of the prior error.
The misfit component of the error model is subtracted from the data residual (the difference between observations and model outputs) to eliminate the incorrect relative contributions to the prediction from the physical model and the error model. In the second newly introduced algorithm (sixth approach), we use the PCA-based error model as a physically motivated bias-correction term, together with an iterative update of the total-error covariance matrix during history matching. All the algorithms are evaluated using three forecasting measures, and the results show that a good parameterization of the error model is needed to obtain a good estimate of the physical-model parameters and to provide better predictions. In this study, the last three approaches (i.e., fourth, fifth, sixth) outperform the other methods in terms of the quality of the estimated model parameters and the prediction capability of the calibrated imperfect models.
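The PCA-based error model built from paired high-/low-fidelity prior runs can be sketched as follows. This is a simplified illustration under stated assumptions: the function names and interfaces are hypothetical, the basis is taken from an SVD of the centered prior error realizations, and any truncation or scaling choices in the paper may differ.

```python
import numpy as np

def build_pca_error_model(hi_fid, lo_fid, n_comp):
    """Build a PCA basis for the model discrepancy from paired
    high-/low-fidelity prior runs (one realization per row).

    hi_fid, lo_fid: (n_r, n_d) outputs of the paired models
    Returns (mean, basis, coeffs): the mean discrepancy, the (n_comp, n_d)
    principal directions, and the (n_r, n_comp) prior coefficients.
    """
    delta = hi_fid - lo_fid                  # prior error realizations
    mean = delta.mean(0)
    _, _, vt = np.linalg.svd(delta - mean, full_matrices=False)
    basis = vt[:n_comp]                      # leading principal directions
    coeffs = (delta - mean) @ basis.T        # prior coefficients per pair
    return mean, basis, coeffs

def error_model(mean, basis, c):
    """Discrepancy predicted for coefficient vector(s) c."""
    return mean + c @ basis
```

In the joint inversion, the coefficients `c` would be appended to the physical-model parameters and estimated together with them; the spread of `coeffs` supplies their prior statistics.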


2014 ◽  
Vol 17 (02) ◽  
pp. 244-256 ◽  
Author(s):  
Yan Chen ◽  
Dean S. Oliver

Summary: Although ensemble-based data-assimilation methods such as the ensemble Kalman filter (EnKF) and the ensemble smoother have been extensively used for the history matching of synthetic models, the number of applications of ensemble-based methods to the history matching of field cases is extremely limited. In most of the published field cases in which ensemble-based methods were used, the number of wells and the types of data to be matched were relatively small. As a result, it may not be clear to practitioners how a real history-matching study would be accomplished with ensemble-based methods. In this paper, we describe the application of the iterative ensemble smoother to the history matching of the Norne field, a North Sea field, with a moderately large number of wells, a variety of data types, and a relatively long production history. Particular attention is focused on the problems of identifying important variables, generating an initial ensemble, assessing the plausibility of results, and the efficiency of minimization. We also discuss the challenges encountered in the use of the ensemble-based method for complex field case studies that are not typically encountered in synthetic cases. The Norne field produces from an oil-and-gas reservoir discovered in 1991 offshore Norway. The full-field model consists of four main fault blocks that are in partial communication and many internal faults with uncertain connectivity in each fault block. There have been 22 producers and 9 injectors in the field. Water-alternating-gas injection is used as the depletion strategy. Production rates of oil, gas, and water from the 22 producers from 1997 to 2006 and repeat-formation-tester (RFT) pressures from 14 different wells are available for model calibration. The full-field simulation model has 22 layers, each with a dimension of 46 × 112 cells. The total number of active cells is approximately 45,000.
The Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML) is used for history matching. The model parameters that are updated include permeability, porosity, and net-to-gross (NTG) ratio at each gridblock; vertical transmissibility at each gridblock for six layers; transmissibility multipliers of 53 faults; endpoint water and gas relative permeability of four different reservoir zones; depths of water/oil contacts; and transmissibility multipliers between a few main fault blocks. The total number of model parameters is approximately 150,000. Distance-based localization is used to regularize the updates from LM-EnRML. LM-EnRML achieves an improved data match compared with the manually history-matched model after three iterations. Updates from LM-EnRML do not introduce artifacts into the property fields, unlike the manually history-matched model. The automated workflow is also much less labor-intensive than manual history matching.
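Distance-based localization of this kind is commonly implemented as a Schur (elementwise) product of the Kalman-type gain with a compactly supported taper. A sketch using the Gaspari-Cohn fifth-order function; the taper choice and the half-width convention here are illustrative assumptions, not necessarily those used for the Norne study.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order taper: 1 at r = 0, 0 for r >= 2.
    r: array of nondimensional distances (distance / half-width)."""
    r = np.abs(np.asarray(r, dtype=float))
    t = np.zeros_like(r)
    a = r <= 1
    b = (r > 1) & (r < 2)
    t[a] = (((-0.25 * r[a] + 0.5) * r[a] + 0.625) * r[a] - 5 / 3) * r[a] ** 2 + 1
    t[b] = ((((r[b] / 12 - 0.5) * r[b] + 0.625) * r[b] + 5 / 3) * r[b] - 5) * r[b] \
        + 4 - 2 / (3 * r[b])
    return t

def localize_gain(K, dist, crit_len):
    """Schur product of the (n_m, n_d) gain with a distance taper whose
    support ends at roughly crit_len (so half-width = crit_len / 2)."""
    return K * gaspari_cohn(dist / (crit_len / 2))
```

Each parameter-data pair's update is thus damped according to the distance between the gridblock and the well, suppressing spurious long-range correlations from the finite ensemble.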


Water ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 2161
Author(s):  
Ruicheng Zhang ◽  
Nianqing Zhou ◽  
Xuemin Xia ◽  
Guoxian Zhao ◽  
Simin Jiang

Multicomponent reactive transport modeling is a powerful tool for the comprehensive analysis of coupled hydraulic and biochemical processes. The performance of the simulation model depends on the accuracy of related model parameters, whose values are usually difficult to determine from direct measurements. In this situation, estimates of these uncertain parameters can be obtained by solving inverse problems. In this study, an efficient data-assimilation method, the iterative local updating ensemble smoother (ILUES), is employed for the joint estimation of hydraulic parameters, biochemical parameters, and contaminant-source characteristics in the sequential biodegradation process of tetrachloroethene (PCE). In the framework of the ILUES algorithm, parameter estimation is realized by updating a local ensemble with the iterative ensemble smoother (IES). To better explore the parameter space, the original ILUES algorithm is modified by determining the local ensemble partly with a linear ranking selection scheme. Numerical case studies based on the sequential biodegradation of PCE are then used to evaluate the performance of the ILUES algorithm. The results show that the ILUES algorithm is able to achieve an accurate joint estimation of the related model parameters in the reactive transport model.
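The local-ensemble selection with a linear ranking component might look as follows. This is a hedged sketch: the combined measure `J`, the split between best-ranked and ranking-selected members, and the function interface are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def select_local_ensemble(ens, misfit_d, misfit_m, n_local,
                          frac_rank=0.5, rng=None):
    """ILUES-style local-ensemble selection for one target member:
    rank members by a normalized combined measure of data mismatch
    (misfit_d) and parameter-space distance to the target (misfit_m),
    keep the best outright, and fill the remainder by linear ranking
    selection so that worse members still have some chance of entering
    the local ensemble (the modification aimed at better exploration).
    """
    rng = np.random.default_rng() if rng is None else rng
    J = misfit_d / misfit_d.max() + misfit_m / misfit_m.max()
    order = np.argsort(J)                      # best first
    n_best = int(n_local * (1 - frac_rank))
    chosen = list(order[:n_best])
    rest = order[n_best:]
    # Linear ranking: selection probability decreases linearly with rank.
    ranks = np.arange(len(rest), 0, -1, dtype=float)
    probs = ranks / ranks.sum()
    chosen += list(rng.choice(rest, size=n_local - n_best,
                              replace=False, p=probs))
    return ens[np.array(chosen)]
```

Each local ensemble is then updated with an IES step, and the procedure repeats member by member.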


2019 ◽  
Author(s):  
Patrick N. Raanes ◽  
Andreas S. Stordal ◽  
Geir Evensen

Abstract. Ensemble randomized maximum likelihood (EnRML) is an iterative (stochastic) ensemble smoother used for large and nonlinear inverse problems, such as history matching and data assimilation. Its current formulation is overly complicated and has issues with computational costs, noise, and covariance localization, even causing some practitioners to omit crucial prior information. This paper resolves these difficulties and streamlines the algorithm, without changing its output. These simplifications are achieved through careful treatment of the linearizations and subspaces. For example, it is shown (a) how ensemble linearizations relate to average sensitivity, and (b) that the ensemble does not lose rank during updates. The paper also draws significantly on the theory of the (deterministic) iterative ensemble Kalman smoother (IEnKS). Comparative benchmarks are obtained with the Lorenz-96 model with these two smoothers and the ensemble smoother using multiple data assimilation (ES-MDA).
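The notion of an ensemble linearization as an "average sensitivity" can be illustrated by regressing data anomalies on parameter anomalies. A minimal sketch, not the paper's exact formulation; for a truly linear forward model and a full-rank ensemble, the regression recovers the sensitivity matrix exactly.

```python
import numpy as np

def ensemble_sensitivity(m_ens, d_ens):
    """Ensemble (least-squares) linearization: regress data anomalies on
    parameter anomalies, yielding an average-sensitivity matrix S such
    that d_anomaly ≈ m_anomaly @ S.T.

    m_ens: (n_e, n_m) parameter ensemble, d_ens: (n_e, n_d) simulated data.
    Returns S with shape (n_d, n_m).
    """
    dm = m_ens - m_ens.mean(0)
    dd = d_ens - d_ens.mean(0)
    # Least-squares solve of dm @ X = dd; X.T is the sensitivity estimate.
    X, *_ = np.linalg.lstsq(dm, dd, rcond=None)
    return X.T
```

For a nonlinear forward model, this estimate averages the local sensitivities over the region spanned by the ensemble, which is the sense in which ensemble linearizations relate to average sensitivity.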

