Evaluating prior predictions of production and seismic data

2019, Vol 23 (6), pp. 1331-1347
Author(s): Miguel Alfonzo, Dean S. Oliver

Abstract: It is common in ensemble-based methods of history matching to evaluate the adequacy of the initial ensemble of models through visual comparison between actual observations and data predictions prior to data assimilation. If the model is appropriate, then the observed data should look plausible when compared to the distribution of realizations of simulated data. The principle of data coverage alone is, however, not an effective method for model criticism, as coverage can often be obtained by increasing the variability in a single model parameter. In this paper, we propose a methodology for determining the suitability of a model before data assimilation, aimed particularly at real cases with large numbers of model parameters, large amounts of data, and correlated observation errors. This model diagnostic is based on an approximation of the Mahalanobis distance between the observations and the ensemble of predictions in high-dimensional spaces. We applied our methodology to two different examples: a Gaussian example, which shows that our shrinkage estimate of the covariance matrix discriminates outliers better than either the pseudo-inverse or a diagonal approximation of this matrix; and an example using data from the Norne field. In this second test, we used actual production, repeat-formation-tester, and inverted seismic data to evaluate the suitability of the initial reservoir simulation model and seismic model. Despite the good data coverage, our model diagnostic suggested that model improvement was necessary. After modifying the model, it was validated against the observations and is now ready for history matching to production and seismic data. This shows that the proposed methodology for evaluating model adequacy is suitable for large, realistic problems.
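
A minimal sketch of this kind of diagnostic, assuming the simulated data are available as a matrix with one realization per row; the Ledoit-Wolf estimator stands in for the paper's shrinkage covariance, and the data and threshold logic are illustrative only:

```python
import numpy as np
from sklearn.covariance import LedoitWolf  # shrinkage covariance estimator

def mahalanobis_diagnostic(d_obs, D):
    """Squared Mahalanobis distance between the observations and the
    ensemble of predicted data, using a shrinkage covariance estimate.

    d_obs : (n_d,) observed data vector
    D     : (n_e, n_d) simulated data, one row per ensemble realization
    """
    lw = LedoitWolf().fit(D)            # shrinkage keeps the matrix invertible
    residual = d_obs - D.mean(axis=0)
    # Solve rather than invert explicitly (better conditioned)
    return residual @ np.linalg.solve(lw.covariance_, residual)

# Illustrative use: for an adequate model the normalized distance should be
# near 1; values far above 1 flag the prior ensemble as suspect.
rng = np.random.default_rng(0)
n_e, n_d = 100, 500                      # more data than realizations
D = rng.normal(size=(n_e, n_d))
d_obs = rng.normal(size=n_d)
print(mahalanobis_diagnostic(d_obs, D) / n_d)
```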

2019
Author(s): Jing Wang, Guigen Nie, Shengjun Gao, Changhu Xue

Abstract: Landslide displacement prediction has great practical engineering significance for landslide stability evaluation and early warning. The evolution of a landslide is a complex dynamic process, and applying classical prediction methods results in significant error. Data assimilation offers a new way to merge multi-source data with a model; however, existing approaches still struggle to meet the demands of a dynamic landslide system. In this paper, simultaneous state-parameter estimation (SSPE) using particle filter-based data assimilation is applied to predict landslide displacement. The landslide SSPE assimilation strategy uses time-series displacement and hydrological information for the joint estimation of landslide displacement and model parameters, which improves performance considerably. We select Xishan Village, Sichuan Province, China, as the experimental site to test the SSPE assimilation strategy. Comparison of actual monitoring data with predicted values strongly suggests the effectiveness and feasibility of the SSPE assimilation strategy for short-term landslide displacement estimation.
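
A toy sketch of simultaneous state-parameter estimation with a bootstrap particle filter, assuming a scalar displacement state driven by a single rate parameter; the transition and observation models here are placeholders, not the authors' landslide model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sspe_particle_filter(observations, n=1000, obs_std=0.05):
    """Bootstrap particle filter over the augmented vector (state, parameter):
    displacement x and an unknown rate theta are estimated jointly."""
    x = rng.normal(0.0, 0.1, n)           # displacement particles
    theta = rng.uniform(0.0, 1.0, n)      # rate-parameter particles
    estimates = []
    for y in observations:
        # Propagate with placeholder dynamics: x_k = x_{k-1} + theta (dt = 1)
        x = x + theta + rng.normal(0.0, 0.02, n)
        theta = theta + rng.normal(0.0, 0.005, n)  # jitter avoids collapse
        # Weight particles by the likelihood of the observed displacement
        w = np.exp(-0.5 * ((y - x) / obs_std) ** 2)
        w /= w.sum()
        # Resample the joint (state, parameter) particles
        idx = rng.choice(n, size=n, p=w)
        x, theta = x[idx], theta[idx]
        estimates.append((x.mean(), theta.mean()))
    return estimates

# Synthetic monitoring series with a true rate of 0.3 per step
obs = 0.3 * np.arange(1, 21) + rng.normal(0.0, 0.05, 20)
print(sspe_particle_filter(obs)[-1])  # (displacement, rate) at the last step
```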


SPE Journal, 2010, Vol 15 (04), pp. 1077-1088
Author(s): F. Sedighi, K. D. Stephen

Summary: Seismic history matching is the process of modifying a reservoir simulation model to reproduce the observed production data in addition to information gained through time-lapse (4D) seismic data. The search for good predictions requires that many models be generated, particularly if there is an interaction between the properties that we change and their effect on the misfit to observed data. In this paper, we introduce a method of improving search efficiency by estimating such interactions and partitioning the set of unknowns into noninteracting subspaces. We use regression analysis to identify the subspaces, which are then searched separately but simultaneously with an adapted version of the quasiglobal stochastic neighborhood algorithm. We have applied this approach to the Schiehallion field, located on the UK continental shelf. The field model, supplied by the operator, contains a large number of barriers that affect flow at different times during production, and their transmissibilities are highly uncertain. We find that we can successfully represent the misfit function as a second-order polynomial dependent on changes in barrier transmissibility. First, this enables us to identify the most important barriers; second, we can modify their transmissibilities efficiently by searching subgroups of the parameter space. Once the regression analysis has been performed, we reduce the number of models required to find a good match by an order of magnitude. By using 4D seismic data to condition saturation and pressure changes in history matching, we have gained greater insight into reservoir behavior and have been able to predict flow more accurately with an efficient inversion tool. We can now determine unswept areas and make better business decisions.
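
A rough sketch of the subspace-identification idea, assuming the misfit is fitted as a second-order polynomial of the parameter changes and parameters are grouped wherever a cross-term is non-negligible; the grouping rule and threshold are illustrative, not the paper's exact procedure:

```python
import numpy as np
from itertools import combinations

def interaction_groups(X, misfit, threshold=1e-2):
    """Fit misfit ~ quadratic polynomial in the parameters and group
    parameters whose pairwise cross-terms are non-negligible.

    X      : (n_samples, n_params) sampled parameter values
    misfit : (n_samples,) misfit of each sampled model
    """
    n, p = X.shape
    pairs = list(combinations(range(p), 2))
    # Design matrix: intercept, linear, squared, and pairwise cross terms
    A = np.hstack([np.ones((n, 1)), X, X**2,
                   np.column_stack([X[:, i] * X[:, j] for i, j in pairs])])
    coef, *_ = np.linalg.lstsq(A, misfit, rcond=None)
    cross = coef[1 + 2 * p:]
    # Merge parameters connected by a significant cross-term
    group = list(range(p))
    def find(i):
        while group[i] != i:
            i = group[i]
        return i
    for (i, j), c in zip(pairs, cross):
        if abs(c) > threshold:
            group[find(i)] = find(j)
    clusters = {}
    for i in range(p):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Toy misfit: parameters 0 and 1 interact; parameter 2 is separable
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (200, 3))
misfit = (X[:, 0] + X[:, 1])**2 + X[:, 2]**2
print(interaction_groups(X, misfit))  # expect [[0, 1], [2]]
```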


2016, Vol 52 (3), pp. 1-4
Author(s): A. Bacchus, A. Tounzi, J.-P. Argaud, B. Bouriquet, M. Biet, ...

2006, Vol 7 (3), pp. 548-565
Author(s): Jasper A. Vrugt, Hoshin V. Gupta, Breanndán Ó Nualláin, Willem Bouten

Abstract: Operational flood forecasting requires that accurate estimates of the uncertainty associated with model-generated streamflow forecasts be provided along with the probable flow levels. This paper demonstrates a stochastic ensemble implementation of the Sacramento model used routinely by the National Weather Service for deterministic streamflow forecasting. The approach, the simultaneous optimization and data assimilation method (SODA), uses an ensemble Kalman filter (EnKF) for recursive state estimation allowing for treatment of streamflow data error, model structural error, and parameter uncertainty, while enabling implementation of the Sacramento model without major modification to its current structural form. Model parameters are estimated in batch using the shuffled complex evolution metropolis stochastic-ensemble optimization approach (SCEM-UA). The SODA approach was implemented using parallel computing to handle the increased computational requirements. Studies using data from the Leaf River, Mississippi, indicate that forecast performance improvements on the order of 30% to 50% can be realized even with a suboptimal implementation of the filter. Further, the SODA parameter estimates appear to be less biased, which may increase the prospects for finding useful regionalization relationships.
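
For reference, a generic stochastic EnKF analysis step of the kind SODA uses for recursive state estimation; the linear observation operator and toy dimensions are assumptions, not the Sacramento model setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(X, y_obs, H, obs_std):
    """Stochastic EnKF analysis step.

    X       : (n_state, n_ens) forecast ensemble of model states
    y_obs   : (n_obs,) observation vector (e.g., streamflow)
    H       : (n_obs, n_state) linear observation operator
    obs_std : observation error standard deviation
    """
    n_ens = X.shape[1]
    Y = H @ X                                  # predicted observations
    # Perturb observations: one noisy copy per ensemble member
    D = y_obs[:, None] + rng.normal(0.0, obs_std, (len(y_obs), n_ens))
    Xa = X - X.mean(axis=1, keepdims=True)     # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)     # predicted-data anomalies
    C_yy = Ya @ Ya.T / (n_ens - 1) + obs_std**2 * np.eye(len(y_obs))
    C_xy = Xa @ Ya.T / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy)             # Kalman gain
    return X + K @ (D - Y)

# Toy use: 2-state system, observe only the first state
X = rng.normal(0.0, 1.0, (2, 50))
H = np.array([[1.0, 0.0]])
X_updated = enkf_update(X, np.array([0.8]), H, obs_std=0.1)
print(X_updated.mean(axis=1))  # ensemble mean pulled toward the observation
```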


SPE Journal, 2020, Vol 25 (06), pp. 3349-3365
Author(s): Azadeh Mamghaderi, Babak Aminshahidy, Hamid Bazargan

Summary: Using fast and reliable proxies instead of sophisticated and time-consuming reservoir simulators is of great importance in reservoir management. The capacitance-resistance model (CRM) has been widely used as such a fast proxy. However, the inadequacy of this proxy, which simplifies a complex reservoir with a limited number of parameters, has not been addressed appropriately in the literature. In this study, potential uncertainties in modeling the waterflooding process with the producer-based version of CRM (CRMP) are formulated, leading to a new error-related term embedded in the original formulation of the proxy. Considering a general form of the model error that represents both white and colored noise, a system of CRMP-error equations is introduced analytically to deal with any type of intrinsic model imperfection. Two solution approaches are developed: tuning the additional error-related parameters as a complementary stage of a classical history-matching procedure, and updating these parameters simultaneously with the original model parameters in a data-assimilation approach over the model training period. To validate the model and show the effectiveness of both solution schemes, the injection and production data of a water-injection procedure in a three-layered reservoir model are used. Results show that the error-related parameters can be matched successfully along with the model's original variables, either in a routine model-calibration procedure or in a data-assimilation approach using the ensemble Kalman filter (EnKF) method. Comparing the average of the obtained liquid-rate range with the true data demonstrates the effectiveness of accounting for model error, which substantially improves the results relative to applying the original model without the error term.
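
A minimal sketch of a producer-based CRM with an additive error-related term, assuming a single time constant per producer, a constant-bias error model, and least-squares tuning; the authors' formulation is more general (colored noise, EnKF updating):

```python
import numpy as np
from scipy.optimize import least_squares

def crmp_rates(params, q0, inj, dt):
    """CRMP liquid-rate prediction for one producer, with an additive
    bias standing in for the error-related term.

    params : [tau, f_1..f_ni, bias]  (time constant, connectivities, error)
    inj    : (n_t, n_inj) injection rates
    """
    tau, bias = params[0], params[-1]
    f = params[1:-1]
    decay = np.exp(-dt / tau)
    q, q_prev = np.empty(len(inj)), q0
    for k in range(len(inj)):
        # Exponential decay of the previous rate plus injection support
        q_prev = q_prev * decay + (1 - decay) * (inj[k] @ f) + bias
        q[k] = q_prev
    return q

# Synthetic history: 2 injectors, true tau=10, f=(0.4, 0.5), bias=2
rng = np.random.default_rng(4)
n_t, dt, q0 = 120, 1.0, 100.0
inj = 200 + 20 * rng.random((n_t, 2))
q_obs = crmp_rates([10.0, 0.4, 0.5, 2.0], q0, inj, dt) + rng.normal(0, 1, n_t)

# History match: tune tau, connectivities, and the error term together
res = least_squares(
    lambda p: crmp_rates(p, q0, inj, dt) - q_obs,
    x0=[20.0, 0.3, 0.3, 0.0],
    bounds=([1.0, 0.0, 0.0, -10.0], [100.0, 1.0, 1.0, 10.0]))
print(res.x)  # recovered [tau, f1, f2, bias]
```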


SPE Journal, 2016, Vol 21 (06), pp. 2195-2207
Author(s): Duc H. Le, Alexandre A. Emerick, Albert C. Reynolds

Summary: Recently, Emerick and Reynolds (2012) introduced the ensemble smoother with multiple data assimilation (ES-MDA) for assisted history matching. With computational examples, they demonstrated that ES-MDA provides both a better data match and a better quantification of uncertainty than is obtained with the ensemble Kalman filter (EnKF). However, similar to EnKF, ES-MDA can experience near-ensemble collapse and produce too many extreme values of rock-property fields for complex problems. These negative effects can be avoided by a judicious choice of the ES-MDA inflation factors but, before this work, the optimal inflation factors could be determined only by trial and error. Here, we provide two automatic procedures for adaptively choosing the inflation factor for the next data-assimilation step as the history match proceeds. Both methods are motivated by knowledge of regularization procedures: the first is intuitive and heuristic; the second is motivated by existing theory on the regularization of least-squares inverse problems. We illustrate that the adaptive ES-MDA algorithms are superior to the original ES-MDA algorithm by history matching three-phase-flow production data for a complicated synthetic problem in which the reservoir-model parameters include the porosity, horizontal and vertical permeability fields, depths of the initial fluid contacts, and the parameters of power-law permeability curves.
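
A compact sketch of the ES-MDA update showing where the inflation factors alpha_i enter (they must satisfy sum(1/alpha_i) = 1); fixed factors stand in for the paper's adaptive selection, and the toy linear forward model is an assumption:

```python
import numpy as np

rng = np.random.default_rng(5)

def es_mda(m_ens, forward, d_obs, obs_std, alphas):
    """ES-MDA: repeat the smoother update with inflated observation noise."""
    assert abs(sum(1.0 / a for a in alphas) - 1.0) < 1e-8
    n_obs, n_ens = len(d_obs), m_ens.shape[1]
    for alpha in alphas:
        D_pred = forward(m_ens)                       # (n_obs, n_ens)
        Ma = m_ens - m_ens.mean(axis=1, keepdims=True)
        Da = D_pred - D_pred.mean(axis=1, keepdims=True)
        # Inflate the observation-error covariance by alpha
        C_dd = Da @ Da.T / (n_ens - 1) + alpha * obs_std**2 * np.eye(n_obs)
        C_md = Ma @ Da.T / (n_ens - 1)
        # Perturb observations with correspondingly inflated noise
        D = d_obs[:, None] + np.sqrt(alpha) * obs_std * rng.normal(
            size=(n_obs, n_ens))
        m_ens = m_ens + C_md @ np.linalg.solve(C_dd, D - D_pred)
    return m_ens

# Toy inverse problem: recover m from d = G m
G = rng.normal(size=(5, 3))
m_true = np.array([1.0, -0.5, 2.0])
d_obs = G @ m_true + rng.normal(0, 0.01, 5)
m_ens = rng.normal(0, 1, (3, 100))
m_post = es_mda(m_ens, lambda M: G @ M, d_obs, 0.01, alphas=[4, 4, 4, 4])
print(m_post.mean(axis=1))  # posterior mean close to m_true
```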


2021
Author(s): Bjørn Egil Ludvigsen, Mohan Sharma

Abstract: Well-performance calibration after history matching a reservoir simulation model ensures that the wells give realistic rates during the prediction phase. The calibration involves adjusting well-model parameters to match observed production rates at specified backpressure(s). This process is usually very time consuming; with traditional approaches, a single reservoir model with hundreds of high-productivity wells can take months to calibrate. The application of uncertainty-centric workflows for reservoir modeling and history matching yields many acceptable matches for phase rates and flowing bottomhole pressure (BHP). This makes well calibration even more challenging for an ensemble of many simulation models, because the existing approaches do not scale. The productivity index (PI) integrates reservoir and well performance, and most of the pressure drop occurs within one or two gridblocks around the well, depending on the model resolution. A workflow has been set up to fix the history-to-prediction transition by calibrating the PI of each well in a history-matched simulation model. The simulation PI can be modified by changing the permeability-thickness (Kh), the skin, or by applying a PI multiplier as a correction. For a history-matched ensemble with a range in water cut and gas-oil ratio, the proposed workflow runs flowing-gradient calculations for each well, corresponding to the observed tubinghead pressure (THP) and the simulated phase rates, to calculate a target BHP. A PI multiplier is then calculated for that well and model that shifts the simulated BHP to the target BHP as a local update, reducing the extent of the rate jump. An ensemble of history-matched models with a range in water cut and gas-oil ratio has a required-BHP variation unique to each case. With the well calibration performed correctly, the jump observed in rates when switching from history to prediction can be eliminated or significantly reduced. The prediction then yields reliable rates if the wells are run on pressure control, and a reliable plateau if the wells are run on group control. This reduces the risk of under- or over-predicting the ultimate hydrocarbon recovery from the field and the project's cash flow. It also allows running sensitivities to backpressure, tubing design, and other equipment constraints to optimize reservoir performance and facilities design. The proposed workflow, which dynamically couples reservoir simulation and well-performance modeling, takes a few seconds per well, making it fit for purpose for a large ensemble of simulation models with a large number of wells.
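
The PI-multiplier arithmetic this workflow implies, assuming a linear inflow relation q = J (p_res - p_wf) and taking the target BHP from the flowing-gradient calculation as given; the function name and numbers are illustrative:

```python
def pi_multiplier(q_sim, p_res, bhp_sim, bhp_target):
    """PI multiplier that shifts the simulated BHP to the target BHP
    while honoring the same simulated rate, assuming q = J * (p_res - p_wf).

    q_sim      : simulated liquid rate at the history/prediction switch
    p_res      : well-block (or drainage) average pressure
    bhp_sim    : BHP the simulator needs to deliver q_sim
    bhp_target : BHP from the flowing-gradient calculation at observed THP
    """
    J_sim = q_sim / (p_res - bhp_sim)        # current simulation PI
    J_target = q_sim / (p_res - bhp_target)  # PI required to hit target BHP
    return J_target / J_sim

# Example: the simulation draws the well down too hard for the same rate,
# so the PI must be increased (multiplier > 1) to raise the BHP.
print(pi_multiplier(q_sim=1500.0, p_res=250.0,
                    bhp_sim=120.0, bhp_target=160.0))  # ~1.44
```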


2020, Vol 177, pp. 373-385
Author(s): Daiwa Satoh, Seiji Tsutsumi, Miki Hirabayashi, Kaname Kawatsu, Toshiya Kimura

Geophysics, 2019, Vol 85 (1), pp. M15-M31
Author(s): Mingliang Liu, Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data-assimilation method, the ensemble smoother with multiple data assimilation, is adopted to update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address the computational and numerical challenges of applying ensemble-based optimization methods to large seismic data volumes, we develop a deep-representation-learning method, namely a deep convolutional autoencoder. The autoencoder reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation, and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to the production data and the reparameterized time-lapse seismic data.
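
An illustrative PyTorch sketch of a small convolutional autoencoder of the kind described, compressing a seismic section into a low-dimensional feature map; the architecture and sizes are assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class SeismicAutoencoder(nn.Module):
    """Small convolutional autoencoder: encodes a seismic image into a
    low-dimensional feature map and decodes it back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(8, 4, 3, stride=2, padding=1), nn.ReLU())   # 16 -> 8
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 8, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),      # 8 -> 16
            nn.ConvTranspose2d(8, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),      # 16 -> 32
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1))                 # 32 -> 64

    def forward(self, x):
        z = self.encoder(x)      # hidden features used in the ensemble update
        return self.decoder(z), z

model = SeismicAutoencoder()
x = torch.randn(2, 1, 64, 64)    # batch of 64x64 seismic sections
x_hat, z = model(x)
print(z.shape)                   # torch.Size([2, 4, 8, 8]) -> 256 features
```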

