History Matching the Full Norne Field Model Using Seismic and Production Data

SPE Journal ◽  
2019 ◽  
Vol 24 (04) ◽  
pp. 1452-1467 ◽  
Author(s):  
Rolf J. Lorentzen ◽  
Xiaodong Luo ◽  
Tuhin Bhakta ◽  
Randi Valestrand

Summary In this paper, we use a combination of acoustic impedance and production data for history matching the full Norne Field. The purpose of the paper is to illustrate a robust and flexible work flow for assisted history matching of large data sets. We apply an iterative ensemble-based smoother, and the traditional approach for assisted history matching is extended to include updates of additional parameters representing rock clay content, which has a significant effect on seismic data. Further, for seismic data it is a challenge to properly specify the measurement noise, because the noise level and the spatial correlation of the measurement noise are unknown. For this purpose, we apply a method based on image denoising to estimate the spatially correlated (colored) noise level in the data. For the best possible evaluation of the work-flow performance, all data in this study are synthetically generated. We assimilate production data and seismic data sequentially. First, the production data are assimilated using traditional distance-based localization, and the resulting ensemble of reservoir models is then used when assimilating seismic data. This procedure is suitable for real field applications, because production data are usually available before seismic data. If both production data and seismic data were assimilated simultaneously, the large number of seismic data might dominate the overall history-matching performance. The noise estimation for seismic data involves transforming the observations to a discrete wavelet domain. However, the resulting data do not have a clear spatial position, so the traditional distance-based localization schemes used to avoid spurious correlations and underestimated uncertainty (because of limited ensemble size) cannot be applied. Instead, we use a localization scheme based on correlations between observations and parameters, which does not rely on physical positions of model variables or data. 
This method automatically adapts to each observation and iteration. The results show that we reduce data mismatch for both production and seismic data, and that the use of seismic data reduces estimation errors for porosity, permeability, and net-to-gross ratio (NTG). Such improvements can provide useful information for reservoir management and planning for additional drainage strategies.
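The wavelet-domain noise estimation the summary describes can be illustrated with a standard image-denoising device: a robust (median-based) noise estimate computed from the finest-scale wavelet detail coefficients. The sketch below is a generic stand-in for the authors' estimator, not their implementation; the one-level Haar transform and the Donoho-Johnstone constant 0.6745 are illustrative assumptions, and it estimates a white-noise level rather than the colored noise treated in the paper.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Robust noise estimate from the diagonal (HH) detail band of a
    one-level 2D Haar transform, sigma = MAD / 0.6745. A generic
    image-denoising device, not the paper's exact estimator."""
    # Split the image into its four 2x2 polyphase components
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    # Orthonormal HH band: for white noise its std equals the noise sigma
    hh = (a - b - c + d) / 2.0
    mad = np.median(np.abs(hh - np.median(hh)))
    return mad / 0.6745
```

Because the Haar transform is orthonormal, white noise of standard deviation sigma produces HH coefficients with the same sigma, and the median absolute deviation recovers it robustly even when a few large coefficients come from genuine image structure.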

SPE Journal ◽  
2018 ◽  
Vol 23 (05) ◽  
pp. 1496-1517 ◽  
Author(s):  
Chaohui Chen ◽  
Guohua Gao ◽  
Ruijian Li ◽  
Richard Cao ◽  
Tianhong Chen ◽  
...  

Summary Although it is possible to apply traditional optimization algorithms together with the randomized-maximum-likelihood (RML) method to generate multiple conditional realizations, the computational cost is high. This paper presents a novel method to enhance the global-search capability of the distributed-Gauss-Newton (DGN) optimization method and integrates it with the RML method to generate multiple realizations conditioned to production data synchronously. RML generates samples from an approximate posterior by minimizing a large ensemble of perturbed objective functions in which the observed data and the prior mean values of uncertain model parameters have been perturbed with Gaussian noise. Rather than performing these minimizations in isolation, using large sets of simulations to evaluate the finite-difference approximations of the gradients used to optimize each perturbed realization, we use a concurrent implementation in which simulation results are shared among different minimization tasks whenever they help a specific task converge toward its global minimum. To improve the sharing of results, we relax the accuracy of the finite-difference gradient approximations by using more widely spaced simulation results. To avoid becoming trapped in local optima, a novel method to enhance the global-search capability of the DGN algorithm is developed and integrated seamlessly with the RML formulation. In this way, we improve the quality of the RML conditional realizations that sample the approximate posterior. The proposed work flow is first validated on a toy problem and then applied to a real-field unconventional asset. Numerical results indicate that the new method is very efficient compared with traditional methods. Hundreds of data-conditioned realizations can be generated in parallel within 20 to 40 iterations. The computational cost (central-processing-unit usage) is reduced significantly compared with the traditional RML approach. 
The real-field case studies involve a history-matching study to generate history-matched realizations with the proposed method and an uncertainty quantification of production forecasting using those conditioned models. All conditioned models generate production forecasts that are consistent with real-production data in both the history-matching period and the blind-test period. Therefore, the new approach can enhance the confidence level of the estimated-ultimate-recovery (EUR) assessment using production-forecasting results generated from all conditional realizations, resulting in significant business impact.
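The RML recipe the summary describes (perturb the observed data and the prior mean with Gaussian noise, then run one minimization per realization) can be made concrete in the linear-Gaussian case, where each perturbed objective has a closed-form minimizer and the resulting samples are exactly posterior-distributed. Everything below (the forward operator G, the dimensions, the covariances) is an illustrative assumption standing in for the reservoir simulator; the DGN machinery the paper adds is precisely what is needed when no such closed form exists.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian stand-in for the simulator: d = G m
n_m, n_d, n_ens = 5, 8, 200
G = rng.normal(size=(n_d, n_m))          # forward operator (assumption)
m_prior = np.zeros(n_m)                   # prior mean of model parameters
C_m = np.eye(n_m)                         # prior covariance
C_d = 0.1 * np.eye(n_d)                   # measurement-error covariance
m_true = rng.normal(size=n_m)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(n_d), C_d)

# Inverse Hessian shared by every perturbed quadratic objective
C_m_inv = np.linalg.inv(C_m)
C_d_inv = np.linalg.inv(C_d)
H = np.linalg.inv(C_m_inv + G.T @ C_d_inv @ G)

# RML: one minimization per realization, with data and prior mean
# independently perturbed by Gaussian noise
samples = np.empty((n_ens, n_m))
for j in range(n_ens):
    d_pert = rng.multivariate_normal(d_obs, C_d)
    m_pert = rng.multivariate_normal(m_prior, C_m)
    # Closed-form minimizer of the j-th perturbed objective
    samples[j] = H @ (C_m_inv @ m_pert + G.T @ C_d_inv @ d_pert)
```

In this linear-Gaussian setting the ensemble mean and covariance of `samples` converge to the exact posterior; for a real simulator each minimization must proceed iteratively, which is where the paper's shared-simulation DGN scheme saves computation.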


2014 ◽  
Author(s):  
Gerard J.P. Joosten ◽  
Asli Altintas ◽  
Gijs Van Essen ◽  
Jorn Van Doren ◽  
Paul Gelderblom ◽  
...  

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir off the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work that is required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. 
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem in the sense that the relationship between the reservoir model parameters and the dynamic data is highly nonlinear and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, often the history-matched models are unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
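The core of the PPM is that the probability field used to draw facies, rather than the facies themselves, is perturbed. In Caers' formulation this is a single-parameter interpolation between the probability field implied by the current realization and the prior marginal probability; the sketch below is written under that assumption and is not lifted from the paper.

```python
import numpy as np

def perturb_probability(p_current, p_prior, r):
    """Probability-perturbation step (after Caers): r = 0 reproduces the
    current realization's probability field, r = 1 reverts to the prior
    marginal. Every intermediate field remains a valid probability, so
    realizations drawn from it stay consistent with the geologic concept."""
    r = float(r)
    if not 0.0 <= r <= 1.0:
        raise ValueError("perturbation parameter r must lie in [0, 1]")
    return (1.0 - r) * np.asarray(p_current) + r * np.asarray(p_prior)
```

A one-dimensional search over r (e.g., minimizing the production-data mismatch of the realization drawn from the perturbed probabilities) then replaces a high-dimensional search over petrophysical properties, which is what keeps the geologic model intact during history matching.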


SPE Journal ◽  
2016 ◽  
Vol 21 (05) ◽  
pp. 1793-1812 ◽  
Author(s):  
C. Chen ◽  
G. Gao ◽  
B. A. Ramirez ◽  
J. C. Vink ◽  
A. M. Girardi

Summary Assisted history matching (AHM) of a channelized reservoir is still a very challenging task because it is very difficult to gradually deform the discrete facies in an automated fashion while preserving geological realism. In this paper, a pluri-principal-component-analysis (PCA) method, which supports PCA with a pluri-Gaussian model, is proposed to reconstruct geological and reservoir models with multiple facies. PCA extracts the major geological features from a large collection of training channelized models and generates gridblock-based properties and real-valued (i.e., noninteger-valued) facies. The real-valued facies are mapped to discrete facies indicators according to rock-type rules (RTRs) that determine the fraction of each facies and the neighboring connections between different facies. Pluri-PCA preserves the main (or principal) features of both the geological and the geostatistical characteristics of the prior models. A new method is also proposed to automatically build the RTRs from an ensemble of training realizations. An AHM work flow is developed by integrating pluri-PCA with a derivative-free optimization algorithm. This work flow is validated on a synthetic model with four facies types and a real-field channelized model with three facies types, and it is applied to update both the facies model and the reservoir model by conditioning to production data and/or hard data. The models generated by pluri-PCA preserve the major geological/geostatistical descriptions of the original training models. This has great potential for practical applications in large-scale history matching and uncertainty quantification.
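The mapping from a real-valued facies field to discrete facies indicators can be illustrated with the simplest conceivable rock-type rule: threshold the field at the quantiles implied by the target facies fractions. This one-dimensional truncation is a hypothetical stand-in; the paper's RTRs additionally encode which facies may neighbor which, which a single threshold sequence cannot express.

```python
import numpy as np

def truncate_to_facies(z, fractions):
    """Map a real-valued field z to integer facies codes 0..K-1 so that
    facies k occupies approximately fractions[k] of the cells.
    A minimal one-dimensional stand-in for a rock-type rule."""
    fractions = np.asarray(fractions, dtype=float)
    if not np.isclose(fractions.sum(), 1.0):
        raise ValueError("facies fractions must sum to 1")
    # Interior quantile edges; digitize returns the bin index as the code
    edges = np.quantile(z, np.cumsum(fractions)[:-1])
    return np.digitize(z, edges)
```

Because the thresholds are quantiles of the field itself, the target fractions are honored almost exactly regardless of the field's distribution, which mirrors the fraction-matching role the summary assigns to the RTRs.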


2006 ◽  
Vol 9 (05) ◽  
pp. 502-512 ◽  
Author(s):  
Arne Skorstad ◽  
Odd Kolbjornsen ◽  
Asmund Drottning ◽  
Havar Gjoystdal ◽  
Olaf K. Huseby

Summary Elastic seismic inversion is a tool frequently used in the analysis of seismic data. Elastic inversion relies on a simplified seismic model and generally produces 3D cubes for compressional-wave velocity, shear-wave velocity, and density. By applying rock-physics theory, such volumes may be interpreted in terms of lithology and fluid properties. Understanding the robustness of forward and inverse techniques is important when deciding how much information is carried by seismic data. This paper suggests a simple method to update a reservoir characterization by comparing 4D-seismic data with flow simulations on an existing characterization conditioned on the base-survey data. The ability to use results from a 4D-seismic survey in reservoir characterization depends on several aspects. To investigate this, a loop that performs independent forward seismic modeling and elastic inversion at two time stages has been established. In the workflow, a synthetic reservoir is generated from which data are extracted. The task is to reconstruct the reservoir on the basis of these data. By working on a realistic synthetic reservoir, full knowledge of the reservoir characteristics is achieved. This strengthens the evaluation of questions regarding the fundamental dependency between the seismic and petrophysical domains. The synthetic reservoir is an ideal case, where properties are known to an accuracy never achieved in an applied situation. It can therefore be used to investigate the theoretical limitations of the information content in the seismic data. The deviations in water and oil production between the reference and predicted reservoirs were significantly decreased by use of 4D-seismic data in addition to the 3D inverted elastic parameters. Introduction It is well known that the information in seismic data is limited by the bandwidth of the seismic signal. 
4D-seismic data give information on the changes between base and monitor surveys and are consequently an important source of information regarding the principal flow in a reservoir. Because of its limited resolution, the presence of a thin thief zone can be observed only as a consequence of flow, and its exact location will not be found directly. This paper addresses the question of how much information there is in the seismic data, and how this information can be used to update the model for petrophysical reservoir parameters. Several methods for incorporating 4D-seismic data in the reservoir-characterization workflow to improve history matching have been proposed earlier. The 4D-seismic data and the corresponding production data are not on the same scale, but they need to be combined. Huang et al. (1997) proposed a simulated-annealing method for conditioning to these data, while Lumley and Behrens (1997) describe a workflow loop in which the 4D-seismic data are compared with those computed from the reservoir model. Gosselin et al. (2003) give a short overview of the use of 4D-seismic data in reservoir characterization and propose using gradient-based methods for history matching the reservoir model on seismic and production data. Vasco et al. (2004) show that 4D data contain information on large-scale reservoir-permeability variations, and they illustrate this with a Gulf of Mexico example.

