Combining Saturation Changes and 4D Seismic for Updating Reservoir Characterizations

2006 ◽  
Vol 9 (05) ◽  
pp. 502-512 ◽  
Author(s):  
Arne Skorstad ◽  
Odd Kolbjørnsen ◽  
Åsmund Drottning ◽  
Håvar Gjøystdal ◽  
Olaf K. Huseby

Summary Elastic seismic inversion is a tool frequently used in the analysis of seismic data. Elastic inversion relies on a simplified seismic model and generally produces 3D cubes for compressional-wave velocity, shear-wave velocity, and density. By applying rock-physics theory, such volumes may be interpreted in terms of lithology and fluid properties. Understanding the robustness of forward and inverse techniques is important when assessing the amount of information carried by seismic data. This paper suggests a simple method to update a reservoir characterization by comparing 4D-seismic data with flow simulations on an existing characterization conditioned on the base-survey data. The ability to use results from a 4D-seismic survey in reservoir characterization depends on several aspects. To investigate this, a loop that performs independent forward seismic modeling and elastic inversion at two time stages has been established. In the workflow, a synthetic reservoir is generated from which data are extracted. The task is to reconstruct the reservoir on the basis of these data. Working on a realistic synthetic reservoir provides full knowledge of the reservoir characteristics, which strengthens the evaluation of questions regarding the fundamental dependency between the seismic and petrophysical domains. The synthetic reservoir is an ideal case, where properties are known to an accuracy never achieved in an applied situation. It can therefore be used to investigate the theoretical limitations of the information content in the seismic data. The deviations in water and oil production between the reference and predicted reservoir were significantly decreased by use of 4D-seismic data in addition to the 3D inverted elastic parameters. Introduction It is well known that the information in seismic data is limited by the bandwidth of the seismic signal. 
4D seismic data give information on the changes between base and monitor surveys and are consequently an important source of information regarding the principal flow in a reservoir. Because of the limited resolution of seismic data, the presence of a thin thief zone can be observed only as a consequence of flow, and its exact location cannot be found directly. This paper addresses the question of how much information there is in the seismic data and how this information can be used to update the model for petrophysical reservoir parameters. Several methods for incorporating 4D-seismic data in the reservoir-characterization workflow to improve history matching have been proposed earlier. The 4D-seismic data and the corresponding production data are not on the same scale, but they need to be combined. Huang et al. (1997) proposed a simulated-annealing method for conditioning on these data, while Lumley and Behrens (1997) describe a workflow loop in which the 4D-seismic data are compared with those computed from the reservoir model. Gosselin et al. (2003) give a short overview of the use of 4D-seismic data in reservoir characterization and propose gradient-based methods for history matching the reservoir model on seismic and production data. Vasco et al. (2004) show that 4D data contain information on large-scale reservoir-permeability variations, and they illustrate this with a Gulf of Mexico example.
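The fluid-property interpretation described above rests on rock-physics relations such as Gassmann fluid substitution. As a minimal sketch (not the authors' implementation; all moduli, densities, and porosity below are hypothetical illustrative values), the saturation-induced P-wave velocity change between a brine-filled and an oil-filled rock can be computed as:

```python
import numpy as np

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from Gassmann's relation (isotropic, low frequency)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Hypothetical values: moduli in GPa, densities in g/cm^3
k_min, k_dry, mu = 37.0, 12.0, 10.0   # quartz matrix, dry-rock frame, shear modulus
phi = 0.25                            # porosity
k_brine, rho_brine = 2.8, 1.05
k_oil, rho_oil = 1.0, 0.80
rho_grain = 2.65

def vp(k_sat, rho):
    # GPa / (g/cm^3) gives velocity in km/s
    return np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)

rho_b = (1 - phi) * rho_grain + phi * rho_brine
rho_o = (1 - phi) * rho_grain + phi * rho_oil
vp_brine = vp(gassmann_ksat(k_dry, k_min, k_brine, phi), rho_b)
vp_oil = vp(gassmann_ksat(k_dry, k_min, k_oil, phi), rho_o)
```

The difference `vp_brine - vp_oil` is the kind of saturation signal a 4D survey responds to; its small magnitude relative to the absolute velocity illustrates why time-lapse differencing, rather than a single survey, is needed to see flow.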

2014 ◽  
Author(s):  
Gerard J.P. Joosten ◽  
Asli Altintas ◽  
Gijs Van Essen ◽  
Jorn Van Doren ◽  
Paul Gelderblom ◽  
...  

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir off the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that any history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history match real reservoirs with this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. 
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir-model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done by trial and error. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built on the basis of geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered on wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
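The core PPM idea of perturbing probabilities rather than petrophysical properties can be sketched in a few lines. In a deliberately simplified, hypothetical form (the actual method couples this with multiple-point geostatistical simulation and an optimization over the perturbation parameter), a parameter r interpolates between the current realization (r = 0, no change) and the prior marginal probability (r = 1, full redraw from the prior):

```python
import numpy as np

rng = np.random.default_rng(7)

def perturb_probability(i_current, p_prior, r):
    """PPM-style perturbed probability: r = 0 reproduces the current
    realization exactly; r = 1 falls back to the prior marginal."""
    return (1.0 - r) * i_current + r * p_prior

# Hypothetical 1D facies indicator realization (1 = channel sand)
p_prior = 0.3
i_cur = (rng.random(1000) < p_prior).astype(float)

# Small perturbation: most cells keep their facies, a few may flip,
# so the geologic concept (global sand proportion) is preserved
p_new = perturb_probability(i_cur, p_prior, r=0.2)
i_new = (rng.random(1000) < p_new).astype(float)
```

Because the perturbation acts on probabilities, the regenerated realization `i_new` stays close to `i_cur` for small r while remaining a legitimate draw from the geostatistical model, which is what avoids the box-shaped permeability artifacts of manual history matching.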


2020 ◽  
Author(s):  
Konrad Wojnar ◽  
Jon Sætrom ◽  
Tore Felix Munck ◽  
Martha Stunell ◽  
Stig Sviland-Østre ◽  
...  

Abstract The aim of the study was to create an ensemble of equiprobable models that could be used to improve the reservoir management of the Vilje field. Qualitative and quantitative workflows were developed to systematically and efficiently screen, analyze, and history match an ensemble of reservoir simulation models to production and 4D-seismic data. The goal of developing the workflows is to increase the utilization of data from 4D-seismic surveys for reservoir characterization. The qualitative and quantitative workflows are presented, along with their benefits and challenges. The data conditioning produced a set of history-matched reservoir models that could be used in the field-development decision-making process. The proposed workflows allowed for identification of outlying prior and posterior models based on key features where observed data were not covered by the synthetic 4D-seismic realizations. As a result, suggestions for a more robust parameterization of the ensemble were made to improve data coverage. The existing history-matching workflow integrated efficiently with the quantitative 4D-seismic history-matching workflow, allowing the reservoir models to be conditioned to production and 4D data. Thus, the predictability of the models was improved. This paper proposes a systematic and efficient workflow that uses ensemble-based methods to simultaneously screen, analyze, and history match production and 4D-seismic data. The proposed workflow improves the usability of 4D-seismic data for reservoir characterization and, in turn, for reservoir management and decision-making.
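One way to detect the "observed data not covered by the synthetic realizations" situation described above is a per-data-point envelope check across the ensemble. A minimal sketch (hypothetical ensemble and data, not the Vilje workflow itself):

```python
import numpy as np

def coverage_flags(ensemble, observed):
    """Flag data points where the observation falls outside the
    ensemble min-max envelope, i.e. the prior cannot cover the data."""
    lo = ensemble.min(axis=0)
    hi = ensemble.max(axis=0)
    return (observed < lo) | (observed > hi)

rng = np.random.default_rng(1)
# Hypothetical: 50 prior models, 20 4D-attribute data points each
ens = rng.normal(0.0, 1.0, size=(50, 20))
obs = np.zeros(20)
obs[3] = 6.0  # one outlying observation far outside the prior spread

flags = coverage_flags(ens, obs)
```

Data points flagged this way are exactly the "key features" that motivate reparameterizing the prior ensemble before attempting a quantitative history match: no amount of weighting can pull the posterior toward data the prior never reaches.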


2011 ◽  
Vol 14 (05) ◽  
pp. 621-633 ◽  
Author(s):  
Alireza Kazemi ◽  
Karl D. Stephen ◽  
Asghar Shams

Summary History matching of a reservoir model is always a difficult task. In some fields, we can use time-lapse (4D) seismic data to detect production-induced changes as a complement to more conventional production data. In seismic history matching, we predict these data and compare them to observations. Observed time-lapse data often consist of relative measures of change, which require normalization. We investigate different normalization approaches, based on predicted 4D data, and assess their impact on history matching. We apply the approach to the Nelson field, for which four surveys are available over 9 years of production. We normalize the 4D signature in a number of ways. First, we use predictions of the 4D signature from vertical wells that match production, and we derive a normalization function. As an alternative, we use crossplots of the full-field prediction against the observation. Normalized observations are used in an automatic-history-matching process in which the model is updated. We analyze the results of the two normalization approaches and compare them against the case of using production data alone. The results show that when we use 4D data normalized to wells, we obtain a 49% reduction in misfit along with a 36% improvement in predictions. Over the whole reservoir, 8 and 7% reductions of the 4D-seismic misfit are obtained in the history and prediction periods, respectively. When we use only production data, the production history match is improved to a similar degree (45%), but in predictions the improvement is only 25%, and the 4D-seismic misfit is 10% worse. Finding the unswept areas in the reservoir is always a challenge in reservoir management. By using 4D data in history matching, we can better predict reservoir behavior and identify regions of remaining oil.
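The well-based normalization can be sketched as a least-squares mapping from the observed relative 4D measure to the predicted physical scale at well locations, which is then applied field-wide. All numbers below are hypothetical; the paper's actual normalization functions and crossplot variant differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: predicted 4D signature at 5 well locations (physical units)
# vs. the observed relative 4D measure at the same locations
pred_at_wells = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
obs_at_wells = 2.0 * pred_at_wells + 0.05 + rng.normal(0.0, 0.01, 5)

# Fit obs -> pred with linear least squares: pred ~ slope * obs + intercept
A = np.vstack([obs_at_wells, np.ones_like(obs_at_wells)]).T
slope, intercept = np.linalg.lstsq(A, pred_at_wells, rcond=None)[0]

def normalize(obs):
    """Map an observed relative 4D value onto the predicted physical scale."""
    return slope * obs + intercept

# Apply the calibration away from the wells
obs_field = np.array([0.3, 0.9, 1.5])
norm_field = normalize(obs_field)
```

Once observed and predicted 4D signatures share a scale, their difference becomes a meaningful misfit term for the automatic-history-matching loop, which is why the choice of normalization directly affects the match quality reported above.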


SPE Journal ◽  
2016 ◽  
Vol 22 (03) ◽  
pp. 985-1010 ◽  
Author(s):  
Xiaodong Luo ◽  
Tuhin Bhakta ◽  
Morten Jakobsen ◽  
Geir Nævdal

Summary In this work, we propose an ensemble 4D-seismic history-matching framework for reservoir characterization. Compared with similar existing frameworks in the reservoir-engineering community, the proposed one contains some relatively new ingredients: the choice of seismic-data type, wavelet multiresolution analysis of the chosen seismic data with the associated noise estimation, and the use of recently developed iterative ensemble history-matching algorithms. Typical seismic data used for history matching, such as acoustic impedance, are inverted quantities, and extra uncertainties may arise during the inversion processes. In the proposed framework, we avoid such intermediate inversion processes. In addition, we adopt wavelet-based sparse representation to reduce the data size. Concretely, we use intercept and gradient attributes derived from amplitude-vs.-angle (AVA) data, apply multilevel discrete wavelet transforms (DWTs) to the attribute data, and estimate the noise level of the resulting wavelet coefficients. We then select the wavelet coefficients above a certain threshold value and history match these leading wavelet coefficients with an iterative ensemble smoother (iES). As a proof-of-concept study, we apply the proposed framework to a 2D synthetic case derived from a 3D Norne field model. The reservoir-model variables to be estimated are permeability (PERMX) and porosity (PORO) at each active gridblock. A rock-physics model is used to calculate seismic parameters (velocity and density) from reservoir properties (porosity, fluid saturation, and pressure); then, reflection coefficients are generated with a linearized AVA equation that involves velocity and density. AVA data are obtained by computing the convolution between the reflection coefficients and a Ricker wavelet. The multiresolution analysis applied to the AVA attributes helps to obtain a good estimate of the noise level and substantially reduces the data size. 
We compare history-matching performance in three scenarios: (S1) with production data only, (S2) with seismic data only, and (S3) with both production and seismic data. In Scenarios S2 and S3, we also inspect two sets of experiments, one with the original seismic data (full-data experiment) and the other adopting sparse representation (sparse-data experiment). Our numerical results suggest that, in this particular case study, the use of production data largely improves the estimation of permeability but has little effect on the estimation of porosity. Using seismic data only improves the estimation of porosity, but not that of permeability. In contrast, using both production and 4D-seismic data improves the estimation accuracies of both porosity and permeability. Moreover, in both Scenarios S2 and S3, provided that a suitable stopping criterion is used in the iES, adopting sparse representation results in better history-matching performance than using the original data set.
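The sparse-representation step (wavelet transform, noise estimation from the detail coefficients, thresholding to keep the leading coefficients) can be illustrated with a single-level orthonormal Haar transform and a median-absolute-deviation (MAD) noise estimate. This is a simplified stand-in for the multilevel DWTs used in the paper, with a synthetic trace in place of real AVA attributes:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 256)
signal = np.sin(2 * np.pi * 3 * t)             # smooth stand-in for an attribute trace
noisy = signal + rng.normal(0.0, 0.05, t.size) # add observation noise

a, d = haar_dwt(noisy)
# MAD-based noise-level estimate from the detail coefficients
thr = 3.0 * np.median(np.abs(d)) / 0.6745
d_sparse = np.where(np.abs(d) > thr, d, 0.0)   # keep only leading coefficients
recon = haar_idwt(a, d_sparse)

kept = np.count_nonzero(d_sparse)              # data size after sparsification
```

History matching the retained coefficients instead of all samples is what shrinks the data vector handed to the iES while the MAD estimate supplies the corresponding noise level for the likelihood.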


2022 ◽  
Vol 41 (1) ◽  
pp. 40-46
Author(s):  
Öz Yilmaz ◽  
Kai Gao ◽  
Milos Delic ◽  
Jianghai Xia ◽  
Lianjie Huang ◽  
...  

We evaluate the performance of traveltime tomography and full-wave inversion (FWI) for near-surface modeling using data from a shallow seismic field experiment. Eight boreholes up to 20-m depth were drilled along the seismic line traverse to verify the accuracy of the P-wave velocity-depth model estimated by seismic inversion. The velocity-depth model of the soil column estimated by traveltime tomography is in good agreement with the borehole data. We used the traveltime tomography model as an initial model and performed FWI. Full-wave acoustic and elastic inversions, however, failed to converge to a velocity-depth model that desirably would be a high-resolution version of the model estimated by traveltime tomography. Moreover, there are significant discrepancies between the estimated models and the borehole data. It is understandable why full-wave acoustic inversion would fail: land seismic data inherently are elastic wavefields. The question is: Why does full-wave elastic inversion also fail? A strategy to prevent full-wave elastic inversion of vertical-component geophone data from becoming trapped in a local minimum, which yields a physically implausible near-surface model, may be cascaded inversion. Specifically, we perform traveltime tomography to estimate a P-wave velocity-depth model for the near surface and Rayleigh-wave inversion to estimate an S-wave velocity-depth model for the near surface, then use the resulting pair of models as the initial models for the subsequent full-wave elastic inversion. Nonetheless, as demonstrated by the field-data example here, the elastic-wave inversion yields a near-surface solution that still is not in agreement with the borehole data. Here, we investigate the limitations of FWI applied to land seismic data for near-surface modeling.
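The first stage of the cascaded strategy, traveltime tomography, reduces in a toy layered setting to a linear system relating traveltimes to layer slownesses. A minimal sketch with a hypothetical geometry of straight vertical ray segments (real tomography uses bent rays and regularized least squares on much larger, ill-posed systems):

```python
import numpy as np

# Toy layered near-surface model (hypothetical values)
thick = np.array([5.0, 5.0, 10.0])         # layer thicknesses, m
v_true = np.array([300.0, 600.0, 1200.0])  # P velocities, m/s
s_true = 1.0 / v_true                      # slownesses, s/m

# G[i, j]: path length of ray i in layer j; ray i bottoms in layer i,
# so the matrix is lower triangular and the toy system is fully determined
G = np.array([[5.0, 0.0,  0.0],
              [5.0, 5.0,  0.0],
              [5.0, 5.0, 10.0]])
t_obs = G @ s_true                         # noise-free observed traveltimes

s_est = np.linalg.solve(G, t_obs)          # invert for slowness
v_est = 1.0 / s_est                        # recovered velocity-depth model
```

In the cascaded workflow, a smooth model of this kind (together with an S-wave model from Rayleigh-wave inversion) would serve only as the starting point for elastic FWI; the point of the abstract is that even such a starting model did not keep the field-data inversion out of a local minimum.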

