Time-lapse seismic history matching with an iterative ensemble smoother and deep convolutional autoencoder

Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M15-M31 ◽  
Author(s):  
Mingliang Liu ◽  
Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements and to quantify the uncertainty of the posterior reservoir models. To address computational and numerical challenges when applying ensemble-based optimization methods on large seismic data volumes, we develop a deep representation learning method, namely, the deep convolutional autoencoder. This method reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to production data and the reparameterized time-lapse seismic data.
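The core update in this framework, one ES-MDA iteration conditioning the ensemble on production data and encoded seismic features, can be sketched in a few lines. This is a minimal numpy sketch, not the authors' implementation: the trained convolutional encoder is assumed to have already mapped the seismic data into the observation vector `d_obs`, the data-error covariance is taken as diagonal, and all names are illustrative.

```python
import numpy as np

def es_mda_update(M, D_pred, d_obs, C_d, alpha, rng):
    """One ES-MDA iteration: update the ensemble M (n_params x n_ens) given
    predicted data D_pred (n_data x n_ens), observations d_obs (n_data,),
    a diagonal data-error covariance C_d (n_data,), and inflation alpha."""
    n_e = M.shape[1]
    # Perturb the observations with inflated noise (alpha * C_d).
    E = rng.normal(0.0, np.sqrt(alpha * C_d)[:, None], (len(d_obs), n_e))
    D_obs = d_obs[:, None] + E
    # Ensemble anomalies and sample covariances.
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D_pred - D_pred.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_e - 1)
    C_dd = dD @ dD.T / (n_e - 1)
    # Kalman-like gain with the inflated data-error covariance.
    K = C_md @ np.linalg.pinv(C_dd + alpha * np.diag(C_d))
    return M + K @ (D_obs - D_pred)
```

With the standard choice of inflation coefficients (e.g., four iterations with alpha = 4), the reciprocals of the alphas sum to one, so the iterative scheme targets approximately the same posterior as a single ensemble-smoother update.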

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that any history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. 
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem in the sense that the relationship between the reservoir model parameters and the dynamic data is highly nonlinear and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, often the history-matched models are unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
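The probability-perturbation idea summarized above can be illustrated with a toy sketch. Hedged heavily: in the actual method the perturbed probability drives a multiple-point sequential simulation that honors the well data (so the geologic concept is preserved), and the perturbation parameter r is found by a 1D optimization; here the resimulation step is replaced by independent Bernoulli draws purely for illustration, and every name is hypothetical.

```python
import numpy as np

def perturb_probability(i0, p_prior, r):
    """Probability-perturbation step: blend the facies indicator of the
    current realization i0 (0/1 per cell) with the prior facies probability.
    r = 0 reproduces the current model; r = 1 draws freshly from the prior."""
    return (1.0 - r) * i0 + r * p_prior

def ppm_step(i0, p_prior, mismatch, rng, rs=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """One outer PPM iteration: coarse 1D search over r, resimulating facies
    from the perturbed probability and keeping the r with lowest mismatch."""
    best = None
    for r in rs:
        p = perturb_probability(i0, p_prior, r)
        # Stand-in for geostatistical resimulation conditioned to wells.
        trial = (rng.random(i0.shape) < p).astype(float)
        f = mismatch(trial)
        if best is None or f < best[0]:
            best = (f, trial, r)
    return best[1], best[2]
```

In the paper, this inner search on r is nested inside an outer loop that repeats the perturbation until the production response matches, and it is run separately for each geologic scenario.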


SPE Journal ◽  
2021 ◽  
Vol 26 (02) ◽  
pp. 1011-1031
Author(s):  
Gilson Moura Silva Neto ◽  
Ricardo Vasconcellos Soares ◽  
Geir Evensen ◽  
Alessandra Davolio ◽  
Denis José Schiozer

Summary Time-lapse-seismic-data assimilation has been drawing the reservoir-engineering community's attention over the past few years. One of the advantages of including this kind of data to improve the reservoir-flow models is that it provides complementary information compared with the wells' production data. Ensemble-based methods are some of the standard tools used to calibrate reservoir models using time-lapse seismic data. One of the drawbacks of assimilating time-lapse seismic data involves the large data sets, mainly for large reservoir models. This situation leads to high-dimensional problems that demand significant computational resources to process and store the matrices when using conventional and straightforward methods. Another known issue associated with the ensemble-based methods is the limited ensemble sizes, which cause spurious correlations between the data and the parameters and limit the degrees of freedom. In this work, we propose a data-assimilation scheme using an efficient implementation of the subspace ensemble randomized maximum likelihood (SEnRML) method with local analysis. This method reduces the computational requirements for assimilating large data sets because the number of operations scales linearly with the number of observed data points. Furthermore, by implementing it with local analysis, we reduce the memory requirements at each update step and mitigate the effects of the limited ensemble sizes. We test two local analysis approaches: one distance-based approach and one correlation-based approach. We apply these implementations to two synthetic time-lapse-seismic-data-assimilation cases: a 2D example and a field-scale application that mimics some real-field challenges. We compare the results with reference solutions and with the well-established ensemble smoother with multiple data assimilation (ES-MDA) using Kalman-gain distance-based localization. 
The results show that our method can efficiently assimilate time-lapse seismic data, leading to updated models that are comparable with other straightforward methods. The correlation-based local analysis approach provided results similar to the distance-based approach, with the advantage that the former can be applied to data and parameters that do not have specific spatial positions.
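The distance-based localization mentioned above can be illustrated with the standard Gaspari-Cohn taper, which screens the long-range spurious correlations caused by finite ensemble sizes. This is only a sketch of the localization idea applied to a conventional ensemble Kalman gain; it is not the SEnRML subspace implementation, whose cost scales linearly with the number of observations, and the function names are illustrative.

```python
import numpy as np

def gaspari_cohn(d, L):
    """Gaspari-Cohn fifth-order taper: 1 at distance 0, 0 beyond 2*L."""
    r = np.abs(d) / L
    t = np.zeros_like(r)
    m = r <= 1
    t[m] = 1 - 5/3*r[m]**2 + 5/8*r[m]**3 + 0.5*r[m]**4 - 0.25*r[m]**5
    m = (r > 1) & (r < 2)
    t[m] = (4 - 5*r[m] + 5/3*r[m]**2 + 5/8*r[m]**3
            - 0.5*r[m]**4 + (1/12)*r[m]**5 - 2/(3*r[m]))
    return np.clip(t, 0.0, 1.0)

def localized_gain(dM, dD, C_d, dist, L):
    """Kalman gain with distance-based (Kalman-gain) localization.
    dM, dD: parameter and data ensemble anomalies; C_d: diagonal data errors;
    dist: parameter-to-observation distances, same shape as the gain."""
    n_e = dM.shape[1]
    C_md = dM @ dD.T / (n_e - 1)
    C_dd = dD @ dD.T / (n_e - 1)
    K = C_md @ np.linalg.pinv(C_dd + np.diag(C_d))
    return gaspari_cohn(dist, L) * K
```

The correlation-based alternative tested in the paper replaces the spatial distance with a measure of the sampled data-parameter correlation, which is what allows it to handle data and parameters without specific spatial positions.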


2021 ◽  
pp. 1-53
Author(s):  
Matthew Bray ◽  
Jakob Utley ◽  
Yanrui Ning ◽  
Angela Dang ◽  
Jacquelyn Daves ◽  
...  

Enhanced hydrocarbon recovery is essential for continued economic development of unconventional reservoirs. Our study focuses on dynamic characterization of the Niobrara and Codell Formations in Wattenberg Field through the development and analysis of a fully integrated reservoir model. We demonstrate the effectiveness of hydraulic fracturing and production with two seismic monitor surveys, surface microseismic, completion data, and production data. The two monitor surveys were recorded after stimulation, and again after two years of production. Identification of reservoir deformation due to hydraulic fracturing and production improves reservoir models by mapping non-stimulated and non-producing zones. Monitoring these time-variant changes improves the prediction capability of reservoir models, which in turn leads to improved well and stage placement. We quantify dynamic reservoir changes with time-lapse P-wave seismic data utilizing pre-stack inversion, and velocity-independent layer stripping for velocity and attenuation changes within the Niobrara and Codell reservoirs. A 3D geomechanical model and production data are history matched, and a simulation is run for two years of production. Results are integrated with time-lapse seismic data to illustrate the effects of hydraulic fracturing and production. Our analyses illustrate that chalk facies have significantly higher hydraulic fracture efficiency and production performance than marl facies. Additionally, structural and hydraulic complexity associated with faults generate spatial variability in a well’s total production.


2019 ◽  
Vol 7 (3) ◽  
pp. SE123-SE130
Author(s):  
Yang Xue ◽  
Mariela Araujo ◽  
Jorge Lopez ◽  
Kanglin Wang ◽  
Gautam Kumar

Time-lapse (4D) seismic is widely deployed in offshore operations to monitor improved oil recovery methods including water flooding, yet its value for enhanced well and reservoir management is not fully realized due to the long cycle times required for quantitative 4D seismic data assimilation into dynamic reservoir models. To shorten the cycle, we have designed a simple inversion workflow to estimate reservoir property changes directly from 4D attribute maps using machine-learning (ML) methods. We generated tens of thousands of training samples by Monte Carlo sampling from the rock-physics model within reasonable ranges of the relevant parameters. Then, we applied ML methods to build the relationship between the reservoir property changes and the 4D attributes, and we used the learnings to estimate the reservoir property changes given the 4D attribute maps. The estimated reservoir property changes (e.g., water saturation changes) can be used to analyze injection efficiency, update dynamic reservoir models, and support reservoir management decisions. We can reduce the turnaround time from months to days, allowing early engagements with reservoir engineers to enhance integration. This accelerated data assimilation removes a deterrent for the acquisition of frequent 4D surveys.
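The workflow in this abstract, Monte Carlo sampling of a rock-physics model to build a training set and then learning the inverse map from 4D attributes to property changes, can be sketched end to end. Assumptions are loud here: the linear "rock-physics" forward model and its coefficients are toy stand-ins, and ordinary least squares stands in for the ML methods (in practice a neural network or tree ensemble would be trained on the same samples).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training set: Monte Carlo samples from a (toy) rock-physics model.
# dSw: water-saturation change; dP: pressure change (hypothetical ranges).
n = 20000
dSw = rng.uniform(0.0, 0.6, n)
dP = rng.uniform(-5.0, 5.0, n)
# Toy forward model mapping property changes to two 4D attributes
# (e.g., amplitude change and time shift); coefficients are illustrative.
amp = 0.8 * dSw - 0.05 * dP + rng.normal(0, 0.01, n)
dt = 0.2 * dSw + 0.10 * dP + rng.normal(0, 0.01, n)

# --- "Learn" the inverse mapping: 4D attributes -> saturation change.
X = np.column_stack([amp, dt, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, dSw, rcond=None)

def predict_dsw(amp_map, dt_map):
    """Estimate the water-saturation change from 4D attribute maps."""
    return coef[0] * amp_map + coef[1] * dt_map + coef[2]
```

Because the training data are simulated offline from the rock-physics model, the learned inverse can be applied to each new monitor survey in days rather than months, which is the cycle-time gain the abstract describes.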


SPE Journal ◽  
2017 ◽  
Vol 22 (04) ◽  
pp. 1261-1279 ◽  
Author(s):  
Shingo Watanabe ◽  
Jichao Han ◽  
Gill Hetz ◽  
Akhil Datta-Gupta ◽  
Michael J. King ◽  
...  

Summary We present an efficient history-matching technique that simultaneously integrates 4D repeat seismic surveys with well-production data. This approach is particularly well-suited for the calibration of the reservoir properties of high-resolution geologic models because the seismic data are areally dense but sparse in time, whereas the production data are finely sampled in time but spatially averaged. The joint history matching is performed by use of streamline-based sensitivities derived from either finite-difference or streamline-based flow simulation. For the most part, earlier approaches have focused on the role of saturation changes, but the effects of pressure have largely been ignored. Here, we present a streamline-based semianalytic approach for computing model-parameter sensitivities, accounting for both pressure and saturation effects. The novelty of the method lies in the semianalytic sensitivity computations, making it computationally efficient for high-resolution geologic models. The approach is implemented by use of a finite-difference simulator incorporating the detailed physics. Its efficacy is demonstrated by use of both synthetic and field applications. For both the synthetic and the field cases, the advantages of incorporating the time-lapse variations are clear, seen through the improved estimation of the permeability distribution, the pressure profile, the evolution of the fluid saturation, and the swept volumes.
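The novelty claimed above is the semianalytic, streamline-based computation of the sensitivities, which is beyond a short sketch; but given sensitivity matrices from any source, the joint production-plus-seismic update reduces to a weighted, damped least-squares step. A generic sketch under that assumption (names and the scalar data weights are illustrative, not the authors' implementation):

```python
import numpy as np

def joint_update(m, G_prod, r_prod, G_seis, r_seis,
                 w_prod=1.0, w_seis=1.0, mu=0.1):
    """One damped Gauss-Newton step jointly matching production and 4D
    seismic data. G_*: sensitivity matrices (in the paper, computed
    semianalytically along streamlines); r_*: residuals (observed minus
    predicted); w_*: scalar data weights; mu: damping on the model update."""
    G = np.vstack([w_prod * G_prod, w_seis * G_seis])
    r = np.concatenate([w_prod * r_prod, w_seis * r_seis])
    H = G.T @ G + mu * np.eye(len(m))
    return m + np.linalg.solve(H, G.T @ r)
```

The weighting reflects the complementary sampling noted in the abstract: the seismic rows are areally dense but few in time, while the production rows are dense in time but spatially averaged.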


2011 ◽  
Vol 14 (05) ◽  
pp. 621-633 ◽  
Author(s):  
Alireza Kazemi ◽  
Karl D. Stephen ◽  
Asghar Shams

Summary History matching of a reservoir model is always a difficult task. In some fields, we can use time-lapse (4D) seismic data to detect production-induced changes as a complement to more conventional production data. In seismic history matching, we predict these data and compare them to observations. Observed time-lapse data often consist of relative measures of change, which require normalization. We investigate different normalization approaches, based on predicted 4D data, and assess their impact on history matching. We apply the approach to the Nelson field, in which four surveys are available over 9 years of production. We normalize the 4D signature in a number of ways. First, we use predictions of the 4D signature from vertical wells that match production, and we derive a normalization function. As an alternative, we use crossplots of the full-field prediction against observation. Normalized observations are used in an automatic-history-matching process, in which the model is updated. We analyze the results of the two normalization approaches and compare against the case of using only production data. The results show that when we use 4D data normalized to wells, we obtain a 49% reduction in misfit along with a 36% improvement in predictions. Over the whole reservoir, 8% and 7% reductions in the 4D seismic misfit are obtained in the history and prediction periods, respectively. When we use only production data, the production history match improves to a similar degree (45%), but the improvement in predictions is only 25% and the 4D seismic misfit is 10% worse. Finding the unswept areas in the reservoir is always a challenge in reservoir management. By using 4D data in history matching, we can better predict reservoir behavior and identify regions of remaining oil.
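The first normalization approach, deriving a normalization function from predictions at wells that match production, can be sketched as a least-squares fit of the observed relative 4D signature to the predicted signature at the well locations, then applied field-wide. A minimal sketch only: the paper's normalization function need not be linear, and all names are illustrative.

```python
import numpy as np

def well_based_normalization(obs_rel, pred_at_wells, obs_at_wells):
    """Normalize a relative 4D signature map using predictions at wells that
    already match production: fit obs -> pred by least squares at the well
    locations, then apply the same linear map to the whole field. (The
    paper's alternative uses a full-field prediction-vs-observation
    crossplot instead of well locations.)"""
    A = np.column_stack([obs_at_wells, np.ones(len(obs_at_wells))])
    (a, b), *_ = np.linalg.lstsq(A, pred_at_wells, rcond=None)
    return a * obs_rel + b
```

Once normalized, the observed map lives on the same scale as the simulator-predicted 4D response, so the two can be differenced directly inside the automatic-history-matching misfit.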


SPE Journal ◽  
2012 ◽  
Vol 18 (01) ◽  
pp. 159-171 ◽  
Author(s):  
Mario Trani ◽  
Rob Arts ◽  
Olwijn Leeuwenburgh

Summary Time-lapse seismic data provide information on the dynamics of multiphase reservoir fluid flow in places where no production data from wells are available. This information, in principle, could be used to estimate unknown reservoir properties. However, the amount, resolution, and character of the data have long posed significant challenges for quantitative use in assisted-history-matching workflows. Previous studies, therefore, have generally investigated methods for updating single models with reduced parameter-uncertainty space. Recent developments in ensemble-based history-matching methods have shown the feasibility of multimodel history matching of production data while maintaining a full uncertainty description. Here, we introduce a robust and flexible reparameterization for interpreted fluid fronts or seismic attribute isolines that extends these developments to seismic history matching. The seismic data set is reparameterized, in terms of arrival times, at observed front positions, thereby significantly reducing the number of data while retaining essential information. A simple 1D example is used to introduce the concepts of the approach. A synthetic 3D example, with spatial complexity that is typical for many waterfloods, is examined in detail. History-matching cases based on both separate and combined use of production and seismic data are examined. It is shown that consistent multimodel history matches can be obtained without the need for reduction of the parameter space or for localization of the impact of observations. The quality of forecasts based on the history-matched models is evaluated by simulating both expected production and saturation changes throughout the field for a fixed operating strategy. It is shown that bias and uncertainty in the forecasts of production both at existing wells and in the flooded area are reduced considerably when both production and seismic data are incorporated. 
The proposed workflow, therefore, enables better decisions on field developments that require optimal placement of infill wells.
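The arrival-time reparameterization can be sketched directly: for each cell, record the first survey time at which the fluid front is observed to have passed, turning a dense saturation movie into one number per swept cell. A minimal sketch assuming the front position is detected by simple saturation thresholding; the paper works with interpreted front positions or attribute isolines, and the names are illustrative.

```python
import numpy as np

def front_arrival_times(sat, times, threshold=0.5):
    """Reparameterize a saturation movie sat (n_surveys, n_cells) as front
    arrival times: for each cell, the first survey time at which saturation
    exceeds the threshold (NaN if the front never arrives). This condenses
    the dense seismic-derived data while keeping the front-position
    information used as observations in the history match."""
    exceeded = sat >= threshold
    first = np.argmax(exceeded, axis=0)   # index of first True (0 if none)
    arrived = exceeded.any(axis=0)
    return np.where(arrived, np.asarray(times)[first], np.nan)
```

Comparing predicted and observed arrival times cell by cell is what lets the ensemble update use far fewer data than the raw seismic volumes, without parameter-space reduction or localization.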

