A Physics-Based Data-Driven Model for History Matching, Prediction, and Characterization of Waterflooding Performance

SPE Journal ◽  
2017 ◽  
Vol 23 (02) ◽  
pp. 367-395 ◽  
Author(s):  
Zhenyu Guo ◽  
Albert C. Reynolds ◽  
Hui Zhao

Summary We develop and use a new data-driven model for assisted history matching of production data from a reservoir under waterflood and apply the history-matched model to predict future reservoir performance. Although the model is developed from production data and requires no prior knowledge of rock-property fields, it incorporates far more fundamental physics than the popular capacitance–resistance model (CRM). The new model also represents a substantial improvement on the interwell-numerical-simulation model (INSIM) presented previously in a paper coauthored by the latter two authors of the current paper. The new model eliminates the three deficiencies of the original data-driven INSIM: it uses more interwell connections than INSIM to increase the fidelity of history matching and predictions, and it replaces INSIM's ad hoc procedure for computing saturation with a theoretically sound front-tracking procedure. Because of this front-tracking saturation calculation, the new model is referred to as INSIM-FT. We compare the performance of CRM, INSIM, and INSIM-FT in two synthetic examples. INSIM-FT is also tested on a field example.
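
As a hedged illustration of the front-tracking idea (not the authors' actual INSIM-FT implementation), the sketch below computes the Buckley-Leverett water fractional-flow curve with Corey-type relative permeabilities and locates the shock-front saturation by the Welge tangent construction; all parameter values (Corey exponents, endpoint saturations, viscosities) are illustrative assumptions.

```python
import numpy as np

# Illustrative Corey-type relative permeabilities (all parameters are assumptions).
SWC, SOR = 0.2, 0.2          # connate water and residual oil saturations
MU_W, MU_O = 0.5e-3, 2.0e-3  # water/oil viscosities, Pa.s
NW, NO = 2.0, 2.0            # Corey exponents

def frac_flow(sw):
    """Water fractional flow f_w(S_w), ignoring gravity and capillarity."""
    s = np.clip((sw - SWC) / (1.0 - SWC - SOR), 0.0, 1.0)
    krw, kro = s**NW, (1.0 - s)**NO
    return (krw / MU_W) / (krw / MU_W + kro / MU_O)

def front_saturation(n=10001):
    """Welge tangent: the shock front sits at the point where the secant
    from (S_wc, 0) is tangent to the fractional-flow curve, i.e., where
    f_w / (S_w - S_wc) is maximized."""
    sw = np.linspace(SWC + 1e-6, 1.0 - SOR, n)
    fw = frac_flow(sw)
    slope = fw / (sw - SWC)       # secant slope from (S_wc, 0)
    i = np.argmax(slope)          # tangency point maximizes the secant slope
    return sw[i], fw[i], slope[i] # front saturation, f_w at front, front speed

swf, fwf, vf = front_saturation()
print(f"front saturation ~ {swf:.3f}, f_w at front ~ {fwf:.3f}")
```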

2001 ◽  
Vol 4 (06) ◽  
pp. 502-508 ◽  
Author(s):  
W.J. Milliken ◽  
A.S. Emanuel ◽  
A. Chakravarty

Summary The use of 3D streamline methodologies as an alternative to finite-difference (FD) simulation has become more common in the oil industry during the past few years. When the assumptions for its application are satisfied, results from streamline simulation compare very well with those from FD and typically require less than 10% of the central processing unit (CPU) resources. The speed of 3D streamline simulation (3DSM) lends itself not just to simulation, but also to other components of the reservoir simulation work process. This characteristic is particularly true of history matching. History matching is frequently the most tedious and time-consuming part of a reservoir simulation study. In this paper, we describe a novel method that uses 3D streamline paths to assist in history matching either 3D streamline or FD models. We designated this technique Assisted History Matching (AHM) to distinguish it from automated history-matching techniques. In this manuscript, we describe this technique and its application to three reservoir simulation studies. The example models range in size from 10⁵ to 10⁶ gridblocks and contain as many as several hundred wells. These applications have led to refinements of the AHM methodology, the incorporation of several new algorithms, and some insights into the processes typically employed in history matching.

Introduction The advent of powerful geostatistical modeling techniques has led to the development of very large (>10⁷ cells) geocellular reservoir models. These models capture, in greater detail than before, the heterogeneity in porosity, permeability, and lithology that is critical to accurate simulation of reservoir performance. Three-dimensional streamline simulation has received considerable attention over the past several years because of its potential as an alternative to traditional FD methods for the simulation of these very large models. While 3DSM is a powerful simulation tool, it also has a number of other uses. The speed of 3DSM is ideal for such applications as geologic/geostatistical model screening (Ref. 1), reservoir scoping, and history matching (the focus of this paper). In this manuscript, we describe the technique and present three example reservoir applications that demonstrate its utility.

The AHM Technique The models used in reservoir simulation today contain details of structure and heterogeneity that are orders of magnitude greater than those used just 10 years ago. However, there is still (and probably always will be) a large degree of uncertainty in the property descriptions. Geologic data are typically scattered and imprecise. Laboratory measurements of core properties, for example, often show an order of magnitude variation in permeability for any given porosity and several orders of magnitude variation over the data set. Upscaling replaces geologic detail with estimates of effective properties for aggregated data, placing another level of approximation on the resulting model. It is unlikely that any geologic model will match the observed reservoir performance perfectly, and history matching continues to be the technique by which adjustments are made to the geologic model to achieve a match between model and historical reservoir performance. Ref. 2 provides a good presentation of traditional history-matching techniques. History matching is, by definition, an ill-posed problem: there are more unknowns than there are constraints to the problem.
Indeed, any reservoir simulation engineer knows that there is always more than one way to history match a given reservoir model. It is the responsibility of the simulation engineer to make only those changes that are consistent with the reservoir geology. AHM was designed to facilitate these changes. As defined here, AHM is different from automated history matching and traditional history-matching techniques. Generically, traditional history matching involves five key steps:

1. Simulation and identification of the difference between model predictions and observed performance.
2. Determination of the gridblocks in the model that require change.
3. Designation of the property(ies) that require change and what those changes are.
4. Implementation of the changes in the simulation input data.
5. Iteration on the above steps until a satisfactory match is achieved.

The two principal uncertainties in this process lie in Steps 2 and 3, both of which are empirical and tedious and frequently involve ad hoc decisions that have an unknown impact on the ultimate results. AHM is designed to simplify this process and uses 3DSM to facilitate Steps 2 and 3 and thus minimize the ad hoc nature of the process. AHM uses an underlying 3DSM model to determine the streamline paths in the reservoir. These streamlines describe the principal flow paths in the model and represent the paths along which the fluids in the model flow from source (injector or aquifer) to sink (producer). By tracing all the streamlines from a given well, the gridblocks through which the fluids flow to that well are identified. This process, in essence, replaces Step 2 with a process that is rooted in the fluid-flow calculation. Once these gridblocks are identified, changes can be performed according to any (geologically reasonable) algorithm desired; a simple program that largely replaces Step 4 carries this out, as sketched after this passage. Fig. 1 illustrates the concept. The AHM process is based on the assumption that history matching is achieved by altering the geologic properties along the flow paths connecting a producing well to its flow source. The source may be a water injector, gas injector, aquifer, or gas cap; however, the drive mechanism must be a displacement along a definable path. Because the technique relies upon identification of the flow paths, it is assumed that the grid is sufficiently detailed to resolve the flow paths. In very coarse grids, a single gridblock may intersect the flow to several wells, and satisfactory history matching in this case may not be possible with AHM. For streamline-simulation models, the calculation model provides the path directly. For FD simulation, a streamline model incorporating the same structure and geologic parameters as the simulation model is used to calculate the streamlines defining the flow paths.
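
The core bookkeeping step of AHM — tracing every streamline that ends at a producer, collecting the gridblocks it crosses, and applying a geologically motivated change along those paths — can be sketched as below. The data layout (streamlines as lists of gridblock indices keyed by well name) and the multiplier update are hypothetical illustrations, not the authors' code.

```python
from collections import defaultdict

def gridblocks_feeding(streamlines, well):
    """Union of gridblock indices crossed by all streamlines ending at `well`.

    `streamlines` maps a producer name to a list of streamline paths,
    each path being a list of gridblock indices (a hypothetical layout).
    This replaces Step 2 of the traditional loop.
    """
    blocks = set()
    for path in streamlines.get(well, []):
        blocks.update(path)
    return blocks

def apply_perm_multiplier(perm, blocks, factor):
    """Scale permeability in the identified blocks (Step 3 of the loop).

    A real application would constrain `factor` to geologically
    reasonable values and possibly vary it along the path.
    """
    for b in blocks:
        perm[b] *= factor
    return perm

# Hypothetical usage: well P1 waters out too late -> raise perm on its flow paths.
streamlines = {"P1": [[0, 5, 9, 14], [1, 5, 10, 14]]}
perm = defaultdict(lambda: 100.0)   # mD, uniform initial field for illustration
blocks = gridblocks_feeding(streamlines, "P1")
apply_perm_multiplier(perm, blocks, factor=1.5)
```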


2021 ◽  
Author(s):  
Cornelis Veeken

Abstract This paper presents a fit-for-purpose gas well performance model that utilizes a minimum set of inflow and outflow performance parameters, and demonstrates the use of this model to describe real-time well performance, to compare well performance over time and between wells, and to generate production forecasts in support of well interventions. The inflow and outflow parameters are directly related to well-known reservoir and well properties, and can be calibrated against common well surveillance and production data. By adopting this approach, engineers develop a better appreciation of the magnitude and uncertainty of gas well and reservoir performance parameters.
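
As a hedged sketch of what such a minimum-parameter model can look like (the paper's actual parameterization is not reproduced here), the snippet below pairs the classical Rawlins-Schellhardt backpressure inflow relation q = C(p_R² − p_wf²)ⁿ with a simple two-coefficient outflow curve and solves for the operating point by bisection; C, n, and the outflow coefficients are assumed placeholders of the kind that would be calibrated against surveillance and production data.

```python
# Illustrative gas-well operating-point calculation; every coefficient below
# is an assumed placeholder, not a value from the paper.
P_R = 250.0          # reservoir pressure, bar
C, N = 0.05, 0.85    # Rawlins-Schellhardt backpressure coefficients
P_WH = 60.0          # wellhead flowing pressure, bar
A, B = 1.8, 4e-4     # assumed outflow form: p_wf^2 = A*p_wh^2 + B*q^2

def inflow(p_wf):
    """Backpressure inflow: q = C * (p_R^2 - p_wf^2)^n."""
    return C * max(P_R**2 - p_wf**2, 0.0) ** N

def outflow_pwf(q):
    """Bottomhole pressure required to lift rate q against the outflow curve."""
    return (A * P_WH**2 + B * q**2) ** 0.5

def operating_point(q_hi=1e4, iters=100):
    """Bisect on rate: at the operating point, the inflow delivered at the
    outflow-required bottomhole pressure equals the rate itself."""
    lo, hi = 0.0, q_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if inflow(outflow_pwf(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q = operating_point()
print(f"operating rate ~ {q:.1f} (model units) at p_wf ~ {outflow_pwf(q):.1f} bar")
```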


2017 ◽  
Vol 21 (2) ◽  
pp. 315-333 ◽  
Author(s):  
Addy Satija ◽  
Celine Scheidt ◽  
Lewis Li ◽  
Jef Caers

2021 ◽  
Author(s):  
Zhenzhen Wang ◽  
Jincong He ◽  
William J. Milliken ◽  
Xian-Huan Wen

Abstract Full-physics models in history matching and optimization can be computationally expensive since these problems usually require hundreds of simulations or more. We have previously implemented a physics-based data-driven network model with a commercial simulator that serves as a surrogate without the need to build the 3-D geological model. In this paper, we reconstruct the network model to account for complex reservoir conditions of mature fields and successfully apply it to a diatomite reservoir in the San Joaquin Valley (SJV) for rapid history matching and optimization. The reservoir is simplified into a network of 1-D connections between well perforations. These connections are discretized into grid blocks, and the grid properties are calibrated to historical production data. Elevation change, saturation distribution, capillary pressure, and relative permeability are accounted for to best represent the mature field conditions. To simulate this physics-based network model through a commercial simulator, an equivalent 2-D Cartesian model is designed in which rows correspond to the above-mentioned connections. Thereafter, history matching can be performed with the Ensemble Smoother with Multiple Data Assimilation (ESMDA) algorithm under a sequential iterative process. A representative model after history matching is then employed for well-control optimization. The network-model methodology has been successfully applied to waterflood optimization for a 56-well sector model of a diatomite reservoir in the SJV. The history-matching results show that the network model honors field-level production history and gives reasonable matches for most of the wells, including pressure and flow rate. The calibrated ensemble from the last iteration of history matching yields a satisfactory production prediction, which is verified by the remaining historical data. For well-control optimization, we select the P50 model to maximize the Net Present Value (NPV) over 5 years under the provided well/field constraints. This confirms that the calibrated network model is accurate enough for production forecasts and optimization. The use of a commercial simulator in the network model provides flexibility to account for complex physics, such as elevation differences between wells, saturation non-equilibrium, and strong capillary pressure. Unlike the traditional big-loop workflow, which relies on a detailed characterization of geological models, the proposed network model only requires production data and can be built and updated rapidly. The model also runs much faster (tens of seconds) than a full-physics model because it employs far fewer grid blocks. To our knowledge, this is the first time this physics-based data-driven network model has been applied with a commercial simulator on a field waterflood case. Unlike approaches developed with analytic solutions, the use of a commercial simulator makes it feasible to extend the model to complex processes, e.g., thermal or compositional flow. It serves as a useful surrogate model for both fast and reliable decision-making in reservoir management.
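
As a hedged sketch of the ESMDA update used for calibration (a generic textbook form of the Emerick-and-Reynolds algorithm, not the authors' implementation), one iteration perturbs the observations with inflated noise and shifts each ensemble member through the cross-covariance between parameters and predicted data; the ensemble size, inflation factors, and forward model here are placeholders.

```python
import numpy as np

def esmda_update(M, D, d_obs, C_D, alpha, rng=np.random.default_rng(0)):
    """One generic ESMDA iteration.

    M     : (n_m, n_e) ensemble of model parameters (columns are members)
    D     : (n_d, n_e) corresponding simulated data from the forward model
    d_obs : (n_d,) observed data
    C_D   : (n_d, n_d) measurement-error covariance
    alpha : inflation factor for this iteration
    """
    n_e = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_MD = dM @ dD.T / (n_e - 1)   # parameter-data cross-covariance
    C_DD = dD @ dD.T / (n_e - 1)   # data auto-covariance
    # Perturb observations with inflated noise, one realization per member.
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_D, size=n_e).T
    K = C_MD @ np.linalg.solve(C_DD + alpha * C_D, (d_obs[:, None] + E) - D)
    return M + K
```

In a sequential iterative workflow of the kind the abstract describes, this update would be applied N_a times with inflation factors satisfying Σ 1/α_k = 1 (e.g., α_k = N_a for all k), rerunning the network-model simulation between iterations.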


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4836
Author(s):  
Liping Zhang ◽  
Yifan Hu ◽  
Qiuhua Tang ◽  
Jie Li ◽  
Zhixiong Li

In the modern manufacturing industry, methods that support real-time decision-making are urgently required to respond to the uncertainty and complexity of intelligent production processes. In this paper, a novel closed-loop scheduling framework is proposed to achieve real-time decision-making by calling the appropriate data-driven dispatching rules at each rescheduling point. The framework contains four parts: offline training, online decision-making, a database, and a rules base. In the offline training part, potential and appropriate dispatching rules that meet managers' expectations are discovered by an improved gene expression programming (IGEP) algorithm from historical production data, not just the available or predictable information of the shop floor. In the online decision-making part, the intelligent shop floor implements the scheduling scheme produced by the appropriate dispatching rules from the rules base and stores the production data in the database. The approach is evaluated in a scenario of an intelligent job shop with random job arrivals. Numerical experiments demonstrate that the proposed method outperforms existing well-known single and combined dispatching rules, as well as dispatching rules discovered via a metaheuristic algorithm, in terms of makespan, total flow time, and tardiness.
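
A minimal sketch of the closed-loop idea follows; the rule set, the shop-state features, and the selection policy are hypothetical placeholders, not the IGEP-discovered rules of the paper. At each rescheduling point, the framework looks up a dispatching rule appropriate to the current shop state and applies it to the job queue.

```python
from dataclasses import dataclass

@dataclass
class Job:
    id: int
    proc_time: float   # processing time on the next machine
    due_date: float

# Rules base: classic dispatching rules expressed as priority keys
# (stand-ins for the IGEP-discovered rules in the paper).
RULES = {
    "SPT": lambda job, now: job.proc_time,   # shortest processing time
    "EDD": lambda job, now: job.due_date,    # earliest due date
    "MDD": lambda job, now: max(job.due_date, now + job.proc_time),
}

def select_rule(queue, now):
    """Online decision-making step: pick a rule from the rules base using
    a simple shop-state feature (a stand-in for the trained selector)."""
    tight = sum(j.due_date < now + j.proc_time for j in queue)
    return "MDD" if tight > len(queue) / 2 else "SPT"

def next_job(queue, now):
    """Apply the selected rule at a rescheduling point."""
    rule = RULES[select_rule(queue, now)]
    return min(queue, key=lambda j: rule(j, now))

queue = [Job(1, 4.0, 10.0), Job(2, 2.0, 5.0), Job(3, 6.0, 8.0)]
print(next_job(queue, now=3.0).id)   # -> 2 (SPT picks the shortest job)
```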


2014 ◽  
Vol 58 ◽  
pp. 72-82 ◽  
Author(s):  
M.R. Pivello ◽  
M.M. Villar ◽  
R. Serfaty ◽  
A.M. Roma ◽  
A. Silveira-Neto
