Compressible Streamlines and Three-Phase History Matching

SPE Journal ◽  
2007 ◽  
Vol 12 (04) ◽  
pp. 475-485 ◽  
Author(s):  
Hao Cheng ◽  
Adedayo Stephen Oyerinde ◽  
Akhil Datta-Gupta ◽  
William J. Milliken

Summary Reconciling high-resolution geologic models to field production history is still by far the most time-consuming aspect of the workflow for both geoscientists and engineers. Recently, streamline-based assisted and automatic history-matching techniques have shown great potential in this regard, and several field applications have demonstrated the feasibility of the approach. However, most of these applications have been limited to two-phase water/oil flow under incompressible or slightly compressible conditions. We propose an approach to history matching three-phase flow using a novel compressible streamline formulation and streamline-derived analytic sensitivities. First, we use a generalized streamline model to account for compressible flow by introducing an "effective density" of total fluids along streamlines. This density term rigorously captures changes in fluid volumes with pressure and is easily traced along streamlines. A density-dependent source term in the saturation equation further accounts for the pressure effects during saturation calculations along streamlines. Our approach preserves the 1D nature of the saturation equation and all the associated advantages of the streamline approach with only minor modifications to existing streamline models. Second, we analytically compute parameter sensitivities that define the relationship between the reservoir properties and the production response, viz. water-cut and gas/oil ratio (GOR). These sensitivities are an integral part of history matching, and streamline models permit efficient computation of these sensitivities through a single flow simulation. Finally, for history matching, we use a generalized travel-time inversion that has been shown to be robust because of its quasilinear properties and converges in only a few iterations. The approach is very fast and avoids much of the subjective judgment and time-consuming trial-and-error inherent in manual history matching. 
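The generalized travel-time inversion mentioned above can be illustrated with a minimal sketch: for each well, a single optimal time shift is found that best aligns the calculated production response with the observed one. The function name, data, and sign convention below are illustrative only; the actual GTTI formulation also maps the optimal shift into model updates via the analytic sensitivities.

```python
import numpy as np

def generalized_travel_time(t, wct_obs, wct_calc, max_shift=200.0, n_trials=401):
    """Find the single time shift that best aligns the calculated
    water-cut curve with the observed one (least-squares sense)."""
    shifts = np.linspace(-max_shift, max_shift, n_trials)
    misfits = []
    for dt in shifts:
        # shifted(t) = wct_calc(t - dt), resampled on the observed times
        shifted = np.interp(t, t + dt, wct_calc)
        misfits.append(np.sum((wct_obs - shifted) ** 2))
    return shifts[int(np.argmin(misfits))]

# synthetic check: the model breaks through 50 days later than observed
t = np.linspace(0.0, 1000.0, 201)
wct_obs = 1.0 / (1.0 + np.exp(-(t - 400.0) / 50.0))   # observed water cut
wct_calc = 1.0 / (1.0 + np.exp(-(t - 450.0) / 50.0))  # calculated water cut
best = generalized_travel_time(t, wct_obs, wct_calc)
print(best)  # ~ -50: the model curve must be shifted ~50 days earlier
```

A single scalar shift per well is what gives the method its quasilinear behavior compared with matching every data point individually.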
We demonstrate the power and utility of our approach using both synthetic and field-scale examples. The synthetic case is used to validate our method. It entails the joint integration of water cut and gas/oil ratios (GORs) from a nine-spot pattern in reconstructing a reference permeability field. The field-scale example is a modified version of the ninth SPE comparative study and consists of 25 producers, 1 injector, and aquifer influx. Starting with a prior geologic model, we integrate water-cut and GOR history by the generalized travel-time inversion. Our approach is very fast and preserves the geologic continuity. Introduction Integration of production data typically requires the minimization of a predefined data misfit and penalty terms to match the observed and calculated production response (Oliver 1994; Vasco et al. 1999; Datta-Gupta et al. 2001; Reis et al. 2000; Landa et al. 1996; Anterion et al. 1989; Wu et al. 1999; Wang and Kovscek 2000; Sahni and Horne 2005). There are several approaches to such minimization, and these can be broadly classified into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods (Oliver 1994). The derivative-free approaches such as simulated annealing and genetic algorithm require numerous flow simulations and can be computationally prohibitive for field-scale applications with very large numbers of parameters. Gradient-based methods have been widely used for automatic history matching, although the rate of convergence of these methods is typically slower than that of the sensitivity-based methods, such as the Gauss-Newton or the LSQR method (Vega et al. 2004). An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. There are several approaches to calculating sensitivity coefficients, and these generally fall into one of the three following categories: perturbation method, direct method, and adjoint state methods. 
The perturbation approach is the simplest and requires the fewest changes to an existing code. This approach requires (N+1) forward simulations, where N is the number of parameters. Obviously, this can be computationally prohibitive for reservoir models with many parameters. In the direct, or sensitivity-equation, method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients (Vasco et al. 1999). Because there is one equation for each parameter, this approach can require as much work as the perturbation method. A variation of this method, called the gradient simulator method, utilizes the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all parameters and needs to be decomposed only once (Anterion et al. 1989). Thus, sensitivity computation for each parameter requires only a matrix-vector multiplication. This method obviously represents a significant improvement, but it can still be computationally demanding for a large number of parameters. Finally, the adjoint-state method requires derivation and solution of adjoint equations that can be significantly fewer in number than the sensitivity equations. The adjoint equations are obtained by minimizing the production data misfit with the flow equations as constraints, and the implementation of the method can be quite complex and cumbersome for multiphase-flow applications (Wu et al. 1999). Furthermore, the number of adjoint solutions will generally depend on the amount of production data and thus can be restrictive for field-scale applications.
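The (N+1)-simulation cost of the perturbation method is easy to see in code. The sketch below treats the simulator as a black box; the analytic stand-in function and parameter names are illustrative, not any particular simulator's API.

```python
import numpy as np

def simulate(perm):
    """Stand-in for a reservoir simulator: maps a permeability
    vector to a vector of production responses."""
    return np.array([np.log(perm).sum(), (1.0 / perm).sum()])

def perturbation_sensitivities(perm, eps=1e-6):
    """One base run plus one perturbed run per parameter: N+1 simulations."""
    base = simulate(perm)
    S = np.empty((base.size, perm.size))
    for k in range(perm.size):
        pert = perm.copy()
        pert[k] += eps
        S[:, k] = (simulate(pert) - base) / eps  # finite-difference column
    return S

perm = np.array([100.0, 250.0, 50.0])  # three-parameter "model"
S = perturbation_sensitivities(perm)   # shape (n_data, n_params)
```

The loop makes the scaling explicit: the cost grows linearly with the number of parameters, which is why the method is impractical for models with tens of thousands of gridblocks.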

2005 ◽  
Vol 8 (05) ◽  
pp. 426-436 ◽  
Author(s):  
Hao Cheng ◽  
Arun Kharghoria ◽  
Zhong He ◽  
Akhil Datta-Gupta

Summary We propose a novel approach to history matching finite-difference models that combines the advantages of streamline models with the versatility of finite-difference simulation. Current streamline models are limited in their ability to incorporate complex physical processes and cross-streamline mechanisms in a computationally efficient manner. A unique feature of streamline models is their ability to analytically compute the sensitivity of the production data with respect to reservoir parameters using a single flow simulation. These sensitivities quantify the change in production response caused by small changes in reservoir parameters and, thus, form the basis for many history-matching algorithms. In our approach, we use the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. First, the velocity field from the finite-difference model is used to compute streamline trajectories, time of flight, and parameter sensitivities. The sensitivities are then used in an inversion algorithm to update the reservoir model during finite-difference simulation. The use of a finite-difference model allows us to account for detailed process physics and compressibility effects. Although the streamline-derived sensitivities are only approximate, they do not seem to noticeably impact the quality of the match or the efficiency of the approach. For history matching, we use a generalized travel-time inversion (GTTI) that is shown to be robust because of its quasilinear properties and that converges in only a few iterations. The approach is very fast and avoids many of the subjective judgments and time-consuming trial-and-error steps associated with manual history matching. We demonstrate the power and utility of our approach with a synthetic example and two field examples.
The first one is from a CO2 pilot area in the Goldsmith San Andreas Unit (GSAU), a dolomite formation in west Texas with more than 20 years of waterflood production history. The second example is from a Middle Eastern reservoir and involves history matching a multimillion-cell geologic model with 16 injectors and 70 producers. The final model preserved all of the prior geologic constraints while matching 30 years of production history. Introduction Geological models derived from static data alone often fail to reproduce the field production history. Reconciling geologic models to the dynamic response of the reservoir is critical to building reliable reservoir models. Classical history-matching procedures whereby reservoir parameters are adjusted manually by trial and error can be tedious and often yield a reservoir description that may not be realistic or consistent with the geologic interpretation. In recent years, several techniques have been developed for integrating production data into reservoir models. Integration of dynamic data typically requires a least-squares-based minimization to match the observed and calculated production response. There are several approaches to such minimization, and these can be classified broadly into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods. The derivative-free approaches, such as simulated annealing or genetic algorithms, require numerous flow simulations and can be computationally prohibitive for field-scale applications. Gradient-based methods have been used widely for automatic history matching, although the convergence rates of these methods are typically slower than the sensitivity-based methods such as the Gauss-Newton or the LSQR method. An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. 
These sensitivities are simply partial derivatives that quantify the change in production response caused by small changes in reservoir parameters. There are several approaches to calculating sensitivity coefficients, and these generally fall into one of three categories: perturbation method, direct method, and adjoint-state methods. Conceptually, the perturbation approach is the simplest and requires the fewest changes in an existing code. Sensitivities are estimated simply by perturbing the model parameters one at a time by a small amount and then computing the corresponding production response. This approach requires (N+1) forward simulations, where N is the number of parameters. Obviously, it can be computationally prohibitive for reservoir models with many parameters. In the direct or sensitivity-equation method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients. Because there is one equation for each parameter, this approach requires a comparable amount of work to the perturbation method. A variation of this method, called the gradient simulator method, uses the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all the parameters and needs to be decomposed only once. Thus, sensitivity computation for each parameter requires only a matrix-vector multiplication. This method can also be computationally expensive for a large number of parameters. Finally, the adjoint-state method requires derivation and solution of adjoint equations that can be quite cumbersome for multiphase-flow applications. Furthermore, the number of adjoint solutions will generally depend on the amount of production data and, thus, on the length of the production history.
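The saving in the gradient simulator method comes from factoring the coefficient matrix once and reusing it for every parameter. A linear-system caricature of the idea (the 3-cell system and the choice of diagonal entries as parameters are illustrative; a real implementation differentiates the discretized flow equations):

```python
import numpy as np

# Discretized flow system A(m) p = q for a 3-cell model (illustrative numbers)
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
q = np.array([1.0, 0.0, 0.0])

A_inv = np.linalg.inv(A)  # "decompose" the coefficient matrix once
p = A_inv @ q             # base pressure solution

# dA/dm_k for each parameter m_k (here: each diagonal entry as a parameter)
dA = [np.diag(np.eye(3)[k]) for k in range(3)]

# Differentiating A p = q gives  A (dp/dm_k) = -(dA/dm_k) p,
# so each sensitivity is one matrix-vector multiply, no new factorization.
sens = [A_inv @ (-(dA_k @ p)) for dA_k in dA]
```

In production codes the explicit inverse is replaced by a stored LU factorization, but the cost structure is the same: one decomposition, then a cheap solve per parameter.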


2009 ◽  
Vol 12 (04) ◽  
pp. 528-541 ◽  
Author(s):  
Adedayo Oyerinde ◽  
Akhil Datta-Gupta ◽  
William J. Milliken

Summary Streamline-based assisted and automatic history-matching techniques have shown great potential in reconciling high-resolution geologic models to production data. However, a major drawback of these approaches has been incompressibility or slight-compressibility assumptions that have limited applications to two-phase water/oil displacements only. Recent generalization of streamline models to compressible flow has greatly expanded the scope and applicability of streamline-based history matching, in particular for three-phase flow. In our previous work, we calibrated geologic models to production data by matching the water cut (WCT) and gas/oil ratio (GOR) using the generalized travel-time inversion (GTTI) technique. For field applications, however, the highly nonmonotonic profile of the GOR data often presents a challenge to this technique. In this work we present a transformation of the field production data that makes it more amenable to GTTI. Further, we generalize the approach to incorporate bottomhole flowing pressure during three-phase history matching. We examine the practical feasibility of the method using a field-scale synthetic example (SPE-9 comparative study) and a field application. The field case is a highly faulted West African reservoir with an underlying aquifer. The reservoir is produced under depletion with three producers and over thirty years of production history. The simulation model has several pressure/volume/temperature (PVT) and special core analysis (SCAL) regions and more than 100,000 cells. The GTTI is shown to be robust because of its quasilinear properties, as demonstrated by the WCT and GOR match for a period of 30 years of production history.


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 418-430 ◽  
Author(s):  
Karl D. Stephen ◽  
Juan Soldo ◽  
Colin MacBeth ◽  
Mike A. Christie

Summary Time-lapse (or 4D) seismic is increasingly being used as a qualitative description of reservoir behavior for management and decision-making purposes. When combined quantitatively with geological and flow modeling as part of history matching, improved predictions of reservoir production can be obtained. Here, we apply a method of multiple-model history matching based on simultaneous comparison of spatial data offered by seismic as well as individual well-production data. Using a petroelastic transform and suitable rescaling, forward-modeled simulations are converted into predictions of seismic impedance attributes and compared to observed data by calculation of a misfit. A similar approach is applied to dynamic well data. This approach improves on gradient-based methods by avoiding entrapment in local minima. We demonstrate the method by applying it to the UKCS Schiehallion reservoir, updating the operator's model. We consider a number of parameters to be uncertain. The reservoir's net to gross is initially updated to better match the observed baseline acoustic impedance derived from the RMS amplitudes of the migrated stack. We then history match simultaneously for permeability, fault transmissibility multipliers, and the petroelastic transform parameters. Our results show a good match to the observed seismic and well data with significant improvement to the base case. Introduction Reservoir management requires tools such as simulation models to predict asset behavior. History matching is often employed to alter these models so that they compare favorably to observed well rates and pressures. This well information is obtained at discrete locations and thus lacks the areal coverage necessary to accurately constrain dynamic reservoir parameters such as permeability and the location and effect of faults. Time-lapse seismic captures the effect of pressure and saturation on seismic impedance attributes, giving 2D maps or 3D volumes of the missing information. 
The process of seismic history matching attempts to overlap the benefits of both types of information to improve estimates of the reservoir model parameters. We first present an automated multiple-model history-matching method that includes time-lapse seismic along with production data, based on an integrated workflow (Fig. 1). It improves on the classical approach, wherein the engineer manually adjusts parameters in the simulation model. Our method also improves on gradient-based methods, such as Steepest Descent, Gauss-Newton, and Levenberg-Marquardt algorithms (e.g., Lépine et al. 1999; Dong and Oliver 2003; Gosselin et al. 2003; Mezghani et al. 2004), which are good at finding local likelihood maxima but can fail to find the global maximum. Our method is also faster than stochastic methods such as genetic algorithms and simulated annealing, which often require more simulations and may have slower convergence rates. Finally, multiple models are generated, enabling posterior uncertainty analysis in a Bayesian framework (as in Stephen and MacBeth 2006a).
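The simultaneous seismic-plus-well comparison described above reduces, in its simplest form, to a sum of least-squares terms, each normalized by its data variance. A minimal sketch (the function name, weights, and synthetic data are placeholders; the paper's actual misfit involves rescaled impedance attributes and a Bayesian multiple-model framework):

```python
import numpy as np

def combined_misfit(obs_seis, sim_seis, obs_well, sim_well,
                    var_seis=1.0, var_well=1.0):
    """Sum of squared residuals, each normalized by its data variance,
    so spatial seismic maps and well time series contribute on a
    common footing."""
    m_seis = np.sum((obs_seis - sim_seis) ** 2) / var_seis
    m_well = np.sum((obs_well - sim_well) ** 2) / var_well
    return m_seis + m_well

# a 10x10 impedance map and a 20-point well-rate series (synthetic)
rng = np.random.default_rng(0)
obs_map, sim_map = rng.normal(size=(10, 10)), rng.normal(size=(10, 10))
obs_rate, sim_rate = rng.normal(size=20), rng.normal(size=20)
M = combined_misfit(obs_map, sim_map, obs_rate, sim_rate,
                    var_seis=0.5, var_well=0.1)
```

The variance weighting is what keeps the thousands of seismic samples from drowning out the handful of well measurements.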


Author(s):  
M. Bugra Akin ◽  
Wolfgang Sanz ◽  
Paul Pieringer

This paper presents the application of a viscous adjoint method in the optimization of the endwall contour of a turning mid turbine frame (TMTF). The adjoint method is a gradient-based optimization method that allows the computation of the complete gradient information by solving the governing flow equations and their corresponding adjoint equations only once per function of interest (objective and constraints), so that the computation time of the optimization is nearly independent of the number of parameters. With a greater number of parameters, a more detailed definition of the endwall contour is possible, so that an optimum can be approached more precisely. A Navier-Stokes flow solver coupled with Menter’s SST k–ω turbulence model is utilized for the CFD simulations, whereas the adjoint formulation is based on the constant eddy viscosity approximation for turbulence. The total pressure ratio is used as the objective function in the optimization. The effect of contouring on the secondary flows is evaluated, and the performance of the axisymmetric TMTF is calculated and compared with that of the optimized design.
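The "one adjoint solve per function of interest" property can be seen on a small linear model problem: with a state constraint A(p)u = b and an objective J(u) = c·u, a single adjoint solve Aᵀλ = c yields the gradient with respect to every parameter at once. All symbols below are a generic illustration, not the paper's flow or turbulence equations.

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
A0 = np.eye(n) * 2.0
B = [rng.normal(size=(n, n)) * 0.1 for _ in range(3)]  # dA/dp_k, 3 parameters
b = rng.normal(size=n)
c = rng.normal(size=n)                                  # objective J(u) = c . u

def solve_state(p):
    """Assemble A(p) = A0 + sum_k p_k B_k and solve the state equation."""
    A = A0 + sum(pk * Bk for pk, Bk in zip(p, B))
    return A, np.linalg.solve(A, b)

p = np.array([0.3, -0.2, 0.1])
A, u = solve_state(p)

lam = np.linalg.solve(A.T, c)                   # ONE adjoint solve
grad = np.array([-lam @ (Bk @ u) for Bk in B])  # dJ/dp_k = -lambda^T (dA/dp_k) u
```

Adding a fourth or a hundredth parameter only adds cheap inner products; the expensive solves stay at one state plus one adjoint, which is exactly why the method favors richly parameterized endwall contours.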


SPE Journal ◽  
2010 ◽  
Vol 16 (02) ◽  
pp. 307-317 ◽  
Author(s):  
Yanfen Zhang ◽  
Dean S. Oliver

Summary The increased use of optimization in reservoir management has placed greater demands on the application of history matching to produce models that not only reproduce the historical production behavior but also preserve geological realism and quantify forecast uncertainty. Geological complexity and limited access to the subsurface typically result in a large uncertainty in reservoir properties and forecasts. However, there is a systematic tendency to underestimate such uncertainty, especially when rock properties are modeled using Gaussian random fields. In this paper, we address one important source of uncertainty: the uncertainty in regional trends by introducing stochastic trend coefficients. The multiscale parameters including trend coefficients and heterogeneities can be estimated using the ensemble Kalman filter (EnKF) for history matching. Multiscale heterogeneities are often important, especially in deepwater reservoirs, but are generally poorly represented in history matching. In this paper, we describe a method for representing and updating multiple scales of heterogeneity in the EnKF. We tested our method for updating these variables using production data from a deepwater field whose reservoir model has more than 200,000 unknown parameters. The match of reservoir simulator forecasts to real field data using a standard application of EnKF had not been entirely satisfactory because it was difficult to match the water cut of a main producer in the reservoir. None of the realizations of the reservoir exhibited water breakthrough using the standard parameterization method. By adding uncertainty in large-scale trends of reservoir properties, the ability to match the water cut and other production data was improved substantially. The results indicate that an improvement in the generation of the initial ensemble and in the variables describing the property fields gives an improved history match with plausible geology. 
The multiscale parameterization of property fields reduces the tendency to underestimate uncertainty while still providing reservoir models that match data.
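One way to read the multiscale parameterization is: the property field is a smooth regional trend governed by a few stochastic coefficients plus a fine-scale heterogeneity field, and both enter the state vector that the EnKF updates. A schematic construction (the linear-trend basis, field sizes, and names are assumptions for illustration, not the paper's parameterization):

```python
import numpy as np

nx, ny = 20, 20
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")

rng = np.random.default_rng(2)
trend_coeff = np.array([3.0, 1.5, -0.8])         # stochastic trend coefficients
heterogeneity = 0.3 * rng.normal(size=(nx, ny))  # fine-scale residual field

# large-scale trend (constant + linear in x and y) plus small-scale detail
log_perm = (trend_coeff[0]
            + trend_coeff[1] * x
            + trend_coeff[2] * y
            + heterogeneity)

# EnKF state vector: trend coefficients and cell values are updated jointly,
# so data can move the regional trend, not just local cell values
state = np.concatenate([trend_coeff, log_perm.ravel()])
```

Because the trend coefficients are themselves uncertain, the initial ensemble spans a wider range of large-scale behavior, which is what allowed water breakthrough to appear in some realizations.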


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 431-442 ◽  
Author(s):  
Xian-Huan Wen ◽  
Wen H. Chen

Summary The ensemble Kalman filter (EnKF) technique has been reported to be very efficient for real-time updating of reservoir models to match the most current production data. Using EnKF, an ensemble of reservoir models assimilating the most current observations of production data is always available. Thus, the estimates of reservoir model parameters and their associated uncertainty, as well as the forecasts, are always up to date. In this paper, we apply the EnKF for continuously updating an ensemble of permeability models to match real-time multiphase production data. We improve the previous EnKF by adding a confirming option (i.e., the flow equations are re-solved from the previous assimilation step to the current step using the updated current permeability models). By doing so, we ensure that the updated static and dynamic parameters are always consistent with the flow equations at the current step. However, this also creates some inconsistency between the static and dynamic parameters at the previous step, where the confirming starts. Nevertheless, we show that, with the confirming approach, the filter performs better for the particular example investigated. We also investigate the sensitivity to the number of realizations used in the EnKF. Our results show that a relatively large number of realizations is needed to obtain stable results, particularly for a reliable assessment of uncertainty. The sensitivity to different covariance functions is also investigated. The efficiency and robustness of the EnKF are demonstrated using an example. By assimilating more production data, new features of heterogeneity in the reservoir model can be revealed with reduced uncertainty, resulting in more accurate predictions of reservoir production. Introduction The reliability of reservoir models could increase as more data are included in their construction.
Traditionally, static (hard and soft) data, such as geological, geophysical, and well log/core data are incorporated into reservoir geological models through conditional geostatistical simulation (Deutsch and Journel 1998). Dynamic production data, such as historical measurements of reservoir production, account for the majority of reservoir data collected during the production phase. These data are directly related to the recovery process and to the response variables that form the basis for reservoir management decisions. Incorporation of dynamic data is typically done through a history-matching process. Traditionally, history matching adjusts model variables (such as permeability, porosity, and transmissibility) so that the flow simulation results using the adjusted parameters match the observations. It usually requires repeated flow simulations. Both manual and (semi-) automatic history-matching processes are available in the industry (Chen et al. 1974; He et al. 1996; Landa and Horne 1997; Milliken and Emanuel 1998; Vasco et al. 1998; Wen et al. 1998a, 1998b; Roggero and Hu 1998; Agarwal and Blunt 2003; Caers 2003; Cheng et al. 2004). Automatic history matching is usually formulated in the form of a minimization problem in which the mismatch between measurements and computed values is minimized (Tarantola 1987; Sun 1994). Gradient-based methods are widely employed for such minimization problems, which require the computation of sensitivity coefficients (Li et al. 2003; Wen et al. 2003; Gao and Reynolds 2006). In the recent decade, automatic history matching has been a very active research area with significant progress reported (Cheng et al. 2004; Gao and Reynolds 2006; Wen et al. 1997). However, most approaches are either limited to small and simple reservoir models or are computationally too intensive for practical applications. 
Under the framework of traditional history matching, the assessment of uncertainty is usually through a repeated history-matching process with different initial models, which makes the process even more CPU-demanding. In addition, the traditional history-matching methods are not designed in such a fashion that allows for continuous model updating. When new production data are available and are required to be incorporated, the history-matching process has to be repeated using all measured data. These limit the efficiency and applicability of the traditional automatic history-matching techniques.
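The continuous-updating property that distinguishes the EnKF from traditional history matching rests on a compact analysis step: each ensemble member is shifted toward perturbed observations through a Kalman gain built from ensemble covariances. A minimal sketch for a static parameter vector (sizes and the linear stand-in for the simulator are illustrative; a real assimilation loops this over time with flow simulations providing the predicted data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_m, n_d, n_e = 50, 5, 100              # parameters, data, ensemble members

M = rng.normal(size=(n_m, n_e))         # forecast ensemble of model parameters
H = rng.normal(size=(n_d, n_m)) / n_m   # stand-in for the simulator response
D = H @ M                               # predicted data for each member
R = 0.01 * np.eye(n_d)                  # observation-error covariance

d_obs = H @ rng.normal(size=n_m)        # synthetic "true" observations

# ensemble anomaly matrices and (cross-)covariances
Ma = M - M.mean(axis=1, keepdims=True)
Da = D - D.mean(axis=1, keepdims=True)
C_md = Ma @ Da.T / (n_e - 1)
C_dd = Da @ Da.T / (n_e - 1)

K = C_md @ np.linalg.inv(C_dd + R)      # Kalman gain
pert_obs = d_obs[:, None] + rng.multivariate_normal(
    np.zeros(n_d), R, size=n_e).T       # perturbed observations per member
M_updated = M + K @ (pert_obs - D)      # analysis ensemble
```

When new data arrive, only this step and the forward simulation over the latest interval are repeated, rather than rerunning the whole history from time zero.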


SPE Journal ◽  
2012 ◽  
Vol 17 (02) ◽  
pp. 402-417 ◽  
Author(s):  
A.A. Awotunde ◽  
R.N. Horne

Summary In history matching, one of the challenges in the use of gradient-based Newton algorithms (e.g., Gauss-Newton and Leven-berg-Marquardt) in solving the inverse problem is the huge cost associated with the computation of the sensitivity matrix. Although the Newton type of algorithm gives faster convergence than most other gradient-based inverse solution algorithms, its use is limited to small- and medium-scale problems in which the sensitivity coefficients are easily and quickly computed. Modelers often use less-efficient algorithms (e.g., conjugate-gradient and quasi-Newton) to model large-scale problems because these algorithms avoid the direct computation of sensitivity coefficients. To find a direction of descent, such algorithms often use less-precise curvature information that would be contained in the gradient of the objective function. Using a sensitivity matrix gives more-complete information about the curvature of the function; however, this comes with a significant computational cost for large-scale problems. An improved adjoint-sensitivity computation is presented for time-dependent partial-differential equations describing multiphase flow in hydrocarbon reservoirs. The method combines the wavelet parameterization of data space with adjoint-sensitivity formulation to reduce the cost of computing sensitivities. This reduction in cost is achieved by reducing the size of the linear system of equations that are typically solved to obtain the sensitivities. This cost-saving technique makes solving an inverse problem with algorithms (e.g., Levenberg-Marquardt and Gauss-Newton) viable for large multiphase-flow history-matching problems. The effectiveness of this approach is demonstrated for two numerical examples involving multiphase flow in a reservoir with several production and injection wells.
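The cost saving described above comes from solving the adjoint system for a few wavelet coefficients of the data rather than for every data point. A one-level Haar transform of a well-rate residual illustrates the data-space reduction (the thresholding rule and names are illustrative; the paper develops the full adjoint formulation in this reduced space):

```python
import numpy as np

def haar_level(signal):
    """One level of the orthonormal Haar transform: averages and details."""
    s = signal.reshape(-1, 2)
    avg = (s[:, 0] + s[:, 1]) / np.sqrt(2.0)
    det = (s[:, 0] - s[:, 1]) / np.sqrt(2.0)
    return avg, det

t = np.linspace(0.0, 1.0, 64)
rate = np.where(t < 0.5, 100.0, 60.0) + 2.0 * np.sin(20 * t)  # well-rate data

avg, det = haar_level(rate)
coeffs = np.concatenate([avg, det])

# keep only the largest-magnitude coefficients: the adjoint equations are
# then driven by the retained coefficients instead of every data point
keep = np.argsort(np.abs(coeffs))[::-1][:8]
print(len(rate), "data points ->", len(keep), "retained wavelet coefficients")
```

Because the transform is orthonormal, the retained coefficients carry almost all of the residual's energy, so the reduced linear systems lose little sensitivity information.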

