Stochastic optimization strategies applied to the OLYMPUS benchmark

2019 ◽  
Vol 24 (6) ◽  
pp. 1943-1958 ◽  
Author(s):  
V. L. S. Silva ◽  
M. A. Cardoso ◽  
D. F. B. Oliveira ◽  
R. J. de Moraes

Abstract In this work, we discuss the application of stochastic optimization approaches to the OLYMPUS case, a benchmark challenge that seeks to evaluate different techniques applied to well control and field development optimization. To that end, three exercises have been proposed, namely, (i) well control optimization; (ii) field development optimization; and (iii) joint optimization. All applications were performed on the so-called OLYMPUS case, a synthetic reservoir model with geological uncertainty provided by TNO (Fonseca 2018). Firstly, in the well control exercise, we successfully applied an ensemble-based approximate gradient method in a robust optimization formulation. Secondly, we solved the field development exercise using a genetic algorithm framework designed with special features for the problem of interest. Finally, in order to evaluate further gains, a sequential optimization approach was employed, in which we ran one more well control optimization based on the optimal well locations. Even though we utilize relatively well-known techniques in our studies, we describe the necessary adaptations to the algorithms that enable their successful application to real-life scenarios. Significant gains in the expected net present value are obtained: in exercise (i), a gain of 7% with respect to reactive control; in exercise (ii), a gain of 660% with respect to an initial well placement based on an engineering approach; and in (iii), an extra gain of 3% due to an additional well control optimization after the well placement optimization. All these gains are obtained at an affordable computational cost via the extensive utilization of high-performance computing (HPC) infrastructure. We also apply a scenario reduction technique to exercise (i), obtaining gains similar to those of the full-ensemble optimization at a substantially lower computational cost. In conclusion, we demonstrate how the state-of-the-art optimization technology available in the model-based reservoir management literature can be successfully applied to field development optimization via the conscious utilization of HPC facilities.
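The robust formulation in exercise (i) maximizes the expected NPV over the ensemble of geological realizations using an approximate gradient built from stochastic control perturbations. The following is a minimal Python sketch of that idea, assuming a black-box `npv(controls, realization)` simulator wrapper; the perturbation size, step size, and pairing scheme are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def ensemble_gradient(u, realizations, npv, n_perturb=10, sigma=0.1, rng=None):
    """Ensemble-based approximate gradient of the expected NPV
    (StoSAG-flavored): each control perturbation is paired with a
    randomly drawn geological realization."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(u)
    for _ in range(n_perturb):
        du = sigma * rng.standard_normal(u.shape)
        m = realizations[rng.integers(len(realizations))]
        grad += (npv(u + du, m) - npv(u, m)) * du / sigma**2
    return grad / n_perturb

def robust_ascent(u0, realizations, npv, steps=20, lr=0.05):
    """Projected gradient ascent on normalized controls in [0, 1]."""
    u = u0.copy()
    for _ in range(steps):
        u = np.clip(u + lr * ensemble_gradient(u, realizations, npv), 0.0, 1.0)
    return u
```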

SPE Journal ◽  
2019 ◽  
Vol 24 (03) ◽  
pp. 912-950
Author(s):  
Abeeb A. Awotunde

Summary This paper evaluates the effectiveness of six dimension-reduction approaches: the constant-control (Const) approach, the piecewise-constant (PWC) approach, the trigonometric approach, the Bessel-function (Bess) approach, the polynomial approach, and the data-decomposition approach. The approaches differ in their mode of operation, but they all reduce the number of parameters required in well-control optimization problems. Results show that the PWC approach performs better than the other approaches on many problems, but yields widely fluctuating well controls over the field-development time frame. The trigonometric approach performs well on all the problems and yields controls that vary smoothly over time.
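To make the parameterizations concrete, here is a minimal sketch of two of the six approaches, piecewise-constant and trigonometric, each expanding a short parameter vector into a full schedule of well controls; the exact basis functions and scaling used in the paper may differ.

```python
import numpy as np

def piecewise_constant(params, n_steps):
    """PWC: hold each of the k parameter values over an equal share
    of the n_steps control steps."""
    k = len(params)
    idx = np.minimum((np.arange(n_steps) * k) // n_steps, k - 1)
    return np.asarray(params, dtype=float)[idx]

def trigonometric(params, n_steps):
    """Truncated trigonometric series a0 + sum_j (a_j cos + b_j sin),
    giving controls that vary smoothly over time."""
    t = np.linspace(0.0, 1.0, n_steps)
    a0, rest = params[0], np.asarray(params[1:], dtype=float)
    u = np.full(n_steps, a0, dtype=float)
    for j, (aj, bj) in enumerate(zip(rest[::2], rest[1::2]), start=1):
        u += aj * np.cos(2 * np.pi * j * t) + bj * np.sin(2 * np.pi * j * t)
    return u

# Three optimization variables describe a 120-step control profile:
print(piecewise_constant([0.2, 0.8, 0.5], 120))
print(trigonometric([0.5, 0.1, -0.05], 120))
```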


Author(s):  
Cuthbert Shang Wui Ng ◽  
Ashkan Jahanbani Ghahfarokhi ◽  
Menad Nait Amar

Abstract With the aid of a machine learning method, namely artificial neural networks, we established data-driven proxy models that can be utilized to maximize the net present value of a waterflooding process by adjusting the well control injection rates over a production period. These data-driven proxies were applied to two different case studies: a synthetic 2D reservoir model and a 3D reservoir model (the Egg Model). Regarding the algorithms, we applied two different nature-inspired metaheuristic algorithms, i.e., particle swarm optimization and grey wolf optimization, to perform the optimization task. Pertaining to the development of the proxy models, we demonstrated that the training and blind-validation results were excellent (with a coefficient of determination, R², of about 0.99). For both case studies and both optimization algorithms, the optimization results obtained using the proxy models were all within 5% error (a satisfactory level of accuracy) compared with the reservoir simulator. These results confirm the usefulness of the methodology in developing the proxy models. Moreover, the computational cost of optimization was significantly reduced by using the proxies. This further highlights the significant benefits of employing proxy models for practical use, despite their being subject to a few constraints.
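As a rough illustration of how such a metaheuristic drives the proxy, the loop below is a minimal particle swarm optimization maximizing a black-box objective `f`, which in this setting would be the trained ANN proxy's NPV prediction over normalized injection rates; the inertia and acceleration coefficients are common textbook defaults, not the study's settings.

```python
import numpy as np

def pso_maximize(f, dim, n_particles=30, iters=100, lo=0.0, hi=1.0, seed=1):
    """Minimal PSO: track personal bests and a global best, update
    velocities with inertia plus cognitive/social pulls."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val > pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()
```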


SPE Journal ◽  
2021 ◽  
pp. 1-20
Author(s):  
Z. Wang ◽  
J. He ◽  
W. J. Milliken ◽  
X. -H. Wen

Summary Full-physics models in history matching (HM) and optimization can be computationally expensive because these problems usually require hundreds of simulations or more. In a previous study, a physics-based data-driven network model was implemented with a commercial simulator that served as a surrogate without the need to build a 3D geological model. In this paper, the network model is reconstructed to account for the complex reservoir conditions of mature fields and successfully applied to a diatomite reservoir in the San Joaquin Valley, California, for rapid HM and optimization. The reservoir is simplified into a network of 1D connections between well perforations. These connections are discretized into gridblocks, and the grid properties are calibrated to historical production data. Elevation change, saturation distribution, capillary pressure, and relative permeability are accounted for to best represent the mature field conditions. To simulate this physics-based network model through a commercial simulator, an equivalent Cartesian model is designed whose rows correspond to the previously mentioned connections. Thereafter, the HM can be performed with the ensemble smoother with multiple data assimilation (ESMDA) algorithm under a sequential iterative process. A representative model after HM is then used for well control optimization. The network model methodology has been successfully applied to waterflood optimization for a 56-well sector model of a diatomite reservoir in the San Joaquin Valley. HM results show that the network model matches the field-level production history and gives reasonable matches for most of the wells, including pressure and volumetric data. The calibrated posterior ensemble of the HM yields a satisfactory production prediction that is verified by the remaining historical data. For well control optimization, the P50 model is selected to maximize the net present value (NPV) over 5 years under the provided well/field constraints. This confirms that the calibrated network model is accurate enough for production forecasts and optimization. The use of a commercial simulator in the network model provides flexibility to account for complex physics, such as elevation differences between wells, saturation nonequilibrium, and strong capillary pressure. Unlike the traditional big-loop workflow that relies on a detailed characterization of geological models, the proposed network model only requires production data and can be built and updated rapidly. The model also runs much faster (tens of seconds) than a full-physics model because it uses far fewer gridblocks. To our knowledge, this is the first time this physics-based data-driven network model has been applied with a commercial simulator on a field waterflood case. Unlike approaches developed with analytic solutions, the use of a commercial simulator makes it feasible to extend the model to complex processes (e.g., thermal or compositional flow). It serves as a useful surrogate model for both fast and reliable decision-making in reservoir management.
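The ESMDA update itself is compact. Below is a generic, textbook-style sketch of a single assimilation step (not the authors' code): `M` is the parameter ensemble, `D` the corresponding simulated data, `Ce` the observation-error covariance, and `alpha` the inflation factor; over Na steps the reciprocals of the alphas must sum to one (a common choice is alpha = Na for every step).

```python
import numpy as np

def esmda_step(M, D, d_obs, Ce, alpha, rng=None):
    """One ESMDA assimilation: Kalman-style update of the parameter
    ensemble M (n_params x n_ens) given simulated data D (n_data x n_ens)."""
    rng = rng or np.random.default_rng(2)
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (n_ens - 1)      # cross-covariance params vs. data
    Cdd = dD @ dD.T / (n_ens - 1)      # data auto-covariance
    # Observations perturbed with inflated noise, one draw per member
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * Ce, size=n_ens).T
    K = Cmd @ np.linalg.pinv(Cdd + alpha * Ce)
    return M + K @ (d_obs[:, None] + E - D)
```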


2020 ◽  
Vol 12 (12) ◽  
pp. 4932
Author(s):  
Baburam Rijal ◽  
Luc LeBel ◽  
Shuva H. Gautam ◽  
Pierre Cantegril

Strategic, tactical, and operational-level forest management plans are commonly formulated by forest planners following even-flow yield principles. Although strategic planning ensures a sustained supply of timber over the long term, it disregards individual mills' requirements, which leads to discrepancies between supply and demand. We hypothesize that a value-based timber allocation decision, which accounts for individual mills' demands during tactical-level planning, reduces such discrepancies by increasing value over the entire supply chain. Three types of linear programming models were constructed: Model A, the status quo volume-maximization model; Model B, a supply chain net present value (NPV) maximization model; and Model C, a novel approach with embedded sub-models that maximize the NPV of individual mills in the allocation decision. Our results showed that with Model A only 58% of the annual allowable cut was profitable and the mean net revenue per harvested area was $2455 ha⁻¹. The corresponding values were 64% and $3890 ha⁻¹ with Model B, and 96% and $4040 ha⁻¹ with Model C, showing that Model C generated the highest net revenue for all mills. Such a method of value-based sequential optimization (Model C) will be crucial to the sustainable use of forest products and to sustaining the future bioeconomy, particularly for managing mixed-species stands that contain timber suitable for manufacturing a wide range of products with different market values.
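As a toy illustration of the allocation structure behind these models, the linear program below assigns timber from harvest blocks to mills so as to maximize net revenue subject to block supply and mill demand; all figures are hypothetical, and the paper's models carry many more constraints (even-flow, annual allowable cut, species mixes).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical net revenue ($/m3) of sending timber from 3 blocks to 2 mills
revenue = np.array([[42.0, 38.0],
                    [35.0, 44.0],
                    [30.0, 31.0]])
supply = np.array([5000.0, 4000.0, 6000.0])   # m3 available per block
demand = np.array([7000.0, 6000.0])           # m3 required per mill

c = -revenue.ravel()                  # linprog minimizes, so negate revenue
A_ub = np.zeros((3, 6)); b_ub = supply
for i in range(3):                    # each block's shipments <= its supply
    A_ub[i, i*2:(i+1)*2] = 1.0
A_eq = np.zeros((2, 6))               # each mill receives exactly its demand
for j in range(2):
    A_eq[j, j::2] = 1.0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(3, 2), -res.fun)  # allocation matrix and total revenue
```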


SPE Journal ◽  
2020 ◽  
Vol 25 (03) ◽  
pp. 1285-1306
Author(s):  
Ranran Lu ◽  
A. C. Reynolds

Summary Much work has been performed on the optimal well placement/control problem, including some investigations on optimizing well types (injector or producer) and/or drilling order. However, to the best of our knowledge, only a handful of papers deal with the following problem that is sometimes given to reservoir-engineering groups: given a potential set of reasonable drilling paths and a drilling budget sufficient to drill only a few wells, find the optimal well paths, determine whether each well should be an injector or a producer, and determine the drilling order that maximizes the net present value (NPV) of production over the life of the reservoir. In this work, the optimal choices of drilling paths, well types, and drilling order are found using the genetic algorithm (GA) with mixed encodings. A binary encoding for the optimization variables pertaining to well-location indices and well types is proposed to effectively handle a large number of categorical variables, while the drilling sequence is parameterized with ordinal numbers. These two sets of variables are optimized both simultaneously and sequentially. Finally, control optimization using a stochastic simplex approximate gradient (StoSAG) is performed to further improve the NPV of life-cycle production. The proposed workflow is tested on two examples: a 3D channelized reservoir where the potential well paths are either vertical or horizontal, and the Brugge model where only vertical wells are drilled. Both numerical examples indicate that GA combined with StoSAG is a viable solution to the problem considered.
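A minimal sketch of such a mixed encoding is shown below: binary bits encode each well's path index and type, while a separate ordinal permutation encodes the drilling sequence. The chromosome layout (4 bits per path plus 1 type bit) and problem sizes are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(3)
N_WELLS, N_PATHS, BITS = 4, 16, 4     # hypothetical: 16 candidate paths

def random_individual():
    """Mixed chromosome: binary part for paths/types, ordinal part for order."""
    bits = rng.integers(0, 2, size=N_WELLS * (BITS + 1))
    order = rng.permutation(N_WELLS)   # drilling sequence as ordinal numbers
    return bits, order

def decode(bits):
    wells = []
    for w in range(N_WELLS):
        chunk = bits[w*(BITS+1):(w+1)*(BITS+1)]
        path = int("".join(map(str, chunk[:BITS])), 2) % N_PATHS
        wtype = "injector" if chunk[BITS] else "producer"
        wells.append((path, wtype))
    return wells

bits, order = random_individual()
print(decode(bits), order)   # e.g., [(9, 'producer'), ...] [2 0 3 1]
```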


SPE Journal ◽  
2013 ◽  
Vol 18 (06) ◽  
pp. 1003-1011 ◽  
Author(s):  
Z. Bouzarkouna ◽  
D. Y. Ding ◽  
A. Auger

Summary The net present value (NPV) of a project can be significantly increased by finding the optimal locations of nonconventional wells. This optimization problem is nowadays one of the most challenging problems in oil- and gas-field development. Suitable methods to tackle this problem include stochastic optimization algorithms, which are particularly robust and able to deal with complex reservoir geology with high heterogeneities. However, these methods in general require considerable computational effort in terms of the number of reservoir simulations, which are CPU-time-demanding. This paper presents the use of the CMA-ES (covariance matrix adaptation evolution strategy) optimizer, recognized as one of the most powerful derivative-free optimizers, to optimize well locations and trajectories. A local-regression-based metamodel is incorporated into the optimization process in order to reduce the computational cost. The objective function (e.g., the NPV) can usually be split into local components, each referring to one of the wells and, in general, depending on a smaller number of principal parameters, and can thus be modeled as a partially separable function. In this paper, we propose to exploit the partial separability of the objective function within CMA-ES coupled with metamodels by building partially separated metamodels. Thus, different metamodels are built for each well or set of wells, which results in more accurate modeling. An example is presented. Results show that taking advantage of the partial separability of the objective function leads to a significant decrease in the number of reservoir simulations needed to find the "optimal" well configuration, given a restricted budget of reservoir simulations. The proposed approach is practical and promising for the placement of a large number of wells.
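The ask-and-tell loop below sketches how a partially separable objective, a sum of per-well local components, might be fed to CMA-ES via the open-source `cma` package; the quadratic `local_npv` is only a stand-in for the local-regression metamodels described in the paper.

```python
import numpy as np
import cma  # pip install cma

N_WELLS, DIM = 3, 2   # hypothetical: 3 wells, an (x, y) location for each

def local_npv(well, k):
    """Placeholder for the local metamodel of well k's NPV contribution."""
    target = np.array([0.3 * (k + 1), 0.5])
    return -np.sum((well - target) ** 2)

def total_npv(x):
    # Partially separable objective: a sum of per-well local components
    return sum(local_npv(w, k) for k, w in enumerate(x.reshape(N_WELLS, DIM)))

es = cma.CMAEvolutionStrategy(np.full(N_WELLS * DIM, 0.5), 0.2,
                              {'maxfevals': 2000, 'verbose': -9})
while not es.stop():
    xs = es.ask()
    es.tell(xs, [-total_npv(x) for x in xs])   # CMA-ES minimizes
print(es.result.xbest.reshape(N_WELLS, DIM))
```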


Author(s):  
Simone Giorgetti ◽  
Diederik Coppitters ◽  
Francesco Contino ◽  
Ward De Paepe ◽  
Laurent Bricteux ◽  
...  

Abstract The growing share of wind and solar power in the total energy mix has caused severe problems in balancing electrical power production. Consequently, in the future, all fossil fuel-based electricity generation will need to be run on a completely flexible basis. Micro gas turbines (mGTs) constitute a mature technology that can offer such flexibility. Even though their greenhouse gas emissions are already very low, stringent carbon reduction targets will require them to be completely carbon neutral: this constraint implies the adoption of postcombustion carbon capture (CC) in these energy systems. Despite this attractive solution, an in-depth study along with a robust optimization of this system has not yet been carried out. Hence, in this paper, a typical mGT with exhaust gas recirculation has been coupled with an amine-based CC plant and simulated using the software Aspen Plus. A rigorous rate-based simulation of the CO2 absorption and desorption in the CC unit offers an accurate prediction; however, its time complexity and convergence difficulty are severe limitations for a stochastic optimization. Therefore, a surrogate-based optimization approach has been used, which makes use of a Gaussian process regression (GPR) model, trained on the Aspen Plus data, to quickly find operating points of the plant at a very low computational cost. Using the validated surrogate model, a stochastic optimization has been carried out. As a general result, the analyzed power plant proves to be intrinsically very robust, even when the input variables are affected by strong uncertainties.
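A minimal sketch of the surrogate step follows, using scikit-learn's Gaussian process regression in place of the authors' implementation; the inputs, outputs, and kernel are hypothetical stand-ins for the Aspen Plus training samples.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training set: two plant operating inputs -> one output
rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(60, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(60)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The surrogate replaces the rate-based simulation inside the optimizer;
# the predictive std supports uncertainty-aware (robust) optimization.
mean, std = gpr.predict(np.array([[0.4, 0.7]]), return_std=True)
print(mean, std)
```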


2021 ◽  
Author(s):  
Omid Seyedashraf ◽  
Andrea Bottacin-Busolin ◽  
Julien J. Harou

The design of conventional and sustainable urban drainage systems is a complex task that requires consideration of several design objectives and decision variables. Simulation-based optimization models allow planners to explore the decision space and identify design options that best meet the design criteria. However, existing approaches generally require simulation of the system hydraulics for each function evaluation, which leads to prohibitive computational cost when applied to large drainage networks.

In this work, a disaggregation-emulation approach is proposed that allows sequential optimization of multiple sub-catchments in an urban area without having to simulate the full system dynamics. This is achieved by using artificial neural networks (ANNs) to represent the boundary condition at the interface between neighboring sub-catchments. The approach is demonstrated with an application to a many-objective optimization problem in which sustainable drainage systems are used to expand the capacity of an existing drainage network. The evaluation of the objective function using the emulation model is found to be 22 times faster than using the physically based model, resulting in a significant speed-up of the optimization process. Unlike previously proposed optimization approaches that rely on surrogate models to emulate the objective functions, the proposed approach remains physically based for the individual sub-catchments, thus reducing the chance of bias in the optimization results.
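The key idea, an ANN standing in for the hydraulic boundary condition at a sub-catchment interface, can be sketched as follows; the feature set, network size, and synthetic training data are illustrative assumptions, not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: upstream hydrograph features -> interface water level
rng = np.random.default_rng(5)
X = rng.uniform(size=(500, 3))   # e.g., rainfall depth, peak flow, volume
y = 0.8 * X[:, 1] + 0.3 * X[:, 2] + 0.02 * rng.standard_normal(500)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)

def boundary_head(features):
    """Emulated boundary condition, used instead of simulating the full
    network hydraulics for each sub-catchment evaluation."""
    return ann.predict(np.atleast_2d(features))[0]

print(boundary_head([0.5, 0.6, 0.4]))
```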


2021 ◽  
Author(s):  
Zhenzhen Wang ◽  
Jincong He ◽  
William J. Milliken ◽  
Xian-Huan Wen

Abstract Full-physics models in history matching and optimization can be computationally expensive, since these problems usually require hundreds of simulations or more. We have previously implemented a physics-based data-driven network model with a commercial simulator that serves as a surrogate without the need to build a 3-D geological model. In this paper, we reconstruct the network model to account for the complex reservoir conditions of mature fields and successfully apply it to a diatomite reservoir in the San Joaquin Valley (SJV) for rapid history matching and optimization. The reservoir is simplified into a network of 1-D connections between well perforations. These connections are discretized into grid blocks, and the grid properties are calibrated to historical production data. Elevation change, saturation distribution, capillary pressure, and relative permeability are accounted for to best represent the mature field conditions. To simulate this physics-based network model through a commercial simulator, an equivalent 2-D Cartesian model is designed where rows correspond to the above-mentioned connections. Thereafter, the history matching can be performed with the Ensemble Smoother with Multiple Data Assimilation (ESMDA) algorithm under a sequential iterative process. A representative model after history matching is then employed for well control optimization. The network model methodology has been successfully applied to the waterflood optimization of a 56-well sector model of a diatomite reservoir in the SJV. The history matching result shows that the network model honors the field-level production history and gives reasonable matches for most of the wells, including pressure and flow rate. The calibrated ensemble from the last iteration of history matching yields a satisfactory production prediction, which is verified by the remaining historical data. For well control optimization, we select the P50 model to maximize the Net Present Value (NPV) over 5 years under the provided well/field constraints. This confirms that the calibrated network model is accurate enough for production forecasts and optimization. The use of a commercial simulator in the network model provides flexibility to account for complex physics, such as elevation differences between wells, saturation non-equilibrium, and strong capillary pressure. Unlike the traditional big-loop workflow that relies on a detailed characterization of geological models, the proposed network model only requires production data and can be built and updated rapidly. The model also runs much faster (tens of seconds) than a full-physics model because it employs far fewer grid blocks. To our knowledge, this is the first time this physics-based data-driven network model has been applied with a commercial simulator on a field waterflood case. Unlike approaches developed with analytic solutions, the use of a commercial simulator makes it feasible to extend the model to complex processes, e.g., thermal or compositional flow. It serves as a useful surrogate model for both fast and reliable decision-making in reservoir management.
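For reference, the NPV objective maximized in such well control problems typically discounts oil revenue net of water injection and production costs over the control schedule; the sketch below uses hypothetical prices and a generic discounting form, not the authors' economic parameters.

```python
import numpy as np

def npv(q_oil, q_winj, q_wprod, dt_days, oil_price=70.0,
        c_winj=5.0, c_wprod=8.0, rate=0.1):
    """Discounted NPV of a schedule: q_* are field rates (bbl/d) per control
    step, dt_days the step lengths, prices/costs in $/bbl (hypothetical)."""
    t = np.cumsum(dt_days) / 365.0                    # step end times, years
    cash = (oil_price * q_oil - c_winj * q_winj - c_wprod * q_wprod) * dt_days
    return np.sum(cash / (1.0 + rate) ** t)

# A 4-step, ~4-year schedule with declining oil and rising water production
print(npv(np.array([900., 800., 700., 600.]), np.array([1000.0] * 4),
          np.array([200., 300., 400., 500.]), np.array([365.0] * 4)))
```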


Author(s):  
Yudong Qiu ◽  
Daniel Smith ◽  
Chaya Stern ◽  
Mudong Feng ◽  
Lee-Ping Wang

The parameterization of torsional/dihedral angle potential energy terms is a crucial part of developing molecular mechanics force fields. Quantum mechanical (QM) methods are often used to provide samples of the potential energy surface (PES) for fitting the empirical parameters in these force field terms. To ensure that the sampled molecular configurations are thermodynamically feasible, constrained QM geometry optimizations are typically carried out, which relax the orthogonal degrees of freedom while fixing the target torsion angle(s) on a grid of values. However, the quality of results and computational cost are affected by various factors on a non-trivial PES, such as dependence on the chosen scan direction and the lack of efficient approaches to integrate results started from multiple initial guesses. In this paper we propose a systematic and versatile workflow called TorsionDrive to generate energy-minimized structures on a grid of torsion constraints by means of a recursive wavefront propagation algorithm, which resolves the deficiencies of conventional scanning approaches and generates higher-quality QM data for force field development. The capabilities of our method are presented for multi-dimensional scans and multiple initial guess structures, and an integration with the MolSSI QCArchive distributed computing ecosystem is described. The method is implemented in an open-source software package that is compatible with many QM software packages and energy minimization codes.
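A much-simplified, one-dimensional sketch of the wavefront idea follows: each grid point is relaxed under a torsion constraint, and any newly found lower-energy geometry is propagated to its neighbors until no point improves. The real TorsionDrive handles multi-dimensional grids, periodic wraparound, and multiple initial structures; here `minimize_at(angle, geom)` is a placeholder for a constrained QM optimization returning an energy and a relaxed geometry.

```python
def torsion_scan(grid, minimize_at, seeds, tol=1e-10):
    """Recursive wavefront propagation over a 1-D grid of torsion angles.
    grid: sorted list of angles; seeds: dict angle -> starting geometry."""
    best = {}                      # angle -> (energy, geometry)
    frontier = dict(seeds)
    while frontier:
        new_frontier = {}
        for angle, start_geom in frontier.items():
            energy, geom = minimize_at(angle, start_geom)
            if angle not in best or energy < best[angle][0] - tol:
                best[angle] = (energy, geom)
                i = grid.index(angle)       # propagate to grid neighbors
                for j in (i - 1, i + 1):
                    if 0 <= j < len(grid):  # no wraparound in this sketch
                        new_frontier[grid[j]] = geom
        frontier = new_frontier
    return best
```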

