A Comprehensive Evaluation of Dimension-Reduction Approaches in Optimization of Well Rates

SPE Journal ◽  
2019 ◽  
Vol 24 (03) ◽  
pp. 912-950
Author(s):  
Abeeb A. Awotunde

Summary This paper evaluates the effectiveness of six dimension-reduction approaches. The approaches considered are the constant-control (Const) approach, the piecewise-constant (PWC) approach, the trigonometric approach, the Bessel-function (Bess) approach, the polynomial approach, and the data-decomposition approach. The approaches differ in their mode of operation, but they all reduce the number of parameters required in well-control optimization problems. Results show that the PWC approach performs better than the other approaches on many problems but yields widely fluctuating well controls over the field-development time frame. The trigonometric approach performs well on all the problems and yields controls that vary smoothly over time.
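The contrast between the two best-performing parameterizations is easy to picture with a small sketch (a minimal numpy illustration with made-up parameter values and function names, not the paper's implementation): both expansions map four parameters onto a 100-step control schedule, but the piecewise-constant expansion produces step changes while the trigonometric expansion varies smoothly in time.

```python
import numpy as np

def pwc_controls(params, n_steps):
    """Piecewise-constant (PWC): each parameter holds over one block of timesteps."""
    blocks = np.array_split(np.arange(n_steps), len(params))
    u = np.empty(n_steps)
    for p, idx in zip(params, blocks):
        u[idx] = p
    return u

def trig_controls(params, n_steps):
    """Trigonometric: controls as a truncated sine series -> smooth over time."""
    t = np.linspace(0.0, 1.0, n_steps)
    u = np.full(n_steps, params[0], dtype=float)
    for k, a in enumerate(params[1:], start=1):
        u += a * np.sin(np.pi * k * t)
    return u

# Both map 4 parameters onto 100 timestep controls (a 25x reduction).
theta = [250.0, 40.0, -15.0, 5.0]
u_pwc = pwc_controls(theta, 100)
u_trig = trig_controls(theta, 100)
```

In either case the optimizer searches over `theta` (4 variables) instead of the full 100-step schedule, which is the dimension reduction the paper evaluates.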

SPE Journal ◽  
2021 ◽  
pp. 1-21
Author(s):  
Yong Do Kim ◽  
Louis J. Durlofsky

Summary In well-control optimization problems, the goal is to determine the time-varying well settings that maximize an objective function, which is often the net present value (NPV). Various proxy models have been developed to predict NPV for a set of inputs such as time-varying well bottomhole pressures (BHPs). However, when nonlinear output constraints (e.g., maximum well/field water production rate or minimum well/field oil rate) are specified, the problem is more challenging because well rates as a function of time are required. In this work, we develop a recurrent neural network (RNN)–based proxy model to treat constrained production optimization problems. The network developed here accepts sequences of BHPs as inputs and predicts sequences of oil and water rates for each well. A long short-term memory (LSTM) cell, which is capable of learning long-term dependencies, is used. The RNN is trained using well-rate results from 256 full-order simulation runs that involve different injection and production-well BHP schedules. After detailed validation against full-order simulation results, the RNN-based proxy is used for 2D and 3D production optimization problems. Optimizations are performed using a particle swarm optimization (PSO) algorithm with a filter-based nonlinear-constraint treatment. The trained proxy is extremely fast, although optimizations that apply the RNN-based proxy at all iterations are found to be suboptimal relative to full simulation-based (standard) optimization. Through use of a few additional simulation-based PSO iterations after proxy-based optimization, we achieve NPVs comparable with those from simulation-based optimization but with speedups of a factor of 10 or more (relative to performing five simulation-based optimization runs). It is important to note that because the RNN-based proxy provides full well-rate time sequences, optimization constraint types or limits, as well as economic parameters, can be varied without retraining.
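The sequence-to-sequence shape of such a proxy can be sketched with a single numpy LSTM cell unrolled over a BHP sequence (random, untrained weights; the dimensions, gate ordering, and names below are illustrative assumptions, not the authors' trained network):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_proxy(bhp_seq, W, U, b, Wout, bout):
    """Minimal LSTM cell unrolled over a BHP sequence; emits one rate vector
    per timestep. W: (4H, D) input weights, U: (4H, H) recurrent weights,
    b: (4H,) biases; gate order assumed [input, forget, cell, output]."""
    H = U.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    rates = []
    for x in bhp_seq:                          # one timestep of BHP inputs
        z = W @ x + U @ h + b
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # long-term memory
        h = sigmoid(o) * np.tanh(c)                    # hidden state
        rates.append(Wout @ h + bout)          # per-step well-rate prediction
    return np.array(rates)

rng = np.random.default_rng(0)
D, H, n_wells, T = 3, 8, 2, 10                 # 3 BHP inputs, 2 output rates
out = lstm_proxy(rng.normal(size=(T, D)),
                 rng.normal(size=(4*H, D)) * 0.1,
                 rng.normal(size=(4*H, H)) * 0.1,
                 np.zeros(4*H),
                 rng.normal(size=(n_wells, H)), np.zeros(n_wells))
```

Because the output is a full rate sequence rather than a scalar NPV, rate constraints and economic parameters can be evaluated downstream of the network, which is why they can be changed without retraining.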


2019 ◽  
Vol 24 (6) ◽  
pp. 1943-1958 ◽  
Author(s):  
V. L. S. Silva ◽  
M. A. Cardoso ◽  
D. F. B. Oliveira ◽  
R. J. de Moraes

Abstract In this work, we discuss the application of stochastic optimization approaches to the OLYMPUS case, a benchmark challenge which seeks the evaluation of different techniques applied to well control and field development optimization. To that end, three exercises have been proposed, namely, (i) well control optimization; (ii) field development optimization; and (iii) joint optimization. All applications were performed on the so-called OLYMPUS case, a synthetic reservoir model with geological uncertainty provided by TNO (Fonseca 2018). Firstly, in the well control exercise, we successfully applied an ensemble-based approximate gradient method in a robust optimization formulation. Secondly, we solved the field development exercise using a genetic algorithm framework designed with special features for the problem of interest. Finally, in order to evaluate further gains, a sequential optimization approach was employed, in which we ran one more well control optimization based on the optimal well locations. Even though we utilize relatively well-known techniques in our studies, we describe the necessary adaptations to the algorithms that enable their successful application to real-life scenarios. Significant gains in the expected net present value were obtained: in exercise (i), a gain of 7% with respect to reactive control; in exercise (ii), a gain of 660% with respect to an initial well placement based on an engineering approach; and in (iii), an extra gain of 3% due to an additional well control optimization after the well placement optimization. All these gains are obtained at an affordable computational cost via the extensive utilization of high-performance computing (HPC) infrastructure. We also apply a scenario-reduction technique to exercise (i), obtaining gains similar to those of the full-ensemble optimization at a substantially lower computational cost.
In conclusion, we demonstrate how the state-of-the-art optimization technology available in the model-based reservoir management literature can be successfully applied to field development optimization via the conscious utilization of HPC facilities.
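An ensemble-based approximate gradient of the kind used in exercise (i) can be sketched as follows (a minimal numpy illustration on a toy quadratic objective; the perturb-and-regress scheme shown is a generic StoSAG-style estimate, not the authors' exact formulation):

```python
import numpy as np

def ensemble_gradient(J, u, n_ens=50, sigma=0.1, seed=0):
    """Ensemble-based approximate gradient: perturb the control vector u with
    a Gaussian ensemble and regress the objective changes on the perturbations."""
    rng = np.random.default_rng(seed)
    dU = rng.normal(scale=sigma, size=(n_ens, u.size))   # control perturbations
    dJ = np.array([J(u + d) - J(u) for d in dU])         # objective responses
    g, *_ = np.linalg.lstsq(dU, dJ, rcond=None)          # least-squares gradient
    return g

# Toy quadratic objective with known gradient 2*(u_star - u) at u
u_star = np.array([1.0, -2.0, 0.5])
J = lambda u: -np.sum((u - u_star) ** 2)
g = ensemble_gradient(J, np.zeros(3))
```

In the robust setting, each ensemble member would be evaluated on a different geological realization, so the same perturbation runs estimate the gradient of the expected NPV.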


1986 ◽  
Vol 14 (4) ◽  
pp. 235-263
Author(s):  
A. G. Veith

Abstract The effect of tread compound variation on tire treadwear was studied using bias and radial tires of two aspect ratios. Compound variations included the types of rubber and carbon black as well as the levels of carbon black, process oil, and curatives. At low to moderate test severity, SBR and an SBR/BR blend performed better than NR, while at high test severity NR and SBR were better than the SBR/BR blend; the SBR/BR blend was the best in low-severity testing. Higher-structure and higher-surface-area carbon black gave improved treadwear at all severity levels. The concept of a “frictional work intensity” as the primary determinant of treadwear-index variation with test severity is proposed. Some factors which influence frictional work intensity are discussed.


OR Spectrum ◽  
2021 ◽  
Author(s):  
Adejuyigbe O. Fajemisin ◽  
Laura Climent ◽  
Steven D. Prestwich

Abstract This paper presents a new class of multiple-follower bilevel problems and a heuristic approach to solving them. In this new class of problems, the followers may be nonlinear, do not share constraints or variables, and are at most weakly constrained. This allows the leader variables to be partitioned among the followers. We show that current approaches for solving multiple-follower problems are unsuitable for our new class of problems and instead we propose a novel analytics-based heuristic decomposition approach. This approach uses Monte Carlo simulation and k-medoids clustering to reduce the bilevel problem to a single level, which can then be solved using integer programming techniques. The examples presented show that our approach produces better solutions and scales up better than the other approaches in the literature. Furthermore, for large problems, we combine our approach with the use of self-organising maps in place of k-medoids clustering, which significantly reduces the clustering times. Finally, we apply our approach to a real-life cutting stock problem. Here a forest harvesting problem is reformulated as a multiple-follower bilevel problem and solved using our approach.
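The reduction step, collapsing Monte Carlo follower samples to a few representative scenarios with k-medoids, can be sketched as follows (a self-contained numpy illustration on synthetic two-cluster data; the farthest-point initialization is a simplification, not the paper's exact procedure):

```python
import numpy as np

def k_medoids(X, k, n_iter=20):
    """Plain k-medoids: farthest-point initialization, then alternate
    nearest-medoid assignment and per-cluster medoid update on a
    precomputed distance matrix."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    medoids = [0]
    while len(medoids) < k:                  # farthest-point initialization
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        # new medoid of each cluster: the member with smallest total
        # within-cluster distance
        new = np.array([np.flatnonzero(labels == j)[
            np.argmin(D[np.ix_(labels == j, labels == j)].sum(axis=0))]
            for j in range(k)])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Two well-separated clouds standing in for Monte Carlo follower samples
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(5.0, 0.3, (30, 2))])
medoids, labels = k_medoids(X, 2)
```

Because medoids are actual sample points (unlike k-means centroids), each cluster representative is a genuine follower scenario that can be substituted directly into the single-level problem.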


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1055
Author(s):  
Qian Sun ◽  
William Ampomah ◽  
Junyu You ◽  
Martha Cather ◽  
Robert Balch

Machine-learning technologies have demonstrated robust capabilities in solving many petroleum engineering problems. Their predictive accuracy and fast computational speed make it feasible to carry out a large volume of time-consuming engineering processes such as history matching and field development optimization. The Southwest Regional Partnership on Carbon Sequestration (SWP) project requires rigorous history-matching and multi-objective optimization processes, which are well suited to machine-learning approaches. Although machine-learning proxy models are trained and validated before being applied to practical problems, their error margin inevitably introduces uncertainties into the results. In this paper, a hybrid numerical/machine-learning workflow for solving various optimization problems is presented. By coupling expert machine-learning proxies with a global optimizer, the workflow successfully solves the history-matching and CO2 water-alternating-gas (WAG) design problems with low computational overhead. The history-matching work considers the heterogeneities of multiphase relative permeability characteristics, and the CO2-WAG injection design takes multiple techno-economic objective functions into account. This work trained an expert response surface, a support vector machine, and a multilayer neural network as proxy models to effectively learn the high-dimensional nonlinear data structure. The proposed workflow suggests revisiting the high-fidelity numerical simulator for validation purposes. The experience gained from this work provides valuable guidance for similar CO2 enhanced oil recovery (EOR) projects.
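The proxy-then-validate pattern the workflow describes can be sketched in a few lines (a hypothetical quadratic function stands in for the expensive numerical simulator, and a least-squares response surface for the trained proxies; all names are illustrative):

```python
import numpy as np

def true_sim(x):
    """Stand-in for the expensive high-fidelity numerical simulator."""
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (60, 2))                    # design of experiments
y = np.array([true_sim(x) for x in X])             # expensive training runs

def features(x):
    """Quadratic response-surface basis in two variables."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2],
                    axis=-1)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
proxy = lambda x: features(x) @ coef               # cheap trained surrogate

# Cheap global search on the proxy, then one simulator call to validate
cand = rng.uniform(-1, 1, (5000, 2))
best = cand[np.argmax(proxy(cand))]
validated = true_sim(best)
```

The final `true_sim(best)` call mirrors the workflow's suggestion to revisit the high-fidelity simulator: the proxy's error margin means its optimum must be confirmed before being trusted.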


Materials ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 2989
Author(s):  
Halina Szafranska ◽  
Ryszard Korycki

In order to ensure a comprehensive evaluation of laminated seams in working clothing, a series of studies was carried out to determine the correlation between the parameters of the seam-lamination process (i.e., temperature, time, and pressure) and the mechanical properties of laminated seams. The mechanical properties were defined by the maximum breaking force, the relative elongation at break, and the total bending rigidity; these mechanical indexes were accepted as the measure of durability and stability of laminated seams. The correlation between the lamination process parameters and the final properties of the tested seams in working clothing was modeled using a three-factor 3³ plan. Finally, single-criterion optimization was introduced, with the generalized utility function U as the objective. Instead of three independent optimization problems, a single problem was solved, in which the global objective function was a weighted average of the partial criteria with assumed weight values. The problem of multicriteria weighted optimization was solved using the determined weights and the ranges of acceptable/unacceptable values.
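A generalized utility function U of this kind is commonly built from partial desirabilities combined with weights (Harrington/Derringer style); the sketch below is an illustration with invented property ranges and weights, not the paper's fitted model:

```python
import numpy as np

def desirability(y, lo, hi):
    """Map a response onto [0, 1]; values at or below lo are unacceptable (0)."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def utility(y_force, y_elong, y_rigid, w=(0.5, 0.3, 0.2)):
    """Weighted geometric mean of partial desirabilities. Larger-is-better
    scaling assumed; the ranges and weights below are invented for
    illustration."""
    d = np.array([
        desirability(y_force, 200.0, 600.0),   # breaking force [N]
        desirability(y_elong, 5.0, 30.0),      # elongation at break [%]
        desirability(y_rigid, 0.0, 1.0),       # rigidity, pre-scaled to [0, 1]
    ])
    return float(np.prod(d ** np.array(w)))    # any unacceptable response -> U = 0

U = utility(450.0, 18.0, 0.7)
```

The geometric form makes the acceptable/unacceptable ranges bite: if any single seam property falls into its unacceptable range, the whole utility collapses to zero regardless of the other two.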


2021 ◽  
Author(s):  
Faruk Alpak ◽  
Yixuan Wang ◽  
Guohua Gao ◽  
Vivek Jain

Abstract Recently, a novel distributed quasi-Newton (DQN) derivative-free optimization (DFO) method was developed for generic reservoir performance optimization problems including well-location optimization (WLO) and well-control optimization (WCO). DQN is designed to effectively locate multiple local optima of highly nonlinear optimization problems. However, its performance has neither been validated by realistic applications nor compared to other DFO methods. We have integrated DQN into a versatile field-development optimization platform designed specifically for iterative workflows enabled through distributed-parallel flow simulations. DQN is benchmarked against alternative DFO techniques, namely, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method hybridized with Direct Pattern Search (BFGS-DPS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimization (PSO), and Genetic Algorithm (GA). DQN is a multi-thread optimization method that distributes an ensemble of optimization tasks among multiple high-performance-computing nodes. Thus, it can locate multiple optima of the objective function in parallel within a single run. Simulation results computed from one DQN optimization thread are shared with others by updating a unified set of training data points composed of responses (implicit variables) of all successful simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by a linear-interpolation technique using all or a subset of training-data points. The gradient of the objective function is analytically computed using the estimated sensitivities of implicit variables with respect to explicit variables. The Hessian matrix is then updated using the quasi-Newton method. A new search point for each thread is solved from a trust-region subproblem for the next iteration. In contrast, other DFO methods rely on a single-thread optimization paradigm that can only locate a single optimum. 
To locate multiple optima with such methods, one must repeat the same optimization process multiple times starting from different initial guesses. Moreover, simulation results generated from a single-thread optimization task cannot be shared with other tasks. Benchmarking results are presented for synthetic yet challenging WLO and WCO problems. Finally, the DQN method is field-tested on two realistic applications. DQN identifies the global optimum with the fewest simulations and the shortest run time on a synthetic problem with a known solution. On the other benchmarking problems, without a known solution, DQN identified comparable local optima with reasonably fewer simulations than the alternative techniques. Field-testing results reinforce the favorable computational attributes of DQN. Overall, the results indicate that DQN is a novel and effective parallel algorithm for field-scale development-optimization problems.
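The quasi-Newton Hessian update and trust-region step at the core of each optimization thread can be sketched as follows (a minimal numpy illustration on a toy quadratic; the boundary-scaled Newton step is a crude stand-in for a full trust-region subproblem solver, and the gradient is given analytically rather than estimated from sensitivities):

```python
import numpy as np

def bfgs_update(H, s, y):
    """Quasi-Newton (BFGS) update of an approximate Hessian H from a step s
    and the corresponding gradient change y."""
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)

def trust_region_step(g, H, radius):
    """Approximate min_p  g.p + 0.5 p.H.p  s.t. |p| <= radius: take the
    Newton step and, if it is too long, scale it back to the boundary."""
    p = -np.linalg.solve(H, g)
    n = np.linalg.norm(p)
    return p if n <= radius else p * (radius / n)

# One iteration on f(x) = x1^2 + 10*x2^2, with gradient [2*x1, 20*x2]
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
x = np.array([1.0, 1.0])
H = np.eye(2)                                  # initial Hessian guess
p = trust_region_step(grad(x), H, radius=0.5)
x_new = x + p
H = bfgs_update(H, p, grad(x_new) - grad(x))   # refine curvature estimate
```

In DQN this loop runs in many threads at once, each from a different starting point, with all threads drawing on the shared pool of simulation results to build their sensitivity estimates.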


2018 ◽  
Vol 14 (09) ◽  
pp. 190 ◽  
Author(s):  
Shewangi Kochhar ◽  
Roopali Garg

Cognitive radio is a promising technology for improving spectrum sensing, as it enables the cognitive radio to find the primary user (PU) and lets the secondary user (SU) utilize the spectrum holes. However, detection of the PU entails longer sensing time and interference. Spectrum sensing is performed within a specific time frame, which is divided into sensing time and transmission time. The longer the sensing time, the better the detection and the lower the probability of false alarm, but the less time remains for transmission; an optimization technique is therefore required to address the trade-off between sensing time and throughput. This paper proposes an application of the genetic algorithm (GA) to spectrum sensing in cognitive radio. Results show that the ROC curve of the GA is better than that of PSO in terms of normalized throughput and sensing time. The parameters evaluated are throughput, probability of false alarm, sensing time, cost, and number of iterations.
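The sensing-time/throughput trade-off can be illustrated numerically (a deliberately simplified model in which the false-alarm probability decays exponentially with sensing time; the frame length and decay constant are invented for illustration, not taken from the paper):

```python
import numpy as np

def throughput(tau, T=0.1, C0=1.0, a=80.0):
    """Normalized SU throughput for sensing time tau within a frame of
    length T: (T - tau)/T is the fraction of the frame left for
    transmission, and (1 - Pfa) the fraction not lost to false alarms,
    with an illustrative false-alarm model Pfa = exp(-a * tau)."""
    pfa = np.exp(-a * tau)
    return C0 * (T - tau) / T * (1.0 - pfa)

taus = np.linspace(1e-4, 0.09, 500)
R = throughput(taus)
tau_opt = taus[np.argmax(R)]           # interior optimum of the trade-off
```

Too little sensing loses throughput to false alarms; too much leaves no time to transmit, so the curve peaks at an interior sensing time. This is the single-variable trade-off the GA searches over (alongside cost and detection constraints) in the paper.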


Author(s):  
George H. Cheng ◽  
Adel Younis ◽  
Kambiz Haji Hajikolaei ◽  
G. Gary Wang

Mode Pursuing Sampling (MPS) was developed as a global optimization algorithm for problems involving expensive black-box functions. MPS has been found to be effective and efficient for problems of low dimensionality, i.e., fewer than ten design variables. A previous conference publication integrated the concept of trust regions into the MPS framework to create a new algorithm, TRMPS, which dramatically improved performance and efficiency for high-dimensional problems. However, although TRMPS performed better than MPS, it was unproven against other established algorithms such as GA. This paper introduces an improved algorithm, TRMPS2, which incorporates guided sampling and a low-function-value criterion to further improve performance on high-dimensional problems. TRMPS2 is benchmarked against MPS and GA using a suite of test problems. The results show that TRMPS2 performs better than MPS and GA on average for high-dimensional, expensive, and black-box (HEB) problems.
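The core MPS idea, spending expensive evaluations preferentially where function values are predicted to be low, can be sketched as follows (a numpy toy with a nearest-neighbour surrogate; this is an illustrative reading of the sampling scheme, not the authors' algorithm):

```python
import numpy as np

def mps_iteration(f, X, y, n_cand=200, n_pick=5, seed=0):
    """One mode-pursuing-sampling-style step: draw uniform candidates, rank
    them with a cheap surrogate (nearest-neighbour value here), and evaluate
    the expensive f only at candidates biased toward low predicted values."""
    rng = np.random.default_rng(seed)
    cand = rng.uniform(-2, 2, (n_cand, X.shape[1]))
    # nearest-neighbour prediction from the points evaluated so far
    d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
    pred = y[np.argmin(d, axis=1)]
    # sampling weights favour low predicted values ("mode pursuing")
    w = (pred.max() - pred) + 1e-12
    pick = rng.choice(n_cand, size=n_pick, replace=False, p=w / w.sum())
    X_new = cand[pick]
    return np.vstack([X, X_new]), np.concatenate([y, [f(x) for x in X_new]])

f = lambda x: float(np.sum(x ** 2))        # expensive black box stand-in
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, (10, 2))
y = np.array([f(x) for x in X])
for i in range(20):
    X, y = mps_iteration(f, X, y, seed=i)
best = y.min()
```

TRMPS/TRMPS2 wrap this kind of sampling in trust regions so that, in high dimensions, the candidate generation concentrates around the current best region instead of the whole domain.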

