Investigation of Well Control Parameterization with Reduced Number of Variables Under Reservoir Uncertainties

2021 ◽  
Author(s):  
Daniel Rodrigues Santos ◽  
André Ricardo Fioravanti ◽  
Antonio Alberto Souza Santos ◽  
Denis José Schiozer

Abstract Although several studies have shown that life-cycle well control strategies can significantly improve a field's economic return, the industry often relies on short-term strategies. One drawback of the traditional parameterization adopted for life-cycle well control numerical optimization is that it often generates control strategies with impractical abrupt changes in production curves. Another issue, especially in cases with a large number of decision variables, is convergence to local optima in these non-convex optimization problems. In this context, we proposed and compared four life-cycle well control parameterizations, able to mitigate both of the above problems, to maximize the net present value (NPV) of the field under uncertainties. The first parameterization optimizes the apportionment of well rates at the beginning of field management and the well shut-in times. The other three optimize the coefficients of parametric equations (first- and second-order polynomials, and the logistic equation) that guide the bottom-hole pressure (BHP) over time. We executed each parameterization five times in a deterministic reservoir scenario and compared them with a short-term well control strategy that prioritizes production in wells with a higher oil-water ratio, aiming to replicate general industry practice. In this strategy, the wells' priority rank was updated every 30 simulation days. Subsequently, the best parameterization was used to select the life-cycle well control strategy under reservoir uncertainties, and this strategy was applied to the reference model representing a real reservoir. The results showed that all the proposed parameterizations significantly improved the NPV in comparison with the short-term well control strategy while ensuring smooth well production curves. The logistic equation presented the best result among all parameterizations, delivering both the highest average NPV and the smallest dispersion over the five experiment repetitions. This parameterization also produced similar results when applied under uncertainties and to the reference model. These results endorse the importance of not relying solely on a short-term strategy, but also planning for the life cycle.
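As an illustration of the logistic-equation idea described above, a handful of coefficients can define a smooth BHP trajectory instead of one control variable per timestep. The function below is a minimal sketch only; the coefficient names (`p_start`, `p_end`, `k`, `t_mid`) and the numeric values are illustrative assumptions, not the authors' exact formulation:

```python
import math

def logistic_bhp(t, p_start, p_end, k, t_mid):
    """Smooth BHP trajectory: transitions from p_start to p_end
    around time t_mid (days) with steepness k. All names and
    units are hypothetical, for illustration only."""
    return p_end + (p_start - p_end) / (1.0 + math.exp(k * (t - t_mid)))

# Example: a producer's BHP easing from ~250 bar down toward 180 bar
# over ten years, sampled once per year -- four coefficients replace
# thousands of per-timestep control variables.
schedule = [logistic_bhp(t, 250.0, 180.0, 0.002, 1825.0)
            for t in range(0, 3650, 365)]
```

Because the trajectory is smooth by construction, the abrupt control jumps of per-timestep parameterizations cannot occur.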

SPE Journal ◽  
2012 ◽  
Vol 17 (03) ◽  
pp. 849-864 ◽  
Author(s):  
C. Chen ◽  
G. Li ◽  
A.C. Reynolds

Summary In this paper, we develop an efficient algorithm for production optimization under linear and nonlinear constraints and an uncertain reservoir description. The linear and nonlinear constraints are incorporated into the objective function using the augmented Lagrangian method, and the bound constraints are enforced using a gradient-projection trust-region method. Robust long-term optimization maximizes the expected life-cycle net present value (NPV) over a set of geological models, which represent the uncertainty in the reservoir description. Because the life-cycle optimal controls may conflict with the operator's objective of maximizing short-term production, the method is adapted to maximize the expected short-term NPV over the next 1 or 2 years subject to the constraint that the life-cycle NPV not be substantially decreased. The technique is applied to synthetic reservoir problems to demonstrate its efficiency and robustness. Experiments show that the field cannot always achieve the optimal NPV using optimal well controls obtained on the basis of a single but uncertain reservoir model, whereas the application of robust optimization reduces this risk significantly. Experimental results also show that robust sequential optimization on each short-term period cannot achieve an expected life-cycle NPV as high as that obtained with robust long-term optimization.
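The augmented Lagrangian idea the summary describes, folding constraints into the objective and then updating the multipliers, can be sketched on a toy problem. The following is a minimal illustration with a plain gradient-descent inner loop (the paper uses a gradient-projection trust-region method instead); the toy objective, constants, and names are assumptions for illustration:

```python
def augmented_lagrangian(f_grad, c, c_grad, x, mu=10.0, lam=0.0,
                         outer=20, inner=200, lr=0.01):
    """Minimize f subject to c(x) = 0 by minimizing the augmented
    Lagrangian L = f + lam*c + (mu/2)*c^2, then updating lam."""
    for _ in range(outer):
        for _ in range(inner):          # inner (unconstrained) solve
            cx = c(x)
            g = [fg + (lam + mu * cx) * cg
                 for fg, cg in zip(f_grad(x), c_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        lam += mu * c(x)                # multiplier update
    return x

# Toy problem: minimize x^2 + y^2 subject to x + y = 1 (optimum x = y = 0.5)
sol = augmented_lagrangian(
    f_grad=lambda x: [2 * x[0], 2 * x[1]],
    c=lambda x: x[0] + x[1] - 1.0,
    c_grad=lambda x: [1.0, 1.0],
    x=[0.0, 0.0])
```

The multiplier converges to the KKT value (here -1), so the penalty term does not need to grow without bound, which is the practical advantage over a pure penalty method.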


SPE Journal ◽  
2016 ◽  
Vol 21 (05) ◽  
pp. 1813-1829 ◽  
Author(s):  
Xin Liu ◽  
Albert C. Reynolds

Summary We consider two procedures for multiobjective optimization, the classical weighted-sum (WS) method and the normal-boundary-intersection (NBI) method. To enhance computational efficiency, the methods use gradients calculated with the adjoint method. Our objective is to develop implementations that one can apply for waterflooding optimization under geological uncertainty when we wish to develop well controls that satisfy two objectives: the first is to maximize the expectation of life-cycle net present value (NPV) (commonly referred to as robust optimization), and the second is either to minimize the standard deviation of NPV over the set of plausible reservoir descriptions or to minimize the risk, where risk means downside risk. Specifically, minimizing risk refers to maximizing the minimum value of the life-cycle NPV (i.e., it is equivalent to a maximum/minimum (max/min) problem). To avoid nondifferentiability issues, we recast the max/min problem as a constrained optimization problem and apply a gradient-based version of either WS or NBI to construct a point on the Pareto front. To deal with the constraints introduced, we derive an augmented-Lagrange algorithm to find points on the Pareto front. To the best of our knowledge, the resulting versions of “constrained” WS and “constrained” NBI were not presented previously in the scientific literature. The methodology is demonstrated for two synthetic reservoirs. We only consider bound constraints in this paper.
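The classical weighted-sum scalarization can be illustrated on a toy bi-objective problem: each weight w yields one point on the Pareto front. This is a minimal sketch in which the brute-force 1-D solver and the quadratic objectives are illustrative stand-ins, not the adjoint-gradient machinery of the paper:

```python
def weighted_sum_front(f1, f2, solve, weights):
    """Trace points on a Pareto front by minimizing w*f1 + (1-w)*f2
    for a sweep of weights (classic WS scalarization)."""
    front = []
    for w in weights:
        x = solve(lambda x: w * f1(x) + (1 - w) * f2(x))
        front.append((f1(x), f2(x)))
    return front

def grid_solve(obj, lo=-2.0, hi=2.0, n=4001):
    # brute-force 1-D minimizer, adequate for the toy example
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=obj)

# Toy bi-objective: f1 = (x-1)^2, f2 = (x+1)^2. The scalarized
# minimizer is x = 2w - 1, so sweeping w traces the whole front.
front = weighted_sum_front(lambda x: (x - 1) ** 2,
                           lambda x: (x + 1) ** 2,
                           grid_solve,
                           [i / 10 for i in range(11)])
```

WS recovers the full front here because both objectives are convex; for nonconvex fronts it misses points, which is one motivation for the NBI alternative the paper also implements.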


2021 ◽  
Vol 2021 ◽  
pp. 1-22
Author(s):  
An-Di Tang ◽  
Shang-Qin Tang ◽  
Tong Han ◽  
Huan Zhou ◽  
Lei Xie

The slime mould algorithm (SMA) is a population-based metaheuristic inspired by the phenomenon of slime mould oscillation. SMA is competitive with other algorithms but still suffers from an imbalance between exploitation and exploration and easily falls into local optima. To address these shortcomings, an improved variant of SMA, named MSMA, is proposed in this paper. First, a chaotic opposition-based learning strategy is used to enhance population diversity. Second, two adaptive parameter-control strategies are proposed to balance exploitation and exploration. Finally, a spiral search strategy is used to help SMA escape local optima. The superiority of MSMA is verified on 13 multidimensional test functions and 10 fixed-dimension test functions. In addition, two engineering optimization problems are used to verify the potential of MSMA on real-world optimization problems. The simulation results show that the proposed MSMA outperforms the comparison algorithms in terms of convergence accuracy, convergence speed, and stability.
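Opposition-based learning, the core of the first improvement above, is simple to sketch: for each random candidate x in [lo, hi], also evaluate its opposite lo + hi - x and keep the better half of the combined pool. A minimal illustration (plain random sampling is used here; the paper additionally drives the sampling with a chaotic map):

```python
import random

def obl_init(obj, dim, lo, hi, pop_size, seed=0):
    """Opposition-based initialization: generate a random population,
    add each point's opposite (lo + hi - x), keep the best half."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    opposites = [[lo + hi - xi for xi in x] for x in pop]
    return sorted(pop + opposites, key=obj)[:pop_size]

# Example on the sphere function: the returned population is at least
# as good as either the random or the opposite pool alone.
sphere = lambda x: sum(xi * xi for xi in x)
pop = obl_init(sphere, dim=5, lo=-10.0, hi=10.0, pop_size=20)
```

The intuition is that when a random guess is far from the optimum, its opposite often lies closer, so evaluating both roughly doubles the chance of a good starting point for the same population size.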


Author(s):  
Prachi Agrawal ◽  
Talari Ganesh ◽  
Ali Wagdy Mohamed

Abstract This article proposes a novel binary version of the recently developed Gaining Sharing Knowledge-based optimization algorithm (GSK) to solve binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The binary version of GSK, named the novel binary Gaining Sharing Knowledge-based optimization algorithm (NBGSK), depends mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage, with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively when solving problems in binary space. Moreover, to enhance the performance of NBGSK and prevent solutions from becoming trapped in local optima, NBGSK with population-size reduction (PR-NBGSK) is introduced. It decreases the population size gradually with a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances with small and large dimensions; the results show that NBGSK and PR-NBGSK are more efficient and effective in terms of convergence, robustness, and accuracy.
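The linear population-size reduction used by PR-NBGSK can be sketched in one function; the parameter names and values below are illustrative, not taken from the paper:

```python
def population_size(gen, max_gen, n_init, n_min):
    """Linear population-size reduction: shrink the population from
    n_init at generation 0 to n_min at generation max_gen."""
    return round(n_init + (n_min - n_init) * gen / max_gen)

# Sampled every 25 generations of a 100-generation run:
sizes = [population_size(g, 100, 100, 20) for g in range(0, 101, 25)]
# shrinks linearly: [100, 80, 60, 40, 20]
```

Early generations keep a large, diverse population for exploration; as the budget is spent, the shrinking population concentrates evaluations on the best candidates.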


2021 ◽  
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß ◽  
Manuel Schmitt ◽  
Rolf Wanka

Abstract Meta-heuristics are powerful tools for solving optimization problems whose structural properties are unknown or cannot be exploited algorithmically. We propose such a meta-heuristic for a large class of optimization problems over discrete domains based on the particle swarm optimization (PSO) paradigm. We provide a comprehensive formal analysis of the performance of this algorithm on certain “easy” reference problems in a black-box setting, namely the sorting problem and the problem OneMax. In our analysis we use a Markov model of the proposed algorithm to obtain upper and lower bounds on its expected optimization time. Our bounds are essentially tight with respect to the Markov model. We show that for a suitable choice of algorithm parameters the expected optimization time is comparable to that of known algorithms and, furthermore, that for other parameter regimes the algorithm behaves less greedily and more exploratively, which can be desirable in practice to escape local optima. Our analysis provides precise insight into the tradeoff between optimization time and exploration. To obtain our results we introduce the notion of indistinguishability of states of a Markov chain and provide bounds on the solution of a recurrence equation with non-constant coefficients by integration.
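OneMax, one of the two reference problems analyzed, simply counts the 1-bits of a bit string and is maximized by the all-ones string. As a concrete point of comparison for black-box optimization times, here is a randomized local search baseline on OneMax; this is an illustrative baseline with expected runtime Θ(n log n), not the discrete PSO the paper analyzes:

```python
import random

def onemax(bits):
    """OneMax fitness: the number of 1-bits."""
    return sum(bits)

def rls(n, max_iters=100_000, seed=1):
    """Randomized local search: flip one uniformly random bit per
    step and keep the flip if fitness does not decrease. Returns
    the number of iterations until the all-ones string is reached."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if onemax(x) == n:
            return t
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1                       # flip one bit
        if onemax(y) >= onemax(x):
            x = y
    return max_iters

iters = rls(32)
```

A greedier parameter regime of the analyzed PSO matches this kind of runtime, while more explorative regimes trade optimization time for a better chance of escaping local optima on harder landscapes.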


2016 ◽  
Vol 25 (06) ◽  
pp. 1650033 ◽  
Author(s):  
Hossam Faris ◽  
Ibrahim Aljarah ◽  
Nailah Al-Madi ◽  
Seyedali Mirjalili

Evolutionary neural networks have proven beneficial on challenging datasets, mainly due to their high capacity for avoiding local optima. Stochastic operators in such techniques reduce the probability of stagnation in local solutions and help them supersede conventional training algorithms such as Back Propagation (BP) and Levenberg-Marquardt (LM). According to the No-Free-Lunch (NFL) theorem, however, there is no single optimization technique for solving all optimization problems. This means that a neural network trained by a new algorithm has the potential to solve a new set of problems or outperform current techniques on existing problems. This motivates our attempt to investigate the efficiency of the recently proposed evolutionary algorithm called the Lightning Search Algorithm (LSA) in training neural networks, for the first time in the literature. The LSA-based trainer is benchmarked on 16 popular medical diagnosis problems and compared to BP, LM, and 6 other evolutionary trainers. The quantitative and qualitative results show that the LSA algorithm achieves not only better avoidance of local solutions but also faster convergence compared to the other algorithms employed. In addition, the statistical tests conducted show that the LSA-based trainer is significantly superior to the current algorithms on the majority of datasets.
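The core loop of any evolutionary neural-network trainer of the kind benchmarked here treats the weight vector as the individual and the training error as the fitness. The sketch below uses a generic (1+1)-ES mutation loop on a tiny network; it is an illustration of the evolutionary-training idea, not the LSA update rule itself, and the network size, dataset, and constants are arbitrary assumptions:

```python
import math, random

def mlp(x, w):
    """Tiny 2-2-1 network with tanh hidden units; w is a flat list
    of 9 weights (2x2 hidden weights + 2 biases + 2 output weights
    + 1 output bias)."""
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]),
         math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])]
    return w[6] * h[0] + w[7] * h[1] + w[8]

def mse(w, data):
    return sum((mlp(x, w) - y) ** 2 for x, y in data) / len(data)

def es_train(data, n_weights=9, iters=3000, sigma=0.3, seed=0):
    """(1+1)-ES trainer: mutate all weights with Gaussian noise,
    keep the mutant only if it lowers the training error."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best = mse(w, data)
    for _ in range(iters):
        cand = [wi + rng.gauss(0, sigma) for wi in w]
        err = mse(cand, data)
        if err < best:
            w, best = cand, err
    return w, best

# XOR as a miniature stand-in for a diagnosis dataset
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, err = es_train(xor)
```

Because the search uses only fitness evaluations, no gradients, it cannot stagnate at a saddle point the way BP can, which is the local-optima-avoidance argument the abstract makes.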


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of strong stochastic methods for global optimization that have proven their capability to avoid local optima better than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the convergence rate of the algorithm to the global optimum remains asymptotic. To accelerate convergence, a hybrid approach is proposed using the nonlinear simplex method (Nelder-Mead) and an adaptive scheme to control when the local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of solution quality and convergence.
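The hybrid scheme, a global evolutionary phase with a local search triggered adaptively when progress stalls, can be sketched as follows. For self-containment the local refinement below is a simple coordinate pattern search standing in for Nelder-Mead, and the stagnation trigger and all constants are illustrative assumptions:

```python
import random

def pattern_search(obj, x, step=0.1, shrink=0.5, iters=50):
    """Coordinate pattern search: probe +/- step along each axis,
    shrink the step when no probe improves (Nelder-Mead stand-in)."""
    fx = obj(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = obj(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink
    return x, fx

def hybrid_es(obj, dim, sigma=0.5, iters=300, stall_limit=20, seed=0):
    """(1+1)-ES that hands off to a local search whenever the global
    phase fails to improve for stall_limit consecutive iterations."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx, stall = obj(x), 0
    for _ in range(iters):
        cand = [xi + rng.gauss(0, sigma) for xi in x]
        fc = obj(cand)
        if fc < fx:
            x, fx, stall = cand, fc, 0
        else:
            stall += 1
        if stall >= stall_limit:        # stagnation: refine locally
            x, fx = pattern_search(obj, x)
            stall = 0
    return x, fx

sphere = lambda x: sum(xi * xi for xi in x)
best, fbest = hybrid_es(sphere, dim=3)
```

The ES phase supplies global exploration while the triggered local search supplies the fast final convergence that a pure ES lacks, which is the rationale for the hybridization in the abstract.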


2021 ◽  
Author(s):  
Faruk Alpak ◽  
Yixuan Wang ◽  
Guohua Gao ◽  
Vivek Jain

Abstract Recently, a novel distributed quasi-Newton (DQN) derivative-free optimization (DFO) method was developed for generic reservoir performance optimization problems, including well-location optimization (WLO) and well-control optimization (WCO). DQN is designed to effectively locate multiple local optima of highly nonlinear optimization problems. However, its performance has been neither validated on realistic applications nor compared to other DFO methods. We have integrated DQN into a versatile field-development optimization platform designed specifically for iterative workflows enabled through distributed-parallel flow simulations. DQN is benchmarked against alternative DFO techniques, namely the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method hybridized with Direct Pattern Search (BFGS-DPS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). DQN is a multi-thread optimization method that distributes an ensemble of optimization tasks among multiple high-performance-computing nodes. Thus, it can locate multiple optima of the objective function in parallel within a single run. Simulation results computed by one DQN optimization thread are shared with the others by updating a unified set of training-data points composed of the responses (implicit variables) of all successful simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by a linear-interpolation technique using all or a subset of the training-data points. The gradient of the objective function is computed analytically using the estimated sensitivities of the implicit variables with respect to the explicit variables. The Hessian matrix is then updated using the quasi-Newton method. A new search point for each thread is solved from a trust-region subproblem for the next iteration. In contrast, other DFO methods rely on a single-thread optimization paradigm that can only locate a single optimum. To locate multiple optima with such methods, one must repeat the same optimization process multiple times, starting from different initial guesses. Moreover, simulation results generated by a single-thread optimization task cannot be shared with other tasks. Benchmarking results are presented for synthetic yet challenging WLO and WCO problems. Finally, the DQN method is field-tested on two realistic applications. DQN identifies the global optimum with the fewest simulations and the shortest run time on a synthetic problem with a known solution. On the other benchmarking problems, without a known solution, DQN identified comparable local optima with considerably fewer simulations than the alternative techniques. Field-testing results reinforce the auspicious computational attributes of DQN. Overall, the results indicate that DQN is a novel and effective parallel algorithm for field-scale development optimization problems.
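The quasi-Newton ingredient mentioned above, updating a Hessian approximation from a step s and the corresponding gradient change y, is the standard BFGS update. A minimal sketch follows; here y is computed exactly from a toy quadratic for illustration, whereas DQN estimates gradients by interpolating its shared training-data points:

```python
def bfgs_update(H, s, y):
    """BFGS update of a Hessian approximation H given step s and
    gradient change y. By construction the result satisfies the
    secant condition H_new @ s = y (requires s^T H s > 0, y^T s > 0)."""
    n = len(s)
    Hs = [sum(H[i][j] * s[j] for j in range(n)) for i in range(n)]
    sHs = sum(s[i] * Hs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))
    return [[H[i][j] - Hs[i] * Hs[j] / sHs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]

# One update from the identity, with s and y taken from the toy
# quadratic f(x) = 0.5 x^T A x, for which y = A s exactly.
A = [[3.0, 1.0], [1.0, 2.0]]
s = [1.0, 0.5]
y = [A[0][0] * s[0] + A[0][1] * s[1],
     A[1][0] * s[0] + A[1][1] * s[1]]
H1 = bfgs_update([[1.0, 0.0], [0.0, 1.0]], s, y)
```

Each DQN thread performs an update of this kind and then solves a trust-region subproblem with the refreshed Hessian to obtain its next search point.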

