Efficient Drilling Sequence Optimization Using Heuristic Priority Functions

SPE Journal ◽  
2021 ◽  
pp. 1-20
Author(s):  
Z. Wang ◽  
J. He ◽  
S. Tanaka ◽  
X.-H. Wen

Summary Drilling sequence optimization is a common challenge faced in the oil and gas industry, and yet it cannot be solved efficiently by existing optimization methods due to its unique features and constraints. For many fields, the drilling queue is currently designed manually based on engineering heuristics. In this paper, we combine heuristic priority functions (HPFs) with traditional optimizers to boost optimization efficiency at a lower computational cost and thereby speed up the decision-making process. The HPFs are constructed to map the individual well properties such as well index and interwell distance to the well priority values. As the name indicates, wells with higher priority values will be drilled earlier in the queue. The HPFs are a comprehensive metric of interwell communication and displacement efficiency. For example, injectors with fast support to producers, or producers with a better chance to drain the unswept region, tend to have high scores. They contain components that weigh the different properties of a well. These components are then tuned during the optimization process to generate beneficial drilling sequences. Embedded with reservoir engineering heuristics, the priority function (PF) helps the optimizer focus on exploring scenarios with promising outcomes. The proposed HPFs, combined with the genetic algorithm (GA), have been tested through drilling sequence optimization problems for the Brugge Field and Olympus Field. Optimizations that are directly performed on the drilling sequence are used as reference cases. Different continuous/categorical parameterization schemes and various forms of HPFs are also investigated. Our exploration reveals that the HPF including well type, constraints, well index, distance to existing wells, and adjacent oil in place (OIP) yields the best outcome. The proposed approach achieved a better optimization starting point (∼5 to 18% improvement due to a more reasonable initial drilling sequence rather than a random guess), a faster convergence rate (results stabilized at 12 vs. 30 iterations), and a lower computational cost [150 to 250 vs. 1,300 runs to achieve the same net present value (NPV)] over the reference methods. Similar performance improvement was also observed in another application to a North Sea–type reservoir. This demonstrated the general applicability of the proposed method. The use of HPFs improves the efficiency and reliability of drilling sequence optimization compared with the traditional methods that directly optimize the sequence. They can be easily embedded in either commercial or research simulators as an independent module. In addition, they form an automatic process that fits well with iterative optimization algorithms.
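For concreteness, the sketch below illustrates the kind of mapping an HPF performs: each candidate well is scored by a weighted combination of normalized attributes (well index, distance to existing wells, adjacent OIP, well type), and the drilling queue is the descending sort of those scores. The attribute names, scoring form, and example weights are illustrative assumptions rather than the paper's exact formulation; the weights stand in for the components a GA would tune.

```python
# Minimal sketch of a heuristic priority function (HPF) for ordering a drilling
# queue. Attribute names, weights, and the scoring form are illustrative
# assumptions, not the exact formulation used in the paper.
import numpy as np

def priority_scores(wells, weights):
    """Map per-well attributes to priority values via a weighted sum.

    wells   : list of dicts with hypothetical keys 'well_index',
              'dist_to_existing', 'adjacent_oip', 'is_injector'
    weights : array of 4 coefficients, e.g. proposed by a genetic algorithm
    """
    feats = np.array([[w["well_index"],
                       -w["dist_to_existing"],   # closer support scores higher
                       w["adjacent_oip"],
                       1.0 if w["is_injector"] else 0.0] for w in wells])
    # Normalize each attribute to [0, 1] so the weights are comparable.
    span = feats.max(axis=0) - feats.min(axis=0)
    feats = (feats - feats.min(axis=0)) / np.where(span == 0, 1.0, span)
    return feats @ weights

def drilling_sequence(wells, weights):
    """Wells with higher priority values are drilled earlier in the queue."""
    order = np.argsort(-priority_scores(wells, weights))
    return [wells[i]["name"] for i in order]

# Example: the optimizer (e.g., a GA) searches over `weights`; each candidate
# weight vector induces a drilling sequence that the simulator then evaluates.
wells = [
    {"name": "P1", "well_index": 120.0, "dist_to_existing": 800.0,
     "adjacent_oip": 2.1e6, "is_injector": False},
    {"name": "I1", "well_index": 95.0, "dist_to_existing": 300.0,
     "adjacent_oip": 1.4e6, "is_injector": True},
    {"name": "P2", "well_index": 60.0, "dist_to_existing": 1500.0,
     "adjacent_oip": 3.0e6, "is_injector": False},
]
print(drilling_sequence(wells, np.array([0.4, 0.2, 0.3, 0.1])))
```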


Author(s):  
Tarun Gangwar ◽  
Dominik Schillinger

Abstract We present a concurrent material and structure optimization framework for multiphase hierarchical systems that relies on homogenization estimates based on continuum micromechanics to account for material behavior across many different length scales. We show that the analytical nature of these estimates enables material optimization via a series of inexpensive “discretization-free” constraint optimization problems whose computational cost is independent of the number of hierarchical scales involved. To illustrate the strength of this unique property, we define new benchmark tests with several material scales that for the first time become computationally feasible via our framework. We also outline its potential in engineering applications by reproducing self-optimizing mechanisms in the natural hierarchical system of bamboo culm tissue.


2014 ◽  
Vol 1 (4) ◽  
pp. 256-265 ◽  
Author(s):  
Hong Seok Park ◽  
Trung Thanh Nguyen

Abstract Energy efficiency is an essential consideration in sustainable manufacturing. This study presents a car-fender injection molding process optimization that resolves the trade-off between energy consumption and product quality simultaneously, with the process parameters as the optimized variables. The process is optimized by applying response surface methodology together with the nondominated sorting genetic algorithm II (NSGA-II) to solve the resulting multi-objective optimization problem. To reduce computational cost and time in the problem-solving procedure, a combination of CAE-integration tools is employed. Based on the Pareto diagram, an appropriate solution is selected to obtain the optimal parameters. The optimization results show that the proposed approach can effectively help engineers identify optimal process parameters and achieve competitive energy consumption and product quality. In addition, the paper also outlines the engineering analysis that can be employed for holistic optimization of the injection molding process to increase energy efficiency and product quality.
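As a rough, self-contained illustration of the response-surface-plus-multi-objective workflow described above, the sketch below fits quadratic surrogates for two objectives (an energy measure and a defect indicator) from a handful of synthetic samples and then applies a naive Pareto filter over a candidate grid. The study itself applies NSGA-II to the fitted surfaces; all data, variable names, and coefficients here are invented for illustration.

```python
# Illustrative sketch: quadratic response surfaces fitted by least squares,
# then a simple Pareto filter over a parameter grid (both objectives minimized).
import numpy as np

rng = np.random.default_rng(0)

def quad_features(x):
    # x = (melt_temperature, injection_time), scaled to [0, 1]: quadratic basis
    t, s = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(t), t, s, t * s, t**2, s**2], axis=-1)

# Hypothetical designed-experiment samples and noisy responses.
X = rng.random((20, 2))
energy = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(20)
defect = 2.0 - 1.5 * X[:, 0] + 1.2 * (X[:, 1] - 0.5) ** 2 + 0.05 * rng.standard_normal(20)

# Least-squares fit of the two response surfaces.
A = quad_features(X)
coef_energy, *_ = np.linalg.lstsq(A, energy, rcond=None)
coef_defect, *_ = np.linalg.lstsq(A, defect, rcond=None)

# Evaluate the surrogates on a candidate grid and keep the nondominated points.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)),
                axis=-1).reshape(-1, 2)
F = np.column_stack([quad_features(grid) @ coef_energy,
                     quad_features(grid) @ coef_defect])
pareto = [i for i in range(len(F))
          if not np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))]
print(f"{len(pareto)} nondominated candidates out of {len(F)}")
```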


Processes ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 742
Author(s):  
Morteza Esmaeilpour ◽  
Maziar Gholami Korzani

Injection of Newtonian fluids to displace pseudoplastic and dilatant fluids, governed by the power-law viscosity relationship, is common in many industrial processes. In these applications, changing the viscosity of the displaced fluid through velocity alteration can regulate interfacial instabilities, displacement efficiency, the thickness of the static wall layer, and the injected fluid’s tendency to move toward particular parts of the channel. The dynamic behavior of the fluid–fluid interface in the case of immiscibility is highly complex. In this study, a code was developed that utilizes a multi-component lattice Boltzmann model to model these problems accurately at reduced computational cost. Accordingly, a 2D inclined channel, filled with a stagnant incompressible Newtonian fluid in the initial section followed by a power-law material, was modeled for numerous scenarios. In conclusion, the results indicate that reducing the power-law index can regulate the interfacial instabilities that lead to dynamic deformation of the static wall layers at the top and the bottom of the channel. However, it does not guarantee a reduction in the thickness of these layers, which is crucial for improving displacement efficiency. The impacts of the compatibility factor and power-law index variations on the filling pattern and finger structure were also evaluated in detail.
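The power-law rheology mentioned above enters a BGK-type lattice Boltzmann scheme through a shear-rate-dependent apparent viscosity and a locally updated relaxation time. The snippet below is a minimal sketch of those standard relations (apparent viscosity k·|shear rate|^(n-1) and, for D2Q9 in lattice units, τ = 3ν + 1/2); the constants and clipping bounds are illustrative assumptions and are not taken from the authors' code.

```python
# Minimal sketch of how a power-law (Ostwald-de Waele) fluid enters a BGK
# lattice Boltzmann scheme: the apparent viscosity depends on the local shear
# rate, and the relaxation time is updated cell by cell.
import numpy as np

def apparent_viscosity(shear_rate, k, n):
    """Power-law model: nu_app = k * |shear rate|**(n - 1), in lattice units.
    n < 1 is shear thinning (pseudoplastic), n > 1 is shear thickening (dilatant)."""
    return k * np.maximum(shear_rate, 1e-12) ** (n - 1.0)

def relaxation_time(shear_rate, k, n, tau_min=0.51, tau_max=5.0):
    """BGK relation for D2Q9 with c_s^2 = 1/3: nu = (tau - 0.5) / 3."""
    nu = apparent_viscosity(shear_rate, k, n)
    return np.clip(3.0 * nu + 0.5, tau_min, tau_max)

# Example: a shear-thinning displaced fluid (n = 0.6) becomes less viscous
# where the local shear rate is high, e.g. near the advancing finger.
gamma_dot = np.array([1e-3, 1e-2, 1e-1])
print(relaxation_time(gamma_dot, k=0.05, n=0.6))
```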


Author(s):  
Weilin Nie ◽  
Cheng Wang

Abstract Online learning is a classical family of algorithms for optimization problems. Due to its low computational cost, it has been widely used in many areas of machine learning and statistical learning. Its convergence performance depends heavily on the step size. In this paper, a two-stage step size is proposed for the unregularized online learning algorithm based on reproducing kernels. Theoretically, we prove that such an algorithm can achieve a nearly minimax convergence rate, up to a logarithmic term, without any capacity condition.
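To make the setting concrete, the sketch below implements unregularized online learning with a reproducing (Gaussian) kernel under square loss, using a constant-then-decaying two-stage step size. The specific schedule, exponents, and kernel width are illustrative assumptions; the paper's actual two-stage choice and its analysis are not reproduced here.

```python
# Illustrative sketch of unregularized online kernel learning with a
# two-stage step size: constant in an early stage, polynomially decaying after.
import numpy as np

def gaussian_kernel(x, y, sigma=0.2):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def two_stage_step(t, eta0=0.5, switch=50, decay=0.5):
    """Stage 1: constant eta0 for t <= switch; stage 2: eta0 * (switch/t)**decay."""
    return eta0 if t <= switch else eta0 * (switch / t) ** decay

def online_kernel_regression(stream, kernel=gaussian_kernel):
    """Square loss: f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .)"""
    centers, coeffs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        pred = sum(a * kernel(c, x) for a, c in zip(coeffs, centers))
        eta = two_stage_step(t)
        centers.append(x)
        coeffs.append(-eta * (pred - y))   # functional gradient step
    return centers, coeffs

# Toy usage: learn y = sin(2*pi*x) from a stream of noisy samples.
rng = np.random.default_rng(1)
xs = rng.random((200, 1))
stream = [(x, np.sin(2 * np.pi * x[0]) + 0.1 * rng.standard_normal()) for x in xs]
centers, coeffs = online_kernel_regression(stream)
f = lambda x: sum(a * gaussian_kernel(c, np.array([x])) for a, c in zip(coeffs, centers))
print(round(f(0.25), 3))  # surrogate prediction at x = 0.25 (target value is 1.0)
```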


Author(s):  
Álinson S. Xavier ◽  
Ricardo Fukasawa ◽  
Laurent Poirrier

When generating multirow intersection cuts for mixed-integer linear optimization problems, an important practical question is deciding which intersection cuts to use. Even when restricted to cuts that are facet defining for the corner relaxation, the number of potential candidates is still very large, especially for large instances. In this paper, we introduce a subset of intersection cuts based on the infinity norm that is very small, works for relaxations with an arbitrary number of rows and, unlike many subclasses studied in the literature, takes into account the entire data from the simplex tableau. We describe an algorithm for generating these inequalities and run extensive computational experiments in order to evaluate their practical effectiveness in real-world instances. We conclude that this subset of inequalities yields, in terms of gap closure, around 50% of the benefits of using all valid inequalities for the corner relaxation simultaneously, but at a small fraction of the computational cost, and with a very small number of cuts. Summary of Contribution: Cutting planes are one of the most important techniques used by modern mixed-integer linear programming solvers when solving a variety of challenging operations research problems. The paper advances the state of the art on general-purpose multirow intersection cuts by proposing a practical and computationally friendly method to generate them.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Lin Bao ◽  
Xiaoyan Sun ◽  
Yang Chen ◽  
Guangyi Man ◽  
Hui Shao

A novel algorithm, the restricted Boltzmann machine-assisted estimation of distribution algorithm, is proposed for solving computationally expensive optimization problems with discrete variables. First, the individuals are evaluated using the expensive fitness functions of the complex problems, and some dominant solutions are selected to construct the surrogate model. The restricted Boltzmann machine (RBM) is built and trained with the dominant solutions to implicitly extract the distributed representative information of the decision variables in the promising subset. The visible-layer probabilities of the RBM serve as the sampling probability model of the estimation of distribution algorithm (EDA) and are updated dynamically along with the dominant subsets. Second, based on the energy function of the RBM, a fitness surrogate is developed to approximate the expensive individual fitness evaluations and participates in the evolutionary process to reduce the computational cost. Finally, model management is developed to train and update the RBM model with newly found dominant solutions. A comparison of the proposed algorithm with several state-of-the-art surrogate-assisted evolutionary algorithms demonstrates that the proposed algorithm effectively and efficiently solves complex optimization problems at a lower computational cost.
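A minimal sketch of this loop is given below for a binary toy problem: dominant solutions train a small RBM by one-step contrastive divergence, new candidates are sampled through the visible-layer probabilities, and the RBM free energy serves as the cheap surrogate used to pre-screen candidates before the true (here, trivially cheap) evaluations. The toy fitness, hyperparameters, and simplified model management are assumptions for illustration only.

```python
# Minimal sketch of an RBM-assisted estimation of distribution algorithm
# for a binary problem (toy OneMax fitness stands in for an expensive model).
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_rbm(V, n_hidden=16, lr=0.05, epochs=50):
    """One-step contrastive divergence (CD-1) on binary data V (pop x dim)."""
    n_vis = V.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    b, c = np.zeros(n_vis), np.zeros(n_hidden)
    for _ in range(epochs):
        ph = sigmoid(V @ W + c)                        # hidden probabilities
        h = (rng.random(ph.shape) < ph).astype(float)  # sampled hidden states
        pv = sigmoid(h @ W.T + b)                      # reconstruction
        ph2 = sigmoid(pv @ W + c)
        W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
        b += lr * (V - pv).mean(axis=0)
        c += lr * (ph - ph2).mean(axis=0)
    return W, b, c

def sample_candidates(W, b, c, n):
    """Use the visible-layer probabilities as the EDA sampling model."""
    h = (rng.random((n, W.shape[1])) < 0.5).astype(float)
    pv = sigmoid(h @ W.T + b)
    return (rng.random(pv.shape) < pv).astype(float)

def free_energy(V, W, b, c):
    """RBM energy-based surrogate: lower free energy ~ more promising."""
    return -V @ b - np.sum(np.log1p(np.exp(V @ W + c)), axis=1)

expensive_fitness = lambda V: V.sum(axis=1)  # toy stand-in (OneMax), maximized

dim, pop = 30, 40
P = (rng.random((pop, dim)) < 0.5).astype(float)
for gen in range(10):
    fit = expensive_fitness(P)
    dominant = P[np.argsort(-fit)[: pop // 2]]         # keep the better half
    W, b, c = train_rbm(dominant)
    cand = sample_candidates(W, b, c, 4 * pop)
    surrogate = free_energy(cand, W, b, c)              # cheap pre-screening
    P = cand[np.argsort(surrogate)[:pop]]                # evaluate only the best-ranked
print("best toy fitness:", expensive_fitness(P).max())
```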


2021 ◽  
Author(s):  
◽  
Lukas Weih

High-energy astrophysics plays an increasingly important role in the understanding of our universe. On one hand, this is due to ground-breaking observations, like the gravitational-wave detections of the LIGO and Virgo network or the black-hole shadow observations of the EHT collaboration. On the other hand, the field of numerical relativity has reached a level of sophistication that allows for realistic simulations that include all four fundamental forces of nature. A prime example of how observations and theory complement each other can be seen in the studies following GW170817, the first detection of gravitational waves from a binary neutron-star merger. The same detection is also the chronological starting point of this Thesis. The plethora of information and constraints on nuclear physics derived from GW170817 in conjunction with theoretical computations will be presented in the first part of this Thesis. The second part goes beyond this detection and prepares for future observations in which the high-frequency postmerger signal will also become detectable. Specifically, signatures of a quark-hadron phase transition are discussed and the specific case of a delayed phase transition is analyzed in detail. Finally, the third part of this Thesis focuses on the inclusion of radiative transport in numerical astrophysics. In the context of binary neutron-star mergers, radiation in the form of neutrinos is crucial for realistic long-term simulations. Two methods are introduced for treating radiation: the approximate state-of-the-art two-moment method (M1) and the recently developed radiative Lattice-Boltzmann method. The latter promises to be more accurate than M1 at a comparable computational cost. Given that most methods for radiative transport are either inaccurate or computationally infeasible, the derivation of this new method represents a novel and possibly paradigm-changing contribution to an accurate inclusion of radiation in numerical astrophysics.


Author(s):  
Jose Carrillo ◽  
Shi Jin ◽  
Lei Li ◽  
Yuhua Zhu

We improve the recently introduced consensus-based optimization method proposed in [R. Pinnau, C. Totzeck, O. Tse and S. Martin, Math. Models Methods Appl. Sci., 27(01):183-204, 2017], which is a gradient-free optimization method for general nonconvex functions. We first replace the isotropic geometric Brownian motion by the component-wise one, thus removing the dimensionality dependence of the drift rate and making the method more competitive for high-dimensional optimization problems. Secondly, we utilize random mini-batch ideas to reduce the computational cost of calculating the weighted average toward which the individual particles tend to relax. For its mean-field limit, a nonlinear Fokker-Planck equation, we prove, in both time-continuous and semi-discrete settings, that the convergence of the method, which is exponential in time, is guaranteed with parameter constraints independent of the dimensionality. We also conduct numerical tests on high-dimensional problems to check the success rate of the method.
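The two modifications can be summarized in a few lines. The sketch below implements an Euler-Maruyama discretization of consensus-based optimization with component-wise (anisotropic) noise and a random mini-batch for the weighted consensus point; the parameter values and the Rastrigin test function are illustrative assumptions, not those of the paper.

```python
# Minimal sketch of component-wise consensus-based optimization (CBO) with a
# random mini-batch used to compute the weighted consensus point.
import numpy as np

rng = np.random.default_rng(3)

def rastrigin(X):
    # Classical nonconvex test function; global minimum 0 at the origin.
    return 10.0 * X.shape[1] + np.sum(X**2 - 10.0 * np.cos(2 * np.pi * X), axis=1)

def cbo(f, dim=20, n_particles=200, batch=40, lam=1.0, sigma=0.7, beta=30.0,
        dt=0.01, steps=2000):
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    x_star = X.mean(axis=0)
    for _ in range(steps):
        idx = rng.choice(n_particles, size=batch, replace=False)   # random mini-batch
        fX = f(X[idx])
        w = np.exp(-beta * (fX - fX.min()))                        # stabilized Gibbs weights
        x_star = (w[:, None] * X[idx]).sum(axis=0) / w.sum()       # weighted consensus point
        diff = X - x_star
        # Drift toward the consensus point plus component-wise (anisotropic) noise.
        X = X - lam * diff * dt + sigma * np.sqrt(dt) * diff * rng.standard_normal(X.shape)
    return x_star

x = cbo(rastrigin)
print("distance of consensus point from the true minimizer:", np.linalg.norm(x))
```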


Author(s):  
Tobias Leibner ◽  
Mario Ohlberger

In this contribution we derive and analyze a new numerical method for kinetic equations based on a variable transformation of the moment approximation. Classical minimum-entropy moment closures are a class of reduced models for kinetic equations that conserve many of the fundamental physical properties of solutions. However, their practical use is limited by their high computational cost, as an optimization problem has to be solved for every cell in the space-time grid. In addition, implementation of numerical solvers for these models is hampered by the fact that the optimization problems are only well-defined if the moment vectors stay within the realizable set. For the same reason, further reducing these models by, e.g., reduced-basis methods is not a simple task. Our new method overcomes these disadvantages of classical approaches. The transformation is performed on the semi-discretized level, which makes it applicable to a wide range of kinetic schemes and replaces the nonlinear optimization problems by inversion of a positive-definite Hessian matrix. As a result, the new scheme avoids the realizability-related problems. Moreover, a discrete entropy law can be enforced by modifying the time stepping scheme. Our numerical experiments demonstrate that our new method is often several times faster than the standard optimization-based scheme.

