Robust Well Placement Optimization Through Universal Trace Kriging with Adaptive Sampling

2021 ◽  
Author(s):  
Carlo Cristiano Stabile ◽  
Marco Barbiero ◽  
Giorgio Fighera ◽  
Laura Dovera

Abstract Optimizing well locations for a green field is critical to mitigate development risks. Performing such workflows with reservoir simulations is very challenging due to the huge computational cost. Proxy models can instead provide accurate estimates at a fraction of the computing time. This study presents an application of new-generation functional proxies to optimize the well locations in a real oil field with respect to the discounted oil production over all the different geological realizations. The proxies are built with Universal Trace Kriging and are functional in time, which allows oil flows to be discounted over the asset lifetime. They are trained on reservoir simulations run at randomly sampled well locations. Two proxies are created, one for a pessimistic model (P10) and one for a mid-case model (P50), to capture the geological uncertainties. The optimization step uses the Non-dominated Sorting Genetic Algorithm, with the discounted oil productions of the two proxies as objective functions. An adaptive approach was employed: the optimized points found in a first optimization were used to re-train the proxy models, and a second optimization run was performed. The methodology was applied to a real oil reservoir to optimize the location of four vertical production wells and compared against reference locations. 111 geological realizations were available, in which one relevant uncertainty is the presence of possible compartments. The decision space, represented by the horizontal translation vectors of each well, was sampled using Plackett-Burman and Latin-Hypercube designs. A first application produced a proxy with poor predictive quality. Redrawing the areas to avoid overlaps and to confine the decision space of each well within one compartment improved the quality. This suggests that the proxy's predictive ability deteriorates in the presence of highly non-linear responses caused by sealing faults or by wells interchanging positions. We then followed a two-step adaptive approach: a first optimization was performed and the resulting Pareto front was validated with reservoir simulations; to further improve the proxy quality in this region of the decision space, the validated Pareto-front points were added to the initial dataset to retrain the proxy and rerun the optimization. The final well locations were validated on all 111 realizations with reservoir simulations and resulted in an overall increase of the discounted production of about 5% compared to the reference development strategy. The adaptive approach, combined with the functional proxies, proved successful in improving the workflow by purposefully enriching the training set with data points that enhance the effectiveness of the optimization step. Each optimization run relies on about 1 million proxy evaluations, which required negligible computational time. The same workflow carried out with standard reservoir simulations would have been practically infeasible.
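The adaptive proxy-and-optimize loop described in this abstract can be summarized in a short sketch. The snippet below is only an illustration under stated assumptions: a scikit-learn Gaussian process stands in for the Universal Trace Kriging proxy, pymoo's NSGA2 plays the role of the genetic optimizer, and run_reservoir_simulation is a hypothetical wrapper around the flow simulator.

```python
# Hedged sketch of a 2-step adaptive proxy-based well-placement loop.
# A Gaussian process stands in for the Universal Trace Kriging proxy and
# pymoo's NSGA2 for the genetic optimizer; run_reservoir_simulation() is
# a hypothetical helper, not part of the authors' workflow.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

N_WELLS = 4                      # four vertical producers
N_VAR = 2 * N_WELLS              # horizontal translation (dx, dy) per well

def run_reservoir_simulation(x, model):
    """Hypothetical wrapper returning discounted oil production for one
    geological model ('P10' or 'P50') at well translations x."""
    raise NotImplementedError

class ProxyProblem(Problem):
    """Maximize the two proxy-predicted discounted productions
    (negated, since pymoo minimizes)."""
    def __init__(self, proxy_p10, proxy_p50, xl, xu):
        super().__init__(n_var=N_VAR, n_obj=2, xl=xl, xu=xu)
        self.p10, self.p50 = proxy_p10, proxy_p50

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.column_stack([-self.p10.predict(x),
                                    -self.p50.predict(x)])

def adaptive_optimization(X0, xl, xu, n_steps=2):
    # X0: initial design, e.g. Plackett-Burman / Latin-Hypercube samples
    X = X0.copy()
    y10 = np.array([run_reservoir_simulation(x, "P10") for x in X])
    y50 = np.array([run_reservoir_simulation(x, "P50") for x in X])
    for _ in range(n_steps):
        # (re)train the two proxies on all simulated points
        p10 = GaussianProcessRegressor().fit(X, y10)
        p50 = GaussianProcessRegressor().fit(X, y50)
        res = minimize(ProxyProblem(p10, p50, xl, xu),
                       NSGA2(pop_size=100), ("n_gen", 200), verbose=False)
        # validate the Pareto front with true simulations and enrich the set
        X = np.vstack([X, res.X])
        y10 = np.append(y10, [run_reservoir_simulation(x, "P10") for x in res.X])
        y50 = np.append(y50, [run_reservoir_simulation(x, "P50") for x in res.X])
    return res.X
```

In this arrangement each of the roughly one million proxy evaluations per optimization run replaces a full reservoir simulation, which is where the computational saving comes from.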

2016 ◽  
Author(s):  
Guangliang Fu ◽  
Hai-Xiang Lin ◽  
Arnold Heemink ◽  
Arjo Segers ◽  
Nils van Velzen ◽  
...  

Abstract. In this study, we investigate strategies for accelerating data assimilation on volcanic ash forecasts. Based on an evaluation of the computational time, the analysis step of the assimilation turns out to be the most expensive part. After a careful study of the characteristics of the ensemble ash state, we propose a mask-state algorithm which records the sparsity information of the full ensemble state matrix and transforms the full matrix into a relatively small one. This reduces the computational cost in the analysis step. Experimental results show that the mask-state algorithm significantly speeds up the expensive analysis step. Subsequently, the total amount of computing time for volcanic ash data assimilation is reduced to an acceptable level, which is important for providing timely and accurate aviation advice. The mask-state algorithm is generic and can thus be embedded in any ensemble-based data assimilation framework. Moreover, ensemble-based data assimilation with the mask-state algorithm is promising and flexible, because it implements exactly the standard data assimilation without any approximation and achieves satisfactory performance without any change to the full model.
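A minimal numpy sketch of the masking idea follows: keep only the state entries that are nonzero in at least one ensemble member, run the analysis on the reduced matrix, and scatter the result back. The enkf_analysis helper and the handling of the observation operator are placeholders for whatever ensemble analysis scheme is in use, not the authors' implementation.

```python
# Hedged sketch of the mask-state idea: exploit the sparsity of the
# ensemble ash state by restricting the analysis to rows that carry
# information.  enkf_analysis() is a placeholder for any standard
# ensemble analysis step.
import numpy as np

def enkf_analysis(X_small, obs, obs_operator_small):
    """Placeholder for a standard ensemble analysis on the reduced state."""
    raise NotImplementedError

def masked_analysis(X, obs, obs_operator):
    # X has shape (n_state, n_ensemble); volcanic-ash states are very sparse
    mask = np.any(X != 0.0, axis=1)          # rows nonzero in some member
    X_small = X[mask, :]                     # much smaller state matrix
    H_small = obs_operator[:, mask]          # restrict the observation operator
    X_small_a = enkf_analysis(X_small, obs, H_small)
    X_a = X.copy()
    X_a[mask, :] = X_small_a                 # map the analysis back
    return X_a
```

Entries that are zero in every member have zero ensemble spread and therefore receive no analysis increment, which is consistent with the abstract's claim that the standard assimilation is implemented exactly.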


2017 ◽  
Vol 10 (4) ◽  
pp. 1751-1766 ◽  
Author(s):  
Guangliang Fu ◽  
Hai Xiang Lin ◽  
Arnold Heemink ◽  
Sha Lu ◽  
Arjo Segers ◽  
...  

Abstract. In this study, we investigate a strategy to accelerate the data assimilation (DA) algorithm. Based on an evaluation of the computational time, the analysis step of the assimilation turns out to be the most expensive part. After a study of the characteristics of the ensemble ash state, we propose a mask-state algorithm which records the sparsity information of the full ensemble state matrix and transforms the full matrix into a relatively small one. This reduces the computational cost in the analysis step. Experimental results show that the mask-state algorithm significantly speeds up the analysis step. Subsequently, the total amount of computing time for volcanic ash DA is reduced to an acceptable level. The mask-state algorithm is generic and can thus be embedded in any ensemble-based DA framework. Moreover, ensemble-based DA with the mask-state algorithm is promising and flexible, because it implements exactly the standard DA without any approximation and achieves satisfactory performance without any change to the full model.


Author(s):  
Yan Shi ◽  
Zhenzhou Lu ◽  
Ruyang He

Aiming at accurately and efficiently estimating the time-dependent failure probability, a novel time-dependent reliability analysis method based on an active-learning Kriging model is proposed. Although active surrogate-model methods have been used to estimate the time-dependent failure probability, doing so with less computational time remains an issue, because iteratively screening all candidate samples with the active surrogate model is time-consuming. This article addresses the issue by establishing an optimization strategy to search for the new training samples used to update the surrogate model. The optimization strategy is performed within a newly proposed adaptive sampling region. The adaptive sampling region is adjusted by the current surrogate model so as to provide a suitable candidate region of the input variables. The proposed method employs the optimization strategy to select the optimal sample as the new training point in each iteration, and it does not need to predict the values of all candidate samples at every time instant in each iterative step. Several examples illustrate the accuracy and efficiency of the proposed method for estimating the time-dependent failure probability while simultaneously considering computational cost and precision.
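A minimal sketch of such an active-learning loop is given below, assuming a Gaussian-process surrogate over the augmented input (x, t). The U-function from AK-MCS serves as a generic learning criterion, and a simple box shrinkage around the current most critical point stands in for the paper's adaptive sampling region; both choices, as well as the g placeholder, are illustrative assumptions rather than the authors' formulation.

```python
# Hedged sketch of an active-learning Kriging loop for a time-dependent
# limit state g(x, t).  The learning criterion and the adaptive region
# below are generic stand-ins, not the paper's specific definitions.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor

def g(xt):
    """Placeholder limit-state function over the augmented input (x, t)."""
    raise NotImplementedError

def active_learning_surrogate(bounds, n_init=12, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + rng.random((n_init, len(bounds))) * (hi - lo)
    y = np.array([g(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

        def u_learning(x):
            # small U = close to the limit state and/or highly uncertain
            m, s = gp.predict(x.reshape(1, -1), return_std=True)
            return abs(m[0]) / max(s[0], 1e-12)

        # crude adaptive region: a box of half the domain width centred on
        # the training point currently closest to the limit state
        center = X[np.argmin(np.abs(y))]
        width = 0.5 * (hi - lo)
        region = list(zip(np.maximum(lo, center - width),
                          np.minimum(hi, center + width)))
        best = differential_evolution(u_learning, region, seed=seed, maxiter=50)
        X = np.vstack([X, best.x])
        y = np.append(y, g(best.x))
    # final surrogate, to be used e.g. for Monte Carlo estimation of Pf(t)
    return GaussianProcessRegressor(normalize_y=True).fit(X, y)
```

The point of selecting the next sample by optimization rather than by screening a fixed candidate pool is that no prediction is needed over all candidates at every time instant, which is where the claimed saving comes from.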


2021 ◽  
Author(s):  
Thiago Dias dos Santos ◽  
Mathieu Morlighem ◽  
Douglas Brinkerhoff

Abstract. Numerical simulations of ice sheets rely on the momentum balance to determine how ice velocities change as the geometry of the system evolves. Ice is generally assumed to follow a Stokes flow with a nonlinear viscosity. Several approximations have been proposed in order to lower the computational cost of a full-Stokes stress balance. A popular option is the Blatter-Pattyn or Higher-Order model (HO), which consists of a three-dimensional set of equations that solves the horizontal velocities only. However, it still remains computationally expensive for long transient simulations. Here we present a depth-integrated formulation of the HO model, which can be solved on a two-dimensional mesh in the horizontal plane. We employ a specific polynomial function to describe the vertical variation of the velocity, which allows the vertical dimension to be integrated semi-analytically. We assess the performance of this MOno-Layer Higher-Order model (MOLHO) in computing ice velocities and simulating grounding line dynamics on standard benchmarks (ISMIP-HOM and MISMIP3D). We compare MOLHO results to the ones obtained with the original three-dimensional HO model, and we also compare the time performance of both models in time-dependent runs. Our results show that the ice velocities and grounding line positions obtained with MOLHO are in very good agreement with the ones from HO. In terms of computing time, MOLHO requires less than 10 % of the computational time of a typical HO model for the same simulations. These results suggest that the MOno-Layer Higher-Order formulation provides improved computational performance with accuracy comparable to the HO formulation, which opens the door to Higher-Order paleo simulations.
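For illustration, one common way to write such a depth-integrated ansatz is shown below; this generic shape-function form, with n Glen's flow-law exponent and ζ = (s − z)/H the normalized depth below the surface s of a column of thickness H, is an assumption of this sketch and may differ in detail from the basis functions actually used in MOLHO.

```latex
% Illustrative vertical ansatz: a depth-independent basal part plus a
% polynomial shear part whose depth integral is available in closed form.
\begin{equation}
  \mathbf{u}(x,y,z) = \mathbf{u}_b(x,y)
    + \mathbf{u}_{sh}(x,y)\,\bigl(1 - \zeta^{\,n+1}\bigr),
  \qquad \zeta = \frac{s - z}{H},
\end{equation}
\begin{equation}
  \frac{1}{H}\int_{b}^{s}\bigl(1 - \zeta^{\,n+1}\bigr)\,\mathrm{d}z
  = \frac{n+1}{n+2},
\end{equation}
% so the momentum balance can be assembled on a 2-D horizontal mesh
% without a vertical discretization.
```

Because the vertical integral of the shape function is known in closed form, the unknowns reduce to two horizontal fields per velocity component, which is what removes the need for a three-dimensional mesh.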


Author(s):  
Anton van Beek ◽  
Siyu Tao ◽  
Wei Chen

Abstract We consider the problem of adaptive sampling for global emulation (metamodeling) with a finite budget. Conventionally, this problem is tackled through a greedy sampling strategy, which is optimal for taking either a single sample or a handful of samples at a single sampling stage but neglects the influence of future samples. This raises the question: “Can we optimize the number of sampling stages as well as the number of samples at each stage?” The proposed thrifty adaptive batch sampling (TABS) approach addresses this challenge by adopting a normative decision-making perspective to determine the total number of required samples and maximize a multistage reward function with respect to the total number of stages and the batch size at each stage. To mitigate TABS’ numerical complexity, we propose two heuristic-based strategies that significantly reduce computational time with minimal reduction of reward optimality. Through numerical examples, TABS is shown to outperform, or at least be comparable to, conventional greedy sampling techniques. In this fashion, TABS provides modelers with a flexible adaptive sampling tool for global emulation, effectively reducing computational cost while maintaining prediction accuracy.
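The core planning question, how many stages and how many samples per stage, can be illustrated with a toy enumeration under a fixed budget. The reward model below (diminishing accuracy gain per sample, a bonus for having more adaptive feedback stages, a fixed overhead per stage) is purely an assumption for illustration and is not the multistage reward function used by TABS.

```python
# Hedged sketch of batch-sampling planning: under a fixed evaluation
# budget, enumerate candidate (number of stages, batch size) plans and
# keep the one maximizing an assumed multistage reward.
import math

def plan_reward(n_stages, batch_size, stage_cost=0.3):
    n_samples = n_stages * batch_size
    # accuracy improves with the number of samples and, because each stage
    # can react to what was learned so far, with the number of stages
    accuracy_gain = math.log1p(n_samples) * (1.0 - 0.5 ** n_stages)
    return accuracy_gain - stage_cost * n_stages

def best_plan(total_budget):
    best = None
    for n_stages in range(1, total_budget + 1):
        batch_size = total_budget // n_stages
        if batch_size == 0:
            continue
        r = plan_reward(n_stages, batch_size)
        if best is None or r > best[0]:
            best = (r, n_stages, batch_size)
    return best  # (reward, n_stages, batch_size)

print(best_plan(40))   # trades off adaptivity against per-stage overhead
```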


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 19
Author(s):  
Saúl Zapotecas-Martínez ◽  
Abel García-Nájera ◽  
Adriana Menchaca-Méndez

One of the major limitations of evolutionary algorithms based on the Lebesgue measure for multi-objective optimization is the computational cost required to approximate the Pareto front of a problem. Nonetheless, the Pareto-compliance property of the Lebesgue measure makes it one of the most investigated indicators in the design of indicator-based evolutionary algorithms (IBEAs). The main deficiency of IBEAs that use the Lebesgue measure is their computational cost, which increases with the number of objectives of the problem. In this regard, the investigation presented in this paper introduces an evolutionary algorithm based on the Lebesgue measure to deal with box-constrained continuous multi-objective optimization problems. The proposed algorithm implicitly exploits the regularity property of continuous multi-objective optimization problems, which has been suggested to be effective when solving continuous problems with rough Pareto sets. On the other hand, the survival selection mechanism considers the local property of the Lebesgue measure, thus reducing the computational time of our algorithmic approach. The resulting indicator-based evolutionary algorithm is examined and compared against three state-of-the-art multi-objective evolutionary algorithms based on the Lebesgue measure. In addition, we validate its performance on a set of artificial test problems with various characteristics, including multimodality, separability, and various Pareto-front shapes incorporating concavity, convexity, and discontinuity. For a more exhaustive study, the proposed algorithm is evaluated on three real-world applications having four, five, and seven objective functions whose properties are unknown. We show the high competitiveness of our proposed approach, which, in many cases, improved upon the state-of-the-art indicator-based evolutionary algorithms on the multi-objective problems adopted in our investigation.
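Since the Lebesgue measure here is the hypervolume indicator, a small sketch helps make the survival-selection idea concrete. The two-objective contribution formula below is the generic textbook computation for a mutually non-dominated set under minimization; it is not the specific local scheme of the proposed algorithm.

```python
# Hedged sketch of a Lebesgue-measure (hypervolume) based survival step in
# two objectives (minimization): drop the individual with the smallest
# exclusive hypervolume contribution until the population fits.
import numpy as np

def hv_contributions_2d(F, ref):
    """Exclusive hypervolume contributions of mutually non-dominated 2-D points."""
    order = np.argsort(F[:, 0])              # sort by f1; f2 is then decreasing
    S = F[order]
    n = len(S)
    contrib = np.empty(n)
    for i in range(n):
        right_f1 = S[i + 1, 0] if i + 1 < n else ref[0]
        upper_f2 = S[i - 1, 1] if i > 0 else ref[1]
        contrib[i] = (right_f1 - S[i, 0]) * (upper_f2 - S[i, 1])
    out = np.empty(n)
    out[order] = contrib                      # back to the original ordering
    return out

def survival_select(F, k, ref):
    """Iteratively remove the least-contributing point until k points remain."""
    idx = list(range(len(F)))
    while len(idx) > k:
        c = hv_contributions_2d(F[idx], ref)
        idx.pop(int(np.argmin(c)))
    return idx
```

Greedily removing the smallest contributor is the usual way to reduce a front to a fixed size; the cost of computing contributions is exactly what grows quickly with the number of objectives, which motivates the local variant described in the abstract.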


2021 ◽  
Author(s):  
Changkun Wu ◽  
Ke Liang ◽  
Hailang Sang ◽  
Yu Ye ◽  
Mingzhang Pan

Abstract In this paper, a Pareto-front (PF)-based sampling algorithm, the PF-Voronoi sampling method, is proposed to solve medium-scale, computationally intensive multi-objective problems. The Voronoi diagram is introduced to classify the region containing PF prediction points into Pareto front cells (PFCs). Valid PFCs are screened according to the maximum crowding criterion (MCC), the maximum LOO error criterion (MLEC), and the maximum mean MSE criterion (MMMSEC). Sampling points are selected among the valid PFCs based on the Euclidean distance. The PF-Voronoi sampling method is applied to coupled Kriging and NSGA-II models, and its validity is verified on the ZDT test cases. The results show that the MCC criterion helps to improve the distribution diversity of the PF. The MLEC and MMMSEC criteria reduce the number of training samples by 38.9% and 21.7%, respectively. The computational cost of the algorithm is reduced by more than 44.2% compared to the EHVIMOPSO and SAO-MOEA algorithms. The algorithm can be applied to multidisciplinary, multi-objective, and computationally intensive complex systems.
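One ingredient that is easy to sketch is the crowding-style screening of predicted Pareto-front points: favour the most isolated predictions as the next expensive evaluations. The snippet below is a simplified stand-in for the paper's Voronoi-cell criteria, using only nearest-neighbour distances.

```python
# Hedged sketch of PF-based sampling: among the surrogate's predicted
# Pareto-front points, pick the most isolated ones (largest nearest-
# neighbour distance) as new expensive evaluations.
import numpy as np
from scipy.spatial import cKDTree

def select_sparse_pf_points(pf_points, n_new):
    """pf_points: (n, d) array of predicted Pareto-front points; returns the
    indices of the n_new most isolated points."""
    tree = cKDTree(pf_points)
    # distance to the nearest *other* point (k=2: the first neighbour is itself)
    dist, _ = tree.query(pf_points, k=2)
    isolation = dist[:, 1]
    return np.argsort(isolation)[-n_new:]

# usage: idx = select_sparse_pf_points(predicted_front, n_new=5)
```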


2020 ◽  
Author(s):  
Jingbai Li ◽  
Patrick Reiser ◽  
André Eberhard ◽  
Pascal Friederich ◽  
Steven Lopez

Photochemical reactions are being increasingly used to construct complex molecular architectures under mild and straightforward reaction conditions. Computational techniques are increasingly important for understanding the reactivities and chemoselectivities of photochemical isomerization reactions because they offer molecular bonding information along the excited-state photodynamics. These photodynamics simulations are resource-intensive and are typically limited to 1–10 picoseconds and 1,000 trajectories due to their high computational cost. Most organic photochemical reactions have excited-state lifetimes exceeding 1 picosecond, which places them outside the reach of such computational studies. Westermayr et al. demonstrated that a machine learning approach could significantly lengthen photodynamics simulation times for a model system, the methylenimmonium cation (CH2NH2+).

We have developed a Python-based code, Python Rapid Artificial Intelligence Ab Initio Molecular Dynamics (PyRAI2MD), to accomplish the unprecedented 10 ns cis-trans photodynamics of trans-hexafluoro-2-butene (CF3–CH=CH–CF3) in 3.5 days. The same simulation would take approximately 58 years with ground-truth multiconfigurational dynamics. We proposed an innovative scheme combining Wigner sampling, geometrical interpolations, and short-time quantum chemical trajectories to effectively sample the initial data, facilitating adaptive sampling to generate an informative and data-efficient training set of 6,232 data points. Our neural networks achieved chemical accuracy (mean absolute error of 0.032 eV). Our 4,814 trajectories reproduced the S1 half-life (60.5 fs) and the photochemical product ratio (trans:cis = 2.3:1), and autonomously discovered a pathway towards a carbene. The neural networks have also shown the capability of generalizing the full potential energy surface with chemically incomplete data (trans → cis but not cis → trans pathways), which may enable future automated photochemical reaction discoveries.
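Adaptive sampling for this kind of ML-driven dynamics typically boils down to flagging geometries where a committee of neural networks disagrees and sending them back to the quantum-chemistry reference. The sketch below illustrates only that selection step; the models are assumed to expose a predict method, and the 0.03 eV threshold is an illustrative value loosely tied to the reported 0.032 eV accuracy, not a parameter of PyRAI2MD.

```python
# Hedged sketch of committee-based adaptive sampling for an ML potential:
# geometries with large ensemble disagreement are selected for ab initio
# recomputation and added to the training set before retraining.
import numpy as np

def committee_disagreement(committee, geoms):
    """Standard deviation of predicted energies across an ensemble of models.

    committee: sequence of models, each with .predict(geoms) -> (n,) energies
    geoms:     (n, n_features) array of candidate geometries
    """
    preds = np.stack([model.predict(geoms) for model in committee])  # (m, n)
    return preds.std(axis=0)

def select_for_retraining(committee, geoms, threshold_ev=0.03):
    """Return the geometries whose committee disagreement exceeds the
    threshold; these get quantum-chemistry labels and enter the next
    training round."""
    return geoms[committee_disagreement(committee, geoms) > threshold_ev]
```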


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research work develops a new method to detect forgery in images by combining the Wavelet transform and modified Zernike Moments (MZMs), in which the features are defined from more pixels than in traditional Zernike Moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors. The more pixels are considered, the more informative the extracted features. Lexicographical sorting and correlation-coefficient computation on the feature vectors are the next steps to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase detection accuracy. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a Euclidean-distance constraint. Comparison results between the proposed method and related ones demonstrate the feasibility and efficiency of the proposed algorithm.
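The pipeline can be sketched end to end in a few lines. In the snippet below, pywt supplies the one-level DWT and the block_features placeholder stands in for the modified Zernike moments; the block size, correlation threshold, and minimum-shift constraint are illustrative values rather than the paper's settings.

```python
# Hedged sketch of a DWT + block-feature copy-move detection pipeline.
# block_features() is a placeholder for the modified Zernike moments.
import numpy as np
import pywt

BLOCK = 8           # illustrative block size
MIN_SHIFT = 16      # Euclidean-distance constraint between matched blocks
CORR_THRESHOLD = 0.98

def block_features(block):
    """Placeholder for the modified Zernike-moment feature vector."""
    raise NotImplementedError

def detect_copy_move(gray_image):
    LL, _ = pywt.dwt2(gray_image.astype(float), 'haar')   # approximation sub-band
    h, w = LL.shape
    feats, coords = [], []
    for i in range(h - BLOCK + 1):
        for j in range(w - BLOCK + 1):
            feats.append(block_features(LL[i:i + BLOCK, j:j + BLOCK]))
            coords.append((i, j))
    feats, coords = np.array(feats), np.array(coords)
    order = np.lexsort(feats.T[::-1])                      # lexicographic sort
    matches = []
    for a, b in zip(order[:-1], order[1:]):                # compare sorted neighbours
        corr = np.corrcoef(feats[a], feats[b])[0, 1]
        shift = np.linalg.norm(coords[a] - coords[b])
        if corr > CORR_THRESHOLD and shift > MIN_SHIFT:
            matches.append((tuple(coords[a]), tuple(coords[b])))
    return matches
```

Working on the half-size LL sub-band quarters the number of blocks to compare, which is where the claimed speed-up over full-resolution matching comes from.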


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Israel F. Araujo ◽  
Daniel K. Park ◽  
Francesco Petruccione ◽  
Adenilton J. da Silva

Abstract Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms to create arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage, using a quantum circuit with polylogarithmic depth and entangled information in ancillary qubits. The results show that we can efficiently load data into quantum devices using a divide-and-conquer strategy that exchanges computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect that this new loading strategy allows the quantum speedup of tasks that require loading a significant volume of information into quantum devices.
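The quantum circuit itself is where the depth/ancilla trade-off lives, but the classical preprocessing shared by tree-based amplitude-loading schemes is easy to sketch: split the amplitude vector recursively and record one rotation angle per tree node. The snippet below assumes real, non-negative amplitudes for simplicity and is not the paper's circuit construction.

```python
# Hedged sketch of the classical preprocessing for tree-based amplitude
# encoding: build the binary tree of norms and derive one RY angle per node.
import numpy as np

def loading_angles(amplitudes):
    """Return a list of per-level angle arrays for a length-2^n vector,
    root level first."""
    a = np.asarray(amplitudes, dtype=float)
    a = a / np.linalg.norm(a)
    levels = []
    norms = np.abs(a)
    while norms.size > 1:
        left, right = norms[0::2], norms[1::2]
        parent = np.sqrt(left**2 + right**2)
        # cos(theta/2) = |left| / |parent|; define theta = 0 for empty branches
        ratio = np.where(parent > 0, left / np.maximum(parent, 1e-300), 1.0)
        levels.append(2.0 * np.arccos(np.clip(ratio, 0.0, 1.0)))
        norms = parent
    return levels[::-1]

# usage: angles = loading_angles([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
```

In the divide-and-conquer setting these angles are applied in parallel across ancillary qubits rather than sequentially, which is how circuit depth is traded for circuit width.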

