expensive function
Recently Published Documents

TOTAL DOCUMENTS: 12 (FIVE YEARS: 7)
H-INDEX: 3 (FIVE YEARS: 1)

Author(s): Bhupinder Singh Saini, Michael Emmerich, Atanu Mazumdar, Bekir Afsar, Babooshka Shavazipour, ...

We introduce novel concepts to solve multiobjective optimization problems involving (computationally) expensive function evaluations and propose a new interactive method called O-NAUTILUS. It combines ideas of trade-off-free search and navigation (where a decision maker sees changes in objective function values in real time) and extends the NAUTILUS Navigator method to surrogate-assisted optimization. Importantly, it utilizes uncertainty quantification from surrogate models like Kriging, or properties like Lipschitz continuity, to approximate a so-called optimistic Pareto optimal set. This enables the decision maker to search in unexplored parts of the Pareto optimal set while requiring only a small number of expensive function evaluations. We share the implementation of O-NAUTILUS as open source code. Thanks to its graphical user interface, a decision maker can see in real time how the preferences provided affect the direction of the search. We demonstrate the potential and benefits of O-NAUTILUS with a problem related to the design of vehicles.
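As a rough illustration of the "optimistic" idea, the sketch below (Python, with toy objectives; the confidence coefficient BETA and all names are assumptions for illustration, not taken from the O-NAUTILUS code) lower-bounds each objective by its Kriging mean minus a multiple of the predictive standard deviation and keeps the non-dominated candidates:

```python
# Minimal sketch: an "optimistic" objective estimate from Kriging surrogates.
# BETA and the toy objectives are illustrative assumptions, not the method's
# actual implementation (which is available as open source).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(10, 1))          # a few expensive evaluations
F = np.hstack([np.sin(6 * X), np.cos(6 * X)])    # two toy objectives (minimize)

gps = [GaussianProcessRegressor(normalize_y=True).fit(X, F[:, i]) for i in range(2)]

X_cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
BETA = 2.0                                        # optimism level (assumption)
opt = np.column_stack([
    m - BETA * s for m, s in (gp.predict(X_cand, return_std=True) for gp in gps)
])

# Keep the non-dominated "optimistic" points: an approximation of the
# optimistic Pareto optimal set the decision maker can navigate toward.
is_nd = np.ones(len(opt), dtype=bool)
for i, p in enumerate(opt):
    is_nd[i] = not np.any(np.all(opt <= p, axis=1) & np.any(opt < p, axis=1))
print(f"{is_nd.sum()} optimistic non-dominated candidates")
```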


2021, Vol. 72, pp. 667-715
Author(s): Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa

We consider the problem of black-box multi-objective optimization (MOO) using expensive function evaluations (also referred to as experiments), where the goal is to approximate the true Pareto set of solutions by minimizing the total resource cost of experiments. For example, in hardware design optimization, we need to find the designs that trade off performance, energy, and area overhead using expensive computational simulations. The key challenge is to select the sequence of experiments to uncover high-quality solutions using minimal resources. In this paper, we propose a general framework for solving MOO problems based on the principle of output space entropy (OSE) search: select the experiment that maximizes the information gained per unit resource cost about the true Pareto front. We appropriately instantiate the principle of OSE search to derive efficient algorithms for the following four MOO problem settings: 1) the most basic single-fidelity setting, where experiments are expensive and accurate; 2) handling black-box constraints which cannot be evaluated without performing experiments; 3) the discrete multi-fidelity setting, where experiments can vary in the amount of resources consumed and their evaluation accuracy; and 4) the continuous-fidelity setting, where continuous function approximations result in a huge space of experiments. Experiments on diverse synthetic and real-world benchmarks show that our OSE-search-based algorithms improve over state-of-the-art methods in terms of both computational efficiency and accuracy of MOO solutions.
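The selection principle itself is compact. The sketch below is a deliberately simplified single-objective stand-in: true OSE search measures entropy over the posterior Pareto front, whereas here the information gain is approximated by the GP predictive variance, and the resource-cost model is invented for illustration:

```python
# Sketch of the selection rule only: pick the experiment maximizing
# (estimated) information gain per unit resource cost.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(8, 1))
y = (X ** 2).ravel() + 0.1 * rng.standard_normal(8)   # one toy objective

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
X_cand = np.linspace(-2, 2, 100).reshape(-1, 1)
_, sigma = gp.predict(X_cand, return_std=True)

gain = sigma ** 2                      # stand-in for output-space info gain
cost = 1.0 + np.abs(X_cand).ravel()    # hypothetical resource-cost model
x_next = X_cand[np.argmax(gain / cost)]
print("next experiment:", x_next)
```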


2020, Vol. 9 (5), pp. 2020-2029
Author(s): Che Munira Che Razali, Shahrum Shah Abdullah, Amir Parnianifard, Amrul Faruq

The widespread use of computer experiments for design optimization has made reducing computational cost, improving accuracy, mitigating the “curse of dimensionality”, and avoiding expensive function evaluations even more important. Metamodeling, also known as surrogate modeling, can approximate the actual simulation model while allowing much faster execution, making it a useful way to mitigate these problems. This paper discusses two well-known metamodeling techniques, kriging and radial basis functions (RBF), based on widely used algorithms from previous work in modern engineering design optimization. An integral part of metamodeling is the method used to sample new data from the actual simulation model. Sampling new data for a metamodel requires finding the location (or value) of one or more new data points such that the accuracy of the metamodel increases as much as possible after the sampling process. This paper discusses the challenges of adaptive sampling in metamodeling and proposes an ensemble non-homogeneous method for best-model voting to obtain new sample points.
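A minimal sketch of committee-style adaptive sampling with the two metamodels named above: fit kriging and an RBF interpolant to the same data and sample next where they disagree most. This illustrates only the voting idea; the paper's ensemble non-homogeneous scheme is not reproduced here, and the toy simulation model is an assumption:

```python
# Committee-style adaptive sampling: query where the two metamodels disagree.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive(x):                       # stand-in for the simulation model
    return np.sin(3 * x).ravel() + 0.5 * x.ravel() ** 2

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(6, 1))
y = expensive(X)

kriging = GaussianProcessRegressor(normalize_y=True).fit(X, y)
rbf = RBFInterpolator(X, y)

X_cand = np.linspace(-2, 2, 400).reshape(-1, 1)
disagreement = np.abs(kriging.predict(X_cand) - rbf(X_cand))
x_new = X_cand[np.argmax(disagreement)]          # next expensive sample
print("sample next at:", x_new)
```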


2020, Vol. 34 (06), pp. 10044-10052
Author(s): Syrine Belakaria, Aryan Deshwal, Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa

We consider the problem of multi-objective (MO) black-box optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions while minimizing the number of function evaluations. For example, in hardware design optimization, we need to find the designs that trade off performance, energy, and area overhead using expensive simulations. We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation to solve this problem. The selection method of USeMO consists of solving a cheap MO optimization problem via surrogate models of the true functions to identify the most promising candidates and picking the best candidate based on a measure of uncertainty. We also provide theoretical analysis to characterize the efficacy of our approach. Our experiments on several synthetic and six diverse real-world benchmark problems show that USeMO consistently outperforms the state-of-the-art algorithms.
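A sketch of that two-step selection under stated assumptions (toy objectives, a candidate grid in place of a cheap MO solver, and uncertainty measured as the volume of the predictive-sigma hyperrectangle):

```python
# Step 1: non-dominated candidates of the cheap surrogate-mean problem.
# Step 2: among them, evaluate the input with the largest uncertainty volume.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(10, 1))
F = np.hstack([np.sin(6 * X), np.cos(6 * X)])     # two toy objectives

gps = [GaussianProcessRegressor(normalize_y=True).fit(X, F[:, i]) for i in range(2)]
X_cand = np.linspace(0, 1, 300).reshape(-1, 1)
preds = [gp.predict(X_cand, return_std=True) for gp in gps]
mu = np.column_stack([m for m, _ in preds])
sigma = np.column_stack([s for _, s in preds])

nd = np.array([not np.any(np.all(mu <= p, axis=1) & np.any(mu < p, axis=1))
               for p in mu])
idx = np.flatnonzero(nd)[np.argmax(sigma[nd].prod(axis=1))]
print("evaluate next:", X_cand[idx])
```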


2020, Vol. 19 (3), pp. 97-102
Author(s): Fernando Mesa, Carlos A. Ramírez, José José Barba-Ortega

This contribution presents optimal control of a double extractor induction motor using a variational formalism. The optimality criterion is subject to the non-stationary, reduced-order dynamic equations of the motor model (DSIM). As is well known, in this model the state variables are the rotor flux and the motor speed. For non-stationary and stationary states, optimal control theory yields an expensive cost function given as a weighted contribution of the DSIM dynamics. To obtain the lowest-energy rotor flux trajectory, the idea is to minimize this function subject to the two dynamic equations for motor speed and rotor flux. The problem is solved using the Hamilton-Jacobi-Bellman equation, and a time-dependent solution for the rotor flux is determined analytically.
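For reference, the generic form of the Hamilton-Jacobi-Bellman equation used in this kind of minimum-energy control problem (the specific DSIM cost and dynamics are the paper's and are not reproduced here) is

$$ -\frac{\partial V}{\partial t}(x,t) = \min_{u}\left\{ L(x,u) + \nabla_x V(x,t)\cdot f(x,u) \right\}, \qquad V(x,T) = \Phi(x(T)), $$

where the state $x$ collects the rotor flux and motor speed, $\dot{x} = f(x,u)$ is the reduced-order dynamics, $L$ is the weighted running cost being minimized, and $V$ is the value function whose minimizing control $u$ gives the optimal trajectory.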


2019, Vol. 38 (7), pp. 769-792
Author(s): Gilad Francis, Lionel Ott, Roman Marchant, Fabio Ramos

We propose a novel holistic approach to safe autonomous exploration and map building based on constrained Bayesian optimization. This method finds optimal continuous paths, rather than discrete sensing locations, that inherently satisfy motion and safety constraints. Evaluating both the objective and constraint functions requires forward simulation of expected observations. As such, evaluations are costly, and therefore the Bayesian optimizer proposes only paths that are likely to yield optimal results and satisfy the constraints with high confidence. By balancing the reward and risk associated with each path, the optimizer minimizes the number of expensive function evaluations. We demonstrate the effectiveness of our approach in a series of experiments, both in simulation and with a real ground robot, and provide comparisons with other exploration techniques. The experimental results show that our method provides robust and consistent performance in all tests and performs as well as or better than the state of the art.
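The acquisition idea generalizes beyond robotics. A minimal sketch, assuming GP surrogates for one reward and one safety constraint over a one-dimensional path parameter (all toy assumptions): weight expected improvement by the probability that the modeled constraint holds, so only candidates likely to be both rewarding and safe are proposed:

```python
# Constraint-weighted expected improvement: EI(x) * P(constraint satisfied).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(8, 1))
reward = np.sin(5 * X).ravel()                 # objective (maximize)
safety = np.cos(5 * X).ravel()                 # constraint: safe if >= 0

gp_f = GaussianProcessRegressor(normalize_y=True).fit(X, reward)
gp_c = GaussianProcessRegressor(normalize_y=True).fit(X, safety)

X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
mu_f, s_f = gp_f.predict(X_cand, return_std=True)
mu_c, s_c = gp_c.predict(X_cand, return_std=True)

best = reward.max()
z = (mu_f - best) / np.maximum(s_f, 1e-9)
ei = s_f * (z * norm.cdf(z) + norm.pdf(z))      # expected improvement
p_safe = norm.cdf(mu_c / np.maximum(s_c, 1e-9)) # P(constraint >= 0)
x_next = X_cand[np.argmax(ei * p_safe)]
print("next candidate path parameter:", x_next)
```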


2019, Vol. 141 (3)
Author(s): Benson Isaac, Douglas Allaire

The optimization of black-box models is a challenging task owing to the lack of analytic gradient information and structural information about the underlying function, and often also to significant run times. A common approach to tackling such problems is the implementation of Bayesian global optimization techniques. However, these techniques often rely on surrogate modeling strategies that endow the approximation of the underlying expensive function with nonexistent features. Further, these techniques tend to push new queries away from previously queried design points, making it difficult to locate an optimum that rests near a previous model evaluation. To overcome these issues, we propose a gold rush (GR) policy that relies on purely local information to identify the next best design alternative to query. The method employs a surrogate constructed pointwise that adds no additional features to the approximation. The result is a policy that performs well in comparison to state-of-the-art Bayesian global optimization methods on several benchmark problems. The policy is also demonstrated on a constrained optimization problem using a penalty method.
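One way to make "pointwise, no added features" concrete is inverse-distance weighting, which interpolates the data exactly and imposes no global structure. The query rule below (IDW prediction minus a small distance-based exploration bonus) is an illustrative assumption, not the authors' exact GR policy:

```python
# Purely local, pointwise surrogate via inverse-distance weighting (IDW).
import numpy as np

def idw(X, y, X_cand, eps=1e-12):
    """Pointwise inverse-distance-weighted prediction (minimization)."""
    d = np.abs(X_cand[:, None, 0] - X[None, :, 0]) + eps
    w = 1.0 / d ** 2
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(7, 1))
y = (X ** 2).ravel()                            # toy expensive objective

X_cand = np.linspace(-2, 2, 400).reshape(-1, 1)
pred = idw(X, y, X_cand)
dist = np.abs(X_cand - X.T).min(axis=1)         # distance to nearest sample
x_next = X_cand[np.argmin(pred - 0.5 * dist)]   # local value, mild exploration
print("query next:", x_next)
```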


Author(s): Benson Isaac, Douglas Allaire

The optimization of expensive black-box models is a challenging task owing to the lack of analytic gradient information and structural information about the underlying function, and also to the sheer computational expense. A common approach to tackling such problems is the implementation of Bayesian global optimization techniques. However, these techniques often rely on surrogate modeling strategies that endow the approximation of the underlying expensive function with nonexistent features. Further, these techniques tend to push new queries away from previously queried design points, making it difficult to locate an optimum that rests near a previous model evaluation. To overcome these issues, we propose a gold rush policy that relies on purely local information to identify the next best design alternative to query. The method employs a surrogate constructed pointwise that adds no additional features to the approximation. The result is a policy that performs well in comparison to state-of-the-art Bayesian global optimization methods on several benchmark problems. The policy is also demonstrated on a constrained optimization problem using a penalty method.


Geophysics, 2017, Vol. 82 (4), pp. R259-R268
Author(s): Mostafa Abbasi, Ali Gholami

Many geophysical problems involve computationally expensive functions (forward models). Polynomial chaos (PC) expansion aims to approximate such an expensive equation or system with an expansion in a basis of orthogonal polynomials. Evaluation of this expansion is extremely fast because it is a polynomial function. This property of the PC expansion is of great importance for stochastic problems, in which an expensive function needs to be evaluated thousands of times. We develop PC expansion as a novel technique to solve nonlinear geophysical problems. To better evaluate the methodology, we use PC expansion to automate velocity analysis. For this purpose, we define the optimally picked velocity model as an optimizer of a variational integral in a semblance field. However, because computing the variational integral for a given velocity model is rather expensive, it is impractical to use stochastic methods to search for the optimal velocity model directly. Thus, we replace the variational integral with its PC expansion, whose evaluation is far faster than that of the original function. This makes it possible to perturb thousands of velocity models in a matter of seconds. We use particle swarm optimization as the stochastic optimization method to find the optimum velocity model. The methodology is tested on synthetic and field data, and in both cases reasonable results are achieved in a rather short time.
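In one dimension the workflow reduces to a few lines. A minimal sketch, assuming a toy "expensive" function and an arbitrary expansion degree: fit a Legendre (orthogonal polynomial) expansion once, then probe the cheap polynomial thousands of times in a stochastic search:

```python
# Fit an orthogonal-polynomial surrogate once; evaluate it cheaply thereafter.
import numpy as np
from numpy.polynomial import Legendre

def expensive(x):                 # stand-in for the variational integral
    return np.exp(-x ** 2) * np.sin(4 * x)

x_train = np.linspace(-1, 1, 20)           # a handful of costly evaluations
pc = Legendre.fit(x_train, expensive(x_train), deg=10)

# Cheap surrogate evaluation: thousands of probes in a fraction of a second.
x_probe = np.random.default_rng(6).uniform(-1, 1, 10_000)
best = x_probe[np.argmax(pc(x_probe))]
print("surrogate optimum near:", best)
```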


2013, Vol. 2013, pp. 1-13
Author(s): Tawatchai Kunakote, Sujin Bureerat

The work in this paper proposes the hybridisation of the well-established strength Pareto evolutionary algorithm (SPEA2) with some commonly used surrogate models. The surrogate models are introduced into the evolutionary optimisation process to enhance the performance of the optimiser when solving design problems with expensive function evaluations. Several surrogate models, including quadratic function, radial basis function, neural network, and Kriging models, are employed in combination with SPEA2 using real codes. The various hybrid optimisation strategies are implemented on eight simultaneous shape and sizing design problems of structures, taking into account structural weight, lateral buckling, natural frequency, and stress. Structural analysis is carried out using a finite element procedure. The optimum results obtained are compared and discussed. The performance assessment is based on the hypervolume indicator. The performance of the surrogate models for estimating design constraints is investigated. It is found that, by using a quadratic function surrogate model, the optimiser's search performance is greatly improved.
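A sketch of the surrogate's role inside such a loop, under stated assumptions (two design variables, a toy weight objective, SPEA2 bookkeeping omitted): fit a quadratic response surface to archived evaluations and use it to pre-screen offspring before the expensive finite element analysis:

```python
# Quadratic response surface used to pre-screen EA offspring.
import numpy as np

def fit_quadratic(X, y):
    """Least-squares quadratic surrogate in 2 design variables."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Z: np.column_stack(
        [np.ones(len(Z)), Z[:, 0], Z[:, 1], Z[:, 0] * Z[:, 1],
         Z[:, 0] ** 2, Z[:, 1] ** 2]) @ coef

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(30, 2))              # archived designs
weight = 1 + X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2    # toy structural weight
model = fit_quadratic(X, weight)

offspring = rng.uniform(-1, 1, size=(100, 2))
keep = offspring[np.argsort(model(offspring))[:10]]  # screen to 10 best
print("designs sent to FE analysis:", len(keep))
```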

