Methodology for Global Optimization of Computationally Expensive Design Problems

2014 ◽  
Vol 136 (8) ◽  
Author(s):  
Stefanos Koullias ◽  
Dimitri N. Mavris

The design of unconventional systems requires early use of high-fidelity, physics-based tools to search the design space for improved and potentially optimum designs. Current methods for incorporating these computationally expensive tools into early design to reduce uncertainty are inadequate given the limited computational resources available at that stage. Furthermore, the lack of finite-difference derivatives, unknown design space properties, and the possibility of code failures motivate the need for a robust and efficient global optimization (EGO) algorithm. A novel surrogate-model-based global optimization algorithm capable of efficiently searching challenging design spaces for improved designs is presented. The algorithm, called fBcEGO for fully Bayesian constrained EGO, constructs a fully Bayesian Gaussian process (GP) model from a set of observations and then uses the model to make new observations in promising areas where improvements are likely. This model remedies the inadequacies of likelihood-based approaches, which may yield an incomplete inference of the underlying function when function evaluations are expensive and therefore scarce. A challenge in constructing the fully Bayesian GP model is the selection of the prior distribution placed on the model hyperparameters. Previous work employs static priors, which may not capture enough interpretations of the data to support useful inferences about the underlying function. An iterative method that dynamically assigns hyperparameter priors by exploiting the mechanics of Bayesian penalization is presented. fBcEGO is incorporated into a methodology that generates relatively few infeasible designs and provides large reductions in the objective function values of design problems.
In tests, the new algorithm solved more nonlinearly constrained algebraic test problems to higher accuracy relative to the global minimum than other popular surrogate-model-based global optimization algorithms, and it obtained the largest reduction in the takeoff gross weight objective function for the case study of a notional 70-passenger regional jet when compared with competing design methods.
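The EGO family that fBcEGO extends chooses each new observation by maximizing an acquisition function over the GP posterior, most commonly expected improvement. As a minimal illustration only (standard expected improvement for minimization, not the authors' fully Bayesian variant), assuming the GP posterior mean and standard deviation at a candidate point are already available:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of a candidate point (minimization).

    mu, sigma: GP posterior mean and standard deviation at the point.
    f_best:    best (lowest) objective value observed so far.
    """
    if sigma <= 0.0:
        # No posterior uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    # Standard normal CDF and PDF via math.erf / math.exp.
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_best - mu) * cdf + sigma * pdf

# A point whose predicted mean equals the incumbent still has positive
# expected improvement when its uncertainty is high, driving exploration.
ei_uncertain = expected_improvement(mu=1.0, sigma=0.5, f_best=1.0)
ei_certain = expected_improvement(mu=1.0, sigma=0.0, f_best=1.0)
```

Maximizing this quantity over the design space yields the next point to evaluate with the expensive analysis code.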

2017 ◽  
Vol 50 (6) ◽  
pp. 1016-1040 ◽  
Author(s):  
Atthaphon Ariyarit ◽  
Masahiko Sugiura ◽  
Yasutada Tanabe ◽  
Masahiro Kanazaki

2013 ◽  
Vol 23 (06) ◽  
pp. 1350102 ◽  
Author(s):  
KEIJI TATSUMI ◽  
TETSUZO TANINO

Chaotic systems have been exploited in metaheuristic methods for solving global optimization problems with many local minima. In such methods, the choice of chaotic system is critical for searching the solution space extensively. Recently, a novel chaotic system, the gradient model with perturbation methods (GP), was proposed; it can be regarded as the steepest descent method for minimizing an objective function augmented with perturbation terms, and numerical experiments have shown that a chaotic metaheuristic method based on the GP model performs well on several benchmark problems. Moreover, a sufficient condition on the parameters for chaoticity was derived theoretically for a simplified GP model in which the descent term for the objective function is removed from the original model. However, that condition does not provide enough information to select parameter values for the GP model in metaheuristic methods. Therefore, in this paper, we theoretically derive a sufficient condition under which the original GP model is chaotic, which can be exploited for an appropriate selection of parameter values. In addition, we examine the derived condition by calculating the Lyapunov exponents of the GP model and analyze its bifurcation structure through numerical experiments.
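The GP model's exact update rule and the derived parameter condition are given in the paper. As a schematic illustration only, the toy map below couples a steepest-descent term with a bounded sinusoidal perturbation and estimates a Lyapunov exponent along the trajectory; the objective, perturbation form, and parameter values here are invented for illustration, not taken from the paper:

```python
import math

def grad_f(x):
    # Gradient of a simple multimodal objective f(x) = x**2 + sin(5*x).
    return 2.0 * x + 5.0 * math.cos(5.0 * x)

def gp_step(x, alpha=0.05, beta=2.0, omega=3.0):
    """One update of a toy gradient-with-perturbation map: steepest
    descent on f plus a bounded sinusoidal perturbation. The paper
    derives a sufficient condition on such parameters for chaoticity."""
    return x - alpha * grad_f(x) + alpha * beta * math.sin(omega * x)

def gp_step_deriv(x, alpha=0.05, beta=2.0, omega=3.0):
    # d(gp_step)/dx, used to estimate the Lyapunov exponent of the map.
    return (1.0 - alpha * (2.0 - 25.0 * math.sin(5.0 * x))
            + alpha * beta * omega * math.cos(omega * x))

# Iterate the map; a positive Lyapunov exponent (average log-stretch
# of nearby trajectories) would indicate chaotic dynamics.
x, traj = 2.0, []
for _ in range(500):
    x = gp_step(x)
    traj.append(x)
lyap = sum(math.log(abs(gp_step_deriv(v)) + 1e-12) for v in traj) / len(traj)
```

With these illustrative parameters the iterate stays bounded while wandering between basins; sweeping the parameters and re-estimating `lyap` is the numerical check the abstract describes.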


2015 ◽  
Vol 29 (4) ◽  
pp. 1421-1427 ◽  
Author(s):  
Su-gil Cho ◽  
Junyong Jang ◽  
Jihoon Kim ◽  
Minuk Lee ◽  
Jong-Su Choi ◽  
...  

2012 ◽  
Vol 124 ◽  
pp. 85-100 ◽  
Author(s):  
Ling-Lu Chen ◽  
Cheng Liao ◽  
Wenbin Lin ◽  
Lei Chang ◽  
Xuan-Ming Zhong

Author(s):  
Qun Meng ◽  
Songhao Wang ◽  
Szu Hui Ng

Gaussian process (GP) model-based optimization is widely applied in simulation and machine learning. In general, it first estimates a GP model from a few observations of the true response and then uses this model to guide the search, aiming to quickly locate the global optimum. Despite its successful applications, it has several limitations that may hinder its broader use. First, building an accurate GP model can be difficult and computationally expensive, especially when the response function is multimodal or varies significantly over the design space. Second, even with an appropriate model, the search can become trapped in suboptimal regions before moving to the global optimum because of the excessive effort spent around the current best solution. In this work, we adopt the additive global and local GP (AGLGP) model in the optimization framework. The model is rooted in inducing-point-based GP sparse approximations and is combined with independent local models in different regions. With these properties, the AGLGP model is suitable for multimodal responses with relatively large data sizes. Based on this model, we propose a combined global and local search for optimization (CGLO) algorithm. It first divides the whole design space into disjoint local regions and identifies a promising region with the global model. Next, a local model in the selected region is fitted to guide a detailed search within that region. The algorithm switches back to the global step once a good local solution is found. The global and local natures of CGLO enable it to enjoy the benefits of both global and local search and to efficiently locate the global optimum.

Summary of Contribution: This work proposes a new Gaussian process based algorithm for stochastic simulation optimization, an important area in operations research. This type of algorithm is also regarded as among the state-of-the-art optimization algorithms for black-box functions in computer science.
The aim of this work is to provide a computationally efficient optimization algorithm when the underlying functions are highly nonstationary (the function values change dramatically across the design space). Such nonstationary surfaces are very common in practice, as in the maritime traffic safety problem considered here. In this problem, agent-based simulation is used to estimate the probability of collision of one vessel with the others on a given trajectory, and the decision maker needs to choose the trajectory with the minimum probability of collision quickly. Typically, in a high-congestion region, a small turn of the vessel can result in a very different conflict environment, and thus the response is highly nonstationary. Through our study, we find that the proposed algorithm can provide safer choices within a limited time compared with other methods. We believe the proposed algorithm is computationally efficient and has great potential in such operational problems.
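The alternation CGLO performs can be sketched with a toy 1-D example in which deterministic grid evaluations stand in for the AGLGP model's global and local predictions; the objective, region layout, and budgets below are illustrative, not from the paper:

```python
import math

def f(x):
    # 1-D multimodal objective on [0, 1] with two local minima;
    # the global one lies near x ~ 0.31.
    return math.cos(10.0 * x) + 0.5 * x

def global_then_local(n_regions=5, n_local=9):
    """Toy version of CGLO's alternation: a coarse global pass over
    disjoint regions, then a finer local pass inside the most
    promising region. Grid evaluations stand in for the AGLGP
    model's global ranking and local predictions."""
    width = 1.0 / n_regions
    # Global step: evaluate each region at its midpoint and rank them.
    mids = [(i + 0.5) * width for i in range(n_regions)]
    best_region = min(range(n_regions), key=lambda i: f(mids[i]))
    lo = best_region * width
    # Local step: refine on a finer grid inside the selected region only.
    xs = [lo + width * (j + 1) / (n_local + 1) for j in range(n_local)]
    xs.append(mids[best_region])
    x_star = min(xs, key=f)
    return x_star, f(x_star)

x_star, f_star = global_then_local()
```

The full algorithm repeats this alternation, returning to the global step whenever the local search stalls, which is what lets it leave a suboptimal region instead of over-refining it.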


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function but can only express a preference, such as "this is better than that," between two candidate decision vectors. The algorithm described in this paper aims to reach the global optimizer by iteratively proposing a new comparison to the decision maker, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial basis function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared with active preference learning based on Bayesian optimization, we show that our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
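The preference-driven loop can be sketched in one dimension. In this toy version, inverse distance weighting of a crude rank surrogate stands in for the paper's RBF surrogate fit by linear or quadratic programming, and the acquisition trades the surrogate against a distance-based exploration bonus in the spirit of the paper's first criterion; the latent function, weights, and budgets are all invented for illustration:

```python
import math

def prefer(x1, x2):
    """Preference oracle: returns the preferred decision. The latent
    objective is hidden from the optimizer, which only sees outcomes
    of pairwise comparisons."""
    latent = lambda x: (x - 0.35) ** 2 + 0.05 * math.sin(20.0 * x)
    return x1 if latent(x1) < latent(x2) else x2

def preference_search(n_iters=25, n_cand=50):
    samples = [0.0, 0.5, 1.0]  # initial decision vectors
    # Incumbent = winner of pairwise comparisons, as maintained
    # from the preferences expressed so far.
    best = prefer(prefer(samples[0], samples[1]), samples[2])
    for _ in range(n_iters):
        def surrogate(x):
            # Crude rank surrogate: 0 at the incumbent, 1 elsewhere,
            # interpolated by inverse distance weighting (a stand-in
            # for the RBF surrogate fit to all preferences).
            num = den = 0.0
            for s in samples:
                w = 1.0 / ((x - s) ** 2 + 1e-9)
                num += w * (0.0 if s == best else 1.0)
                den += w
            return num / den
        def exploration(x):
            # Distance to the nearest existing sample.
            return min(abs(x - s) for s in samples)
        # Acquisition: surrogate minus an exploration bonus.
        cands = [j / (n_cand - 1) for j in range(n_cand)]
        x_new = min(cands, key=lambda x: surrogate(x) - 2.0 * exploration(x))
        samples.append(x_new)
        best = prefer(best, x_new)  # one new comparison per iteration
    return best

best = preference_search()
```

Because the incumbent only ever changes when the oracle prefers the new sample, the loop can never regress; each iteration costs the decision maker exactly one comparison.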


2021 ◽  
Vol 1043 (5) ◽  
pp. 052049 ◽  
Author(s):  
X Zhang ◽  
H Li ◽  
G Xiang ◽  
H W Xu