Hybrid and Adaptive Metamodel Based Global Optimization

Author(s): J. Gu, G. Y. Li, Z. Dong

Metamodeling techniques are increasingly used to solve computation-intensive design optimization problems. In this work, the issue of automatically identifying appropriate metamodeling techniques in global optimization is addressed. A new, generic hybrid metamodel-based global optimization method, particularly suitable for design problems involving computation-intensive, black-box analyses and simulations, is introduced. The method employs three representative metamodels concurrently in the search process and selects sample data points adaptively, according to the values predicted by the three metamodels, to improve modeling accuracy. The global optimum is identified when the metamodels become reasonably accurate. The new method is tested on various benchmark global optimization problems and applied to a real industrial design optimization problem involving vehicle crash simulation, demonstrating the superior performance of the new algorithm over existing search methods. Present limitations of the proposed method are also discussed.
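To make the loop described above concrete, the following is a minimal sketch, not the authors' implementation: a quadratic response surface, a Gaussian RBF interpolator, and an inverse-distance-weighting model stand in for the paper's three metamodels, each proposes a candidate per cycle, and the expensive function is evaluated only at those candidates. The test function, bounds, and candidate-pool strategy are invented for illustration.

```python
# Hybrid-metamodel search sketch (illustrative stand-in, not the paper's algorithm).
import numpy as np

def expensive_f(x):                      # stand-in for a black-box simulation
    return np.sum((x - 0.3) ** 2, axis=-1)

def fit_quadratic(X, y):                 # quadratic response surface via least squares
    A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Z: np.hstack([np.ones((len(Z), 1)), Z, Z ** 2]) @ c

def fit_rbf(X, y, eps=1.0):              # Gaussian radial-basis surrogate
    K = np.exp(-eps * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    return lambda Z: np.exp(-eps * np.sum((Z[:, None] - X[None]) ** 2, axis=-1)) @ w

def fit_idw(X, y):                       # inverse-distance-weighting surrogate
    def pred(Z):
        d = np.sum((Z[:, None] - X[None]) ** 2, axis=-1) + 1e-12
        w = 1.0 / d
        return (w @ y) / w.sum(axis=1)
    return pred

rng = np.random.default_rng(0)
lb, ub, dim = -2.0, 2.0, 2
X = rng.uniform(lb, ub, size=(10, dim))
y = expensive_f(X)
for it in range(20):
    cand = rng.uniform(lb, ub, size=(2000, dim))     # cheap candidate pool
    for fit in (fit_quadratic, fit_rbf, fit_idw):
        model = fit(X, y)
        x_new = cand[np.argmin(model(cand))]         # each surrogate proposes a point
        X = np.vstack([X, x_new])
        y = np.append(y, expensive_f(x_new))         # expensive evaluation only here
print("best point:", X[np.argmin(y)], "best value:", y.min())
```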

Author(s): Liqun Wang, Songqing Shan, G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) technique that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, the method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
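A minimal sketch of the mode-pursuing idea follows; it is illustrative only, not the original MPS code, and the quadratic-regression detection step is omitted. New samples are drawn with a probability that favors low predicted values, so points cluster near the current mode while every region of the space keeps a nonzero chance of being sampled.

```python
# Mode-pursuing-style sampling sketch (illustrative, simplified).
import numpy as np

def f(x):                                  # inexpensive stand-in black box
    return np.sum(x ** 2 - 2 * np.cos(3 * x), axis=-1)

rng = np.random.default_rng(1)
lb, ub, dim = -3.0, 3.0, 2
X = rng.uniform(lb, ub, size=(8, dim))
y = f(X)

for it in range(30):
    cand = rng.uniform(lb, ub, size=(5000, dim))
    # cheap inverse-distance-weighting prediction of f over the candidate pool
    d2 = np.sum((cand[:, None] - X[None]) ** 2, axis=-1) + 1e-12
    w = 1.0 / d2
    pred = (w @ y) / w.sum(axis=1)
    # turn predictions into a sampling density that favors the mode (low values)
    g = pred.max() - pred + 1e-12
    p = g / g.sum()
    idx = rng.choice(len(cand), size=4, replace=False, p=p)
    X = np.vstack([X, cand[idx]])
    y = np.append(y, f(cand[idx]))
print("incumbent:", X[np.argmin(y)], "value:", y.min())
```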


2005, Vol. 128 (4), pp. 701-709
Author(s): Masataka Yoshimura, Masahiko Taniguchi, Kazuhiro Izui, Shinji Nishiwaki

This paper proposes a machine product design optimization method based on the decomposition of performance characteristics or, alternatively, the extraction of simpler characteristics, which is especially responsive to the detailed features or difficulties presented by specific design problems. The optimization problems examined here are expressed using hierarchical constructions of the decomposed and extracted characteristics, and the optimizations are repeated sequentially, starting with groups of conflicting characteristics at the lowest hierarchical level and proceeding to higher levels. The proposed method not only provides optimum design solutions effectively but also facilitates deeper insight into the design optimization results, so that ideas for breakthrough optimum solutions are more easily obtained. An applied example is given to demonstrate the effectiveness of the proposed method.
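The level-by-level sequence can be illustrated with a small hypothetical sketch; the two characteristic groups and their objective functions below are invented purely for illustration and are not taken from the paper.

```python
# Hypothetical sketch of hierarchy-driven sequential optimization:
# a lower-level group of conflicting characteristics is resolved first,
# and its result seeds the higher-level problem.
import numpy as np
from scipy.optimize import minimize

def low_level(x):            # invented lowest-level conflicting characteristics
    return (x[0] - 1) ** 2 + (x[1] + 1) ** 2 + 0.5 * x[0] * x[1]

def high_level(x, x_low):    # invented higher-level characteristic built on the lower result
    return np.sum((x - x_low) ** 2) + np.sum(x ** 4)

x_low = minimize(low_level, x0=np.zeros(2)).x                 # level 1: resolve conflicts
x_top = minimize(high_level, x0=x_low, args=(x_low,)).x       # level 2: refine upward
print("lower-level solution:", x_low, "top-level solution:", x_top)
```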


Energies, 2018, Vol. 11 (10), pp. 2699
Author(s): Jiajun Liu, Huachao Dong, Tianxu Jin, Li Liu, Babak Manouchehrinia, ...

This paper presents the identification of an appropriate hybrid energy storage system (HESS) architecture, a comprehensive and accurate HESS model, and HESS design optimization using a nested, dual-level optimization formulation with suitable optimization algorithms for both levels of search. At the bottom level, design optimization focuses on minimizing the power losses in the batteries, converter, and ultracapacitors (UCs), as well as the impact of battery depth of discharge (DOD) on battery operating life, using a dynamic programming (DP)-based optimal energy management strategy (EMS). At the top level, an outer loop optimizes HESS component sizes and battery DOD to minimize the life-cycle cost (LCC) of the HESS for given power profiles and performance requirements. The complex and challenging optimization problem is solved using an advanced Multi-Start Space Reduction (MSSR) search method developed for computation-intensive, black-box global optimization problems. An example of load-haul-dump (LHD) vehicles is employed to verify the proposed HESS design optimization method; MSSR leads to superior optimization results and dramatically reduces computation time. This research forms the foundation for the design optimization of HESSs, the hybridization of vehicles with dynamic on-off power loads, and applications of the advanced global optimization method.
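The nested, dual-level structure can be sketched as follows; this is a toy model with made-up numbers, not the paper's HESS formulation. An outer search over ultracapacitor sizing repeatedly calls an inner dynamic-programming energy-management evaluation that splits a demand profile between battery and ultracapacitor to minimize losses; battery DOD effects and the MSSR search are omitted, and the life-cycle cost is invented for illustration.

```python
# Toy nested (dual-level) HESS sizing sketch: outer sizing loop, inner DP-based EMS.
import numpy as np

def dp_energy_management(p_demand, uc_capacity, r_batt=0.05, r_uc=0.01):
    """Inner level: DP over a discretized UC energy state; returns minimum total loss."""
    soc_grid = np.linspace(0.0, uc_capacity, 21)            # UC energy states
    p_uc_options = np.linspace(-20.0, 20.0, 41)             # UC power choices
    cost = np.zeros_like(soc_grid)                          # terminal cost = 0
    for p in reversed(p_demand):                            # backward Bellman recursion
        new_cost = np.full_like(soc_grid, np.inf)
        for i, e in enumerate(soc_grid):
            for p_uc in p_uc_options:
                e_next = e - p_uc                           # energy update over one unit time step
                if e_next < 0 or e_next > uc_capacity:
                    continue
                p_batt = p - p_uc
                stage = r_batt * p_batt ** 2 + r_uc * p_uc ** 2   # quadratic stand-in loss terms
                j = np.argmin(np.abs(soc_grid - e_next))    # nearest grid state
                new_cost[i] = min(new_cost[i], stage + cost[j])
        cost = new_cost
    return cost[len(soc_grid) // 2]                         # start from a half-charged UC

def life_cycle_cost(uc_capacity, p_demand):
    """Outer level: a made-up LCC = purchase cost + energy-loss cost."""
    return 3.0 * uc_capacity + dp_energy_management(p_demand, uc_capacity)

rng = np.random.default_rng(2)
p_demand = rng.uniform(-15.0, 25.0, size=10)                # toy load profile
sizes = np.linspace(5.0, 60.0, 12)                          # candidate UC sizes
lcc = [life_cycle_cost(s, p_demand) for s in sizes]
print("best UC size:", sizes[int(np.argmin(lcc))], "LCC:", min(lcc))
```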


2010, Vol. 132 (6)
Author(s): Yen-Chih Huang, Kuei-Yuan Chan

Design optimization problems under random uncertainties are commonly formulated with constraints in probabilistic form. This formulation, also referred to as reliability-based design optimization (RBDO), has gained extensive attention in recent years. Most researchers assume that reliability levels are given, based on past experience or other design considerations, without exploring the constrained space. Therefore, inappropriate target reliability levels might be assigned, resulting either in a null probabilistic feasible space or in performance underestimation. In this research, we investigate the maximal reliability within a probabilistic constrained space using a modified efficient global optimization (EGO) algorithm. By constructing and improving Kriging models iteratively, EGO can obtain a global optimum of a possibly disconnected feasible space at high reliability levels. An infill sampling criterion (ISC) is proposed to place added samples on the constraint boundaries and thereby improve the accuracy of probabilistic constraint evaluations via Monte Carlo simulations. This limit-state ISC is combined with the existing ISC to form a heuristic approach that efficiently improves the Kriging models. For optimization problems with expensive functions and a disconnected feasible space, such as the maximal reliability problems in RBDO, the proposed approach finds the optimum more efficiently than existing gradient-based and direct search methods. Several examples are used to demonstrate the proposed methodology.
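A minimal sketch of the two-part infill idea follows; it uses fixed kernel hyperparameters and invented test functions, not the authors' implementation. One infill point maximizes expected improvement on a kriging model of the objective, a second "limit-state" point is pushed toward the predicted constraint boundary, and Monte Carlo simulation estimates the probabilistic constraint at a chosen design.

```python
# EGO-style loop with an added limit-state infill point (simplified, illustrative).
import numpy as np
from scipy.stats import norm

def f(x):                                  # expensive objective (toy)
    return np.sum((x - 0.5) ** 2, axis=-1)

def g(x):                                  # expensive constraint, g <= 0 is feasible (toy)
    return 1.0 - np.sum(x ** 2, axis=-1)

def kriging(X, y, ell=0.5, nug=1e-6):
    K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1) / (2 * ell ** 2))
    L = np.linalg.cholesky(K + nug * np.eye(len(X)))
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    def predict(Z):
        k = np.exp(-np.sum((Z[:, None] - X[None]) ** 2, axis=-1) / (2 * ell ** 2))
        v = np.linalg.solve(L, k.T)
        return k @ a, np.sqrt(np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None))
    return predict

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(8, 2))
yf, yg = f(X), g(X)
for it in range(10):
    mf, mg = kriging(X, yf), kriging(X, yg)
    cand = rng.uniform(-2, 2, size=(3000, 2))
    mu_f, sd_f = mf(cand)
    mu_g, sd_g = mg(cand)
    # standard infill: expected improvement of the objective model
    best = yf.min()
    u = (best - mu_f) / sd_f
    ei = (best - mu_f) * norm.cdf(u) + sd_f * norm.pdf(u)
    x_ei = cand[np.argmax(ei)]
    # limit-state infill: favor points whose constraint prediction straddles g = 0
    x_ls = cand[np.argmin(np.abs(mu_g) / sd_g)]
    X = np.vstack([X, x_ei, x_ls])
    yf = np.append(yf, [f(x_ei), f(x_ls)])
    yg = np.append(yg, [g(x_ei), g(x_ls)])

# Monte Carlo estimate of the probabilistic constraint P[g(x + U) <= 0] at one design
x_design = np.array([1.2, 0.4])
U = rng.normal(0.0, 0.1, size=(20000, 2))
print("estimated reliability at x_design:", np.mean(g(x_design + U) <= 0.0))
```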


2018, Vol. 35 (1), pp. 71-90
Author(s): Xiwen Cai, Haobo Qiu, Liang Gao, Xiaoke Li, Xinyu Shao

Purpose – This paper aims to propose a hybrid global optimization method based on multiple metamodels for improving the efficiency of global optimization.
Design/methodology/approach – The method fully utilizes the information provided by different metamodels during the optimization process. It not only imparts the expected improvement criterion of kriging into the other metamodels but also intelligently selects appropriate metamodeling techniques to guide the search direction, making the search process very efficient. Corresponding local search strategies are also put forward to further improve optimization efficiency.
Findings – To validate the method, it is tested on several numerical benchmark problems and applied to two engineering design optimization problems. Moreover, an overall comparison is made between the proposed method and several other typical global optimization methods. Results show that the global optimization efficiency of the proposed method is higher than that of the other methods in most situations.
Originality/value – The proposed method makes full use of multiple metamodels in the optimization process. Good optimization results are thereby obtained, showing strong applicability to engineering design optimization problems that involve costly simulations.
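One ingredient, choosing which metamodel should guide the next sample, can be sketched as follows. This is illustrative only: two RBF widths stand in for distinct metamodeling techniques, the selection rule is plain leave-one-out cross-validation, and the paper's EI-style criteria for non-kriging models are not reproduced.

```python
# Metamodel selection by leave-one-out error, then the winner proposes the next sample.
import numpy as np

def f(x):
    return np.sum(x ** 2, axis=-1) + np.sin(5 * x[..., 0])

def rbf_model(X, y, eps):
    K = np.exp(-eps * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    return lambda Z: np.exp(-eps * np.sum((Z[:, None] - X[None]) ** 2, axis=-1)) @ w

def loo_error(builder, X, y):
    errs = []
    for i in range(len(X)):
        m = np.arange(len(X)) != i
        model = builder(X[m], y[m])
        errs.append((model(X[i:i + 1])[0] - y[i]) ** 2)
    return np.mean(errs)

builders = {                 # two RBF widths stand in for distinct metamodeling techniques
    "wide": lambda X, y: rbf_model(X, y, eps=0.3),
    "narrow": lambda X, y: rbf_model(X, y, eps=3.0),
}
rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(10, 2))
y = f(X)
for it in range(15):
    name = min(builders, key=lambda n: loo_error(builders[n], X, y))   # pick the best metamodel
    model = builders[name](X, y)
    cand = rng.uniform(-2, 2, size=(3000, 2))
    x_new = cand[np.argmin(model(cand))]                               # it guides the next sample
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))
print("best value found:", y.min())
```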


2020
Author(s): Alberto Bemporad, Dario Piga

This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but rather can only express a preference, such as "this is better than that," between two candidate decision vectors. The algorithm described in this paper aims to reach the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse-distance-weighting function, to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
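A minimal sketch of the preference-driven loop follows; it is not the GLIS/GLISp code, and the margin, kernel width, and exploration weight are arbitrary choices. RBF weights are fitted by linear programming so the surrogate respects the recorded pairwise preferences, and the next query trades the surrogate value off against a distance-based exploration bonus.

```python
# Preference-based surrogate optimization sketch (simplified, illustrative).
import numpy as np
from scipy.optimize import linprog

def latent(x):                  # hidden objective standing in for the decision maker's taste
    return np.sum((x - 0.5) ** 2, axis=-1)

def prefer(xa, xb):             # preference oracle: True if xa is better than xb
    return latent(xa) < latent(xb)

def phi(Z, C, eps=1.0):         # Gaussian RBF features with respect to centers C
    return np.exp(-eps * np.sum((Z[:, None] - C[None]) ** 2, axis=-1))

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(6, 2))
prefs = [(i, i + 1) if prefer(X[i], X[i + 1]) else (i + 1, i) for i in range(5)]
best = 0
for i in range(1, len(X)):
    if prefer(X[i], X[best]):
        best = i

for it in range(15):
    n, m = len(X), len(prefs)
    P = phi(X, X)
    # LP variables: [w (n), slack (m)]; preference "i better than j" becomes
    # f_hat(x_i) + margin <= f_hat(x_j) + slack, with slack penalized in the objective.
    A = np.zeros((m, n + m))
    b = np.full(m, -0.1)
    for k, (i, j) in enumerate(prefs):
        A[k, :n] = P[i] - P[j]
        A[k, n + k] = -1.0
    c = np.concatenate([np.zeros(n), np.ones(m)])
    bounds = [(-5, 5)] * n + [(0, None)] * m
    w = linprog(c, A_ub=A, b_ub=b, bounds=bounds).x[:n]

    cand = rng.uniform(-2, 2, size=(3000, 2))
    surrogate = phi(cand, X) @ w
    explore = np.min(np.sum((cand[:, None] - X[None]) ** 2, axis=-1), axis=1)
    x_new = cand[np.argmin(surrogate - 0.5 * explore)]   # exploit surrogate vs. explore space

    X = np.vstack([X, x_new])
    new = len(X) - 1
    prefs.append((new, best) if prefer(X[new], X[best]) else (best, new))
    if prefs[-1][0] == new:                              # new point won the comparison
        best = new
print("incumbent after preference queries:", X[best])
```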


Mathematics, 2021, Vol. 9 (2), pp. 149
Author(s): Yaohui Li, Jingfang Shen, Ziliang Cai, Yizhong Wu, Shuting Wang

Kriging-based optimization methods that can obtain only one sampling point per cycle have encountered a bottleneck in practical engineering applications. How to find a suitable optimization method that generates multiple sampling points at a time while improving convergence accuracy and reducing the number of expensive evaluations has been a widespread concern. For this reason, a kriging-assisted multi-objective constrained global optimization (KMCGO) method is proposed. In each cycle, the sample data obtained from the expensive function evaluations are first used to construct or update the kriging model. Then, the kriging-based predicted objective, the RMSE (root mean square error), and the feasibility probability are used to form three objectives, which are optimized through multi-objective optimization to generate a Pareto frontier set. Finally, the sample data from the Pareto frontier set are further screened to obtain more promising and valuable sampling points. Test results on five benchmark functions, four design problems, and a fuel-economy simulation optimization demonstrate the effectiveness of the proposed algorithm.
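A minimal sketch of the KMCGO-style cycle follows, with simplified kriging (fixed hyperparameters) and a toy objective and constraint; the multi-objective step is reduced to non-dominated screening over a random candidate pool. Each candidate gets three criteria, predicted objective, predicted RMSE, and feasibility probability, and several new samples per cycle are drawn from the non-dominated set.

```python
# KMCGO-style multi-point infill sketch (simplified, illustrative).
import numpy as np
from scipy.stats import norm

def f(x): return np.sum((x - 1.0) ** 2, axis=-1)          # toy expensive objective
def g(x): return np.sum(x, axis=-1) - 1.0                 # toy constraint, g <= 0 feasible

def kriging(X, y, ell=0.7, nug=1e-6):
    K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1) / (2 * ell ** 2))
    L = np.linalg.cholesky(K + nug * np.eye(len(X)))
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    def predict(Z):
        k = np.exp(-np.sum((Z[:, None] - X[None]) ** 2, axis=-1) / (2 * ell ** 2))
        v = np.linalg.solve(L, k.T)
        return k @ a, np.sqrt(np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None))
    return predict

def pareto_mask(F):                        # non-dominated rows of criteria matrix F (minimize all)
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominators.any()
    return mask

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(10, 2))
yf, yg = f(X), g(X)
for it in range(10):
    pf, pg = kriging(X, yf), kriging(X, yg)
    cand = rng.uniform(-2, 2, size=(1500, 2))
    mu_f, sd_f = pf(cand)
    mu_g, sd_g = pg(cand)
    p_feas = norm.cdf(-mu_g / sd_g)                       # P[g <= 0] under the model
    # three criteria, all as minimization: predicted f, -RMSE, -feasibility probability
    F = np.column_stack([mu_f, -sd_f, -p_feas])
    front = np.flatnonzero(pareto_mask(F))
    picked = rng.choice(front, size=min(3, len(front)), replace=False)  # several points per cycle
    X = np.vstack([X, cand[picked]])
    yf = np.append(yf, f(cand[picked]))
    yg = np.append(yg, g(cand[picked]))
feasible = yg <= 0
print("best feasible value:", yf[feasible].min() if feasible.any() else None)
```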


2021
Author(s): Faruk Alpak, Yixuan Wang, Guohua Gao, Vivek Jain

Recently, a novel distributed quasi-Newton (DQN) derivative-free optimization (DFO) method was developed for generic reservoir performance optimization problems, including well-location optimization (WLO) and well-control optimization (WCO). DQN is designed to effectively locate multiple local optima of highly nonlinear optimization problems. However, its performance has neither been validated on realistic applications nor compared to other DFO methods. We have integrated DQN into a versatile field-development optimization platform designed specifically for iterative workflows enabled through distributed-parallel flow simulations. DQN is benchmarked against alternative DFO techniques, namely the Broyden–Fletcher–Goldfarb–Shanno method hybridized with Direct Pattern Search (BFGS-DPS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). DQN is a multi-thread optimization method that distributes an ensemble of optimization tasks among multiple high-performance-computing nodes. Thus, it can locate multiple optima of the objective function in parallel within a single run. Simulation results computed by one DQN optimization thread are shared with the others by updating a unified set of training data points composed of the responses (implicit variables) of all successful simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by a linear-interpolation technique using all or a subset of the training data points. The gradient of the objective function is computed analytically using the estimated sensitivities of the implicit variables with respect to the explicit variables. The Hessian matrix is then updated using the quasi-Newton method. A new search point for each thread is obtained by solving a trust-region subproblem for the next iteration. In contrast, other DFO methods rely on a single-thread optimization paradigm that can only locate a single optimum. To locate multiple optima, such methods must repeat the same optimization process multiple times, starting from different initial guesses. Moreover, simulation results generated by a single-thread optimization task cannot be shared with other tasks. Benchmarking results are presented for synthetic yet challenging WLO and WCO problems. Finally, the DQN method is field-tested on two realistic applications. DQN identifies the global optimum with the fewest simulations and the shortest run time on a synthetic problem with a known solution. On other benchmarking problems without a known solution, DQN identified comparable local optima with reasonably fewer simulations than the alternative techniques. Field-testing results reinforce the auspicious computational attributes of DQN. Overall, the results indicate that DQN is a novel and effective parallel algorithm for field-scale development optimization problems.
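A heavily simplified sketch of the shared-data, multi-thread quasi-Newton idea follows; the test function is invented, the threads are looped over sequentially rather than run in parallel, gradients come from local linear regression rather than the paper's implicit-variable sensitivities, and a crude step-length cap stands in for a real trust-region subproblem solver.

```python
# Simplified distributed quasi-Newton DFO sketch: threads share one training-data pool.
import numpy as np

def f(x):                                   # stand-in for an expensive simulation response
    return np.sum(x ** 2) + np.sin(3 * x[0]) * np.cos(2 * x[1])

def estimate_gradient(x, pool_X, pool_y, k=8):
    """Fit a local linear model to the k nearest shared training points."""
    d = np.linalg.norm(pool_X - x, axis=1)
    idx = np.argsort(d)[:k]
    A = np.hstack([np.ones((k, 1)), pool_X[idx] - x])
    coef, *_ = np.linalg.lstsq(A, pool_y[idx], rcond=None)
    return coef[1:]                          # slope of the local model = gradient estimate

rng = np.random.default_rng(7)
dim, n_threads, radius = 2, 3, 0.5
pool_X = rng.uniform(-3, 3, size=(20, dim))                 # shared training data
pool_y = np.array([f(x) for x in pool_X])
threads = [dict(x=pool_X[i].copy(), H=np.eye(dim), g=None) for i in range(n_threads)]

for it in range(25):
    for t in threads:                                       # sequential here; parallel in concept
        grad = estimate_gradient(t["x"], pool_X, pool_y)
        if t["g"] is not None:                              # BFGS update of the Hessian approx.
            s, yk = t["x"] - t["x_prev"], grad - t["g"]
            H = t["H"]
            if abs(s @ yk) > 1e-10 and s @ H @ s > 1e-10:
                t["H"] = (H - np.outer(H @ s, H @ s) / (s @ H @ s)
                          + np.outer(yk, yk) / (yk @ s))
        step = -np.linalg.solve(t["H"] + 1e-6 * np.eye(dim), grad)
        if np.linalg.norm(step) > radius:                   # crude trust-region cap
            step *= radius / np.linalg.norm(step)
        t["x_prev"], t["g"] = t["x"].copy(), grad
        t["x"] = t["x"] + step
        pool_X = np.vstack([pool_X, t["x"]])                # every new evaluation is shared
        pool_y = np.append(pool_y, f(t["x"]))
best = np.argmin(pool_y)
print("best shared point:", pool_X[best], "value:", pool_y[best])
```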


2018, Vol. 2018, pp. 1-15
Author(s): Octavio Camarena, Erik Cuevas, Marco Pérez-Cisneros, Fernando Fausto, Adrián González, ...

The Locust Search (LS) algorithm is a swarm-based optimization method inspired by the natural behavior of the desert locust. LS includes two distinctive nature-inspired search mechanisms, namely solitary-phase and social-phase operators. These search schemes allow LS to overcome some of the difficulties that commonly affect other similar methods, such as premature convergence and a lack of diversity among solutions. Recently, computer-vision experiments on insect tracking have led to the development of more accurate locust motion models than those produced by simple behavioral observation. The most distinctive characteristic of such new models is the use of probabilities to emulate the locust decision process. In this paper, a modification of the original LS algorithm, referred to as LS-II, is proposed to better handle global optimization problems. In LS-II, the locust motion model of the original algorithm is modified by incorporating the main characteristics of the new biological formulations. As a result, LS-II improves on the original algorithm's capacities for exploration and exploitation of the search space. To test its performance, the proposed LS-II method is compared against several state-of-the-art evolutionary methods on a set of benchmark functions and engineering problems. Experimental results demonstrate the superior performance of the proposed approach in terms of solution quality and robustness.
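A heavily simplified sketch of the two-phase structure follows; these are not the published LS or LS-II operators, and the move rules and probabilities are invented for illustration. A solitary phase applies pairwise attraction/repulsion moves, and a social phase probabilistically resamples the worst individuals around elites, echoing the probabilistic decision process of the newer motion models.

```python
# Two-phase (solitary/social) swarm sketch, loosely inspired by the LS structure.
import numpy as np

def f(x):                                    # benchmark-style objective (sphere)
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(8)
n, dim, lb, ub = 20, 2, -5.0, 5.0
pop = rng.uniform(lb, ub, size=(n, dim))
fit = f(pop)

for it in range(60):
    # solitary phase: attraction toward better neighbors, repulsion from worse ones
    new_pop = pop.copy()
    for i in range(n):
        j = rng.integers(n)
        if j == i:
            continue
        direction = pop[j] - pop[i]
        sign = 1.0 if fit[j] < fit[i] else -1.0
        new_pop[i] = np.clip(pop[i] + sign * rng.uniform(0, 1) * direction, lb, ub)
    new_fit = f(new_pop)
    improved = new_fit < fit
    pop[improved], fit[improved] = new_pop[improved], new_fit[improved]

    # social phase: worst individuals resampled near randomly chosen elites
    order = np.argsort(fit)
    elites, worst = order[:5], order[-5:]
    for i in worst:
        if rng.uniform() < 0.6:              # probabilistic decision (illustrative value)
            e = pop[rng.choice(elites)]
            cand = np.clip(e + rng.normal(0.0, 0.3, size=dim), lb, ub)
            if f(cand) < fit[i]:
                pop[i], fit[i] = cand, f(cand)

print("best solution:", pop[np.argmin(fit)], "value:", fit.min())
```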

