Global Optimization Using the Multistart Method

Author(s):  
P. Jain ◽  
A. M. Agogino

Multistart is a stochastic global optimization method for finding the global optimum of highly nonlinear mechanical problems. In this paper we introduce and develop a variant of the multistart method in which a fraction of the sample points in the feasible region with the smallest function values are clustered using the Vector Quantization technique. The theories of lattices and sphere packing are used to define optimal lattices. These lattices are optimal with respect to quantization error and are used as code points for vector quantization. The implementation of these ideas has resulted in the VQ-multistart algorithm for finding the global optimum with substantial reductions in both the in-core memory requirements and the computation time. We solve several mathematical test problems and a mechanical optimal design problem using the VQ-multistart algorithm.
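The core loop described here — draw random samples in the feasible region, keep the fraction with the smallest function values, cluster that reduced sample, and run a local search from each cluster — can be sketched in a few lines. This is a minimal illustration only: it substitutes a naive k-means for the paper's lattice-based vector quantization and a crude coordinate descent for the local optimizer, and all names and parameter values below are our assumptions.

```python
import random

def multistart(f, bounds, n_samples=200, keep_frac=0.2, n_clusters=4, seed=0):
    """Cluster-based multistart sketch: sample, keep the best fraction,
    cluster with naive k-means, then run a local search from each center."""
    rng = random.Random(seed)
    dim = len(bounds)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_samples)]
    pts.sort(key=f)
    best = pts[: int(keep_frac * n_samples)]          # reduced sample
    centers = [list(p) for p in best[:n_clusters]]    # seed clusters with best points
    for _ in range(20):                               # naive k-means on the reduced sample
        groups = [[] for _ in centers]
        for p in best:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[i].append(p)
        centers = [[sum(c) / len(g) for c in zip(*g)] if g else c0
                   for g, c0 in zip(groups, centers)]

    def local(x, step=0.1, iters=200):                # crude coordinate descent
        x = list(x)
        for _ in range(iters):
            for d in range(dim):
                for s in (step, -step):
                    trial = list(x)
                    trial[d] += s
                    if f(trial) < f(x):
                        x = trial
            step *= 0.95                              # shrink the step each sweep
        return x

    return min((local(c) for c in centers), key=f)

# usage: a two-well function with global minima at (+1, 0) and (-1, 0)
res = multistart(lambda x: (x[0] ** 2 - 1) ** 2 + x[1] ** 2, [(-3, 3), (-3, 3)])
```

Clustering the retained points before starting local searches is what keeps the number of (potentially expensive) local optimizations small: one descent per basin rather than one per sample.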

1993 ◽  
Vol 115 (4) ◽  
pp. 770-775


2010 ◽  
Vol 44-47 ◽  
pp. 3240-3244 ◽  
Author(s):  
Yu Dong Zhang ◽  
Le Nan Wu ◽  
Yuan Kai Huo ◽  
Shui Hua Wang

A novel global optimization method is proposed to find global minimal points more effectively and quickly. The new algorithm combines genetic algorithms (GA) and pattern search (PS); we have therefore named it genetic pattern search. The procedure involves two phases: first, GA executes a coarse search; then, PS executes a fine search. Experiments on four different test functions (Hump, Powell, Rosenbrock, and Woods) demonstrate that the proposed algorithm is superior to improved GA and improved PS with respect to success rate and computation time. Therefore, genetic pattern search is an effective and viable global optimization method.
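The two-phase idea — a coarse evolutionary pass followed by a fine pattern search — can be illustrated with a toy version. The operators and parameters below (a minimal real-coded GA and a compass search) are our assumptions, not the authors' implementation:

```python
import random

def genetic_pattern_search(f, bounds, pop=30, gens=40, seed=0):
    """Two-phase sketch: a coarse real-coded GA, then a compass pattern
    search that refines the best individual found."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):                 # phase 1: genetic coarse search
        P.sort(key=f)
        elite = P[: pop // 2]             # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)   # blend crossover + Gaussian mutation
            child = [min(hi, max(lo, (ai + bi) / 2 + rng.gauss(0, 0.1 * (hi - lo))))
                     for ai, bi, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        P = elite + children
    x = min(P, key=f)
    step = 0.5                            # phase 2: compass pattern search
    while step > 1e-6:
        improved = False
        for d in range(dim):
            for s in (step, -step):
                trial = list(x)
                trial[d] += s
                if f(trial) < f(x):       # accept only improving moves
                    x, improved = trial, True
        if not improved:
            step /= 2                     # no improvement: halve the mesh
    return x

# usage: refine toward the minimum of a shifted sphere at (1, 1)
res = genetic_pattern_search(lambda x: sum((xi - 1) ** 2 for xi in x),
                             [(-5, 5), (-5, 5)])
```

The division of labor matches the abstract: the population search locates the right basin cheaply, and the derivative-free pattern search supplies the final accuracy that a GA alone reaches only slowly.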


Author(s):  
Adel A. Younis ◽  
George H. Cheng ◽  
G. Gary Wang ◽  
Zuomin Dong

Metamodel-based design optimization (MBDO) algorithms have attracted considerable interest in recent years due to their special capability in dealing with complex optimization problems with computationally expensive objective and constraint functions and local optima. Conventional unimodal-based optimization algorithms and stochastic global optimization algorithms either miss the global optimum frequently or require unacceptable computation time. In this work, a generic testbed/platform for evaluating various MBDO algorithms is introduced. The purpose of the platform is to facilitate quantitative comparison of different MBDO algorithms using standard test problems, test procedures, and test outputs, as well as to improve the efficiency of testing and improving new algorithms. The platform consists of a comprehensive test function database that contains about 100 benchmark functions and engineering problems. The testbed accepts any optimization algorithm to be tested, requiring only minor modifications to meet the testbed requirements. The testbed is useful in comparing the performance of competing algorithms through execution of the same problems. It allows researchers and practitioners to test and choose the most suitable optimization tool for their specific needs. It also helps to increase confidence in and reliability of newly developed MBDO tools. Many new MBDO algorithms, including Mode Pursuing Sampling (MPS), Pareto Set Pursuing (PSP), and Space Exploration and Unimodal Region Elimination (SEUMRE), were tested in this work to demonstrate the platform's functionality and benefits.


Author(s):  
J. Gu ◽  
G. Y. Li ◽  
Z. Dong

Metamodeling techniques are increasingly used in solving computation-intensive design optimization problems today. In this work, the issue of automatically identifying appropriate metamodeling techniques in global optimization is addressed. A generic, new hybrid metamodel-based global optimization method, particularly suitable for design problems involving computation-intensive, black-box analyses and simulations, is introduced. The method employs three representative metamodels concurrently in the search process and selects sample data points adaptively, according to the values calculated using the three metamodels, to improve modeling accuracy. The global optimum is identified when the metamodels become reasonably accurate. The new method is tested on various benchmark global optimization problems and applied to a real industrial design optimization problem involving vehicle crash simulation, demonstrating the superior performance of the new algorithm over existing search methods. Present limitations of the proposed method are also discussed.
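A stripped-down version of such a loop — several cheap surrogates fitted to the same data, each nominating its own most promising point for the next expensive evaluation — might look like the following. The three surrogates here (nearest-neighbour, inverse-distance weighting, local averaging) are simple stand-ins chosen by us, not the metamodels used in the paper:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def hybrid_metamodel_opt(f, bounds, n_init=15, iters=25, seed=0):
    """Hybrid-surrogate sketch: three cheap metamodels each nominate a
    candidate per iteration; the expensive function is evaluated only at
    the nominated points, and the shared data set grows."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_init)]
    y = [f(x) for x in X]

    def nearest(p):      # nearest-neighbour surrogate
        return min(zip(X, y), key=lambda t: dist2(p, t[0]))[1]

    def idw(p):          # inverse-distance-weighted surrogate
        ws = [1.0 / (dist2(p, x) + 1e-12) for x in X]
        return sum(w * v for w, v in zip(ws, y)) / sum(ws)

    def local_avg(p):    # average of the three nearest observations
        near = sorted(zip(X, y), key=lambda t: dist2(p, t[0]))[:3]
        return sum(v for _, v in near) / len(near)

    for _ in range(iters):
        pool = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(200)]
        for model in (nearest, idw, local_avg):
            cand = min(pool, key=model)   # each surrogate nominates a point
            X.append(cand)
            y.append(f(cand))             # one expensive evaluation per nomination
    return min(zip(X, y), key=lambda t: t[1])[0]

# usage: a cheap stand-in for an expensive black-box objective
res = hybrid_metamodel_opt(lambda x: x[0] ** 2 + x[1] ** 2, [(-3, 3), (-3, 3)])
```

Running several surrogates concurrently hedges against any single metamodel fitting the black-box function poorly, which is the motivation the abstract gives for the hybrid design.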


Author(s):  
Liqun Wang ◽  
Songqing Shan ◽  
G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) technique which systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
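The sampling idea can be caricatured in a few lines: propose candidates uniformly (so the whole space stays statistically covered), but accept them with a probability that rises steeply as their value approaches the current mode, so accepted points concentrate near it. This sketch is ours; it omits the quadratic-regression detection step described above, and it evaluates f at every candidate, whereas the real method is built to keep expensive evaluations down:

```python
import random

def mode_pursuing_sampling(f, bounds, n_iter=30, seed=0):
    """Mode-pursuing caricature: uniform proposals cover the space, while
    rank-skewed acceptance concentrates kept samples near the mode."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n_iter):
        pool = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(100)]
        vals = [f(p) for p in pool]   # the real method would use cheap estimates here
        hi_v, lo_v = max(vals), min(vals)
        for p, v in zip(pool, vals):
            accept = (hi_v - v) / (hi_v - lo_v + 1e-12)  # low value -> high probability
            if rng.random() < accept ** 3:               # sharpen toward the mode
                X.append(p)
                y.append(v)
    return min(zip(X, y), key=lambda t: t[1])[0]

# usage: accepted samples pile up near the minimum of a simple bowl
res = mode_pursuing_sampling(lambda x: x[0] ** 2 + x[1] ** 2, [(-3, 3), (-3, 3)])
```

The uniform proposal distribution is what preserves the global coverage property the abstract emphasizes: no region of the feasible space ever gets zero sampling probability.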


Author(s):  
Alireza Saremi ◽  
Nasr Al-Hinai ◽  
G. Gary Wang ◽  
Tarek ElMekkawy

The current work discusses a novel global optimization method called the Multi-Agent Normal Sampling Technique (MANST). MANST is based on systematic sampling of points around agents; each agent in MANST represents a candidate solution of the problem. All agents compete with each other for a larger share of the available resources. The performance of all agents is periodically evaluated, and a specific number of agents that show no promise are deleted; new agents are generated in the proximity of the promising agents. This process continues until the agents converge to the global optimum. MANST is a standalone global optimization technique. It is benchmarked with six well-known test cases, and the results are compared with those obtained from the Matlab™ 7.1 GA Toolbox. The test results show that MANST outperformed the Matlab™ 7.1 GA Toolbox on the benchmark problems in terms of accuracy, number of function evaluations, and CPU time.
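A compact sketch of that loop — normal sampling around each agent, rank-based resource shares, and periodic culling of the weakest agent — is given below. The share formula, culling period, and shrinking standard deviation are all our own parameter choices, not those of MANST:

```python
import random

def manst(f, bounds, n_agents=6, budget=60, steps=40, seed=0):
    """MANST-style sketch: each agent draws normal samples around its
    center, recenters on its best sample, and better-ranked agents win a
    larger share of the sampling budget; the worst agent is periodically
    respawned near the best one."""
    rng = random.Random(seed)
    centers = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    sigma = [0.5 * (hi - lo) for lo, hi in bounds]
    for t in range(steps):
        scores = [f(c) for c in centers]
        order = sorted(range(n_agents), key=lambda i: scores[i])
        # resource share: agent ranked r gets roughly half the budget of rank r-1
        shares = {i: max(2, budget // (2 ** (r + 1))) for r, i in enumerate(order)}
        for i, c in enumerate(centers):
            samples = [[rng.gauss(cd, sd) for cd, sd in zip(c, sigma)]
                       for _ in range(shares[i])]
            centers[i] = min(samples + [c], key=f)   # recenter on the agent's best
        if t % 10 == 9:                              # periodic cull of the weakest
            worst_i, best_i = order[-1], order[0]
            centers[worst_i] = [cd + rng.gauss(0, sd)
                                for cd, sd in zip(centers[best_i], sigma)]
        sigma = [s * 0.9 for s in sigma]             # shrink the normal search radius
    return min(centers, key=f)

# usage: a two-well function with global minima at (+1, 0) and (-1, 0)
res = manst(lambda x: (x[0] ** 2 - 1) ** 2 + x[1] ** 2, [(-3, 3), (-3, 3)])
```

Because each agent keeps its own center in the candidate set, an agent's value never worsens between steps; the competition for budget is what shifts effort toward the most promising regions.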


Author(s):  
Alireza Saremi ◽  
Amir H. Birjandi ◽  
G. Gary Wang ◽  
Tarek ElMekkawy ◽  
Eric Bibeau

This paper describes an enhanced version of a new global optimization method, the Multi-Agent Normal Sampling Technique (MANST), described in reference [1]. Each agent in MANST includes a number of points that sample around the mean point with a certain standard deviation. In each step, the point with the minimum value in the agent is chosen as the center point for the next step's normal sampling. The chosen points of all agents are then compared to each other, and agents receive a certain share of the resources for the next step according to their lowest mean function value at the current step. The performance of all agents is periodically evaluated, and a specific number of agents that show no promise are deleted; new agents are generated in the proximity of the promising agents. This process continues until the agents converge to the global optimum. MANST is a standalone global optimization technique and does not require equations or knowledge about the objective function. The unique feature of this method in comparison with other global optimization methods is its dynamic normal-distribution search. This work presents our recent research in enhancing MANST to handle variable boundaries and constraints. Moreover, a lean group sampling approach is implemented to prevent different agents from sampling in the same region. As a result, the overall capability and efficiency of MANST have been improved in the newer version. The enhanced MANST is highly competitive with other stochastic methods such as the Genetic Algorithm (GA). In most of the test cases, the performance of MANST is significantly higher than that of the Matlab™ GA Toolbox.


2017 ◽  
Vol 33 (3) ◽  
pp. 373-380
Author(s):  
Ahmet Sahiner ◽  
Nurullah Yilmaz ◽  
Gulden Kapusuz ◽  
...  

In this study, we introduce a new global optimization method, named the Esthetic Delving Method, based on the auxiliary function approach. First, we design the method theoretically and then present its implementable version. Finally, we apply the algorithm to test problems in order to demonstrate its efficiency.


2011 ◽  
Vol 16 (3) ◽  
pp. 451-460 ◽  
Author(s):  
Antanas Žilinskas ◽  
Julius Žilinskas

A simplicial statistical model of multimodal functions is used to construct a global optimization algorithm. The search for the global minimum in the multidimensional space is reduced to a search over the edges of simplices covering the feasible region, combined with refinement of the cover. The refinement is performed by subdividing selected simplices, taking into account the point where the objective function value has been computed at the current iteration. For the search over the edges, the one-dimensional P-algorithm based on the statistical smooth function model is adapted. Unlike the recently proposed algorithm, here the statistical model is used for modelling the behaviour of the objective function not over the whole simplex but only over its edges. Testing results of the proposed algorithm are included.


Author(s):  
Hiroyuki Kawagishi ◽  
Kazuhiko Kudo

A new optimization method was developed that can search for the global optimum solution while decreasing the number of iterations. The performance of the new method was found to be effective in finding the optimum solution for single- and multi-peaked functions for which the global optimum solution was known in advance. When applied to the optimum design of turbine stages, the method was shown to find the global optimum solution in approximately one-seventh the iterations required by GA (Genetic Algorithm) or SA (Simulated Annealing).

