Global Optimization Problems in Stability Analysis of Linear Dynamical Systems

Author(s): Pham Huu Anh Ngoc ◽ Toshiki Naito ◽ Jong Son Shin
2013 ◽ Vol 32 (4) ◽ pp. 981-985
Author(s): Ya-fei HUANG ◽ Xi-ming LIANG ◽ Yi-xiong CHEN

2020
Author(s): Alberto Bemporad ◽ Dario Piga

Abstract: This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision-maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision-maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse-distance-weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
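
The abstract above describes two computational steps: fitting a radial-basis-function surrogate to pairwise preferences via linear (or quadratic) programming, and trading that surrogate off against an exploration term when proposing the next comparison. The sketch below illustrates that idea in Python; it is not the authors' GLIS code, and the kernel choice, the LP formulation details, the exploration function, and the names/parameters (`rbf_matrix`, `sigma`, `delta`, `eps`) are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' implementation) of
# preference-based surrogate fitting plus an IDW-style acquisition.
import numpy as np
from scipy.optimize import linprog

def rbf_matrix(X, centers, eps=1.0):
    """Inverse-quadratic RBF features: phi_j(x) = 1 / (1 + (eps * ||x - c_j||)^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + eps ** 2 * d2)

def fit_preference_surrogate(X, prefs, sigma=1.0):
    """Fit RBF weights so that preferred samples get lower surrogate values.

    X is an (N, d) array of sampled decision vectors; prefs is a list of index
    pairs (i, j) meaning "x_i was preferred over x_j".  Each preference becomes
    a soft constraint f_hat(x_i) <= f_hat(x_j) - sigma + slack, and the LP
    minimizes the total slack (one nonnegative slack per preference).
    """
    Phi = rbf_matrix(X, X)                          # (N, N) RBF design matrix
    N, M = Phi.shape[0], len(prefs)
    c = np.concatenate([np.zeros(N), np.ones(M)])   # minimize the sum of slacks
    A_ub = np.zeros((M, N + M))
    b_ub = np.zeros(M)
    for k, (i, j) in enumerate(prefs):
        A_ub[k, :N] = Phi[i] - Phi[j]               # f_hat(x_i) - f_hat(x_j) - slack_k <= -sigma
        A_ub[k, N + k] = -1.0
        b_ub[k] = -sigma
    bounds = [(None, None)] * N + [(0.0, None)] * M
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    beta = res.x[:N]
    return lambda x: rbf_matrix(np.atleast_2d(x), X) @ beta

def acquisition(x, surrogate, X, delta=1.0):
    """Surrogate value minus an inverse-distance-weighting exploration bonus."""
    d2 = ((X - x) ** 2).sum(-1)                     # squared distances to past samples
    z = 0.0 if np.any(d2 < 1e-12) else (2.0 / np.pi) * np.arctan(1.0 / np.sum(1.0 / d2))
    return surrogate(x)[0] - delta * z              # small delta -> exploit, large -> explore
```

In the method described above, the next decision vector to compare against the current best candidate would be obtained by globally minimizing such an acquisition over the feasible set (the paper also mentions a probability-of-preference criterion), which is omitted in this sketch.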


Mathematics ◽ 2021 ◽ Vol 9 (13) ◽ pp. 1477
Author(s): Chun-Yao Lee ◽ Guang-Lin Zhuo

This paper proposes a hybrid whale optimization algorithm (WOA), the genetic and thermal exchange optimization-based WOA (GWOA-TEO), to enhance global optimization capability. First, a high-quality initial population is generated to improve the performance of GWOA-TEO. Then, thermal exchange optimization (TEO) is applied to improve exploitation performance. Next, a memory that stores historical best-so-far solutions is introduced, achieving higher performance without additional computational cost. Finally, a memory-based crossover operator and a memory-based position update mechanism for the leading solution are proposed to improve exploration performance. The GWOA-TEO algorithm is then compared with five state-of-the-art optimization algorithms on the CEC 2017 benchmark test functions and 8 UCI repository datasets. The statistical results on the CEC 2017 benchmark test functions show that the GWOA-TEO algorithm has good accuracy for global optimization. The classification results on the 8 UCI repository datasets also show that the GWOA-TEO algorithm achieves recognition rates competitive with those of the comparison algorithms. Thus, the proposed algorithm delivers excellent performance in solving optimization problems.
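
For readers unfamiliar with the baseline that GWOA-TEO extends, the sketch below shows a plain WOA loop in Python with a simple best-so-far memory, as mentioned in the abstract. The thermal exchange step, the memory-based crossover, and the leader-update mechanism are specific to the paper and are not reproduced here; all names and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions only) of the standard WOA update with a
# small archive of historical best-so-far solutions.
import numpy as np

def woa_step(positions, best, a, rng):
    """One standard WOA position update (encircling, random search, or spiral).

    positions: (n_whales, dim) current population
    best:      (dim,) best-so-far (leader) position
    a:         scalar decreasing linearly from 2 to 0 over the iterations
    """
    n, dim = positions.shape
    new = np.empty_like(positions)
    for i in range(n):
        A = 2.0 * a * rng.random() - a               # |A| < 1 -> exploit, otherwise explore
        C = 2.0 * rng.random()
        if rng.random() < 0.5:
            if abs(A) < 1.0:
                D = np.abs(C * best - positions[i])  # encircle the leader
                new[i] = best - A * D
            else:
                j = rng.integers(n)                  # move toward a random whale
                D = np.abs(C * positions[j] - positions[i])
                new[i] = positions[j] - A * D
        else:
            l = rng.uniform(-1.0, 1.0)               # spiral (bubble-net) move
            D = np.abs(best - positions[i])
            new[i] = D * np.exp(l) * np.cos(2.0 * np.pi * l) + best
    return new

def run_woa(f, lower, upper, n_whales=30, iters=200, memory_size=10, seed=0):
    """Plain WOA loop keeping a small archive ("memory") of historical leaders."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    X = rng.uniform(lo, hi, size=(n_whales, lo.size))
    fit = np.apply_along_axis(f, 1, X)
    memory = [X[np.argmin(fit)].copy()]              # best-so-far solutions, best first
    best_val = fit.min()
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)
        X = np.clip(woa_step(X, memory[0], a, rng), lo, hi)
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < best_val:
            best_val = fit.min()
            memory.insert(0, X[np.argmin(fit)].copy())
            memory = memory[:memory_size]
    return memory[0], best_val
```

Running this baseline on a simple benchmark, e.g. `run_woa(lambda x: np.sum(x**2), [-5]*10, [5]*10)`, illustrates the behaviour that the memory-based crossover and the TEO refinement in GWOA-TEO are designed to improve.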

