BFO, A Trainable Derivative-free Brute Force Optimizer for Nonlinear Bound-constrained Optimization and Equilibrium Computations with Continuous and Discrete Variables

2017 · Vol 44 (1) · pp. 1-25
Author(s): Margherita Porcelli, Philippe L. Toint
2018 · Vol 2018 · pp. 1-13
Author(s): Jing Gao, Jian Cao, Yueting Yang

We propose a derivative-free trust-region algorithm with a nonmonotone filter technique for bound-constrained optimization. The derivative-free strategy targets minimization problems in which not all derivatives are available. The nonmonotone filter technique preserves the trust-region framework while ensuring global convergence under reasonable assumptions. Numerical experiments demonstrate that the new algorithm is effective for bound-constrained optimization. Locally optimal parameters with respect to overall computational time on a set of test problems are identified; the best parameter values found differ from the traditionally used ones, which indicates that the proposed algorithm has an advantage on nondifferentiable optimization problems.
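The nonmonotone acceptance rule is the heart of such a scheme, so a minimal sketch may help. The Python fragment below is illustrative only, not the authors' code: the name `dftr_minimize`, the memory length `M`, and the finite-difference model are our assumptions. It shows a projected derivative-free trust-region step whose trial point is accepted against the worst of the last `M` accepted values rather than only the current one:

```python
import numpy as np

def dftr_minimize(f, x0, lo, hi, delta0=1.0, M=5, max_iter=200, tol=1e-8):
    """Minimize f on the box [lo, hi] using function values only (sketch)."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    delta = delta0
    history = [f(x)]  # recent accepted values: the nonmonotone memory

    for _ in range(max_iter):
        fx = history[-1]
        h = min(delta, 1e-4)
        # Derivative-free model: gradient estimate from function values only.
        g = np.array([(f(np.clip(x + h * e, lo, hi)) - fx) / h
                      for e in np.eye(x.size)])
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Projected trust-region step along the model's descent direction.
        x_trial = np.clip(x - delta * g / gnorm, lo, hi)
        f_trial = f(x_trial)
        # Nonmonotone acceptance: compare against the worst of the last M
        # accepted values rather than only the current one.
        if f_trial < max(history[-M:]) - 1e-4 * delta * gnorm:
            x = x_trial
            history.append(f_trial)
            delta = min(2.0 * delta, 10.0)  # expand on success
        else:
            delta *= 0.5                    # shrink on failure
            if delta < 1e-12:
                break
    return x, history[-1]
```

For example, `dftr_minimize(lambda z: float(np.sum((z - 0.3) ** 2)), np.zeros(4), -1.0, 1.0)` recovers the interior minimizer. The point of the `max(history[-M:])` reference value is that occasional increases in f are tolerated, which is what distinguishes a nonmonotone rule from a classical monotone trust-region test.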


Author(s): Morteza Kimiaei

Abstract: This paper discusses an active-set trust-region algorithm for bound-constrained optimization problems. A sufficient descent condition is used as a computational test of whether the function value has been reduced. To obtain the complexity result, a criticality measure is used that is computationally cheaper than other known criticality measures. Under positive definiteness of the approximated Hessian matrices restricted to the subspace of non-active variables, it is shown that unlimited zigzagging cannot occur. The algorithm is shown to be competitive with state-of-the-art solvers on an ill-conditioned bound-constrained least-squares problem.
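To illustrate the active-set idea, here is a minimal sketch, not Kimiaei's implementation: the names `free_variables` and `restricted_step`, the tolerance `eps`, and the Newton-type inner solve are illustrative assumptions. Variables pinned at a bound by the gradient sign are frozen, and the model step is computed only on the subspace of free variables:

```python
import numpy as np

def free_variables(x, g, lo, hi, eps=1e-10):
    # A variable is treated as active when it sits on a bound and the
    # gradient pushes it further outside the box; all others are free.
    at_lo = (x - lo <= eps) & (g > 0.0)
    at_hi = (hi - x <= eps) & (g < 0.0)
    return ~(at_lo | at_hi)

def restricted_step(x, g, B, lo, hi):
    # Solve B s = -g only on the free subspace (assuming B is positive
    # definite there, as in the paper's analysis), keep active variables
    # fixed, and project the trial point back onto the box.
    free = free_variables(x, g, lo, hi)
    s = np.zeros_like(x)
    if free.any():
        Bf = B[np.ix_(free, free)]
        s[free] = np.linalg.solve(Bf, -g[free])
    return np.clip(x + s, lo, hi)
```

Freezing the active variables is what prevents the iterates from repeatedly hitting and leaving the same bounds, i.e. the zigzagging the abstract refers to.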


1995 · Vol 16 (5) · pp. 1190-1208
Author(s): Richard H. Byrd, Peihuang Lu, Jorge Nocedal, Ciyou Zhu

Author(s): T. O. Ting, H. C. Ting, T. S. Lee

In this work, a hybrid Taguchi-Particle Swarm Optimization (TPSO) algorithm is proposed to solve global numerical optimization problems with continuous and discrete variables. The hybrid combines the well-known Particle Swarm Optimization algorithm with the established Taguchi method, an important tool for robust design. Despite the simplicity of the hybridization, clear improvements are obtained: the Taguchi method is run only once per PSO iteration and therefore adds little computational cost, while producing a more diversified population that helps avoid premature convergence. The proposed method is applied to 13 benchmark problems, and the results show substantial improvements over the standard PSO algorithm on high-dimensional benchmark functions with continuous and discrete variables.
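A minimal sketch of the recombination step may clarify the hybridization; it is illustrative only, and `taguchi_mix` and the per-iteration usage below are our assumptions, not the authors' code. Rows of a two-level orthogonal array select each coordinate from one of two parent particles, and the better level per dimension is kept via Taguchi main effects:

```python
import numpy as np

def taguchi_mix(f, a, b, oa):
    # Rows of the two-level orthogonal array `oa` (entries 0/1) choose each
    # coordinate from parent `a` or parent `b`: one trial vector per row.
    trials = np.where(oa == 0, a, b)
    vals = np.array([f(t) for t in trials])
    best = np.empty_like(a)
    for j in range(a.size):
        # Main effect of each level: mean objective over the rows that
        # used that level for factor j; keep the better level.
        mean0 = vals[oa[:, j] == 0].mean()
        mean1 = vals[oa[:, j] == 1].mean()
        best[j] = a[j] if mean0 <= mean1 else b[j]
    return best

# Example on the 3-dimensional sphere function with the standard L4(2^3) array:
sphere = lambda x: float(np.sum(x ** 2))
L4 = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
child = taguchi_mix(sphere, np.array([1.0, -2.0, 0.5]),
                    np.array([-0.5, 0.1, 2.0]), L4)
```

In a PSO loop, such a recombination would run once per iteration, for instance on two good particles, with the resulting child injected back into the swarm; because the orthogonal array has far fewer rows than the 2^n exhaustive combinations, the extra function evaluations per iteration stay small.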

