Smoothing Nonlinear Penalty Functions for Constrained Optimization Problems

2003, Vol 24 (3-4), pp. 351-364
Author(s): X. Q. Yang, Z. Q. Meng, X. X. Huang, G. T. Y. Pong
2014, Vol 2014, pp. 1-6
Author(s): Bingzhuang Liu, Wenling Zhao

For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, one for each kind, constructed by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both penalty functions enjoy improved smoothness. Under mild conditions, both can be proved exact, in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
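For intuition, here is a minimal sketch of the classical device these results build on; the notation is generic, and the paper's specific augmented construction is not reproduced here. For the problem $\min f(x)$ subject to $g_i(x) \le 0$, $i = 1, \dots, m$, the classical $\ell_1$ penalty function is

$$F_\sigma(x) = f(x) + \sigma \sum_{i=1}^{m} \max\{0, g_i(x)\},$$

which is exact for all sufficiently large finite $\sigma$ but nonsmooth wherever some $g_i(x) = 0$. Smoothing replaces $\max\{0, t\}$ with a differentiable approximation, for example $p_\epsilon(t) = \tfrac{1}{2}\bigl(t + \sqrt{t^2 + 4\epsilon^2}\bigr)$, which tends to $\max\{0, t\}$ as $\epsilon \to 0$. Treating the penalty weight itself as an extra decision variable, as the abstract describes, is one way to retain exactness while improving smoothness.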


Author(s): Dongkyu Sohn, Shingo Mabu, Kotaro Hirasawa, Jinglu Hu

This paper proposes Adaptive Random search with Intensification and Diversification combined with a Genetic Algorithm (RasID-GA) for constrained optimization. In previous work we proposed RasID-GA, which combines the best properties of RasID and Genetic Algorithms, for unconstrained optimization problems. Constrained optimization problems are generally hard to solve because the feasible region is very limited and both the objective function and the constraint conditions must be taken into account. Conventional methods usually handle the constraints through penalty functions, but penalty functions are widely recognized as hard to tune, since the penalty terms must be balanced against the objective function. In this paper we propose a constrained optimization method based on RasID-GA that solves the given problems without using penalty functions. The proposed method is tested and compared with Evolution Strategy with Stochastic Ranking on 11 well-known constrained benchmark problems. The simulation results show that RasID-GA finds optimal or near-optimal solutions without using penalty functions.
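The abstract does not spell out how candidate solutions are compared without a penalty term. One widely used penalty-free comparison is Deb's feasibility rule; the Python sketch below is purely illustrative and is not claimed to be the mechanism inside RasID-GA.

```python
# Illustrative only: Deb's feasibility rules, a common penalty-free way to
# compare candidates in evolutionary constrained optimization. This is NOT
# necessarily the comparison used inside RasID-GA.

def constraint_violation(x, constraints):
    """Total violation of inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def better(x, y, objective, constraints):
    """Return True if x is preferred over y under Deb's rules:
    1) a feasible point beats an infeasible one,
    2) two feasible points are compared by objective value,
    3) two infeasible points are compared by total violation."""
    vx = constraint_violation(x, constraints)
    vy = constraint_violation(y, constraints)
    if vx == 0.0 and vy == 0.0:
        return objective(x) < objective(y)   # both feasible: lower f wins
    if vx == 0.0 or vy == 0.0:
        return vx == 0.0                     # feasible beats infeasible
    return vx < vy                           # both infeasible: less violation wins

# Example: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1,
# rewritten as g(x) = 1 - x0 - x1 <= 0.
f = lambda x: x[0]**2 + x[1]**2
g = [lambda x: 1.0 - x[0] - x[1]]
print(better([0.5, 0.5], [0.0, 0.0], f, g))  # True: feasible beats infeasible
```

A comparison like this can replace fitness-with-penalty inside any selection operator, which removes the need to balance penalty terms against the objective.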


Author(s): Francisco Facchinei, Vyacheslav Kungurtsev, Lorenzo Lampariello, Gesualdo Scutari

We consider nonconvex constrained optimization problems and propose a new approach to the convergence analysis based on penalty functions. We use classical penalty functions in an unconventional way: they enter only the theoretical analysis of convergence, while the algorithm itself is penalty free. Based on this idea, we establish several new results, including the first general analysis of diminishing stepsize methods in nonconvex constrained optimization, showing convergence to generalized stationary points, and a complexity study for sequential quadratic programming-type algorithms.
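For readers unfamiliar with the term, a diminishing stepsize method uses a stepsize sequence satisfying the standard conditions (generic notation, not taken from the paper):

$$\gamma_k > 0, \qquad \gamma_k \to 0, \qquad \sum_{k=0}^{\infty} \gamma_k = \infty,$$

so that individual steps vanish while the iterates can still cover unbounded total distance. The paper's contribution is to analyze such methods in the nonconvex constrained setting, with penalty functions appearing only in the proofs.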

