Penalty Function in Optimization Problems: A Review of Recent Developments

Author(s):  
Humberto Bustince ◽  
Javier Fernandez ◽  
Pedro Burillo


2014 ◽
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
Minggang Dong ◽  
Ning Wang ◽  
Xiaohui Cheng ◽  
Chuanxian Jiang

Motivated by recent advances in differential evolution and constraint-handling methods, this paper presents a novel modified oracle penalty function-based composite differential evolution (MOCoDE) for constrained optimization problems (COPs). More specifically, the original oracle penalty function is modified to satisfy the optimization criterion of COPs; the modified oracle penalty function is then incorporated into composite DE. Furthermore, a discrete variable handling technique is introduced into MOCoDE so that it can also solve more complex COPs with discrete, integer, binary, or mixed variables. The method is assessed on eleven constrained optimization benchmark functions and seven well-studied real-life engineering problems. Experimental results demonstrate that MOCoDE achieves competitive performance with respect to other state-of-the-art constrained optimization evolutionary algorithms. Moreover, the proposed method requires few parameters and is easy to implement, making it readily applicable in practice. MOCoDE is therefore an efficient alternative for solving constrained optimization problems.
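To give a concrete feel for the oracle-penalty idea behind this abstract, here is a minimal Python sketch. It is a deliberately simplified oracle-style penalty, not the paper's modified version; the names oracle_style_penalty, omega, and constraint_violation are illustrative assumptions.

```python
import numpy as np

def constraint_violation(g_values):
    """Euclidean norm of the violations of inequality constraints g_i(x) <= 0."""
    return np.linalg.norm(np.maximum(0.0, np.asarray(g_values, dtype=float)))

def oracle_style_penalty(f_value, g_values, omega):
    """Simplified oracle-style penalty.  The single parameter omega is a
    user-supplied estimate (the 'oracle') of the optimal objective value.
    Feasible points are scored by their distance to the oracle; infeasible
    points additionally pay their constraint violation.  A DE selection step
    would keep the candidate with the lower penalty value."""
    res = constraint_violation(g_values)
    penalty = abs(f_value - omega)
    return penalty if res == 0.0 else penalty + res
```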


2013 ◽  
Vol 2013 ◽  
pp. 1-7
Author(s):  
Zhensheng Yu ◽  
Jinhong Yu

We present a nonmonotone trust region algorithm for nonlinear equality constrained optimization problems. In our algorithm, the average of the successive penalty function values is used to rectify the ratio of predicted and actual reduction. Compared with existing nonmonotone trust region methods, our method is independent of the nonmonotone parameter. We establish the global convergence of the proposed algorithm and present numerical tests that show its efficiency.
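The nonmonotone ratio described above can be sketched as follows, using the standard trust region convention of comparing actual to predicted reduction, with the actual reduction measured from an average of recent penalty values. The function name, the plain arithmetic mean, and the threshold comments are assumptions for illustration, not the paper's exact rule.

```python
def nonmonotone_ratio(penalty_history, penalty_trial, predicted_reduction):
    """Nonmonotone trust region ratio: the actual reduction is measured from
    the average of recent penalty-function values instead of the latest value.
    penalty_history: recent penalty values; penalty_trial: value at the trial
    point; predicted_reduction: model decrease (assumed positive)."""
    reference = sum(penalty_history) / len(penalty_history)
    return (reference - penalty_trial) / predicted_reduction

# Typical acceptance test (eta1, eta2 are illustrative thresholds): accept the
# step if the ratio exceeds eta1, enlarge the trust region radius if it also
# exceeds eta2, and shrink the radius otherwise.
```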


Author(s):  
Jing Qiu ◽  
Jiguo Yu ◽  
Shujun Lian

In this paper, we propose a new non-smooth penalty function with two parameters for nonlinear inequality constrained optimization problems. We then propose a twice continuously differentiable function that is a smoothing approximation of the non-smooth penalty function and define the corresponding smoothed penalty problem. A global solution of the smoothed penalty problem is shown to be an approximate global solution of the non-smooth penalty problem. Based on the smoothed penalty function, we develop an algorithm and prove that the sequence it generates converges to the optimal solution of the original problem.
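As a generic illustration of a non-smooth penalty and its smooth approximation (the paper's own penalty has two parameters and a different smoothing), consider the classical l1 penalty with a softplus smoothing of max(0, t); rho and eps are illustrative parameter names.

```python
import numpy as np

def l1_penalty(f, g, x, rho):
    """Classical non-smooth l1 penalty: f(x) + rho * sum_i max(0, g_i(x))."""
    return f(x) + rho * np.sum(np.maximum(0.0, g(x)))

def smoothed_penalty(f, g, x, rho, eps):
    """Smooth approximation: max(0, t) is replaced by the softplus
    eps * log(1 + exp(t / eps)), which is infinitely differentiable and
    converges to max(0, t) as eps -> 0.  logaddexp keeps the evaluation
    numerically stable for large constraint values."""
    return f(x) + rho * np.sum(eps * np.logaddexp(0.0, g(x) / eps))
```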


2019 ◽  
Vol 61 (4) ◽  
pp. 177-185
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß

Abstract A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we compare known upper and lower bounds on expected runtimes and briefly discuss the techniques used to obtain these bounds.
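For readers unfamiliar with the heuristic being analyzed, the sketch below shows one iteration of a generic binary PSO (a Kennedy-Eberhart-style variant). The surveyed runtime bounds concern specific discrete PSO variants, so the update rule and parameter values here are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One iteration of a generic binary PSO: the velocity is pulled toward
    the personal best p_best and the global best g_best, and each bit of the
    position is resampled with probability sigmoid(velocity)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    flip_prob = 1.0 / (1.0 + np.exp(-v))
    x = (rng.random(x.shape) < flip_prob).astype(int)
    return x, v
```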


Author(s):  
Xinghuo Yu ◽  
Baolin Wu

In this paper, we propose a novel adaptive penalty function method for constrained optimization problems using the evolutionary programming technique. The method incorporates an adaptive tuning algorithm that adjusts the penalty parameters according to the population landscape, allowing fast escape from local optima and quick convergence toward the global optimum. The method is simple and computationally effective in the sense that only very few penalty parameters need tuning. Simulation results on five well-known benchmark problems are presented to show the performance of the proposed method.
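A minimal sketch of the adaptive-penalty idea: the penalty weight is raised when the population is mostly infeasible and lowered when it is mostly feasible. The update rule and thresholds below are assumptions for illustration; the paper derives its adjustment from its own measure of the population landscape.

```python
def adapt_penalty(rho, feasible_fraction,
                  increase=2.0, decrease=0.5, low=0.2, high=0.8):
    """Generic adaptive penalty update driven by the population state:
    rho is the current penalty weight and feasible_fraction is the share of
    the population that satisfies all constraints."""
    if feasible_fraction < low:
        return rho * increase   # mostly infeasible: press harder on constraints
    if feasible_fraction > high:
        return rho * decrease   # mostly feasible: relax to explore the objective
    return rho
```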


1995 ◽  
Vol 4 (2) ◽  
pp. 167-180 ◽  
Author(s):  
Oleg Verbitsky

We focus our attention on the class RMAX(2) of NP optimization problems. Owing to recent developments in interactive proof techniques, RMAX(2) was shown to be the lowest class in the logical classification that contains problems hard to approximate. Namely, the RMAX(2)-complete problem MAX CLIQUE (finding the size of the largest clique in a graph) is not approximable in polynomial time within any constant factor unless NP = P. We are interested in problems inside RMAX(2) that are not known to be complete but are still hard to approximate. We point out that one such problem is MAX log n, n, considered by Berman and Schnitger: given m conjunctions, each consisting of log m propositional variables or their negations, find the maximal number of simultaneously satisfiable conjunctions. We also obtain approximation hardness results for some other problems in RMAX(2). Finally, we discuss the question of whether or not the problems under consideration are RMAX(2)-complete.


Author(s):  
Nurullah Yilmaz ◽  
Ahmet Sahiner

In this study, we deal with nonlinear constrained global optimization problems. First, we introduce a new smooth exact penalty function for constrained optimization problems. We then combine the exact penalty function with an auxiliary function designed for constrained global optimization. We present a new auxiliary function approach and an adapted algorithm for solving nonlinear inequality constrained global optimization problems. Finally, we illustrate the efficiency of the algorithm on several numerical examples.
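The combination of a penalty function with an auxiliary (filled-function-style) escape step can be sketched as a two-phase loop. Everything below, including the form of auxiliary and the helper two_phase_search, is an illustrative assumption rather than the penalty or auxiliary function defined in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def auxiliary(x, x_star, penalty, r=1.0):
    """Filled-function-style auxiliary term: it decreases away from the current
    local minimizer x_star, while the second term discourages moving to points
    whose penalty value is worse than penalty(x_star)."""
    dist = np.linalg.norm(x - x_star) ** 2
    worse = max(0.0, penalty(x) - penalty(x_star))
    return 1.0 / (1.0 + r * dist) + worse ** 2

def two_phase_search(penalty, x0, n_restarts=5, scale=1.0, seed=0):
    """Alternate local minimization of the (smooth) penalty with minimization
    of the auxiliary function from perturbed starts, keeping the best result."""
    rng = np.random.default_rng(seed)
    x_star = minimize(penalty, x0).x
    for _ in range(n_restarts):
        x_pert = x_star + scale * rng.standard_normal(x_star.shape)
        x_escape = minimize(lambda x: auxiliary(x, x_star, penalty), x_pert).x
        x_new = minimize(penalty, x_escape).x
        if penalty(x_new) < penalty(x_star):
            x_star = x_new
    return x_star
```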

