Solution of some max-separable optimization problems with inequality constraints

Author(s):  
Karel Zimmermann
2020 ◽  
Vol 10 (6) ◽  
pp. 2075 ◽  
Author(s):  
Shih-Cheng Horng ◽  
Shieh-Shing Lin

Stochastic inequality constrained optimization problems (SICOPs) optimize an objective function subject to stochastic inequality constraints. SICOPs are NP-hard in terms of computational complexity. The ordinal optimization (OO) method offers an efficient framework for solving NP-hard problems. Although the OO method is helpful for NP-hard problems, stochastic inequality constraints drastically reduce its efficiency and competitiveness. In this paper, a heuristic method coupling elephant herding optimization (EHO) with ordinal optimization (OO), abbreviated as EHOO, is presented to solve SICOPs with large solution spaces. The EHOO approach has three parts: metamodel construction, diversification, and intensification. First, a regularized minimal-energy tensor-product spline is adopted as a metamodel to approximately evaluate the fitness of a solution. Next, an improved elephant herding optimization is developed to find N significant solutions from the entire solution space. Finally, an accelerated optimal computing budget allocation is utilized to select a superb solution from the N significant solutions. The EHOO approach is tested on a one-period multi-skill call center for minimizing the staffing cost, which is formulated as a SICOP. Simulation results obtained by the EHOO are compared with those of three other optimization methods. Experimental results demonstrate that the EHOO approach obtains solutions of higher quality with higher computational efficiency than the three competing methods.
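
As a rough illustration of the three-phase workflow described above, the Python sketch below uses toy stand-ins (a quadratic surrogate, a simplified clan update, equal budget allocation) for the paper's regularized spline metamodel, improved EHO, and accelerated OCBA; all function names and parameters are hypothetical, not the authors' implementation.

    import numpy as np

    def metamodel_fitness(x):
        # Phase 1: cheap surrogate evaluation of a candidate (toy quadratic).
        return np.sum((x - 3.0) ** 2)

    def improved_eho(pop, n_keep, generations=50, rng=None):
        # Phase 2: diversification -- a simplified clan-update loop that keeps
        # the N most promising candidates under the surrogate.
        rng = np.random.default_rng() if rng is None else rng
        for _ in range(generations):
            best = pop[np.argmin([metamodel_fitness(p) for p in pop])]
            pop = pop + 0.5 * (best - pop) + 0.1 * rng.standard_normal(pop.shape)
        scores = np.array([metamodel_fitness(p) for p in pop])
        return pop[np.argsort(scores)[:n_keep]]

    def exact_fitness(x, replications=20, rng=None):
        # Expensive stochastic evaluation (e.g. a call-center simulation).
        rng = np.random.default_rng() if rng is None else rng
        return np.mean([metamodel_fitness(x) + rng.normal(0.0, 0.5)
                        for _ in range(replications)])

    def accelerated_ocba(candidates, budget=200):
        # Phase 3: intensification -- here replaced by equal allocation of the
        # simulation budget; the paper's accelerated OCBA allocates it adaptively.
        reps = max(1, budget // len(candidates))
        scores = [exact_fitness(c, replications=reps) for c in candidates]
        return candidates[int(np.argmin(scores))]

    pop0 = np.random.default_rng(0).uniform(0.0, 10.0, size=(100, 5))
    elite = improved_eho(pop0, n_keep=10)
    print("selected solution:", accelerated_ocba(elite))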


2021 ◽  
Vol 2 (Original research articles) ◽
Author(s):  
Lisa C. Hegerhorst-Schultchen ◽  
Christian Kirches ◽  
Marc C. Steinbach

This work continues an ongoing effort to compare non-smooth optimization problems in abs-normal form to Mathematical Programs with Complementarity Constraints (MPCCs). We study general Nonlinear Programs with equality and inequality constraints in abs-normal form, so-called Abs-Normal NLPs, and their relation to equivalent MPCC reformulations. We introduce the concepts of Abadie's and Guignard's kink qualifications and prove relations to MPCC-ACQ and MPCC-GCQ for the counterpart MPCC formulations. Due to non-uniqueness of a specific slack reformulation suggested in [10], the relations are non-trivial. It turns out that constraint qualifications of Abadie type are preserved. We also prove the weaker result that equivalence of Guignard's (and Abadie's) constraint qualifications holds for all branch problems, while the question of GCQ preservation remains open. Finally, we introduce M-stationarity and B-stationarity concepts for abs-normal NLPs and prove first-order optimality conditions corresponding to MPCC counterpart formulations.
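
For orientation, one common way to write an abs-normal NLP and a slack-based MPCC counterpart is sketched below in LaTeX; the notation is assumed here and need not coincide with the paper's (in particular, the slack reformulation of [10] is not unique).

    \begin{aligned}
    \text{(abs-normal NLP)}\quad &\min_{x,z}\; f(x,|z|)
      \;\;\text{s.t.}\;\; g(x,|z|)=0,\;\; h(x,|z|)\ge 0,\;\; z = c_Z(x,|z|),\\
    \text{(MPCC counterpart)}\quad &\min_{x,u,v}\; f(x,u+v)
      \;\;\text{s.t.}\;\; g(x,u+v)=0,\;\; h(x,u+v)\ge 0,\\
      &\qquad\qquad u-v = c_Z(x,u+v),\;\; u\ge 0,\;\; v\ge 0,\;\; u^{\top}v=0,
    \end{aligned}

so that z = u - v and |z| = u + v whenever the complementarity condition u^T v = 0 holds.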


2010 ◽  
Vol 2010 ◽  
pp. 1-16 ◽  
Author(s):  
Paulraj S. ◽  
Sumathi P.

In most real-world optimization problems, the objective function and the constraints can be formulated as linear functions of independent variables. Linear Programming (LP) is the process of optimizing a linear function subject to a finite number of linear equality and inequality constraints. Solving linear programming problems efficiently has always been a fascinating pursuit for computer scientists and mathematicians. The computational complexity of any linear programming problem depends on the number of constraints and variables of the LP problem. Quite often, large-scale LP problems may contain many constraints which are redundant or cause infeasibility on account of inefficient formulation or errors in data input. The presence of redundant constraints does not alter the optimal solution(s); nevertheless, they may consume extra computational effort. Many researchers have proposed different approaches for identifying redundant constraints in linear programming problems. This paper compares five such methods and discusses the efficiency of each method by solving LP problems of various sizes and Netlib problems. The algorithms of each method are coded in the C programming language. The computational results are presented and analyzed in this paper.
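
As an illustration of one widely used LP-based redundancy test (not necessarily among the five methods compared in the paper), a constraint a_k^T x <= b_k is redundant if maximizing a_k^T x over the remaining constraints cannot exceed b_k. A minimal Python sketch using scipy.optimize.linprog:

    import numpy as np
    from scipy.optimize import linprog

    def redundant_constraints(A, b):
        """Return indices of rows of A x <= b that are implied by the others."""
        redundant = []
        m = A.shape[0]
        for k in range(m):
            mask = np.arange(m) != k
            # Maximize a_k^T x subject to the remaining constraints
            # (linprog minimizes, so the objective is negated).
            res = linprog(-A[k], A_ub=A[mask], b_ub=b[mask],
                          bounds=[(None, None)] * A.shape[1], method="highs")
            if res.status == 0 and -res.fun <= b[k] + 1e-9:
                redundant.append(k)
        return redundant

    # x <= 4 is implied by x <= 2, so the second constraint is redundant.
    A = np.array([[1.0], [1.0]])
    b = np.array([2.0, 4.0])
    print(redundant_constraints(A, b))   # -> [1]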


2020 ◽  
Vol 28 (4) ◽  
pp. 280-289
Author(s):  
Hamda Chagraoui ◽  
Mohamed Soula

The purpose of the present work is to improve the performance of the standard collaborative optimization (CO) approach based on an existing dynamic relaxation method, which may be weakened by the choice of starting design points. First, a New Relaxation (NR) method is proposed to address the convergence difficulties and low accuracy of CO. The new method builds on the existing dynamic relaxation method and is achieved by changing the system-level consistency equality constraints into relaxed inequality constraints. Then, a Modified Collaborative Optimization (MCO) approach is proposed to eliminate the impact of the information inconsistency between the system level and the discipline level on the feasibility of optimal solutions. In the MCO approach, this inconsistency is treated by transforming the discipline-level constrained optimization problems into unconstrained optimization problems using an exact penalty function. Based on the NR method, the performance of the MCO approach is assessed by solving two multidisciplinary optimization problems. The obtained results show that the MCO approach significantly improves the convergence of CO. They also show that the present MCO succeeds in obtaining feasible solutions, while CO fails to provide feasible solutions with the same starting design points.
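
A minimal sketch of the two ingredients highlighted above, relaxing the system-level consistency equalities into inequalities with a shrinking tolerance and replacing discipline-level constraints by an exact penalty, is given below in Python; the toy disciplines, penalty weight, and tolerance update are assumptions of this sketch, not the paper's NR method or MCO formulation.

    import numpy as np
    from scipy.optimize import minimize

    def discipline_problem(z_target, g, mu=100.0):
        # Discipline level: match the system target while satisfying g(x) <= 0,
        # enforced through an exact l1 penalty (unconstrained subproblem).
        obj = lambda x: np.sum((x - z_target) ** 2) + mu * np.maximum(0.0, g(x)).sum()
        return minimize(obj, z_target, method="Nelder-Mead").x

    def system_problem(F, x_stars, eps):
        # System level: consistency equalities J_i = 0 relaxed to J_i <= eps.
        cons = [{"type": "ineq", "fun": lambda z, xs=xs: eps - np.sum((z - xs) ** 2)}
                for xs in x_stars]
        z0 = np.mean(x_stars, axis=0)
        return minimize(F, z0, method="SLSQP", constraints=cons).x

    F = lambda z: z[0] ** 2 + z[1] ** 2          # toy system objective
    g1 = lambda x: np.array([1.0 - x[0]])        # discipline 1: requires x0 >= 1
    g2 = lambda x: np.array([0.5 - x[1]])        # discipline 2: requires x1 >= 0.5

    z, eps = np.array([2.0, 2.0]), 1.0
    for _ in range(20):
        x_stars = [discipline_problem(z, g) for g in (g1, g2)]
        z = system_problem(F, x_stars, eps)
        eps *= 0.5                               # shrink the relaxation tolerance
    print("consistent design:", z)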

