SINGLE MACHINE FAMILY SCHEDULING WITH TWO COMPETING AGENTS TO MINIMIZE MAKESPAN

2011 ◽  
Vol 28 (06) ◽  
pp. 773-785 ◽  
Author(s):  
GUOSHENG DING ◽  
SHIJIE SUN

We consider two-agent scheduling on a single machine, where jobs belong to families and setups are required between these families. Each agent's objective is to minimize its own makespan. One of our goals is to find the optimal solution for one agent subject to a constraint on the other agent's makespan (constrained optimization). This problem is equivalent to the caudate Knapsack problem that we define in the paper. The other goal is to find a single nondominated schedule (i.e., one for which a better schedule for one of the two agents necessarily results in a worse schedule for the other agent) and to enumerate all nondominated schedules. Finally, two special cases, one with equal job processing times and the other with equal family setups, are studied. We prove that the constrained optimization problems in both cases can be solved in polynomial time and that both cases have a polynomially bounded number of nondominated schedules.
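
As an illustration of the nondominated-schedule notion, the following is a minimal brute-force sketch that enumerates the Pareto frontier of (makespan of agent A, makespan of agent B) over all job orders of a toy instance. The job data, setup times, and the switch-based setup accounting are assumptions for illustration only and do not reproduce the paper's algorithms.

```python
from itertools import permutations

# Hypothetical toy instance: each job is (agent, family, processing_time).
jobs = [("A", 1, 2), ("A", 2, 3), ("B", 1, 4), ("B", 2, 1)]
setup = {1: 2, 2: 1}  # assumed family setup times

def makespans(sequence):
    """Return (makespan_A, makespan_B) for one job sequence,
    charging a family setup whenever the machine switches families."""
    t, last_family = 0, None
    finish = {"A": 0, "B": 0}
    for agent, family, p in sequence:
        if family != last_family:
            t += setup[family]
            last_family = family
        t += p
        finish[agent] = t
    return finish["A"], finish["B"]

def nondominated(points):
    """Keep the Pareto-minimal (makespan_A, makespan_B) pairs."""
    pts = set(points)
    return sorted(p for p in pts
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in pts))

frontier = nondominated(makespans(seq) for seq in permutations(jobs))
print(frontier)
```

Exhaustive enumeration is only feasible for very small instances; the point of the paper is precisely that the constrained problem admits a knapsack-type treatment instead.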

Algorithms ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 131 ◽  
Author(s):  
Florin Stoican ◽  
Paul Irofti

The ℓ1 relaxations of the sparse and cosparse representation problems that appear in the dictionary learning procedure are usually solved repeatedly (with only the parameter vector varying), which makes them well suited to a multi-parametric interpretation. The associated constrained optimization problems differ only through an affine term from one iteration to the next (i.e., the problem's structure remains the same while only the current vector to be (co)sparsely represented changes). We exploit this fact by providing an explicit representation of the solution that is piecewise affine over a polyhedral support. Consequently, at runtime, the optimal solution (the (co)sparse representation) is obtained by a simple enumeration over the non-overlapping regions of the polyhedral partition followed by the application of the corresponding affine law. We show that, for a suitably large number of parameter instances, the explicit approach outperforms the classical implementation.
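
A minimal sketch of the runtime phase described above, assuming a partition has already been computed offline: each critical region is stored as a polyhedron A_i θ ≤ b_i together with an affine law x*(θ) = F_i θ + g_i, and evaluation reduces to locating the region containing θ. The matrices below are placeholders, not output of an actual multi-parametric solver.

```python
import numpy as np

# Hypothetical precomputed partition: critical region i is {theta : A_i theta <= b_i}
# and carries the affine law x*(theta) = F_i theta + g_i.
regions = [
    # region 1: the unit box 0 <= theta <= 1
    (np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]),
     np.array([1.0, 1.0, 0.0, 0.0]),
     np.array([[2.0, 0.0], [0.0, 3.0]]), np.array([0.5, -0.5])),
    # region 2: the slab theta_1 >= 1
    (np.array([[-1.0, 0.0]]), np.array([-1.0]),
     np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([0.0, 0.0])),
]

def explicit_solution(theta, tol=1e-9):
    """Runtime phase: scan the non-overlapping critical regions and apply the
    affine law of the region that contains the parameter vector theta."""
    for A, b, F, g in regions:
        if np.all(A @ theta <= b + tol):
            return F @ theta + g
    raise ValueError("parameter outside the explored parameter set")

print(explicit_solution(np.array([0.2, 0.4])))  # falls in region 1
```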


Author(s):  
Miguel Terra-Neves ◽  
Inês Lynce ◽  
Vasco Manquinho

A Minimal Correction Subset (MCS) of an unsatisfiable constraint set is a minimal subset of constraints that, if removed, makes the constraint set satisfiable. MCSs enjoy a wide range of applications, such as finding approximate solutions to constrained optimization problems. However, existing work on applying MCS enumeration to optimization problems focuses on the single-objective case. In this work, Pareto Minimal Correction Subsets (Pareto-MCSs) are proposed for approximating the Pareto-optimal solution set of multi-objective constrained optimization problems. We formalize and prove an equivalence relationship between Pareto-optimal solutions and Pareto-MCSs. Moreover, Pareto-MCSs and MCSs can be connected in such a way that existing state-of-the-art MCS enumeration algorithms can be used to enumerate Pareto-MCSs. Finally, experimental results on the multi-objective virtual machine consolidation problem show that the Pareto-MCS approach is competitive with state-of-the-art algorithms.
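
For readers unfamiliar with the MCS notion, the following is a minimal sketch of the classical linear-search MCS extraction over a list of soft constraints, assuming a black-box satisfiability oracle; it illustrates the single-objective building block only, not the authors' Pareto-MCS enumeration.

```python
def compute_mcs(constraints, satisfiable):
    """Basic linear-search MCS extraction: grow a maximal satisfiable
    subset (MSS); its complement is a minimal correction subset (MCS).
    `satisfiable(subset)` is an assumed black-box consistency oracle."""
    mss, mcs = [], []
    for c in constraints:
        if satisfiable(mss + [c]):
            mss.append(c)
        else:
            mcs.append(c)
    return mcs

# Toy oracle over interval constraints on one integer variable x:
# each constraint is a (lo, hi) bound, satisfiable iff the bounds intersect.
def satisfiable(subset):
    lo = max((l for l, _ in subset), default=float("-inf"))
    hi = min((h for _, h in subset), default=float("inf"))
    return lo <= hi

print(compute_mcs([(0, 5), (3, 10), (7, 9), (4, 6)], satisfiable))  # -> [(7, 9)]
```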


2010 ◽  
Vol 450 ◽  
pp. 560-563
Author(s):  
Dong Mei Cheng ◽  
Jian Huang ◽  
Hong Jiang Li ◽  
Jing Sun

This paper presents a new method that combines a dynamic sub-population genetic algorithm with a modified dynamic penalty function to solve constrained optimization problems. The new method ensures that the final optimal solution satisfies all constraints by re-organizing the individuals of each generation into two sub-populations according to their feasibility. The modified dynamic penalty function gradually increases the punishment of infeasible individuals as the evolution proceeds. With the help of the penalty function and other improvements, the new algorithm avoids premature convergence and oscillation during the iterations. Typical instances are used to evaluate the optimization performance of the new method, and the results show that it handles constrained optimization problems well.
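
A hedged sketch of a dynamic penalty of the kind described, in which the penalty weight grows with the generation counter; the particular (C·t)^α Σ violation^β form is an assumed textbook variant, not necessarily the paper's modified function.

```python
def dynamic_penalty(x, f, constraints, generation, C=0.5, alpha=2.0, beta=2.0):
    """Penalized fitness f(x) + (C*t)^alpha * sum(violation^beta): the penalty
    grows with the generation t, so infeasible individuals are tolerated early
    in the run and punished increasingly hard later. The (C, alpha, beta) form
    is an assumed standard dynamic penalty, not the paper's modified variant."""
    violation = sum(max(0.0, g(x)) ** beta for g in constraints)
    return f(x) + (C * generation) ** alpha * violation

# Toy usage: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e., x >= 1).
f = lambda x: x * x
constraints = [lambda x: 1.0 - x]
for t in (1, 10, 100):
    print(t, dynamic_penalty(0.5, f, constraints, t))
```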


Author(s):  
Shengyu Pei

How to solve constrained optimization problems constitutes an important part of research on optimization. In this paper, a hybrid immune clonal particle swarm optimization multi-objective algorithm is proposed to solve constrained optimization problems. In the proposed algorithm, the population is first initialized using good point set theory. Then, differential evolution is adopted to improve the local optimal solution of each particle, with an immune clonal strategy incorporated to further improve each particle. As a final step, a sub-swarm is used to enhance the position and velocity of each particle. The new algorithm has been tested on 24 standard test functions and three engineering optimization problems, and the results show that it performs well in both robustness and convergence.
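
A minimal sketch of the good-point-set initialization step, assuming the common construction γ_i = 2 cos(2πi/p) with p the smallest prime no less than 2D + 3; the rest of the hybrid algorithm (differential evolution, immune clonal strategy, sub-swarms) is not reproduced here.

```python
import numpy as np

def good_point_set(n_points, dim, lower, upper):
    """Initialize a population with a good point set rather than uniform random
    sampling. Uses the common gamma_i = 2*cos(2*pi*i/p) construction with p the
    smallest prime >= 2*dim + 3 (an assumed standard variant)."""
    def next_prime(m):
        k = m
        while True:
            if all(k % d for d in range(2, int(k ** 0.5) + 1)):
                return k
            k += 1
    p = next_prime(2 * dim + 3)
    gamma = 2.0 * np.cos(2.0 * np.pi * np.arange(1, dim + 1) / p)
    k = np.arange(1, n_points + 1).reshape(-1, 1)
    unit = np.mod(k * gamma, 1.0)              # fractional parts, points in [0, 1)^dim
    return lower + unit * (upper - lower)      # map onto the search box

pop = good_point_set(20, 5, np.array([-5.0] * 5), np.array([5.0] * 5))
print(pop.shape)
```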


Author(s):  
Dongkyu Sohn ◽  
Shingo Mabu ◽  
Kotaro Hirasawa ◽  
Jinglu Hu

This paper proposes Adaptive Random search with Intensification and Diversification combined with Genetic Algorithm (RasID-GA) for constrained optimization. In previous work, we proposed RasID-GA, which combines the best properties of RasID and Genetic Algorithm, for unconstrained optimization problems. In general, it is very difficult to find an optimal solution for constrained optimization problems because the feasible solution space is very limited and both the objective function and the constraint conditions must be considered. Conventional constrained optimization methods usually use penalty functions to solve the given problems, but penalty functions are generally recognized as hard to handle because of the balance required between the penalty terms and the objective function. In this paper, we propose a constrained optimization method using RasID-GA that solves the given problems without using penalty functions. The proposed method is tested and compared with Evolution Strategy with Stochastic Ranking on 11 well-known benchmark problems with constraints. The simulation results show that RasID-GA can find optimal or approximate solutions without using penalty functions.
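
For context on the comparison, the sketch below implements the stochastic-ranking bubble sort used by the Evolution Strategy baseline mentioned above: adjacent individuals are compared on the objective with probability pf (or when both are feasible), and on the constraint violation otherwise. It illustrates the baseline's constraint handling, not RasID-GA itself.

```python
import random

def stochastic_ranking(population, f, phi, pf=0.45, sweeps=None):
    """Stochastic-ranking bubble sort (Runarsson & Yao), used here only as an
    illustration of a penalty-free constraint-handling comparison rule."""
    ranked = list(population)
    n = len(ranked)
    sweeps = sweeps if sweeps is not None else n
    for _ in range(sweeps):
        swapped = False
        for j in range(n - 1):
            a, b = ranked[j], ranked[j + 1]
            if (phi(a) == phi(b) == 0.0) or (random.random() < pf):
                out_of_order = f(a) > f(b)        # compare on the objective
            else:
                out_of_order = phi(a) > phi(b)    # compare on the violation
            if out_of_order:
                ranked[j], ranked[j + 1] = b, a
                swapped = True
        if not swapped:
            break
    return ranked

# Toy usage: minimize f(x) = x^2 subject to x >= 1 (violation phi = max(0, 1 - x)).
pop = [0.2, 1.5, 0.9, 2.0]
print(stochastic_ranking(pop, f=lambda x: x * x, phi=lambda x: max(0.0, 1.0 - x)))
```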


2021 ◽  
Vol 30 (01) ◽  
pp. 2140001
Author(s):  
Litao Ma ◽  
Jiqiang Chen ◽  
Sitian Qin ◽  
Lina Zhang ◽  
Feng Zhang

Fuzzy chance-constrained optimization problems arise in many practical applications and theoretical analyses, yet real-time algorithms for solving them are scarce. Therefore, in this paper, a continuous-time neurodynamic approach is proposed for solving a class of fuzzy chance-constrained optimization problems. Firstly, an equivalent deterministic problem with an inequality constraint is discussed, and then a continuous-time neurodynamic approach is proposed. Secondly, a necessary and sufficient optimality condition for the considered optimization problem is obtained. Thirdly, the boundedness, global existence and Lyapunov stability of the state solution of the proposed approach are proved. Moreover, convergence to the optimal solution of the considered problem is studied. Finally, several experiments are provided to show the performance of the proposed approach.
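
As a hedged illustration of the continuous-time neurodynamic idea, the sketch below integrates a standard projection neural network dx/dt = -x + P_Ω(x - ∇f(x)) over a box Ω with forward Euler; the paper's specific network for fuzzy chance constraints and its stability analysis are not reproduced here.

```python
import numpy as np

def neurodynamic_solve(grad_f, lower, upper, x0, dt=0.01, steps=5000):
    """Generic projection neural network dx/dt = -x + P_Omega(x - grad f(x))
    for a box-constrained problem, integrated with forward Euler. A standard
    neurodynamic model used for illustration only."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        proj = np.clip(x - grad_f(x), lower, upper)  # projection P_Omega
        x = x + dt * (-x + proj)                     # Euler step of the ODE
    return x

# Toy usage: minimize (x - 2)^2 + (y + 1)^2 over the box [0, 1] x [-0.5, 0.5];
# the state converges to the constrained minimizer (1.0, -0.5).
grad = lambda z: 2.0 * (z - np.array([2.0, -1.0]))
print(neurodynamic_solve(grad, np.array([0.0, -0.5]), np.array([1.0, 0.5]),
                         x0=[0.5, 0.0]))
```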


2014 ◽  
Vol 2014 ◽  
pp. 1-6
Author(s):  
Zhijun Luo ◽  
Lirong Wang

A new parallel variable distribution algorithm based on an interior-point SSLE algorithm is proposed for solving inequality-constrained optimization problems in which the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of the algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
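
The computational point about the shared coefficient matrix can be illustrated as follows: factor the matrix once per iteration and reuse the factorization for all three right-hand sides. The matrix and right-hand sides below are arbitrary placeholders, not the algorithm's actual systems.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Three linear systems per iteration share one coefficient matrix, so it can be
# factorized once and the factorization reused for each right-hand side.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)   # well-conditioned stand-in matrix
rhs = [rng.standard_normal(6) for _ in range(3)]    # three right-hand sides

lu, piv = lu_factor(A)                               # one LU factorization
directions = [lu_solve((lu, piv), b) for b in rhs]   # three cheap triangular solves
for b, d in zip(rhs, directions):
    print(np.allclose(A @ d, b))
```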

