Multi-Objective Optimization Through Pareto Minimal Correction Subsets

Author(s):  
Miguel Terra-Neves
Inês Lynce
Vasco Manquinho

A Minimal Correction Subset (MCS) of an unsatisfiable constraint set is a minimal subset of constraints that, if removed, makes the constraint set satisfiable. MCSs enjoy a wide range of applications, such as finding approximate solutions to constrained optimization problems. However, existing work on applying MCS enumeration to optimization problems focuses on the single-objective case. In this work, Pareto Minimal Correction Subsets (Pareto-MCSs) are proposed for approximating the Pareto-optimal solution set of multi-objective constrained optimization problems. We formalize and prove an equivalence relationship between Pareto-optimal solutions and Pareto-MCSs. Moreover, Pareto-MCSs and MCSs can be connected in such a way that existing state-of-the-art MCS enumeration algorithms can be used to enumerate Pareto-MCSs. Finally, experimental results on the multi-objective virtual machine consolidation problem show that the Pareto-MCS approach is competitive with state-of-the-art algorithms.
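To make the MCS notion concrete, here is a minimal, self-contained Python sketch (illustrative only, not the authors' implementation): a toy constraint set over two boolean variables, a brute-force satisfiability check, and enumeration of MCSs in order of increasing size. The constraint names and the brute-force check are invented for the example; practical MCS enumerators rely on SAT solvers instead.

```python
# A toy illustration of the MCS concept (not the paper's algorithm):
# the full constraint set is unsatisfiable because c1 and c3 conflict on x.
from itertools import product, combinations

constraints = {
    "c1": lambda x, y: x,           # x must be true
    "c2": lambda x, y: x or y,      # x or y must be true
    "c3": lambda x, y: not x,       # x must be false
}

def satisfiable(names):
    """Brute-force check: does some assignment satisfy all named constraints?"""
    return any(all(constraints[n](x, y) for n in names)
               for x, y in product([False, True], repeat=2))

def minimal_correction_subsets():
    """Enumerate MCSs by increasing size: removing an MCS restores
    satisfiability, and no proper subset of an MCS does the same."""
    names = set(constraints)
    mcses = []
    for k in range(1, len(names) + 1):
        for removed in combinations(sorted(names), k):
            if any(set(m) <= set(removed) for m in mcses):
                continue  # a smaller correcting subset exists, so not minimal
            if satisfiable(names - set(removed)):
                mcses.append(removed)
    return mcses

print(minimal_correction_subsets())  # [('c1',), ('c3',)]
```

Removing either {c1} or {c3} restores satisfiability and no proper subset of either does, so both are MCSs of this toy set.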

Author(s):  
Shengyu Pei

How to solve constrained optimization problems constitutes an important part of the research on optimization problems. In this paper, a hybrid immune clonal particle swarm optimization multi-objective algorithm is proposed to solve constrained optimization problems. In the proposed algorithm, the population is first initialized using good point set theory. Then, differential evolution is adopted to improve each particle's local optimum, and an immune clonal strategy is incorporated to improve each particle. Finally, sub-swarms are used to enhance the position and velocity of individual particles. The new algorithm has been tested on 24 standard test functions and three engineering optimization problems; the results show that it performs well in terms of both robustness and convergence.
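For readers unfamiliar with the building blocks, the following is a hedged Python sketch of the canonical PSO velocity/position update and the DE/rand/1 mutation that hybrids of this kind combine; the inertia and acceleration coefficients are common defaults, not values from the paper, and the immune clonal and sub-swarm components are not reproduced here.

```python
# A hedged sketch of the building blocks: canonical PSO update plus DE/rand/1
# mutation. Coefficients are common defaults, not the paper's settings.
import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle.
    pos/vel: current position and velocity; pbest: the particle's personal
    best; gbest: the swarm's global best (all lists of floats)."""
    new_vel = [w * v
               + c1 * random.random() * (pb - x)
               + c2 * random.random() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

def de_rand_1(a, b, c, f=0.5):
    """DE/rand/1 mutant vector built from three distinct population members;
    such mutants can be used to perturb a particle's local optimum."""
    return [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
```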


Algorithms
2019
Vol 12 (7)
pp. 131
Author(s):  
Florin Stoican
Paul Irofti

The ℓ1 relaxations of the sparse and cosparse representation problems that appear in the dictionary learning procedure are usually solved repeatedly (varying only the parameter vector), which makes them well suited to a multi-parametric interpretation. The associated constrained optimization problems differ only by an affine term from one iteration to the next (i.e., the problem's structure remains the same while only the current vector to be (co)sparsely represented changes). We exploit this fact by providing an explicit representation of the solution that is piecewise affine over a polyhedral partition of the parameter space. Consequently, at runtime, the optimal solution (the (co)sparse representation) is obtained by simply enumerating the non-overlapping regions of the polyhedral partition and applying the corresponding affine law. We show that, for a suitably large number of parameter instances, the explicit approach outperforms the classical implementation.
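As an illustration of the runtime procedure described above, here is a minimal Python/NumPy sketch with made-up region data: each region of the polyhedral partition stores its inequality description and its affine law, and evaluation reduces to point location followed by one matrix-vector product. The one-dimensional region data below is a hypothetical example, not output of the paper's method.

```python
# A minimal sketch of evaluating an explicit, piecewise-affine multiparametric
# solution. Regions are hypothetical: a partition of a 1-D parameter space.
import numpy as np

regions = [
    {"A": np.array([[-1.0]]), "b": np.array([0.0]),   # region: p >= 0
     "F": np.array([[2.0]]), "g": np.array([1.0])},   # law: x*(p) = 2p + 1
    {"A": np.array([[1.0]]), "b": np.array([0.0]),    # region: p <= 0
     "F": np.array([[-1.0]]), "g": np.array([1.0])},  # law: x*(p) = -p + 1
]

def explicit_solution(p):
    """Enumerate regions; inside region i (A_i p <= b_i), the optimum is
    given by the affine law F_i @ p + g_i."""
    for r in regions:
        if np.all(r["A"] @ p <= r["b"] + 1e-9):
            return r["F"] @ p + r["g"]
    raise ValueError("parameter outside the explored polyhedral partition")

print(explicit_solution(np.array([0.5])))  # [2.0]
```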


Author(s):  
James Kotary
Ferdinando Fioretto
Pascal Van Hentenryck
Bryan Wilder

This paper surveys recent attempts at leveraging machine learning to solve constrained optimization problems. It focuses on work that integrates combinatorial solvers and optimization methods with machine learning architectures. These approaches hold the promise of new hybrid machine learning and optimization methods that predict fast, approximate solutions to combinatorial problems and enable structural logical inference. This paper presents a conceptual review of recent advancements in this emerging area.


2010
Vol 450
pp. 560-563
Author(s):  
Dong Mei Cheng
Jian Huang
Hong Jiang Li
Jing Sun

This paper presents a new method that combines a dynamic sub-population genetic algorithm with a modified dynamic penalty function to solve constrained optimization problems. The new method ensures that the final solution satisfies all constraints by re-organizing the individuals of each generation into two sub-populations according to their feasibility. The modified dynamic penalty function gradually increases the punishment of infeasible individuals as the evolution progresses. With the help of the penalty function and other improvements, the new algorithm avoids premature convergence to local optima and wandering oscillations across iterations. Typical instances are used to evaluate the optimization performance of the new method; the results show that it handles constrained optimization problems well.
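To illustrate the general shape of such a penalty, the sketch below uses a common dynamic penalty form in which the weight grows with the generation counter t; the schedule (c·t)^α and the violation exponent β are illustrative assumptions, not the paper's exact formulation.

```python
# A hedged sketch of a generic dynamic penalty: the weight (c*t)**alpha grows
# with generation t, so infeasible individuals are tolerated early in the run
# and punished increasingly hard later. Parameters are illustrative defaults.
def penalized_fitness(objective, violations, t, c=0.5, alpha=2.0, beta=2.0):
    """objective: raw objective value (minimization);
    violations: per-constraint violation magnitudes (0 when satisfied);
    t: current generation index."""
    penalty = (c * t) ** alpha * sum(v ** beta for v in violations)
    return objective + penalty

# Example: the same infeasible individual is penalized far more at
# generation 50 than at generation 5.
print(penalized_fitness(1.0, [0.2], t=5))   # 1.25
print(penalized_fitness(1.0, [0.2], t=50))  # 26.0
```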


2014
Vol 22 (1)
pp. 47-77
Author(s):  
N. Al Moubayed
A. Petrovski
J. McCall

This paper improves a recently developed multi-objective particle swarm optimizer, D2MOPSO, which incorporates dominance with the decomposition used in the context of multi-objective optimization. Decomposition simplifies a multi-objective problem (MOP) by transforming it into a set of aggregation problems, whereas dominance plays a major role in building the leaders' archive. D2MOPSO introduces a new archiving technique that facilitates attaining better diversity and coverage in both the objective and solution spaces. The improved method is evaluated on standard benchmarks, including both constrained and unconstrained test problems, by comparing it with three state-of-the-art multi-objective evolutionary algorithms: MOEA/D, OMOPSO, and dMOPSO. The comparison and analysis of the experimental results, supported by statistical tests, indicate that the proposed algorithm is highly competitive, efficient, and applicable to a wide range of multi-objective optimization problems.
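For concreteness, the following Python sketch shows weighted Tchebycheff scalarization, one common aggregation function used in decomposition-based methods such as MOEA/D; the paper's exact aggregation function is not reproduced here, and the values below are purely illustrative.

```python
# A minimal sketch of weighted Tchebycheff decomposition: each weight vector
# defines one scalar subproblem, and minimizing the scalarized value over the
# decision space solves that subproblem.
def tchebycheff(objs, weights, ideal):
    """max_i w_i * |f_i(x) - z*_i|, where z* is the ideal (best-known) point."""
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

print(tchebycheff(objs=[2.0, 3.0], weights=[0.5, 0.5], ideal=[0.0, 0.0]))  # 1.5
```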

