The octagon abstract domain for continuous constraints

Constraints ◽  
2014 ◽  
Vol 19 (3) ◽  
pp. 309-337 ◽  
Author(s):  
Marie Pelleau ◽  
Charlotte Truchet ◽  
Frédéric Benhamou

2007 ◽  
Author(s):  
E. Carvalho ◽  
J. Cruz ◽  
P. Barahona ◽  
Theodore E. Simos ◽  
George Psihoyios ◽  
...  

Constraints ◽  
1996 ◽  
Vol 1 (1-2) ◽  
pp. 85-118 ◽  
Author(s):  
D. Sam-Haroud ◽  
B. Faltings

2015 ◽  
Vol 31 (6) ◽  
pp. 1458-1471 ◽  
Author(s):  
Ana Lucia Pais Ureche ◽  
Keisuke Umezawa ◽  
Yoshihiko Nakamura ◽  
Aude Billard

Author(s):  
Fabian Gnegel ◽  
Armin Fügenschuh ◽  
Michael Hagel ◽  
Sven Leyffer ◽  
Marcus Stiemer

Abstract: We present a general numerical solution method for control problems whose state variables are defined by a linear PDE over a finite set of binary or continuous control variables. We show empirically that a naive approach, which applies a numerical discretization scheme to the PDE to derive constraints for a mixed-integer linear program (MILP), leads to systems that are too large to be solved with state-of-the-art MILP solvers, especially if an accurate approximation of the state variables is desired. Our framework comprises two techniques to mitigate the growth of computation times with increasing discretization level: first, the linear system is solved for a basis of the control space in a preprocessing step; second, certain constraints are imposed only on demand via IBM ILOG CPLEX's lazy-constraint callback. These techniques are compared with an approach in which the relations obtained by discretizing the continuous constraints are included directly in the MILP. We demonstrate our approach on two examples: modeling the spread of wildfire and mitigating water contamination. In both examples the computational results show that our methods significantly reduce the solution time; in particular, the dependence of the computation time on the size of the spatial discretization of the PDE is substantially weakened.
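The preprocessing technique described in the abstract rests on superposition: for a discretized linear PDE A y = B u, the state response to each basis control can be solved for once, so the state for any control follows as a linear combination instead of embedding the full linear system in the MILP. A minimal NumPy sketch under illustrative assumptions (the small 1-D Laplacian system and all dimensions are hypothetical, not taken from the paper):

```python
import numpy as np

# Illustrative discretized linear PDE A y = B u: a 1-D Laplacian (assumption).
n = 50                                   # spatial grid points (assumption)
m = 4                                    # number of control variables (assumption)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
rng = np.random.default_rng(0)
B = rng.standard_normal((n, m))

# Preprocessing step: solve A y_i = B e_i once for each basis control e_i.
Y = np.linalg.solve(A, B)                # column i = state response to e_i

# Any control u then yields its state by superposition: y(u) = Y u.
u = np.array([1.0, 0.0, 1.0, 0.0])       # e.g. one binary control choice
y_fast = Y @ u
y_direct = np.linalg.solve(A, B @ u)     # what a naive MILP encoding computes
assert np.allclose(y_fast, y_direct)
```

After this preprocessing, the MILP only needs the m columns of Y rather than the full n-by-n discretized operator, which is where the reduced dependence on the spatial discretization comes from.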


Author(s):  
Mingyu Fan ◽  
Xiaojun Chang ◽  
Xiaoqin Zhang ◽  
Di Wang ◽  
Liang Du

Recently, structured-sparsity-inducing feature selection has become a hot topic in machine learning and pattern recognition. Most sparsity-inducing feature selection methods rank all features by some criterion and then select the k top-ranked features, where k is an integer. However, the set of k individually top-ranked features is usually not the best subset of k features, and the result may therefore be suboptimal. In this paper, we propose a novel supervised feature selection method that directly identifies the top-k subset. The new method is formulated as a classic regularized least-squares regression model with two groups of variables. The subproblem with respect to one group of variables turns out to be a 0-1 integer program, which is in general hard to solve. To address this, we use an efficient optimization method that first replaces the discrete 0-1 constraints with two continuous constraints and then applies the alternating direction method of multipliers (ADMM) to the equivalent problem. The result is the best subset of k features under the proposed criterion, rather than the set of k top-ranked features. Experiments on benchmark data sets show the effectiveness of the proposed method.
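A standard way to replace a discrete 0-1 selection constraint with two continuous ones (the abstract does not spell out the paper's exact reformulation, so this is a common trick offered as an illustration): replace x ∈ {0,1}^n with 1ᵀx = k by the box constraint 0 ≤ x ≤ 1 and the quadratic constraint xᵀx = k, alongside 1ᵀx = k. Since x_i ∈ [0,1] implies x_i² ≤ x_i, the two sums can only be equal when every coordinate is exactly 0 or 1. A small numerical check:

```python
import numpy as np

def satisfies_continuous_constraints(x, k, tol=1e-9):
    """Check the continuous constraints that replace x in {0,1}^n, sum(x) = k."""
    box = np.all((x >= -tol) & (x <= 1 + tol))   # 0 <= x <= 1
    sphere = abs(x @ x - k) <= tol               # x^T x = k
    plane = abs(x.sum() - k) <= tol              # 1^T x = k
    return bool(box and sphere and plane)

# A binary top-k indicator satisfies all three constraints...
x_bin = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
assert satisfies_continuous_constraints(x_bin, k=3)

# ...while a fractional point with the correct sum violates x^T x = k,
# because x_i in [0,1] gives x_i**2 <= x_i with equality only at 0 or 1.
x_frac = np.array([0.5, 0.5, 1.0, 0.5, 0.5])
assert abs(x_frac.sum() - 3) < 1e-9              # sum constraint holds
assert not satisfies_continuous_constraints(x_frac, k=3)
```

An ADMM-style scheme can then alternate between the regression variables and projections onto these two continuous sets, which is the flavor of approach the abstract describes.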

