Continuous Constraints
Recently Published Documents

Total documents: 23 (five years: 1)
H-index: 7 (five years: 0)

Author(s): Fabian Gnegel, Armin Fügenschuh, Michael Hagel, Sven Leyffer, Marcus Stiemer

We present a general numerical solution method for control problems whose state variables are defined by a linear PDE over a finite set of binary or continuous control variables. We show empirically that a naive approach, which applies a numerical discretization scheme to the PDE to derive constraints for a mixed-integer linear program (MILP), leads to systems that are too large to be solved with state-of-the-art MILP solvers, especially if an accurate approximation of the state variables is desired. Our framework comprises two techniques to mitigate the growth of computation times with increasing discretization level: first, the linear system is solved for a basis of the control space in a preprocessing step; second, certain constraints are imposed only on demand via IBM ILOG CPLEX's lazy constraint callback feature. These techniques are compared with an approach in which the relations obtained by discretizing the continuous constraints are included directly in the MILP. We demonstrate our approach on two examples: modeling the spread of a wildfire and mitigating water contamination. In both examples the computational results show that our methods significantly reduce the solution time; in particular, the dependence of the computation time on the size of the spatial discretization of the PDE is substantially weakened.
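As a concrete illustration of the preprocessing step, the sketch below assumes the discretized linear PDE takes the generic form A y = B u + f and solves it once for the source term and once per unit control; the state for any control vector u is then recovered by superposition, so the optimization model only needs variables and constraints in the much smaller control space. The matrices, function names, and toy data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def precompute_basis_responses(A, B, f):
    """Solve the discretized linear system once for the source term and once per unit control."""
    y_f = np.linalg.solve(A, f)   # response to the source term alone
    Y = np.linalg.solve(A, B)     # column i: response to the unit control e_i
    return y_f, Y

def state_for_control(y_f, Y, u):
    """Superposition: state for an arbitrary control vector u."""
    return y_f + Y @ u

# Toy data standing in for an actual spatial discretization (illustrative only)
rng = np.random.default_rng(0)
n_state, n_ctrl = 50, 4
A = np.eye(n_state) + 0.1 * rng.standard_normal((n_state, n_state))
B = rng.standard_normal((n_state, n_ctrl))
f = rng.standard_normal(n_state)

y_f, Y = precompute_basis_responses(A, B, f)
u = rng.integers(0, 2, size=n_ctrl).astype(float)  # e.g. binary controls
y = state_for_control(y_f, Y, u)                   # state without re-solving the PDE
```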


Author(s): Mingyu Fan, Xiaojun Chang, Xiaoqin Zhang, Di Wang, Liang Du

Recently, structured-sparsity-inducing feature selection has become a hot topic in machine learning and pattern recognition. Most sparsity-inducing feature selection methods are designed to rank all features by some criterion and then select the k top-ranked features, where k is an integer. However, the k individually top-ranked features are usually not the best subset of k features, so the result may be suboptimal. In this paper, we propose a novel supervised feature selection method that directly identifies the top subset of k features. The new method is formulated as a classic regularized least-squares regression model with two groups of variables. The subproblem with respect to one group of variables turns out to be a 0-1 integer program, which is generally considered hard to solve. To address this, we use an efficient optimization method that first replaces the discrete 0-1 constraints with two continuous constraints and then applies the alternating direction method of multipliers to the equivalent problem. The obtained result is the top subset of k features under the proposed criterion rather than the subset of the k top-ranked features. Experiments on benchmark data sets demonstrate the effectiveness of the proposed method.
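To make the reformulation step concrete, the sketch below shows one commonly used way to trade the discrete constraint "m ∈ {0,1}^n with exactly k ones" for continuous constraints: the box 0 ≤ m ≤ 1 together with the sum constraint 1ᵀm = k forces ‖m‖² ≤ k, so adding the quadratic constraint ‖m‖² ≥ k leaves only binary points. This is a standard equivalence and may differ in detail from the exact pair of continuous constraints used in the paper; all names and data below are illustrative.

```python
import numpy as np

def is_binary_topk(m, k, tol=1e-8):
    """m in {0,1}^n with exactly k ones."""
    binary = np.all(np.isclose(m, 0.0, atol=tol) | np.isclose(m, 1.0, atol=tol))
    return binary and np.isclose(m.sum(), k)

def satisfies_continuous(m, k, tol=1e-8):
    """Continuous description: 0 <= m <= 1, sum(m) = k, ||m||^2 >= k."""
    box = np.all((m >= -tol) & (m <= 1.0 + tol))
    simplex = np.isclose(m.sum(), k)
    quad = m @ m >= k - tol
    return box and simplex and quad

# For 0 <= m_i <= 1 we have m_i^2 <= m_i, hence ||m||^2 <= sum(m) = k with
# equality only when every m_i is 0 or 1; the two descriptions therefore coincide.
rng = np.random.default_rng(1)
n, k = 10, 3
m_bin = np.zeros(n)
m_bin[rng.choice(n, k, replace=False)] = 1.0     # a feasible binary selection
m_frac = np.full(n, k / n)                       # fractional point with the right sum

print(satisfies_continuous(m_bin, k), is_binary_topk(m_bin, k))    # True True
print(satisfies_continuous(m_frac, k), is_binary_topk(m_frac, k))  # False False
```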


2015 · Vol. 31 (6) · pp. 1458–1471
Author(s): Ana Lucia Pais Ureche, Keisuke Umezawa, Yoshihiko Nakamura, Aude Billard

Constraints · 2014 · Vol. 19 (3) · pp. 309–337
Author(s): Marie Pelleau, Charlotte Truchet, Frédéric Benhamou

Author(s): Klaus Neumann, Matthias Rolf, Jochen Jakob Steil

The application of machine learning methods in the engineering of intelligent technical systems often requires the integration of continuous constraints, such as positivity, monotonicity, or bounded curvature of the learned function, to guarantee reliable performance. We show that the extreme learning machine is particularly well suited for this task. Constraints involving arbitrary derivatives of the learned function are implemented effectively through quadratic optimization, because the learned function is linear in its parameters and its derivatives can be derived analytically. We further provide a constructive approach to verify that discretely sampled constraints generalize to continuous regions, and show how local violations of the constraints can be rectified by iterative re-learning. We demonstrate the approach on a practical and challenging control problem from robotics, and illustrate how the proposed method enables learning from few data samples when additional prior knowledge about the problem is available.
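A minimal sketch of the underlying mechanism: with a fixed random hidden layer, the extreme learning machine output f(x) = h(x)ᵀβ is linear in the output weights β, so a monotonicity constraint f'(x) ≥ 0 at discretely sampled points becomes a set of linear inequalities in β, and training reduces to a least-squares objective with linear constraints. The example below solves this small quadratic program with SciPy's SLSQP solver rather than any solver used by the authors; the hidden-layer size, data, and constraint points are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Random hidden layer of an ELM: h(x) = tanh(x * w + b), output f(x) = h(x) @ beta
n_hidden = 30
w = rng.standard_normal(n_hidden)
b = rng.standard_normal(n_hidden)

def hidden(x):
    return np.tanh(np.outer(x, w) + b)            # shape (len(x), n_hidden)

def hidden_deriv(x):
    z = np.outer(x, w) + b
    return (1.0 - np.tanh(z) ** 2) * w            # analytic derivative d h_j / d x

# Noisy samples of a monotone target function
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(1.5 * x_train) + 0.05 * rng.standard_normal(x_train.size)

H = hidden(x_train)
x_cons = np.linspace(0.0, 1.0, 50)                # discretely sampled constraint points
D = hidden_deriv(x_cons)

# Quadratic objective in beta with linear inequality constraints
# f'(x_c) = D @ beta >= 0 enforcing monotonicity at the sampled points.
obj = lambda beta: np.sum((H @ beta - y_train) ** 2)
cons = {"type": "ineq", "fun": lambda beta: D @ beta}
beta0 = np.linalg.lstsq(H, y_train, rcond=None)[0]  # unconstrained ELM fit as start
res = minimize(obj, beta0, method="SLSQP", constraints=[cons])
beta = res.x

print("monotone on sampled points:", np.all(D @ beta >= -1e-6))
```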

