A Constraint Satisfaction Approach for Multi-Attribute Design Optimization Problems

Author(s):
Joseph D’Ambrosio
Timothy Darr
William Birmingham

Abstract: In this paper, we describe a multi-attribute domain CSP approach for solving a class of discrete, constrained, optimization problems. The multi-attribute domain CSP formulation provides a compact representation for design problems characterized by multiple, conflicting attributes. Design trade-off information is represented by a multi-attribute value function. Necessary conditions for an optimal solution, defined in terms of the value function, are represented as constraints. This provides a uniform problem-solving approach (constraint satisfaction) for identifying solutions that are both feasible and of high value. We present and characterize a consistency algorithm for this type of CSP.
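
A minimal sketch of the general idea (not the authors' consistency algorithm): treat the design choice as a discrete CSP, score candidates with an additive multi-attribute value function, and use a value-based cut alongside the feasibility constraints to discard candidates that cannot be optimal. The catalog, attributes, and weights below are invented for illustration.

```python
# Hypothetical component catalog: each variable (subsystem) has discrete
# alternatives with attribute values.
from itertools import product

catalog = {
    "cpu": [{"name": "A", "cost": 40, "power": 10}, {"name": "B", "cost": 70, "power": 6}],
    "mem": [{"name": "S", "cost": 20, "power": 3},  {"name": "L", "cost": 35, "power": 5}],
}

# Additive multi-attribute value function (higher is better); weights are assumed.
def value(design):
    cost = sum(c["cost"] for c in design.values())
    power = sum(c["power"] for c in design.values())
    return -0.6 * (cost / 105.0) - 0.4 * (power / 15.0)

# Feasibility constraint (budget), plus a value-based cut that plays the role of
# a necessary optimality condition: discard any design whose value does not
# exceed the best value found so far (branch-and-bound style pruning).
def feasible(design):
    return sum(c["cost"] for c in design.values()) <= 100

best, best_val = None, float("-inf")
for combo in product(*catalog.values()):
    design = dict(zip(catalog.keys(), combo))
    if not feasible(design):
        continue                      # ordinary constraint filtering
    v = value(design)
    if v <= best_val:
        continue                      # value-based constraint: cannot be optimal
    best, best_val = design, v

print({k: c["name"] for k, c in best.items()}, round(best_val, 3))
```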

Author(s):
Jing Qiu
Jiguo Yu
Shujun Lian

In this paper, we propose a new non-smooth penalty function with two parameters for nonlinear inequality-constrained optimization problems. We also propose a twice continuously differentiable function that is a smoothing approximation to the non-smooth penalty function, and we define the corresponding smoothed penalty problem. A global solution of the smoothed penalty problem is proved to be an approximate global solution of the non-smooth penalty problem. Based on the smoothed penalty function, we develop an algorithm and prove that the sequence it generates converges to an optimal solution of the original problem.
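
The following is a generic smoothed-penalty loop in the spirit of the abstract, not the specific two-parameter penalty proposed there: a smooth approximation of max(t, 0) replaces the non-smooth penalty term, the smoothed subproblem is minimized, and the penalty and smoothing parameters are tightened over a few outer iterations. The toy problem and the update schedule are assumptions.

```python
# Minimize x1^2 + x2^2 subject to x1 + x2 >= 1 (optimum at [0.5, 0.5]).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2          # objective
g = lambda x: 1.0 - x[0] - x[1]              # constraint written as g(x) <= 0

def phi(t, eps):
    # Twice continuously differentiable approximation of max(t, 0).
    return 0.5 * (t + np.sqrt(t * t + eps * eps))

def smoothed_penalty(x, rho, eps):
    return f(x) + rho * phi(g(x), eps)

x, rho, eps = np.zeros(2), 1.0, 1.0
for _ in range(8):
    res = minimize(smoothed_penalty, x, args=(rho, eps), method="BFGS")
    x = res.x                                # warm-start the next, tighter subproblem
    rho *= 4.0                               # strengthen the penalty
    eps *= 0.25                              # sharpen the smoothing

print(np.round(x, 3), round(float(f(x)), 3))   # expect roughly [0.5, 0.5] and 0.5
```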


Algorithms
2019
Vol 12 (7)
pp. 131
Author(s):
Florin Stoican
Paul Irofti

The ℓ1 relaxations of the sparse and cosparse representation problems which appear in the dictionary learning procedure are usually solved repeatedly (varying only the parameter vector), making them well suited to a multi-parametric interpretation. The associated constrained optimization problems differ only through an affine term from one iteration to the next (i.e., the problem’s structure remains the same while only the current vector, which is to be (co)sparsely represented, changes). We exploit this fact by providing an explicit representation of the solution that is piecewise affine with polyhedral support. Consequently, at runtime, the optimal solution (the (co)sparse representation) is obtained through a simple enumeration over the non-overlapping regions of the polyhedral partition followed by the application of an affine law. We show that, for a suitably large number of parameter instances, the explicit approach outperforms the classical implementation.
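
A minimal sketch of the online step described above, assuming an offline phase has already produced a polyhedral partition with an affine law on each region; the stored regions below are invented placeholders, not output of the actual multi-parametric solver.

```python
import numpy as np

regions = [
    # (A, b, F, g): y lies in the region iff A @ y <= b; the solution there is F @ y + g.
    (np.array([[1.0, 0.0]]),  np.array([0.0]), np.array([[0.0, 1.0]]), np.array([0.0])),
    (np.array([[-1.0, 0.0]]), np.array([0.0]), np.array([[1.0, 1.0]]), np.array([0.5])),
]

def explicit_solution(y, regions, tol=1e-9):
    for A, b, F, g in regions:
        if np.all(A @ y <= b + tol):          # simple enumeration over the partition
            return F @ y + g                  # affine law valid on this region
    raise ValueError("parameter vector outside the stored partition")

y = np.array([-0.3, 0.7])                     # the vector to be (co)sparsely represented
print(explicit_solution(y, regions))
```

The online cost is a membership test per region plus one matrix-vector product, which is what makes the explicit approach attractive when the same problem is solved for many parameter instances.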


2014
Vol 17 (08)
pp. 1450055
Author(s):
Fabian Astic
Agnès Tourin

We propose a framework for analyzing the credit risk of secured loans with maximum loan-to-value covenants. Here, we do not assume that the collateral can be liquidated as soon as the maximum loan-to-value is breached. Closed-form solutions for the expected loss are obtained for nonrevolving loans. In the revolving case, we introduce a minimization problem with an objective function parameterized by a risk reluctance coefficient, capturing the trade-off between minimizing the expected loss incurred in the event of liquidation and maximizing the interest gain. Using stochastic control techniques, we derive the partial integro-differential equation satisfied by the value function, and solve it numerically with a finite difference scheme. The experimental results and their comparison with a standard loan-to-value-based lending policy suggest that stricter lending decisions would benefit the lender.
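
As a rough illustration of the numerical step, here is an explicit finite-difference sweep for a generic one-dimensional backward parabolic equation V_t + mu(x) V_x + 0.5 sigma(x)^2 V_xx = 0 with a terminal condition. The paper solves a partial integro-differential equation for the loan problem, so the dynamics, terminal profile, and boundary handling below are placeholder assumptions rather than the authors' model.

```python
import numpy as np

mu, sigma = 0.02, 0.3                        # assumed drift / volatility of the collateral
x = np.linspace(0.01, 3.0, 300)              # collateral-value grid
dx = x[1] - x[0]
T, n_steps = 1.0, 20000
dt = T / n_steps                             # small enough for explicit stability

V = np.maximum(1.0 - x, 0.0)                 # placeholder terminal loss profile
for _ in range(n_steps):
    Vx  = (V[2:] - V[:-2]) / (2 * dx)        # central first derivative
    Vxx = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx ** 2
    drift = mu * x[1:-1]
    diff  = 0.5 * (sigma * x[1:-1]) ** 2
    V[1:-1] += dt * (drift * Vx + diff * Vxx)    # one step backward in time
    V[0], V[-1] = V[1], 0.0                  # crude boundary conditions

print(round(float(np.interp(1.0, x, V)), 4)) # value at x = 1 (assumed spot)
```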


2010
Vol 42 (1)
pp. 158-182
Author(s):
Kurt Helmes
Richard H. Stockbridge

A new approach to the solution of optimal stopping problems for one-dimensional diffusions is developed. It arises by imbedding the stochastic problem in a linear programming problem over a space of measures. Optimizing over a smaller class of stopping rules provides a lower bound on the value of the original problem. Then the weak duality of a restricted form of the dual linear program provides an upper bound on the value. An explicit formula for the reward earned using a two-point hitting time stopping rule allows us to prove strong duality between these problems and, therefore, allows us to either optimize over these simpler stopping rules or to solve the restricted dual program. Each optimization problem is parameterized by the initial value of the diffusion and, thus, we are able to construct the value function by solving the family of optimization problems. This methodology requires little regularity of the terminal reward function. When the reward function is smooth, the optimal stopping locations are shown to satisfy the smooth pasting principle. The procedure is illustrated using two examples.
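
A small sketch of the two-point hitting-rule optimization, specialized to a driftless Brownian motion on a bounded grid: for a <= x <= b, the expected reward of stopping at the first exit from (a, b) is the linear interpolation ((b - x) g(a) + (x - a) g(b)) / (b - a), because the scale function of driftless Brownian motion is linear. The reward function and the grid are assumptions chosen only to illustrate the parameterized family of optimizations.

```python
import numpy as np

grid = np.linspace(0.0, 2.0, 101)
g = np.exp(-8 * (grid - 0.5) ** 2) + 0.7 * np.exp(-8 * (grid - 1.5) ** 2)  # assumed reward

def two_point_value(i):
    """Best expected reward at grid[i] over all two-point hitting rules (a, b)."""
    best = g[i]                                   # a = b = x: stop immediately
    for ia in range(i + 1):
        for ib in range(i, len(grid)):
            if ia == ib:
                continue
            a, b = grid[ia], grid[ib]
            w = (b - grid[i]) / (b - a)           # probability of hitting a before b
            best = max(best, w * g[ia] + (1 - w) * g[ib])
    return best

V = np.array([two_point_value(i) for i in range(len(grid))])
print(round(float(V[50]), 4), round(float(g[50]), 4))   # value vs. reward at x = 1
```

Repeating the optimization for every grid point mirrors the paper's construction of the value function from a family of problems parameterized by the initial state.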


2005
Vol 24
pp. 81-108
Author(s):
P. Geibel
F. Wysotzki

In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are states that are undesirable or dangerous to enter. We define the risk with respect to a policy as the probability of entering such a state when the policy is pursued. We consider the problem of finding good policies whose risk is smaller than some user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We show that the risk can be formulated as a second criterion function based on a cumulative return, whose definition is independent of the original value function. We present a model-free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on weighting the original value function and the risk. The weight parameter is adapted in order to find a feasible solution for the constrained problem that performs well with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints, and it was solved under certain assumptions on the model to obtain an optimal solution. The power of our learning algorithm is that it can be used even when some of these restrictive assumptions are relaxed.
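
A simplified stand-in for the weighted value/risk heuristic: tabular Q-learning on a toy MDP with an absorbing error state, where the learning signal is the reward minus xi times an error indicator, and xi is increased whenever the estimated risk of the greedy policy exceeds the threshold. The paper adapts the weight in both directions; the MDP, threshold, and schedule here are assumptions made for illustration.

```python
import random

ERROR, START, GOAL = 0, 1, 6
ACTIONS = (0, 1)                                   # 0 = long safe detour, 1 = risky shortcut

def step(s, a):
    """Toy MDP: a risky shortcut from the start state versus a safe but costly detour."""
    if s == START and a == 1:
        return (GOAL, 1.0, True) if random.random() < 0.9 else (ERROR, 0.0, True)
    s2 = 2 if s == START else s + 1                # detour chain 2 -> 3 -> 4 -> 5 -> goal
    if s2 == GOAL:
        return GOAL, 1.0, True
    return s2, -0.1, False

def episode_hits_error(policy):
    s, done = START, False
    while not done:
        s, _, done = step(s, policy(s))
    return s == ERROR

Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
greedy = lambda s: max(ACTIONS, key=lambda a: Q[(s, a)])
xi, omega, alpha, gamma = 0.0, 0.05, 0.1, 0.99     # risk weight, risk threshold, step sizes

for _ in range(100):
    for _ in range(50):                            # Q-learning on the weighted signal
        s, done = START, False
        while not done:
            a = random.choice(ACTIONS) if random.random() < 0.2 else greedy(s)
            s2, r, done = step(s, a)
            r -= xi * (s2 == ERROR)                # penalize entering the error state
            target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    risk = sum(episode_hits_error(greedy) for _ in range(400)) / 400.0
    if risk > omega:                               # adapt the weight only when infeasible
        xi += 0.5

print("risk of greedy policy:", risk, "  risk weight xi:", round(xi, 2))
```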


2011
Vol 19 (2)
pp. 249-285
Author(s):
Yong Wang
Zixing Cai

This paper proposes a (μ + λ)-differential evolution and an improved adaptive trade-off model for solving constrained optimization problems. The proposed (μ + λ)-differential evolution adopts three mutation strategies (i.e., rand/1 strategy, current-to-best/1 strategy, and rand/2 strategy) and binomial crossover to generate the offspring population. Moreover, the current-to-best/1 strategy has been improved in this paper to further enhance the global exploration ability by exploiting the feasibility proportion of the last population. Additionally, the improved adaptive trade-off model includes three main situations: the infeasible situation, the semi-feasible situation, and the feasible situation. In each situation, a constraint-handling mechanism is designed based on the characteristics of the current population. By combining the (μ + λ)-differential evolution with the improved adaptive trade-off model, a generic method named (μ + λ)-constrained differential evolution ((μ + λ)-CDE) is developed. The (μ + λ)-CDE is utilized to solve 24 well-known benchmark test functions provided for the special session on constrained real-parameter optimization of the 2006 IEEE Congress on Evolutionary Computation (CEC2006). Experimental results suggest that the (μ + λ)-CDE is very promising for constrained optimization, since it can reach the best known solutions for 23 test functions and is able to successfully solve 21 test functions in all runs. Moreover, in this paper, a self-adaptive version of (μ + λ)-CDE is proposed, which is the most competitive algorithm so far among the CEC2006 entries.
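
A minimal generation loop with rand/1 mutation, binomial crossover, and (μ + λ)-style survivor selection; the feasibility-first ordering used here is a crude stand-in for the paper's adaptive trade-off model (which treats the infeasible, semi-feasible, and feasible situations separately), and the toy problem and parameter settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, mu_pop, F, CR = 2, 20, 0.6, 0.9                # population size and DE control parameters
lo, hi = -5.0, 5.0

objective = lambda x: float(np.sum(x ** 2))          # toy objective
violation = lambda x: max(0.0, 1.0 - x[0] - x[1])    # constraint g(x) = 1 - x1 - x2 <= 0

pop = rng.uniform(lo, hi, size=(mu_pop, dim))
for _ in range(200):
    offspring = []
    for i in range(mu_pop):
        idx = [j for j in range(mu_pop) if j != i]
        r1, r2, r3 = rng.choice(idx, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # rand/1 mutation
        cross = rng.random(dim) < CR                 # binomial crossover mask
        cross[rng.integers(dim)] = True              # keep at least one mutant component
        offspring.append(np.clip(np.where(cross, mutant, pop[i]), lo, hi))
    union = np.vstack([pop, np.array(offspring)])    # (mu + lambda) pool of parents and children
    order = sorted(range(len(union)),
                   key=lambda k: (violation(union[k]), objective(union[k])))
    pop = union[order[:mu_pop]]                      # feasibility-first survivor selection

print(np.round(pop[0], 3), round(objective(pop[0]), 4))   # expect roughly [0.5 0.5] and 0.5
```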

