Nonsmooth optimal regulation and discontinuous stabilization

2003 ◽  
Vol 2003 (20) ◽  
pp. 1159-1195 ◽  
Author(s):  
A. Bacciotti ◽  
F. Ceragioli

For affine control systems, we study the relationship between an optimal regulation problem on the infinite horizon and stabilizability. We are interested in the case where the value function of the optimal regulation problem is not smooth and the feedback laws involved in stabilization may be discontinuous.
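In the standard setting for such problems (the notation below is generic, not taken from the paper), an affine control system and the associated infinite-horizon regulation cost can be written as:

```latex
\dot{x}(t) = f(x(t)) + \sum_{i=1}^{m} g_i(x(t))\, u_i(t), \qquad
V(x_0) = \inf_{u(\cdot)} \int_0^{\infty} \ell\big(x(t), u(t)\big)\, dt,
```

where the possible nonsmoothness of the value function $V$ and the possible discontinuity of stabilizing feedback laws $u = k(x)$ are the objects of study.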

Author(s):  
Vijitashwa Pandey ◽  
Deborah Thurston

Design for disassembly and reuse focuses on developing methods to minimize the difficulty of disassembly for maintenance or reuse. These methods can gain substantially if the relationship between component attributes (material mix, ease of disassembly, etc.) and the likelihood of reuse or disposal is understood. For products already in the marketplace, a feedback approach that evaluates the willingness of manufacturers or customers (decision makers) to reuse a component can reveal how a component's attributes affect reuse decisions. This paper introduces some metrics and combines them with ones proposed in the literature into a measure that captures the overall value of a decision made by the decision makers. The premise is that the decision makers would choose the decision that has the maximum value. Four decisions are considered regarding a component's fate after recovery, ranging from direct reuse to disposal. A method along the lines of discrete choice theory is used, in which maximum likelihood estimation determines the parameters that define the value function. The maximum likelihood method can take as input actual decisions made by the decision makers to assess the value function. This function can then be used to determine the likelihood that a component takes a certain path (one of the four decisions), given its attributes, which can facilitate long-range planning and also help determine ways in which reuse decisions can be influenced.
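The estimation step described above can be sketched as a multinomial-logit fit. The attribute names, simulated data, and logit functional form below are illustrative assumptions, not the authors' model:

```python
# Hypothetical sketch: fitting a linear value function for four end-of-life
# decisions (reuse, remanufacture, recycle, dispose) by maximum likelihood,
# in the spirit of discrete choice theory. Data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Component attributes: [material-mix score, ease of disassembly], both in [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 2))

# Assumed "true" per-decision weights, used only to simulate observed choices.
W_true = np.array([[ 2.0,  1.5],   # reuse
                   [ 1.0,  0.5],   # remanufacture
                   [ 0.0, -0.5],   # recycle
                   [-1.5, -1.0]])  # dispose

def choice_probs(W, X):
    """Multinomial-logit probabilities: P(decision d | x) proportional to exp(w_d . x)."""
    v = X @ W.T                          # value of each decision for each component
    v = v - v.max(axis=1, keepdims=True) # stabilize the exponentials
    e = np.exp(v)
    return e / e.sum(axis=1, keepdims=True)

# Simulate the decision makers' observed choices.
P = choice_probs(W_true, X)
y = np.array([rng.choice(4, p=p) for p in P])

def neg_log_likelihood(w_flat):
    W = w_flat.reshape(4, 2)
    P = choice_probs(W, X)
    return -np.log(P[np.arange(len(y)), y]).sum()

# Maximum likelihood estimate of the value-function parameters.
res = minimize(neg_log_likelihood, np.zeros(8), method="BFGS")
W_hat = res.x.reshape(4, 2)

# Score a new component: likelihood of each of the four paths.
new_component = np.array([[0.9, 0.8]])   # easy to disassemble, good material mix
print(choice_probs(W_hat, new_component).round(2))
```

For a component with favorable attributes, the fitted model assigns the highest probability to direct reuse, which is the kind of path-likelihood prediction the abstract describes.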


2012 ◽  
Author(s):  
Mona Gauth ◽  
Maria Henriksson ◽  
Peter Juslin ◽  
Neda Kerimi ◽  
Marcus Lindskog ◽  
...  

2018 ◽  
Vol 24 (2) ◽  
pp. 873-899 ◽  
Author(s):  
Mingshang Hu ◽  
Falei Wang

The present paper considers a stochastic optimal control problem in which the cost function is defined through a backward stochastic differential equation with infinite horizon driven by G-Brownian motion. We then study the regularity of the value function and establish the dynamic programming principle. Moreover, we prove that the value function is the unique viscosity solution of the related Hamilton–Jacobi–Bellman–Isaacs (HJBI) equation.
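Schematically (the notation below is generic and not taken from the paper), the cost is the initial value of an infinite-horizon backward SDE driven by a G-Brownian motion $B$:

```latex
Y_t = Y_T + \int_t^T f\big(X_s, Y_s, Z_s\big)\, ds
          - \int_t^T Z_s\, dB_s - (K_T - K_t),
\qquad 0 \le t \le T < \infty,
\qquad V(x) := Y_0^{x},
```

where $K$ is the decreasing G-martingale arising in the G-framework, and the value function $V$ is then characterized as the unique viscosity solution of a fully nonlinear equation of HJBI type, $G\big(x, V(x), DV(x), D^2 V(x)\big) = 0$.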


Author(s):  
Alexander Leonidovich Bagno ◽  
Alexander Mikhailovich Tarasyev

Asymptotic behavior of the value function is studied in an infinite horizon optimal control problem with an unbounded integrand discounted in the objective functional. Optimal control problems of this type arise in the analysis of trajectory trends in models of economic growth. Stability properties of the value function are expressed in infinitesimal form. This representation implies that the value function coincides with the generalized minimax solution of the Hamilton–Jacobi equation. It is shown that the boundary condition for the value function is replaced by the property of sublinear asymptotic behavior. An example is given to illustrate the construction of the value function as the generalized minimax solution in economic growth models.
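For orientation (generic notation, an assumption rather than the paper's own), the discounted infinite-horizon value function and its stationary Hamilton–Jacobi equation take the form:

```latex
V(x_0) = \sup_{u(\cdot)} \int_0^{\infty} e^{-\rho t}\, g\big(x(t), u(t)\big)\, dt,
\qquad \rho\, V(x) - H\big(x, \nabla V(x)\big) = 0,
```

with a sublinear growth property of the type $|V(x)| \le C\,(1 + \|x\|)$ playing the role usually played by a boundary condition in singling out the generalized minimax solution.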

