On the Bellman equations with varying control

1996 ◽  
Vol 53 (1) ◽  
pp. 51-62 ◽  
Author(s):  
Shigeaki Koike

The value function is obtained by minimising a cost functional over admissible controls. The associated first-order Bellman equations with varying control are treated. It turns out that the value function is a viscosity solution of the Bellman equation and that the comparison principle holds, which is the essential tool for obtaining uniqueness of viscosity solutions.
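The dynamic-programming characterisation of the value function as a minimum of a cost functional over admissible controls can be sketched on a toy discretised problem. The grid, cost function, and dynamics below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical discretised control problem: state grid on [0, 1], finite
# control set. The Bellman fixed point is
#   V(x) = min_u [ running_cost(x, u) + discount * V(next_state(x, u)) ].
states = np.linspace(0.0, 1.0, 11)          # state grid, step 0.1
controls = np.array([-0.1, 0.0, 0.1])       # admissible control moves
discount = 0.9

def running_cost(x, u):
    # illustrative cost: distance from target 0.5 plus control effort
    return (x - 0.5) ** 2 + 0.1 * u ** 2

V = np.zeros_like(states)
for _ in range(500):                        # value iteration to a fixed point
    V_new = np.empty_like(V)
    for i, x in enumerate(states):
        best = np.inf
        for u in controls:
            x_next = np.clip(x + u, 0.0, 1.0)
            j = int(round(x_next * 10))     # index of nearest grid point
            best = min(best, running_cost(x, u) + discount * V[j])
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# At the target state the optimal choice is u = 0 with zero running cost,
# so V(0.5) solves v = 0 + 0.9 v, i.e. V(0.5) = 0.
```

The iteration is a contraction with modulus 0.9, so it converges to the unique fixed point, mirroring the uniqueness that the comparison principle delivers in the continuous setting.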

Author(s):  
Richard C. Kraaij ◽  
Mikola C. Schlottke

Abstract: We study the well-posedness of Hamilton–Jacobi–Bellman equations on subsets of $${\mathbb {R}}^d$$ in a context without boundary conditions. The Hamiltonian is given as the supremum over two parts: an internal Hamiltonian depending on an external control variable and a cost functional penalizing the control. The key feature in this paper is that the control function can be unbounded and discontinuous. This way we can treat functionals that appear e.g. in the Donsker–Varadhan theory of large deviations for occupation-time measures. To allow for this flexibility, we assume that the internal Hamiltonian and cost functional have controlled growth, and that they satisfy an equi-continuity estimate uniformly over compact sets in the space of controls. In addition to establishing the comparison principle for the Hamilton–Jacobi–Bellman equation, we also prove existence, the viscosity solution being the value function with exponentially discounted running costs. As an application, we verify the conditions on the internal Hamiltonian and cost functional in two examples.
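A standard way to make "viscosity solution" concrete is a monotone finite-difference scheme, which converges to the viscosity solution even when classical solutions are non-unique. The 1-D eikonal example below is a generic illustration of that idea, not taken from the paper:

```python
import numpy as np

# Hypothetical 1-D eikonal equation |u'(x)| = 1 on (0, 1), u(0) = u(1) = 0.
# The viscosity solution is u(x) = min(x, 1 - x); almost-everywhere classical
# solutions are not unique, which is why a comparison principle is needed.
n = 101
h = 1.0 / (n - 1)
u = np.full(n, 1e9)                 # large initial guess in the interior
u[0] = u[-1] = 0.0                  # boundary conditions

# Monotone upwind update u_i = min(u_{i-1}, u_{i+1}) + h, swept both ways
# until a fixed point is reached (Gauss-Seidel-style fast sweeping).
for _ in range(n):
    for i in range(1, n - 1):
        u[i] = min(u[i], min(u[i - 1], u[i + 1]) + h)
    for i in range(n - 2, 0, -1):
        u[i] = min(u[i], min(u[i - 1], u[i + 1]) + h)

x = np.linspace(0.0, 1.0, n)
exact = np.minimum(x, 1.0 - x)      # the viscosity solution on the grid
# the computed u matches the viscosity solution up to rounding error
```

Monotonicity of the update is what singles out the viscosity solution among the many a.e. solutions; the scheme never converges to, say, the sawtooth solutions that also satisfy |u'| = 1 almost everywhere.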


2019 ◽  
Vol 25 ◽  
pp. 15 ◽  
Author(s):  
Manh Khang Dao

We consider an optimal control problem on networks in the spirit of the works of Achdou et al. [NoDEA Nonlinear Differ. Equ. Appl. 20 (2013) 413–445] and Imbert et al. [ESAIM: COCV 19 (2013) 129–166]. The main new feature is that there are entry (or exit) costs at the edges of the network, leading to a possibly discontinuous value function. We characterize the value function as the unique viscosity solution of a new Hamilton-Jacobi system. The uniqueness is a consequence of a comparison principle, for which we give two different proofs: one with arguments from the theory of optimal control inspired by Achdou et al. [ESAIM: COCV 21 (2015) 876–899] and one based on partial differential equations techniques inspired by a recent work of Lions and Souganidis [Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 27 (2016) 535–545].


2020 ◽  
Vol 10 (1) ◽  
pp. 235-259
Author(s):  
Katharina Bata ◽  
Hanspeter Schmidli

Abstract: We consider a risk model in discrete time with dividends and capital injections. The goal is to maximise the value of a dividend strategy. We show that the optimal strategy is of barrier type: all capital above a certain threshold is paid out as a dividend. A second problem adds tax on the dividends, but an injection leads to an exemption from tax. We show that the value function fulfils a Bellman equation. As a special case, we consider premia of size one. In this case we show that the optimal strategy is a two-barrier strategy: there is one barrier for the case where the next dividend of size one can be paid without tax, and another for the case where the next dividend of size one will be taxed. In both models, we illustrate the findings with de Finetti's example.
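The barrier structure can be illustrated on a toy discrete-time de Finetti model solved by value iteration. The parameters are illustrative, and the tax and capital-injection features of the paper's models are omitted:

```python
import numpy as np

# Toy discrete-time dividend model (an illustration, not the paper's model):
# each period the surplus earns premium 1 and pays a claim of size 2 with
# probability p; a dividend d <= x is paid first; ruin is absorbing with
# value 0. Bellman equation:
#   V(x) = max_{0 <= d <= x} d + beta * E[ V(x - d + 1 - C) ].
p, beta, N = 0.3, 0.95, 30          # claim probability, discount, surplus cap

def q_value(V, x, d):
    """Value of paying dividend d from surplus x, given continuation V."""
    y = x - d + 1                   # surplus after dividend plus premium
    cont = (1 - p) * V[min(y, N)]   # no claim this period
    if y - 2 >= 0:                  # a claim of 2 is survivable
        cont += p * V[y - 2]
    return d + beta * cont          # ruin contributes value 0

V = np.zeros(N + 1)
for _ in range(2000):               # value iteration (contraction, modulus beta)
    V = np.array([max(q_value(V, x, d) for d in range(x + 1))
                  for x in range(N + 1)])

# The maximising dividend at each surplus level; for many parameter choices
# it is zero below a threshold and pays everything above it - a barrier.
policy = [max(range(x + 1), key=lambda d: q_value(V, x, d))
          for x in range(N + 1)]
```

A useful sanity check, provable directly from the Bellman equation: one extra unit of surplus can always be paid out immediately, so V(x + 1) >= V(x) + 1.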


1984 ◽  
Vol 16 (1) ◽  
pp. 16-16
Author(s):  
Domokos Vermes

We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton-Jacobi-Bellman equation, which in the case considered is a functional differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controlled, and it can be represented as a supremum of smooth subsolutions in the general case, i.e. when both the deterministic motion and the random jumps are controlled (cf. the survey by M. H. A. Davis (p. 14)).


Author(s):  
Shihong Wang ◽  
Zuoyi Zhou

Abstract: We study the averaging of the Hamilton-Jacobi equation with fast variables in the viscosity solution sense in infinite dimensions. We prove that the viscosity solution of the original equation converges to the viscosity solution of the averaged equation and apply this result to the limit problem of the value function for an optimal control problem with fast variables.


2016 ◽  
Vol 2016 ◽  
pp. 1-14 ◽  
Author(s):  
Moussa Kounta

We consider the so-called mean-variance portfolio selection problem in continuous time under the constraint that short-selling of stocks is prohibited, where all the market coefficients are random processes. In this situation the Hamilton-Jacobi-Bellman (HJB) equation of the value function of the auxiliary problem becomes a coupled system of backward stochastic partial differential equations. In fact, the value function V often does not have the smoothness properties needed to interpret it as a solution to the dynamic programming partial differential equation in the usual (classical) sense; however, in such cases V can be interpreted as a viscosity solution. Here we show the uniqueness of the viscosity solution, and we see that the optimal control and the value function are piecewise linear functions based on some Riccati differential equations. In particular we solve the open problem posed by Li and Zhou and by Zhou and Yin.


1997 ◽  
Vol 1 (1) ◽  
pp. 255-277 ◽  
Author(s):  
MICHAEL A. TRICK ◽  
STANLEY E. ZIN

We review the properties of algorithms that characterize the solution of the Bellman equation of a stochastic dynamic program as the solution to a linear program. The variables in this problem are the ordinates of the value function; hence, the number of variables grows with the state space. For situations in which this size becomes computationally burdensome, we suggest the use of low-dimensional cubic-spline approximations to the value function. We show that fitting this approximation through linear programming provides upper and lower bounds on the solution to the original large problem. The information contained in these bounds leads to inexpensive improvements in the accuracy of approximate solutions.
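The linear-programming characterisation the authors build on can be sketched on a tiny Markov decision problem; the numbers below are illustrative, and the cubic-spline approximation step is omitted:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny 2-state, 2-action MDP (illustrative numbers, not from the paper).
# The Bellman equation's solution is recovered by the LP
#   min sum_s v(s)   s.t.   v(s) >= r(s,a) + beta * sum_s' P[s,a,s'] v(s')
# for every state-action pair: any feasible v dominates the optimal value
# function pointwise, and the optimal value function itself is feasible.
beta = 0.9
R = np.array([[1.0, 0.0],      # r(s, a)
              [0.0, 2.0]])
P = np.array([[[0.5, 0.5],     # P[s, a, s']
               [0.9, 0.1]],
              [[0.2, 0.8],
               [0.4, 0.6]]])

n_s, n_a = R.shape
A_ub, b_ub = [], []
for s in range(n_s):
    for a in range(n_a):
        # rewrite v(s) >= r + beta*P v as (-e_s + beta*P[s,a]) . v <= -r
        A_ub.append(-np.eye(n_s)[s] + beta * P[s, a])
        b_ub.append(-R[s, a])
res = linprog(c=np.ones(n_s), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_s)
v_star = res.x                 # ordinates of the optimal value function

# Cross-check against value iteration on the same MDP.
v = np.zeros(n_s)
for _ in range(1000):
    v = np.max(R + beta * (P @ v), axis=1)
```

With one LP variable per state the problem grows with the state space, exactly the burden that motivates the paper's low-dimensional spline approximation of v.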


Author(s):  
Min Sun

Abstract: We consider in this article an evolutionary monotone follower problem in [0, 1]. The state processes under consideration are controlled diffusion processes y_x(t), solutions of dy_x(t) = g(y_x(t), t) dt + σ(y_x(t), t) dw_t + dυ_t with y_x(0) = x ∈ [0, 1], where the control processes υ_t are increasing, positive, and adapted. The cost functional is of integral type, with certain explicit costs of control action, including the cost of jumps. We present some analytic results on the value function, mainly its characterisation, by standard dynamic programming arguments.

