The Relationship between the Stochastic Maximum Principle and the Dynamic Programming in Singular Control of Jump Diffusions

2014, Vol. 2014, pp. 1-17
Author(s): Farid Chighoub, Brahim Mezerdi

The main objective of this paper is to explore the relationship between the stochastic maximum principle (SMP for short) and the dynamic programming principle (DPP for short) for singular control problems of jump diffusions. First, we establish necessary as well as sufficient conditions for optimality by using the stochastic calculus of jump diffusions and some properties of singular controls. Then, under smoothness conditions, we give a useful verification theorem and show that the solution of the adjoint equation coincides with the spatial gradient of the value function, evaluated along the optimal trajectory of the state equation. Finally, using these theoretical results, we explicitly solve an example on an optimal harvesting strategy for a geometric Brownian motion with jumps.
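In generic notation, the identity described in the last part of the abstract can be sketched as follows (the value function V, adjoint process p, optimal trajectory \hat{x}, and horizon [0, T] below are illustrative symbols, not necessarily the authors' own):

\[
p(t) = \nabla_x V\bigl(t, \hat{x}(t)\bigr), \qquad t \in [0, T],
\]

where V is the value function of the singular control problem, \hat{x} is the optimal state trajectory of the controlled jump diffusion, and p solves the first-order adjoint backward SDE. Under the smoothness conditions of the verification theorem, maximising the Hamiltonian in the SMP along (t, \hat{x}(t)) is then consistent with V solving the dynamic programming (HJB-type) equation.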

2020, Vol. 26, pp. 81
Author(s): Mingshang Hu, Shaolin Ji, Xiaole Xue

Within the framework of viscosity solutions, we study the relationship between the maximum principle (MP) from M. Hu, S. Ji and X. Xue [SIAM J. Control Optim. 56 (2018) 4309–4335] and the dynamic programming principle (DPP) from M. Hu, S. Ji and X. Xue [SIAM J. Control Optim. 57 (2019) 3911–3938] for a fully coupled forward–backward stochastic controlled system (FBSCS) with a nonconvex control domain. For a fully coupled FBSCS, both the corresponding MP and the corresponding Hamilton–Jacobi–Bellman (HJB) equation are coupled with an algebraic equation. With the help of a new decoupling technique, we obtain the desired estimates for the fully coupled forward–backward variational equations and establish the relationship. Furthermore, for the smooth case, we describe the connection between the derivatives of the solution to the algebraic equation and certain terms in the first-order and second-order adjoint equations. Finally, we study the local case under the monotonicity conditions of J. Li and Q. Wei [SIAM J. Control Optim. 52 (2014) 1622–1662] and Z. Wu [Syst. Sci. Math. Sci. 11 (1998) 249–259], and obtain the relationship between the MP from Z. Wu [Syst. Sci. Math. Sci. 11 (1998) 249–259] and the DPP from J. Li and Q. Wei [SIAM J. Control Optim. 52 (2014) 1622–1662].
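A generic fully coupled forward–backward stochastic controlled system of the kind discussed can be written as follows (the coefficients b, σ, g and the terminal function Φ are illustrative, not taken from the cited papers):

\[
\begin{aligned}
dX_t &= b(t, X_t, Y_t, Z_t, u_t)\,dt + \sigma(t, X_t, Y_t, Z_t, u_t)\,dW_t,\\
dY_t &= -g(t, X_t, Y_t, Z_t, u_t)\,dt + Z_t\,dW_t,\\
X_0 &= x_0, \qquad Y_T = \Phi(X_T).
\end{aligned}
\]

Because the forward coefficients depend on (Y, Z), the HJB equation for the value function does not close on its own: it is coupled with an algebraic equation tying Z (and hence the derivative of the value function) back to the solution. This is why both the MP and the DPP formulations above carry an algebraic equation, and why a decoupling technique is needed to obtain estimates for the forward–backward variational equations.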


1968, Vol. 5 (3), pp. 679-692
Author(s): Richard Morton

Suppose that the state variables x = (x1,…,xn)′ evolve according to a system ẋ = f(x, u), where the dot refers to derivatives with respect to time t and u ∊ U is a vector of controls. The object is to transfer x to x1 by choosing the controls so that a cost functional takes on its minimum value J(x), called the Bellman function (although we shall define it in a different way). The Dynamic Programming Principle leads to the maximisation with respect to u of a Hamiltonian-type expression, and equality is obtained upon maximisation.
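A standard sketch of the setup the abstract appears to describe, written with an assumed running cost f_0 and dynamics f (generic notation, not necessarily Morton's):

\[
\dot{x} = f(x, u), \qquad J(x) = \min_{u(\cdot) \in U} \int_0^{\tau} f_0\bigl(x(t), u(t)\bigr)\,dt,
\]

where \tau is the time at which the trajectory reaches the target state x_1. The Dynamic Programming Principle then gives, for the Bellman function J,

\[
0 \ge -f_0(x, u) - \nabla J(x) \cdot f(x, u) \quad \text{for every } u \in U,
\]

with equality when the right-hand side is maximised over u, which is the expression referred to in the abstract.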

