General fully coupled FBSDEs involving the value function and related nonlocal HJB equations combined with algebraic equations

2020 ◽  
pp. 2150032
Author(s):  
Tao Hao ◽  
Qingfeng Zhu

Recently, Hao and Li [Fully coupled forward-backward SDEs involving the value function. Nonlocal Hamilton–Jacobi–Bellman equations, ESAIM Control Optim. Calc. Var. 22 (2016) 519–538] studied a new kind of forward-backward stochastic differential equations (FBSDEs), namely fully coupled FBSDEs involving the value function, in the case where the diffusion coefficient σ of the forward stochastic differential equation depends on the control but not on z. In our paper, we generalize their work to the case where σ depends on both the control and z, which we call general fully coupled FBSDEs involving the value function. We prove an existence and uniqueness theorem for this kind of equation under suitable assumptions. After obtaining the dynamic programming principle for the value function W, we prove that W is the minimum viscosity solution of the related nonlocal Hamilton–Jacobi–Bellman equation combined with an algebraic equation.
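Schematically, and in generic notation assumed here rather than taken from the paper, a fully coupled FBSDE involving the value function has the form:

```latex
% Generic fully coupled FBSDE involving the value function W
% (illustrative notation, not the paper's).
\begin{aligned}
dX_s &= b\bigl(s, X_s, Y_s, Z_s, W(s,X_s), u_s\bigr)\,ds
      + \sigma\bigl(s, X_s, Y_s, Z_s, W(s,X_s), u_s\bigr)\,dB_s,\\
dY_s &= -f\bigl(s, X_s, Y_s, Z_s, W(s,X_s), u_s\bigr)\,ds
      + Z_s\,dB_s, \qquad Y_T = \Phi(X_T),
\end{aligned}
```

where the generalization described above amounts to letting the diffusion coefficient σ depend on the second component Z of the backward equation, not only on the control.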

2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Tao Hao ◽  
Juan Li

We introduce a new type of controlled backward stochastic differential equation (BSDE), namely, BSDEs coupled with a value function. We prove an existence and uniqueness theorem, as well as a comparison theorem, for such BSDEs coupled with a value function using an approximation method. We obtain the related dynamic programming principle (DPP) with the help of the stochastic backward semigroup introduced by Peng in 1997. Using a new, more direct approach, we prove that our nonlocal Hamilton–Jacobi–Bellman (HJB) equation has a unique viscosity solution in the space of continuous functions of at most polynomial growth. These results generalize the corresponding conclusions of Buckdahn et al. (2009) for the case without control.
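In generic notation (assumed for illustration, not taken from the paper), the coupling means the value function W appears inside the driver of the controlled BSDE:

```latex
% Schematic BSDE coupled with its value function
% (illustrative notation; the infimum may be a supremum
%  depending on the sign convention).
Y^{t,x;u}_s = \Phi\bigl(X^{t,x}_T\bigr)
  + \int_s^T f\bigl(r, X^{t,x}_r, Y^{t,x;u}_r, Z^{t,x;u}_r,
                    W(r, X^{t,x}_r)\bigr)\,dr
  - \int_s^T Z^{t,x;u}_r\,dB_r,
\qquad
W(t,x) = \operatorname*{ess\,inf}_{u} Y^{t,x;u}_t .
```

The nonlocality of the resulting HJB equation comes precisely from this appearance of W itself in the driver f.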


2018 ◽  
Vol 6 (1) ◽  
pp. 85-96
Author(s):  
Delei Sheng ◽  
Linfang Xing

An insurance package is a combination tying together at least two different categories of insurance with different underwriting yield rates. In this paper, the optimal insurance-package and investment problem is investigated by maximizing the insurer's exponential utility of terminal wealth to find the optimal combination share and investment strategy. Using methods of stochastic analysis and stochastic optimal control, the Hamilton–Jacobi–Bellman (HJB) equations are established, and the optimal strategy and the value function are obtained in closed form. Comparison with classical results shows that the insurance package can enhance the utility of terminal wealth while reducing the insurer's claim risk.
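For orientation, exponential-utility problems of this kind classically lead to an HJB equation of the following schematic form (constant market coefficients μ, σ, r, investment amount π, and risk-aversion parameter m are generic placeholders, not the paper's notation):

```latex
% Classical HJB equation for exponential utility of terminal wealth
% (illustrative constant-coefficient benchmark).
V_t + \sup_{\pi}\Bigl\{ \bigl[r x + \pi(\mu - r)\bigr]V_x
      + \tfrac{1}{2}\pi^{2}\sigma^{2}V_{xx} \Bigr\} = 0,
\qquad V(T,x) = -\tfrac{1}{m}\,e^{-m x},
```

which the exponential ansatz $V(t,x) = -\frac{1}{m}\exp\{-m x\,e^{r(T-t)} - h(t)\}$ reduces to an ordinary differential equation for $h$, yielding the value function in closed form.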


2022 ◽  
Vol 2022 (1) ◽  
Author(s):  
Jun Moon

We consider the optimal control problem for stochastic differential equations (SDEs) with random coefficients under the recursive-type objective functional captured by the backward SDE (BSDE). Due to the random coefficients, the associated Hamilton–Jacobi–Bellman (HJB) equation is a class of second-order stochastic PDEs (SPDEs) driven by Brownian motion, which we call the stochastic HJB (SHJB) equation. In addition, as we adopt the recursive-type objective functional, the drift term of the SHJB equation depends on the second component of its solution. These two generalizations cause several technical intricacies which do not appear in the existing literature. We prove the dynamic programming principle (DPP) for the value function, for which, unlike the existing literature, we have to use the backward semigroup associated with the recursive-type objective functional. By the DPP, we are able to show the continuity of the value function. Using the Itô–Kunita formula, we prove the verification theorem, which constitutes a sufficient condition for optimality and characterizes the value function, provided that a smooth (classical) solution of the SHJB equation exists. In general, a smooth solution of the SHJB equation may not exist. Hence, we study the existence and uniqueness of the solution to the SHJB equation under two different weak solution concepts. First, we show, under appropriate assumptions, the existence and uniqueness of the weak solution via the Sobolev space technique, which requires converting the SHJB equation to a class of backward stochastic evolution equations. The second result is obtained under the notion of viscosity solutions, which extends the classical one to the case of SPDEs. Using the DPP and the estimates of BSDEs, we prove that the value function is the viscosity solution to the SHJB equation.
For applications, we consider the linear-quadratic problem, the utility maximization problem, and the European option pricing problem. Specifically, in contrast to the existing literature, each problem is formulated by the generalized recursive-type objective functional and is subject to random coefficients. By applying the theoretical results of this paper, we obtain the explicit optimal solution for each problem in terms of the solution of the corresponding SHJB equation.
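In generic notation (assumed here, not the paper's), such a stochastic HJB equation is a backward SPDE for a pair (V, Ψ):

```latex
% Schematic stochastic HJB (SHJB) equation
% (illustrative notation; Psi is the second solution component).
-\,dV(t,x) = \inf_{u}\, H\bigl(t, x, u, DV(t,x), D^{2}V(t,x),
                \Psi(t,x), D\Psi(t,x)\bigr)\,dt
            \;-\; \Psi(t,x)\,dW_t,
\qquad V(T,x) = h(x),
```

where the second component Ψ enters because the coefficients are random; in the recursive case described above, the drift of the equation additionally depends on Ψ.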


Author(s):  
Yue Zhou ◽  
Xinwei Feng ◽  
Jiongmin Yong

A deterministic optimal impulse control problem with a terminal state constraint is considered. Due to the terminal state constraint, the value function might be discontinuous in general. The main contribution of this paper is the introduction of an intrinsic condition under which the value function is proved to be continuous. Then, by a Bellman dynamic programming principle, the corresponding Hamilton–Jacobi–Bellman-type quasi-variational inequality (QVI, for short) is derived. The value function is proved to be a viscosity solution to this QVI. The issue of whether the value function is characterized as the unique viscosity solution to this QVI is carefully addressed, and the answer is left as a challenging open problem.
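In standard impulse-control notation (placeholders, not the paper's), such an HJB quasi-variational inequality takes the form:

```latex
% HJB quasi-variational inequality for impulse control
% (generic notation; c is the impulse cost, M the intervention operator).
\min\Bigl\{ -V_t(t,x) - H\bigl(t, x, V_x(t,x)\bigr),\;
            V(t,x) - \mathcal{M}V(t,x) \Bigr\} = 0,
\qquad
\mathcal{M}V(t,x) = \inf_{\xi}\bigl[\, V(t, x+\xi) + c(\xi) \,\bigr],
```

where the intervention operator $\mathcal{M}$ compares continuing without action against making an immediate impulse ξ at cost c(ξ).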


Author(s):  
Sudeep Kundu ◽  
Karl Kunisch

Policy iteration is a widely used technique to solve the Hamilton–Jacobi–Bellman (HJB) equation, which arises in nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we analyze the case with control constraints, for the HJB equations arising in both deterministic and stochastic control. The linear equations in each iteration step are solved by an implicit upwind scheme. Numerical examples are presented for the HJB equation with control constraints, and comparisons with the unconstrained cases are shown.
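As a minimal illustration of the constrained policy-iteration loop (not the authors' scheme: the 1D infinite-horizon problem, quadratic cost, box-constrained control, and first-order implicit upwind discretization are all chosen here for concreteness):

```python
import numpy as np

def policy_iteration_hjb(L=2.0, n=201, lam=1.0, umax=1.0,
                         tol=1e-10, max_iter=100):
    """Policy iteration (Howard's algorithm) for the constrained HJB
        lam * V(x) = min_{|u| <= umax} [ u * V'(x) + x**2 + u**2 ]
    on [-L, L], with an implicit upwind scheme in the evaluation step."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    u = np.clip(-np.sign(x), -umax, umax)   # initial inward-pointing policy
    V = np.zeros(n)
    for _ in range(max_iter):
        # --- policy evaluation: solve lam*V - u*V' = x^2 + u^2 (upwind) ---
        A = np.zeros((n, n))
        b = x**2 + u**2
        for i in range(n):
            A[i, i] += lam
            if u[i] > 0 and i < n - 1:        # forward difference
                A[i, i] += u[i] / h
                A[i, i + 1] -= u[i] / h
            elif u[i] < 0 and i > 0:          # backward difference
                A[i, i] -= u[i] / h
                A[i, i - 1] += u[i] / h
        V = np.linalg.solve(A, b)
        # --- policy improvement: argmin_u (u*V' + u^2), then project ---
        u_new = np.clip(-0.5 * np.gradient(V, x), -umax, umax)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return x, V, u
```

For this toy problem the unconstrained solution is $V(x) = a x^2$ with $\lambda a = 1 - a^2$, so $u^*(x) = -a x$; the box constraint becomes active near the boundary of the domain, where the projection onto $[-u_{\max}, u_{\max}]$ clips the control.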


2013 ◽  
Vol 50 (4) ◽  
pp. 1025-1043 ◽  
Author(s):  
Nicole Bäuerle ◽  
Zejing Li

We consider a multi-asset financial market with stochastic volatility modeled by a Wishart process. This is an extension of the one-dimensional Heston model. Within this framework we study the problem of maximizing the expected utility of terminal wealth for power and logarithmic utility. We apply the usual stochastic control approach and obtain, explicitly, the optimal portfolio strategy and the value function in some parameter settings. In particular, we do this when the drift of the assets is a linear function of the volatility matrix; in this case the affine structure of the model can be exploited. In some cases we obtain a Feynman–Kac representation of the candidate value function. Though the approach we use is quite standard, the hard part is to identify when the solution of the Hamilton–Jacobi–Bellman equation is finite, which involves several matrix-analytic arguments. In a numerical study we discuss the influence of the investor's risk aversion on the hedging demand.
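For comparison, in the constant-coefficient (Merton) benchmark with power utility $U(x) = x^{\gamma}/\gamma$, $\gamma < 1$, the optimal fraction of wealth in the risky asset is the classical

```latex
% Merton fraction for power utility, constant coefficients
% (drift mu, risk-free rate r, volatility sigma).
\pi^{*} = \frac{\mu - r}{(1-\gamma)\,\sigma^{2}},
```

while the Wishart setting replaces the scalar $\sigma^{2}$ by a stochastic volatility matrix, which produces an additional hedging demand on top of this myopic term.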

