The Dynamic Programming Method of Stochastic Differential Game for Functional Forward-Backward Stochastic System

2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Shaolin Ji ◽  
Chuanfeng Sun ◽  
Qingmeng Wei

This paper is devoted to a stochastic differential game (SDG) for a decoupled functional forward-backward stochastic differential equation (FBSDE). The associated upper and lower value functions of the SDG are defined through the solutions of controlled functional backward stochastic differential equations (BSDEs). Applying the Girsanov transformation method introduced by Buckdahn and Li (2008), the upper and the lower value functions are shown to be deterministic. We also generalize the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations to path-dependent ones. By establishing the dynamic programming principle (DPP), we derive that the upper and the lower value functions are viscosity solutions of the corresponding upper and lower path-dependent HJBI equations, respectively.
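For orientation, in the Buckdahn–Li framework the dynamic programming principle is usually stated through the backward stochastic semigroup. Schematically, for the lower value function $W$ it reads as follows (the notation here is assumed for illustration: $\mathcal{B}_{t,t+\delta}$ the nonanticipative strategies, $\mathcal{U}_{t,t+\delta}$ the admissible controls, $G^{t,x;u,\beta(u)}_{t,t+\delta}[\cdot]$ the backward semigroup defined via the controlled BSDE):

```latex
W(t,x) \;=\; \operatorname*{ess\,inf}_{\beta\in\mathcal{B}_{t,t+\delta}}\;
\operatorname*{ess\,sup}_{u\in\mathcal{U}_{t,t+\delta}}\;
G^{t,x;u,\beta(u)}_{t,\,t+\delta}\Big[\,W\big(t+\delta,\;X^{t,x;u,\beta(u)}_{t+\delta}\big)\Big].
```

In the functional setting of the paper, the state $x$ is replaced by a path, and the HJBI equations become path-dependent accordingly.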

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
J. Y. Li ◽  
M. N. Tang

In this paper, we study a two-player zero-sum stochastic differential game with regime switching in the framework of forward-backward stochastic differential equations on a finite time horizon. By means of backward stochastic differential equation methods, in particular the notion of stochastic backward semigroups, we prove a dynamic programming principle for both the upper and the lower value functions of the game. Based on the dynamic programming principle, the upper and the lower value functions are shown to be the unique viscosity solutions of the associated upper and lower Hamilton–Jacobi–Bellman–Isaacs equations.
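To fix ideas, in the diffusion case without regime switching the upper and lower HJBI equations typically take the following form (the notation $b$, $\sigma$, $f$, $\Phi$ for drift, volatility, running cost and terminal cost is assumed here, not taken from the paper):

```latex
H^{\pm}(t,x,p,A) \;=\;
\begin{cases}
\displaystyle \inf_{v\in V}\sup_{u\in U}\Big\{p\cdot b(t,x,u,v)+\tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u,v)\,A\big)+f(t,x,u,v)\Big\}, & (+)\\[1ex]
\displaystyle \sup_{u\in U}\inf_{v\in V}\Big\{p\cdot b(t,x,u,v)+\tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u,v)\,A\big)+f(t,x,u,v)\Big\}, & (-)
\end{cases}
```

```latex
\partial_t W(t,x) + H^{\pm}\big(t,x,\,D_xW(t,x),\,D_x^2W(t,x)\big)=0,
\qquad W(T,x)=\Phi(x).
```

When the Isaacs condition $H^{+}=H^{-}$ holds, the upper and lower value functions coincide and the game has a value. With regime switching, as in the paper, the scalar equation becomes a coupled system indexed by the regimes.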


2018 ◽  
Vol 48 (1) ◽  
pp. 413-434 ◽  
Author(s):  
Shumin Chen ◽  
Hailiang Yang ◽  
Yan Zeng

We study a stochastic differential game between two insurers who invest in a financial market and adopt reinsurance to manage their claim risks. Supposing that their reinsurance premium rates are calculated according to the generalized mean-variance principle, we model the competition between the two insurers as a non-zero-sum stochastic differential game. Using the dynamic programming technique, we derive a system of coupled Hamilton–Jacobi–Bellman equations and show the existence of equilibrium strategies. For an exponential-utility-maximizing game and a probability-maximizing game, we obtain semi-explicit solutions for the equilibrium strategies and the equilibrium value functions, respectively. Finally, we provide detailed comparative-static analyses of the equilibrium strategies and illustrate some economic insights.
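As a point of reference, the classical (non-generalized) mean-variance premium principle prices a claim $X$ as (notation assumed here, with safety loading $\theta$):

```latex
\pi(X) \;=\; \mathbb{E}[X] \;+\; \theta\,\mathrm{Var}(X), \qquad \theta>0,
```

and in a non-zero-sum game each insurer's equilibrium strategy is a best response to the other's, so the Nash equilibrium $(u_1^{*},u_2^{*})$ is characterized by a coupled system of the schematic form $\sup_{u_i}\big\{\mathcal{A}^{\,u_i,\,u_j^{*}}V_i(t,x)\big\}=0$ for $i=1,2$, $j\neq i$, which is why the HJB equations in such games do not decouple.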


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Yan Wang ◽  
Aimin Song ◽  
Cheng-De Zheng ◽  
Enmin Feng

We consider a nonzero-sum stochastic differential game between two players, a controller and a stopper. The controller chooses a control process, and the stopper selects the stopping rule that halts the game. The game is studied in a jump-diffusion setting, restricted to Markov controls. By a dynamic programming approach, we give a verification theorem in terms of variational inequality Hamilton–Jacobi–Bellman (VIHJB) equations for the solutions of the game. Furthermore, we apply the verification theorem to characterize the Nash equilibrium of the game in a specific example.
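Schematically, for a game in which a maximizing controller collects a running reward $f$ and the stopper's payoff at the stopping time is $g$, verification theorems of this type rest on a variational inequality of HJB type (a generic form with assumed notation, not the paper's exact system):

```latex
\min\Big\{\,-\sup_{u\in U}\big[\mathcal{A}^{u}\varphi(x)+f(x,u)\big],\;\;\varphi(x)-g(x)\Big\}\;=\;0,
```

where $\mathcal{A}^{u}$ is the generator of the controlled jump diffusion. A candidate equilibrium then stops on the contact set $\{\varphi=g\}$ and continues on $\{\varphi>g\}$, with the controller using the maximizer $u^{*}(x)$ in the continuation region.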


Author(s):  
Ashley Davey ◽  
Harry Zheng

This paper proposes two algorithms for solving stochastic control problems with deep learning, with a focus on the utility maximisation problem. The first algorithm solves Markovian problems via the Hamilton–Jacobi–Bellman (HJB) equation. We solve this highly nonlinear partial differential equation (PDE) with a second-order backward stochastic differential equation (2BSDE) formulation. The convex structure of the problem allows us to describe a dual problem that can either verify the original primal approach or bypass some of its complexity. The second algorithm utilises the full power of the duality method to solve non-Markovian problems, which are often beyond the scope of stochastic control solvers in the existing literature. We solve an adjoint BSDE that satisfies the dual optimality conditions. We apply these algorithms to problems with power, log, and non-HARA utilities in the Black–Scholes, Heston stochastic volatility, and path-dependent volatility models. Numerical experiments show highly accurate results with low computational cost, supporting our proposed algorithms.
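Although the paper's algorithms are deep-learning based, the Black–Scholes/power-utility benchmark admits the classical Merton closed form $\pi^{*}=(\mu-r)/(\gamma\sigma^{2})$, which any solver can be checked against. The sketch below (all parameter values are hypothetical, not taken from the paper) verifies by plain Monte Carlo that the Merton fraction beats other constant-proportion strategies:

```python
import numpy as np

# Hypothetical Black-Scholes market with power utility U(x) = x**(1-gamma)/(1-gamma);
# none of these parameter values come from the paper.
mu, r, sigma, gamma, T = 0.08, 0.02, 0.2, 2.0, 1.0
n_paths = 20_000

rng = np.random.default_rng(0)
# Common random numbers: one set of terminal Brownian values W_T shared by all
# strategies, so the comparison across strategies has low variance.
W_T = rng.standard_normal(n_paths) * np.sqrt(T)

def expected_utility(pi):
    """Monte Carlo estimate of E[U(X_T)] for a constant risky fraction pi."""
    # With a constant fraction pi, terminal log-wealth is Gaussian (X_0 = 1):
    log_x = (r + pi * (mu - r) - 0.5 * pi**2 * sigma**2) * T + pi * sigma * W_T
    x_T = np.exp(log_x)
    return np.mean(x_T ** (1 - gamma) / (1 - gamma))

pi_merton = (mu - r) / (gamma * sigma**2)  # closed-form Merton fraction (0.75 here)
pi_best = max(np.linspace(0.0, 2.0, 21), key=expected_utility)
```

With these numbers the grid search lands on a fraction next to the closed-form optimum; the same sanity check applies to the paper's primal and dual neural-network solvers, whose gap bounds the suboptimality of the learned control.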


Author(s):  
Arkadii V. Kim ◽  
Gennady A. Bocharov

The paper considers a minimax positional differential game with aftereffect, based on the i-smooth analysis methodology. In the finite-dimensional (ODE) case of a minimax differential game, resolving mixed strategies can be constructed using the dynamic programming method. The paper shows that the i-smooth analysis methodology allows one to construct counterstrategies in a way completely analogous to the finite-dimensional case. Moreover, as is typical of i-smooth analysis, in the absence of aftereffect all results of the article reduce to the corresponding results of the finite-dimensional theory of positional differential games.


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Fu Zhang ◽  
QingXin Meng ◽  
MaoNing Tang

In this paper, we consider a partial-information two-person zero-sum stochastic differential game in which the system is governed by a backward stochastic differential equation driven by Teugels martingales and an independent Brownian motion. A sufficient condition and a necessary condition for the existence of a saddle point of the game are proved. As an application, a linear-quadratic stochastic differential game problem is discussed.
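For readers unfamiliar with Teugels martingales (in the sense of Nualart–Schoutens): for a Lévy process $L$ with moments of all orders, one first forms the power-jump processes and their compensated versions (standard definitions; the notation here is generic, not the paper's):

```latex
Y^{(1)}_t = L_t,\qquad
Y^{(i)}_t = \sum_{0<s\le t}\big(\Delta L_s\big)^{i}\ \ (i\ge 2),\qquad
Z^{(i)}_t = Y^{(i)}_t - t\,\mathbb{E}\big[Y^{(i)}_1\big].
```

The Teugels martingales $(H^{(i)})_{i\ge 1}$ are then obtained by pairwise strong orthogonalization of the $(Z^{(i)})$, and they provide the martingale basis in which the driving term of the BSDE is expanded, playing the role that the single Brownian integrand plays in the classical case.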


Author(s):  
Juan Li ◽  
Wenqiang Li ◽  
Qingmeng Wei

By introducing a stochastic differential game whose dynamics and multi-dimensional cost functionals form a multi-dimensional coupled forward-backward stochastic differential equation with jumps, we give a probabilistic interpretation to a system of coupled Hamilton-Jacobi-Bellman-Isaacs equations. For this, we generalize the definition of the lower value function, initially defined only for deterministic times $t$ and states $x$, to stopping times $\tau$ and random variables $\eta\in L^2(\Omega,\mathcal{F}_\tau,P;\mathbb{R})$. This generalization plays a key role in the proof of a strong dynamic programming principle, which allows us to show that the lower value function is a viscosity solution of our system of multi-dimensional coupled Hamilton-Jacobi-Bellman-Isaacs equations. Uniqueness is obtained for a particular but important case.


2018 ◽  
Vol 24 (1) ◽  
pp. 355-376 ◽  
Author(s):  
Jiangyan Pu ◽  
Qi Zhang

In this work, we study the stochastic recursive control problem in which the aggregator (or generator) of the backward stochastic differential equation describing the running cost is continuous and monotonic with respect to the first unknown variable, but not necessarily Lipschitz with respect to the first unknown variable or the control. The dynamic programming principle and the connection between the value function and the viscosity solution of the associated Hamilton-Jacobi-Bellman equation are established in this setting by means of the generalized comparison theorem for backward stochastic differential equations and the stability of viscosity solutions. Finally, we take the control problem of continuous-time Epstein–Zin utility with a non-Lipschitz aggregator as an example to demonstrate the application of our study.
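For concreteness, recursive utility corresponds to the BSDE $V_t=\mathbb{E}\big[\int_t^T f(c_s,V_s)\,ds\,\big|\,\mathcal{F}_t\big]$, and in one common continuous-time normalization the Epstein–Zin aggregator reads (parameters: discount rate $\delta$, risk aversion $\gamma$, elasticity of intertemporal substitution $\psi$; this particular form is given for illustration and is not taken from the paper):

```latex
f(c,v)\;=\;\frac{\delta\,(1-\gamma)\,v}{1-\frac{1}{\psi}}
\left[\left(\frac{c}{\big((1-\gamma)v\big)^{\frac{1}{1-\gamma}}}\right)^{1-\frac{1}{\psi}}-1\right].
```

Unless $\gamma=1/\psi$ (the additive CRRA case), such an aggregator is continuous but fails to be Lipschitz in $v$ near $v=0$, which illustrates why non-Lipschitz, monotone generators of this kind fall naturally within the paper's setting.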

