HJB Equation
Recently Published Documents


TOTAL DOCUMENTS: 91 (FIVE YEARS: 25)

H-INDEX: 13 (FIVE YEARS: 2)

Author(s): Christelle Dleuna Nyoumbi, Antoine Tambue

Abstract: The stochastic optimal control principle leads to the resolution of a partial differential equation (PDE), namely the Hamilton–Jacobi–Bellman (HJB) equation. In general, this equation cannot be solved analytically, so numerical algorithms are the only tools to provide accurate approximations. The aim of this paper is to introduce a novel fitted finite volume method to solve high-dimensional degenerate HJB equations arising from stochastic optimal control problems in high dimension (n ≥ 3). The challenge here is due to the nature of our HJB equation, which is a degenerate second-order partial differential equation coupled with an optimization problem. For such problems, a standard scheme such as the finite difference method loses its monotonicity, and therefore convergence toward the viscosity solution may not be guaranteed. We discretize the HJB equation using the fitted finite volume method, well known to tackle degenerate PDEs, while the time discretization is performed using the implicit Euler scheme. We show that the matrices resulting from the spatial and temporal discretizations are M-matrices. Numerical results in finance demonstrating the accuracy of the proposed numerical method compared with the standard finite difference method are provided.
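As a hedged illustration of the monotonicity point, the sketch below builds the implicit-Euler/upwind matrix for a one-dimensional model equation with the control frozen (the inner optimization is omitted) and checks the M-matrix property the abstract mentions. It is not the paper's high-dimensional fitted finite volume scheme; the drift and (degenerate) diffusion coefficients are invented for the example.

```python
import numpy as np

# Toy 1D backward equation v_t = b(x) v_x + (1/2) sigma(x)^2 v_xx - c(x),
# discretized with an upwind finite difference in space and implicit Euler
# in time.  The scheme is monotone when the resulting matrix is an
# M-matrix: nonpositive off-diagonal entries and diagonal dominance.

def implicit_euler_matrix(x, dt, b, sigma):
    n = len(x)
    h = x[1] - x[0]
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        diff = 0.5 * sigma(x[i]) ** 2 / h ** 2
        drift = b(x[i])
        # upwind: one-sided difference in the direction the drift points
        up = max(drift, 0.0) / h
        dn = max(-drift, 0.0) / h
        A[i, i - 1] = -(diff + dn)
        A[i, i + 1] = -(diff + up)
        A[i, i] = 1.0 / dt + 2 * diff + up + dn
    A[0, 0] = A[-1, -1] = 1.0  # Dirichlet boundary rows
    return A

def is_M_matrix(A):
    off = A - np.diag(np.diag(A))
    return bool((off <= 1e-12).all()
                and (np.diag(A) >= -off.sum(axis=1) - 1e-12).all())

x = np.linspace(0.0, 1.0, 21)
# sigma vanishes at x = 0, mimicking a degenerate diffusion
A = implicit_euler_matrix(x, dt=0.01,
                          b=lambda s: 1.0 - 2 * s,
                          sigma=lambda s: 0.3 * s)
print(is_M_matrix(A))  # → True
```

The 1/dt term from the implicit Euler step is what makes the diagonal strictly dominant, which is one reason implicit time stepping pairs well with monotone spatial discretizations.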


2021, Vol. 2021, pp. 1-12
Author(s): Yanan Li

This paper examines the optimal annuitization, investment, and consumption strategies of an individual facing a time-dependent mortality rate in a tax-deferred annuity model, considering both the case when the rate of buying annuities is unrestricted and the case when it is restricted. Using the dynamic programming principle, we obtain the corresponding HJB equation. Since the presence of the tax and the time-dependence of the value function make this HJB equation hard to solve, we first analyze the problem in a simpler case and use numerical methods to obtain the solution and some of its useful properties. Then, using these properties and the Kuhn–Tucker conditions, we treat the general case and obtain the value functions and the optimal annuitization strategies, respectively.


Optimization, 2021, pp. 1-20
Author(s): Jie Wen, Yuanhao Shi, Xiaoqiong Pang, Jianfang Jia

2021, Vol. 37 (3), pp. 427-440
Author(s): Dragoş-Pătru Covei, Traian A. Pirvu

This paper studies a stochastic control problem with regime switching in a fairly general abstract setting. Such problems may arise from production planning management. We perform a full mathematical analysis of this stochastic control problem via the HJB equation and a verification theorem. The connection between optimal controls and subgame perfect controls is discussed, and it is shown that the optimal controls solve the generalized HJB equation as well. In a special case we provide a closed-form solution.


Author(s): Sudeep Kundu, Karl Kunisch

Abstract: Policy iteration is a widely used technique to solve the Hamilton–Jacobi–Bellman (HJB) equation, which arises in nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we analyze the case with control constraints, both for the HJB equations which arise in deterministic and in stochastic control. The linear equations in each iteration step are solved by an implicit upwind scheme. Numerical examples are presented for the HJB equation with control constraints, and comparisons are shown with the unconstrained cases.
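A minimal sketch of the policy-iteration loop on a discrete problem may help fix ideas: evaluate the current policy by solving a linear system, then improve it greedily over an admissible (constrained) control set, and repeat until the policy stops changing. The discounted toy problem below stands in for a discretized HJB system; its dynamics, costs, and control set are invented for the example and are not from the paper.

```python
import numpy as np

# Policy iteration (Howard's algorithm) on a small discounted problem:
# states 0..4, constrained control set {-1, 0, 1}, deterministic moves.

gamma = 0.9
n_states = 5
controls = np.array([-1.0, 0.0, 1.0])  # admissible (constrained) controls

def transition(x, a):
    return int(np.clip(x + a, 0, n_states - 1))

def cost(x, a):
    return (x - 2) ** 2 + 0.1 * a ** 2  # penalize distance to state 2

policy = np.zeros(n_states, dtype=int)
for _ in range(50):
    # policy evaluation: solve the linear system (I - gamma P_pi) v = c_pi
    P = np.zeros((n_states, n_states))
    c = np.zeros(n_states)
    for x in range(n_states):
        a = controls[policy[x]]
        P[x, transition(x, a)] = 1.0
        c[x] = cost(x, a)
    v = np.linalg.solve(np.eye(n_states) - gamma * P, c)
    # policy improvement: greedy minimization over the admissible controls
    new_policy = np.array([
        int(np.argmin([cost(x, a) + gamma * v[transition(x, a)]
                       for a in controls]))
        for x in range(n_states)
    ])
    if (new_policy == policy).all():
        break
    policy = new_policy

print(controls[policy])  # steers every state toward state 2
```

Each evaluation step here is a dense linear solve; in a PDE discretization that solve is exactly where an implicit upwind scheme, as in the paper, would enter.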


Risks, 2021, Vol. 9 (4), pp. 73
Author(s): Julia Eisenberg, Lukas Fabrykowski, Maren Diane Schmeck

In this paper, we consider a company that wishes to determine the optimal reinsurance strategy minimising the total expected discounted amount of capital injections needed to prevent ruin. The company's surplus process is assumed to follow a Brownian motion with drift, and the reinsurance price is modelled by a continuous-time Markov chain with two states. The presence of regime switching substantially complicates the optimal reinsurance problem, as surplus-independent strategies turn out to be suboptimal. We develop a recursive approach that allows us to represent a solution to the corresponding Hamilton–Jacobi–Bellman (HJB) equation, and the corresponding reinsurance strategy, as the unique limits of a sequence of solutions to ordinary differential equations and their first- and second-order derivatives. Via Itô's formula, we prove that the constructed function is the value function. Two examples illustrate the recursive procedure along with a numerical approach yielding the direct solution to the HJB equation.
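The recursive idea can be sketched on a generic two-regime model (assumed for illustration; this is not the paper's capital-injection problem): freeze one regime's value function, solve the other regime's now-linear ODE by finite differences, and alternate. The model solved below is the coupled system delta V_i = (sigma^2/2) V_i'' + mu V_i' + c_i(x) + q_i (V_j - V_i), with made-up coefficients and toy Dirichlet boundaries.

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
mu, sigma, delta = 0.1, 0.3, 0.05   # drift, volatility, discount rate
q = [1.0, 2.0]                      # regime-switching intensities
c = [x, 2 * x]                      # regime-dependent running reward

def solve_regime(other, qi, ci):
    # finite-difference solve of one regime's linear ODE, with the
    # other regime's current iterate "other" frozen on the right side
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        d2 = 0.5 * sigma ** 2 / h ** 2
        d1 = mu / (2 * h)
        A[i, i - 1] = d2 - d1
        A[i, i + 1] = d2 + d1
        A[i, i] = -2 * d2 - delta - qi
        b[i] = -ci[i] - qi * other[i]
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = 0.0, 0.0          # toy Dirichlet boundaries
    return np.linalg.solve(A, b)

V = [np.zeros(n), np.zeros(n)]
for _ in range(100):
    V_new = [solve_regime(V[1], q[0], c[0]),
             solve_regime(V[0], q[1], c[1])]
    if max(np.max(np.abs(V_new[i] - V[i])) for i in range(2)) < 1e-10:
        break
    V = V_new

# with nonnegative rewards the iterates stay nonnegative
print(bool(np.all(V[0] >= -1e-9) and np.all(V[1] >= -1e-9)))  # → True
```

Each sweep is a contraction (roughly with factor q/(delta + q) per regime), which is the mechanism behind the convergence of such recursive constructions.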


2021
Author(s): Guoping Zhang, Quanxin Zhu

Abstract: For nonlinear Itô-type stochastic systems, the problem of event-triggered optimal control (ETOC) is studied in this paper, and the adaptive dynamic programming (ADP) approach is explored to implement it. The value function of the Hamilton–Jacobi–Bellman (HJB) equation is approximated by applying a critic neural network (CNN). Moreover, a new event-triggering scheme is proposed, which can be used to design the ETOC directly via the solution of the HJB equation. By utilizing the Lyapunov direct method, it is proved that the ETOC based on the ADP approach ensures that the CNN weight errors and the system states are semiglobally uniformly ultimately bounded (SGUUB) in probability. Furthermore, an upper bound is given on the predetermined cost function. To the best of our knowledge, there has been no published literature on ETOC for nonlinear Itô-type stochastic systems via the ADP method; this work is the first attempt to fill that gap. Finally, the effectiveness of the proposed method is illustrated through two numerical examples.
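A hedged sketch of the event-triggering idea (not the paper's ADP/critic-network design): recompute the control only when the state has drifted far enough from its value at the last trigger, so control updates are far rarer than simulation steps. The scalar dynamics, feedback law, and threshold below are invented for illustration.

```python
import numpy as np

np.random.seed(0)
dt, steps, threshold = 0.01, 1000, 0.05
x = 1.0
x_trig = x                 # state at the last triggering instant
u = -2.0 * x_trig          # placeholder feedback law u = -2 x
updates = 0
for _ in range(steps):
    if abs(x - x_trig) > threshold:   # event-triggering condition
        x_trig = x
        u = -2.0 * x_trig             # control updated only at events
        updates += 1
    # Euler-Maruyama step of the toy Itô dynamics dx = (x + u) dt + 0.1 dW
    x += (x + u) * dt + 0.1 * np.sqrt(dt) * np.random.randn()

print(updates < steps)  # → True: far fewer control updates than time steps
```

The analysis in the paper is about choosing that trigger so optimality and stability (SGUUB in probability) are preserved; the fixed threshold here only conveys the sampling-saving mechanism.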

