Comparison Principle for Hamilton-Jacobi-Bellman Equations via a Bootstrapping Procedure

Author(s): Richard C. Kraaij, Mikola C. Schlottke

We study the well-posedness of Hamilton–Jacobi–Bellman equations on subsets of $${\mathbb {R}}^d$$ in a context without boundary conditions. The Hamiltonian is given as the supremum over two parts: an internal Hamiltonian depending on an external control variable and a cost functional penalizing the control. The key feature in this paper is that the control function can be unbounded and discontinuous. This way we can treat functionals that appear, e.g., in the Donsker–Varadhan theory of large deviations for occupation-time measures. To allow for this flexibility, we assume that the internal Hamiltonian and cost functional have controlled growth, and that they satisfy an equi-continuity estimate uniformly over compact sets in the space of controls. In addition to establishing the comparison principle for the Hamilton–Jacobi–Bellman equation, we also prove existence, the viscosity solution being the value function with exponentially discounted running costs. As an application, we verify the conditions on the internal Hamiltonian and cost functional in two examples.
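
As a rough schematic of the structure described above (generic notation, not the paper's exact symbols: $\Lambda$ for the internal Hamiltonian, $\theta$ for the control, $\mathcal{I}$ for the cost functional), the Hamiltonian and the equation under study take the form

$$\mathbf{H}(x,p) = \sup_{\theta \in \Theta}\Big[\Lambda(x,p,\theta) - \mathcal{I}(x,\theta)\Big], \qquad u(x) - \lambda\,\mathbf{H}\big(x,\nabla u(x)\big) = h(x), \quad \lambda > 0,$$

where the comparison principle yields uniqueness of viscosity solutions and the existence result identifies the solution with the value function of a control problem with exponentially discounted running costs.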

1996, Vol. 53 (1), pp. 51–62
Author(s): Shigeaki Koike

The value function is given by minimisation of a cost functional over admissible controls. The associated first-order Bellman equations with varying control are treated. It turns out that the value function is a viscosity solution of the Bellman equation and that the comparison principle holds, which is an essential tool in establishing uniqueness of viscosity solutions.
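
For orientation, a standard infinite-horizon discounted form of such a value function and its first-order Bellman equation (generic notation, not necessarily the paper's) is

$$V(x) = \inf_{\alpha \in \mathcal{A}} \int_0^\infty e^{-\beta t}\, \ell\big(y_x(t;\alpha), \alpha(t)\big)\,dt, \qquad \beta V(x) + \sup_{a \in A}\Big\{ -\big\langle g(x,a), DV(x) \big\rangle - \ell(x,a) \Big\} = 0,$$

where $y_x(\cdot;\alpha)$ solves $\dot{y} = g(y,\alpha)$ with $y(0) = x$, $\ell$ is the running cost and $\beta > 0$ the discount rate; the comparison principle for equations of this type is what delivers uniqueness of the viscosity solution.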


1984, Vol. 16 (1), p. 16
Author(s): Domokos Vermes

We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton–Jacobi–Bellman equation, which in the case considered is a functional-differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controllable, and it can be represented as a supremum of smooth subsolutions in the general case, i.e. when both the deterministic motion and the random jumps are controlled (cf. the survey by M. H. A. Davis (p. 14)).
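
A generic stationary Hamilton–Jacobi–Bellman equation for such piecewise-deterministic dynamics (a sketch with hypothetical notation, in the spirit of Davis's framework) reads

$$\sup_{u \in U}\Big\{ \big\langle f(x,u), \nabla V(x) \big\rangle + \lambda(x,u) \int_E \big[V(y) - V(x)\big]\,Q(dy \mid x, u) + \ell(x,u) \Big\} = 0,$$

where $f$ drives the deterministic flow between jumps, $\lambda$ is the jump intensity, $Q$ the post-jump distribution and $\ell$ the running reward; the boundary conditions mentioned above supplement this interior equation in the case considered.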


2011, Vol. 52 (3), pp. 250–262
Author(s): Xiang Lin, Peng Yang

We consider an insurance company whose surplus is governed by a jump diffusion risk process. The insurance company can purchase proportional reinsurance for claims and invest its surplus in a risk-free asset and a risky asset whose return follows a jump diffusion process. Our main goal is to find an optimal investment and proportional reinsurance policy which maximizes the expected exponential utility of the terminal wealth. By solving the corresponding Hamilton–Jacobi–Bellman equation, closed-form solutions for the value function as well as the optimal investment and proportional reinsurance policy are obtained. We also discuss the effects of parameters on the optimal investment and proportional reinsurance policy by numerical calculations.
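
As a sketch of the typical structure behind such closed-form results (hypothetical notation, not the paper's exact formulas: $\eta$ the risk-aversion parameter, $r$ the risk-free rate, $g$ a deterministic function), the criterion and terminal condition are

$$\sup_{(a,\pi)} \mathbb{E}\Big[-\tfrac{1}{\eta}\, e^{-\eta X_T^{a,\pi}}\Big], \qquad V(T,x) = -\tfrac{1}{\eta}\, e^{-\eta x},$$

and for exponential utility one typically verifies an ansatz of the form $V(t,x) = -\tfrac{1}{\eta} \exp\big\{-\eta x\, e^{r(T-t)}\big\}\, g(t)$ in the Hamilton–Jacobi–Bellman equation, which makes the optimal reinsurance proportion and investment amount independent of the current wealth $x$.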


Risks, 2020, Vol. 8 (3), p. 96
Author(s): Christian Hipp

We consider optimal dividend payment under the constraint that the with-dividend ruin probability does not exceed a given value α. This is done in the simplest discrete De Finetti models. We characterize the value function V(s,α) of this problem for initial surplus s, characterize the corresponding optimal dividend strategies, and present an algorithm for its computation. In an earlier solution to this problem, a Hamilton–Jacobi–Bellman equation for V(s,α) was derived which leads to its representation as the limit of a monotone iteration scheme; however, this scheme is too complex for numerical computations. Here, we introduce the class of two-barrier dividend strategies with the following property: dividends are paid above a barrier B, i.e., a dividend of size 1 is paid when the surplus reaches B+1 from B, and this dividend payment is repeated until the surplus reaches a limit L for some 0≤L≤B. For these strategies we obtain explicit formulas for ruin probabilities and present values of dividend payments, as well as simplifications of the above iteration scheme. Numerical experiments show that the values V(s,α) obtained in earlier work are suboptimal and can be improved.
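
As an illustration only, the following Python sketch estimates the with-dividend ruin probability and the expected present value of dividends for a given two-barrier strategy (B, L) by Monte Carlo, under one possible reading of the payment rule described above; the random-walk model (up-steps of size 1 with probability p, down-steps otherwise), the per-period discount factor v, and the interpretation that every up-step is paid out while payments are active are assumptions for illustration, not taken from the paper.

import random

def simulate_two_barrier(s, B, L, p=0.55, v=0.97, horizon=10_000, n_paths=20_000, seed=1):
    # Simple discrete De Finetti-type model (illustrative assumptions):
    # each period the surplus moves +1 with probability p and -1 otherwise;
    # ruin occurs when the surplus becomes negative; dividends have size 1
    # and are discounted by the factor v per period.
    # Assumed reading of the (B, L) rule: the first dividend is paid when the
    # surplus would rise above B; while payments are active, every up-step is
    # paid out as a dividend, and payments stop once the surplus falls to L.
    rng = random.Random(seed)
    ruined, total_pv = 0, 0.0
    for _ in range(n_paths):
        x, paying, disc, pv = s, False, 1.0, 0.0
        for _ in range(horizon):
            up = rng.random() < p
            if up and (paying or x >= B):
                paying = True
                pv += disc            # the up-step is paid out as a dividend of 1
            else:
                x += 1 if up else -1
            if x < 0:                 # ruin
                ruined += 1
                break
            if x <= L:
                paying = False        # payments stop at the lower barrier
            disc *= v
        total_pv += pv
    return ruined / n_paths, total_pv / n_paths

# Example: ruin probability and discounted dividend value for a (B, L) = (5, 2) strategy
# print(simulate_two_barrier(s=3, B=5, L=2))

Such a simulation can serve as a sanity check against the explicit formulas and the simplified iteration scheme mentioned above.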


2018, Vol. 24 (1), pp. 355–376
Author(s): Jiangyan Pu, Qi Zhang

In this work we study the stochastic recursive control problem in which the aggregator (or generator) of the backward stochastic differential equation describing the running cost is continuous but not necessarily Lipschitz with respect to the first unknown variable and the control, and is monotonic with respect to the first unknown variable. The dynamic programming principle and the connection between the value function and the viscosity solution of the associated Hamilton–Jacobi–Bellman equation are established in this setting by means of the generalized comparison theorem for backward stochastic differential equations and the stability of viscosity solutions. Finally, we take the control problem of continuous-time Epstein–Zin utility with a non-Lipschitz aggregator as an example to demonstrate the application of our study.
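
Schematically, in generic notation, the recursive value is defined through a controlled backward stochastic differential equation and is linked to a Hamilton–Jacobi–Bellman equation of the form

$$Y_s^{t,x;u} = \Phi\big(X_T^{t,x;u}\big) + \int_s^T f\big(r, X_r^{t,x;u}, Y_r^{t,x;u}, Z_r^{t,x;u}, u_r\big)\,dr - \int_s^T Z_r^{t,x;u}\,dW_r, \qquad V(t,x) = \sup_{u} Y_t^{t,x;u},$$

$$\partial_t V + \sup_{u \in U}\Big\{ \mathcal{L}^{u} V + f\big(t, x, V, \sigma^{\top}(t,x,u)\,\nabla_x V, u\big) \Big\} = 0, \qquad V(T,x) = \Phi(x),$$

where $f$ is the aggregator, here only continuous and monotone in the first unknown variable rather than Lipschitz, and $\mathcal{L}^u$ is the generator of the controlled forward diffusion.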


2017, Vol. 49 (2), pp. 515–548
Author(s): Hansjörg Albrecher, Pablo Azcue, Nora Muler

We consider a two-dimensional optimal dividend problem in the context of two insurance companies with compound Poisson surplus processes, which collaborate by paying each other's deficit when possible. We study the stochastic control problem of maximizing the weighted sum of expected discounted dividend payments (among all admissible dividend strategies) until ruin of both companies, by extending results of univariate optimal control theory. In the case that the dividends paid by the two companies are equally weighted, the value function of this problem compares favorably with the one of merging the two companies completely. We identify the optimal value function as the smallest viscosity supersolution of the respective Hamilton–Jacobi–Bellman equation and provide an iterative approach to approximate it numerically. Curve strategies are identified as the natural analogue of barrier strategies in this two-dimensional context. A numerical example is given for which such a curve strategy is indeed optimal among all admissible dividend strategies, and for which this collaboration mechanism also outperforms the suitably weighted optimal dividend strategies of the two stand-alone companies.
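
For orientation, the one-dimensional counterpart of the Hamilton–Jacobi–Bellman equation for optimal dividends in a compound Poisson (Cramér–Lundberg) model is, in generic notation,

$$\max\Big\{ c\,V'(x) - (\lambda + q)\,V(x) + \lambda \int_0^x V(x-y)\,dF(y),\;\; 1 - V'(x) \Big\} = 0,$$

with premium rate $c$, claim intensity $\lambda$, claim-size distribution $F$ and discount rate $q$; the first term corresponds to paying no dividends and the second to paying them. In the bivariate collaborative setting studied here, the optimal value function is characterized instead as the smallest viscosity supersolution of the corresponding two-dimensional equation.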


2020, Vol. 2020, pp. 1–13
Author(s): Yuzhen Wen, Chuancun Yin

In this paper, we consider the problem of maximizing the expected discounted utility of dividend payments for an insurance company, taking into account the time value of ruin. We assume that the insurer's preferences are of the CRRA form. The discounting factor is modeled as a geometric Brownian motion. We introduce VaR control levels for the insurer to control its loss in reinsurance strategies. By solving the corresponding Hamilton–Jacobi–Bellman equation, we obtain the value function and the corresponding optimal strategy. Finally, we provide some numerical examples to illustrate the results and analyze the effect of the VaR control levels on the optimal strategy.
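
As a schematic of the ingredients mentioned above (hypothetical notation, not the paper's exact model), the CRRA preference and the geometric Brownian discounting can be written as

$$U(x) = \frac{x^{1-\gamma}}{1-\gamma}, \quad \gamma > 0,\ \gamma \neq 1, \qquad d\beta_t = \beta_t\big(\mu\,dt + \sigma\,dW_t\big), \quad \beta_0 = 1,$$

so that, roughly, the insurer maximizes $\mathbb{E}\big[\int_0^{\tau} \beta_t\, U(l_t)\,dt\big]$ over admissible dividend rates $l$ and VaR-constrained reinsurance strategies, where $\tau$ denotes the ruin time and an additional term depending on $\tau$ accounts for the time value of ruin.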


2019, Vol. 17 (01), p. 1940004
Author(s): Natalia G. Novoselova

In this paper, a problem of chemotherapy of a malignant tumor is considered. The dynamics are piecewise monotone and the therapy function has two maxima. The aim of the therapy is to minimize the number of tumor cells at a given final time. The main result of this work is the construction of optimal feedbacks for the chemotherapy problem. The construction is based on the value function of the corresponding optimal control (therapy) problem, which is represented as a minimax generalized solution of the Hamilton–Jacobi–Bellman equation. It is proved that the optimal feedback is a discontinuous function whose line of discontinuity satisfies the Rankine–Hugoniot conditions. The work also provides illustrative numerical examples of the construction of optimal feedbacks and Rankine–Hugoniot lines.
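
Schematically, in generic notation, the value function solves a terminal-value Hamilton–Jacobi–Bellman problem of the form

$$\partial_t V(t,x) + \min_{u \in U} \big\{ \partial_x V(t,x)\, f(x,u) \big\} = 0, \qquad V(T,x) = \sigma(x),$$

where $\dot{x} = f(x,u)$ describes the controlled tumor dynamics with therapy $u$, and $\sigma(x)$ measures the number of tumor cells at the final time $T$; the value function is understood as the minimax (generalized) solution in the sense of Subbotin, and the line of discontinuity of the optimal feedback satisfies conditions of Rankine–Hugoniot type, as stated above.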


2020, p. 2150032
Author(s): Tao Hao, Qingfeng Zhu

Recently, Hao and Li [Fully coupled forward-backward SDEs involving the value function. Nonlocal Hamilton–Jacobi–Bellman equations, ESAIM Control Optim. Calc. Var. 22 (2016) 519–538] studied a new kind of forward-backward stochastic differential equations (FBSDEs), namely fully coupled FBSDEs involving the value function, in the case where the diffusion coefficient σ of the forward stochastic differential equation depends on the control but not on the solution component z of the backward equation. In our paper, we generalize their work to the case where σ depends on both the control and z, which we call general fully coupled FBSDEs involving the value function. The existence and uniqueness theorem for this kind of equation under suitable assumptions is proved. After obtaining the dynamic programming principle for the value function, we prove that the value function is the minimum viscosity solution of the related nonlocal Hamilton–Jacobi–Bellman equation combined with an algebraic equation.
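
Schematically, in generic notation, the controlled fully coupled FBSDE has the form

$$dX_t = b\big(t, X_t, Y_t, Z_t, u_t\big)\,dt + \sigma\big(t, X_t, Y_t, Z_t, u_t\big)\,dW_t, \qquad -dY_t = f\big(t, X_t, Y_t, Z_t, u_t\big)\,dt - Z_t\,dW_t, \quad Y_T = \Phi(X_T),$$

where in the earlier work the diffusion coefficient σ was not allowed to depend on the component Z, while here it may depend on both the control and Z; when the coefficients additionally involve the value function itself, the associated Hamilton–Jacobi–Bellman equation becomes nonlocal and is coupled with an algebraic equation, as described above.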

