Optimal capital injections and dividends with tax in a risk model in discrete time

2020 ◽ Vol 10 (1) ◽ pp. 235-259
Author(s): Katharina Bata ◽ Hanspeter Schmidli

Abstract: We consider a risk model in discrete time with dividends and capital injections. The goal is to maximise the value of a dividend strategy. We show that the optimal strategy is of barrier type: all capital above a certain threshold is paid out as dividends. A second problem adds tax on the dividends, but an injection leads to an exemption from tax. We show that the value function fulfils a Bellman equation. As a special case, we consider premia of size one and show that the optimal strategy is then a two-barrier strategy: there is one barrier if the next dividend of size one can be paid without tax and another if the next dividend of size one will be taxed. In both models, we illustrate the findings with de Finetti's example.
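For intuition, here is a minimal numerical sketch of the kind of dividend optimisation described above, assuming a discrete-time surplus process with unit premia, an illustrative claim distribution, a discount factor and a proportional injection cost; none of these parameters are taken from the paper, and the timing conventions are simplified. Iterating the Bellman operator on a truncated state space, the maximising dividend typically settles into a barrier form.

```python
# Minimal sketch (not the paper's model): value iteration for a discrete-time
# risk model with dividends and capital injections.  Premium, claim law,
# discount factor, injection cost and the state-space truncation are all
# illustrative assumptions.
import numpy as np

premium = 1          # premium income per period (assumption)
delta = 0.95         # discount factor (assumption)
phi = 1.1            # cost per unit of injected capital (assumption)
max_x = 50           # truncation of the surplus state space (assumption)

claims = np.arange(0, 11)            # illustrative claim sizes 0..10
probs = 0.5 ** (claims + 1)
probs /= probs.sum()

V = np.zeros(max_x + 1)
for _ in range(500):                 # value iteration until (near) convergence
    V_new = np.empty_like(V)
    for x in range(max_x + 1):
        best = -np.inf
        for d in range(x + 1):       # dividend paid at the start of the period
            post = x - d + premium
            cont = 0.0
            for c, p in zip(claims, probs):
                y = post - c
                if y < 0:            # ruin is averted by a costly injection to 0
                    cont += p * (delta * V[0] - phi * (-y))
                else:
                    cont += p * delta * V[min(y, max_x)]
            best = max(best, d + cont)
        V_new[x] = best
    V = V_new

# Reading off the maximising dividend for each surplus level typically shows
# a barrier: every unit of capital above some threshold is paid out.
```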

2011 ◽ Vol 48 (3) ◽ pp. 733-748
Author(s): Julia Eisenberg ◽ Hanspeter Schmidli

We consider a classical risk model and its diffusion approximation, where the individual claims are reinsured by a reinsurance treaty with deductible b ∈ [0, b̃]. Here b = b̃ means ‘no reinsurance’ and b = 0 means ‘full reinsurance’. In addition, the insurer is allowed to invest in a riskless asset with some constant interest rate m > 0. The cedent can choose an adapted reinsurance strategy {b_t}_{t≥0}, i.e. the parameter can be changed continuously. If the surplus process becomes negative, the cedent has to inject additional capital. Our aim is to minimise the expected discounted capital injections over all admissible reinsurance strategies. We find an explicit expression for the value function and the optimal strategy using the Hamilton-Jacobi-Bellman approach in the case of a diffusion approximation. In the case of the classical risk model, we show the existence of a ‘weak’ solution and calculate the value function numerically.
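As a rough illustration of the Hamilton-Jacobi-Bellman approach mentioned here, one might expect, under the diffusion approximation, an equation of roughly the following form, where μ(b) and σ(b) denote the drift and volatility induced by the retention level b, δ is a discount rate, and the boundary condition reflects that capital is injected at zero; the exact formulation and sign conventions in the paper may differ.

```latex
% Illustrative HJB equation (generic form, not quoted from the paper):
% minimise expected discounted capital injections over retention levels b.
\[
  \inf_{b \in [0,\tilde b]}
  \Bigl\{ \bigl(\mu(b) + m\,x\bigr)\,V'(x)
        + \tfrac12\,\sigma^{2}(b)\,V''(x)
        - \delta\,V(x) \Bigr\} = 0 ,
  \qquad x > 0 ,
  \qquad V'(0) = -1 .
\]
```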


1984 ◽ Vol 16 (1) ◽ pp. 16-16
Author(s): Domokos Vermes

We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton-Jacobi-Bellman equation, which in the case considered is a functional-differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controlled, and it can be represented as a supremum of smooth subsolutions in the general case, i.e. when both the deterministic motion and the random jumps are controlled (cf. the survey by M. H. A. Davis (p. 14)).
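For orientation, the Hamilton-Jacobi-Bellman equation for a controlled deterministic flow with random jumps typically combines a transport term along the flow with a jump term; a generic discounted form (not the paper's exact equation, with illustrative notation f, λ, Q, r, δ) reads:

```latex
% Generic HJB for a controlled deterministic flow with random jumps
% (illustrative notation; not the equation treated in the paper).
\[
  \sup_{u \in U}
  \Bigl\{ f(x,u)\!\cdot\!\nabla V(x)
        + \lambda(x,u)\int_{E}\bigl(V(y)-V(x)\bigr)\,Q(\mathrm{d}y \mid x,u)
        + r(x,u) \Bigr\}
  = \delta\,V(x) .
\]
```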


1996 ◽ Vol 53 (1) ◽ pp. 51-62
Author(s): Shigeaki Koike

The value function is given by the minimisation of a cost functional over admissible controls. The associated first-order Bellman equations with varying control are treated. It turns out that the value function is a viscosity solution of the Bellman equation and that the comparison principle holds, which is an essential tool in establishing the uniqueness of viscosity solutions.
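As a point of reference, a typical instance of this setting is the infinite-horizon discounted control problem, where the value function minimises a running cost over admissible controls and the associated first-order Bellman equation takes a form like the following; the notation f, ℓ, δ is illustrative, and the paper's precise class of equations may be more general.

```latex
% Illustrative first-order Bellman equation for a discounted control problem
% (generic form; notation f, ell, delta is assumed, not taken from the paper).
\[
  V(x) \;=\; \inf_{\alpha(\cdot)} \int_{0}^{\infty}
      e^{-\delta t}\,\ell\bigl(X_t, \alpha_t\bigr)\,\mathrm{d}t ,
  \qquad \dot X_t = f(X_t,\alpha_t),\; X_0 = x ,
\]
\[
  \delta\,V(x) \;+\; \sup_{a \in A}\bigl\{ -f(x,a)\!\cdot\! DV(x) - \ell(x,a) \bigr\} \;=\; 0 .
\]
```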


2013 ◽ Vol 55 (2) ◽ pp. 129-150
Author(s): Zhuo Jin ◽ George Yin

Abstract: This work focuses on finding optimal dividend payment and capital injection policies to maximize the present value of the difference between the cumulative dividend payments and the possible capital injections with delays. Starting from the classical Cramér–Lundberg process and using the dynamic programming approach, we show that the value function obeys a quasi-variational inequality. With delays in capital injections, the company will be exposed to the risk of financial ruin during the delay period. In addition, the optimal dividend payment and capital injection strategy should balance the expected cost of the possible capital injections and the time value of the delay period. In this paper, the closed-form solution of the value function and the corresponding optimal policies are obtained. Some limiting cases are also discussed. A numerical example is presented to illustrate properties of the solution. Some economic insights are also given.
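For context, the Cramér–Lundberg surplus process that serves as the starting point here is usually written as below, with dividends subtracted from and (possibly delayed) capital injections added to this base process; the premium rate c, Poisson claim arrivals N and claim sizes Y_i are generic notation rather than the paper's specific parametrisation.

```latex
% Classical Cramér–Lundberg surplus process (generic notation):
% initial capital x, premium rate c, Poisson arrivals N_t, i.i.d. claims Y_i.
\[
  X_t \;=\; x + c\,t - \sum_{i=1}^{N_t} Y_i ,
\]
% with the controlled surplus obtained by subtracting cumulative dividends D_t
% and adding cumulative (delayed) capital injections Z_t:
\[
  \tilde X_t \;=\; X_t - D_t + Z_t .
\]
```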


1988 ◽ Vol 20 (3) ◽ pp. 635-645
Author(s): David Heath ◽ Robert P. Kertz

A player starts at x in (-G, G) and attempts to leave the interval in a limited playing time. In the discrete-time problem, G is a positive integer and the position is described by a random walk starting at integer x, with mean increments zero and variance increments chosen by the player from [0, 1] at each integer playing time. In the continuous-time problem, the player's position is described by an Itô diffusion process with infinitesimal mean parameter zero and infinitesimal diffusion parameter chosen by the player from [0, 1] at each time instant of play. To maximize the probability of leaving the interval (-G, G) in a limited playing time, the player should play boldly by always choosing the largest possible variance increment in the discrete-time setting and the largest possible diffusion parameter in the continuous-time setting, until the player leaves the interval. In the discrete-time setting, this result affirms a conjecture of Spencer. In the continuous-time setting, the value function of play is identified.
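A quick Monte Carlo experiment makes the bold-play conclusion tangible: in the discrete-time setting, compare always taking the maximal ±1 step (variance 1) against a timid ±0.5 step (variance 0.25). The interval half-width G, time limit, starting point and path count below are arbitrary illustrative choices, not values from the paper.

```python
# Hedged sketch: Monte Carlo comparison of bold play (variance 1) and a
# timid strategy (variance 0.25) for leaving (-G, G) within T steps.
import random

def exit_probability(step_size, G=5, T=40, start=0, n_paths=20_000):
    """Estimate P(leave (-G, G) within T steps) for ±step_size increments."""
    exits = 0
    for _ in range(n_paths):
        x = start
        for _ in range(T):
            x += step_size if random.random() < 0.5 else -step_size
            if abs(x) >= G:
                exits += 1
                break
    return exits / n_paths

random.seed(0)
print("bold  (variance 1.00):", exit_probability(1.0))
print("timid (variance 0.25):", exit_probability(0.5))
# Bold play should give the larger exit probability, in line with the result.
```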


1997 ◽ Vol 1 (1) ◽ pp. 255-277
Author(s): Michael A. Trick ◽ Stanley E. Zin

We review the properties of algorithms that characterize the solution of the Bellman equation of a stochastic dynamic program as the solution to a linear program. The variables in this problem are the ordinates of the value function; hence, the number of variables grows with the state space. For situations in which this size becomes computationally burdensome, we suggest the use of low-dimensional cubic-spline approximations to the value function. We show that fitting this approximation through linear programming provides upper and lower bounds on the solution to the original large problem. The information contained in these bounds leads to inexpensive improvements in the accuracy of approximate solutions.
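To make the linear-programming formulation concrete, here is a small sketch that recovers the value function of a toy two-state, two-action discounted dynamic program by minimising the sum of the value-function ordinates subject to the Bellman inequalities; the rewards, transition matrices and discount factor are invented for illustration, and no spline approximation is attempted.

```python
# Hedged sketch: solving a small Bellman equation as a linear programme.
# The two-state, two-action MDP below is purely illustrative.
import numpy as np
from scipy.optimize import linprog

beta = 0.9                                 # discount factor (assumption)
# P[a][s, s'] = transition probability under action a, r[s, a] = reward
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),   # action 0
     np.array([[0.5, 0.5], [0.1, 0.9]])]   # action 1
r = np.array([[1.0, 0.5],
              [0.0, 2.0]])
n_states, n_actions = r.shape

# LP: minimise sum_s V(s)  subject to  V(s) >= r(s,a) + beta * sum_s' P V(s')
A_ub, b_ub = [], []
for a in range(n_actions):
    for s in range(n_states):
        A_ub.append(beta * P[a][s] - np.eye(n_states)[s])
        b_ub.append(-r[s, a])
res = linprog(c=np.ones(n_states), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
print("value function ordinates:", res.x)
```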


2020 ◽ Vol 92 (2) ◽ pp. 285-309
Author(s): Julia Eisenberg ◽ Yuliya Mishura

Abstract: We consider an economic agent (a household or an insurance company) modelling its surplus process by a deterministic process or by a Brownian motion with drift. The goal is to maximise the expected discounted spending/dividend payments under a discounting factor given by an exponential CIR process. In the deterministic case, we are able to find explicit expressions for the optimal strategy and the value function. For the Brownian motion case, we are able to show that for a special parameter choice the optimal strategy is a constant-barrier strategy.
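To illustrate what a constant-barrier strategy does, the sketch below simulates discounted dividend payouts for a Brownian surplus with drift under such a barrier, using a constant discount rate as a stand-in; the exponential CIR discounting of the paper is not reproduced, and all parameter values are arbitrary.

```python
# Hedged sketch: discounted dividends from a constant-barrier strategy for a
# Brownian surplus with drift.  Barrier, drift, volatility, discount rate,
# horizon and path count are illustrative; CIR discounting is not modelled.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, delta = 0.5, 1.0, 0.05        # drift, volatility, discount rate
barrier, x0 = 2.0, 1.0                   # barrier level and initial surplus
dt, T, n_paths = 0.01, 20.0, 1_000

totals = np.zeros(n_paths)
for i in range(n_paths):
    x, t, discounted = x0, 0.0, 0.0
    while t < T and x >= 0.0:            # stop at ruin or at the horizon
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x > barrier:                  # pay out everything above the barrier
            discounted += np.exp(-delta * t) * (x - barrier)
            x = barrier
    totals[i] = discounted

print("estimated expected discounted dividends:", totals.mean())
```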

