Leaving an interval in limited playing time

1988
Vol. 20 (3)
pp. 635–645
Author(s): David Heath, Robert P. Kertz

A player starts at x in (−G, G) and attempts to leave the interval in a limited playing time. In the discrete-time problem, G is a positive integer and the position is described by a random walk starting at integer x, with mean-zero increments and a variance increment chosen by the player from [0, 1] at each integer playing time. In the continuous-time problem, the player's position is described by an Itô diffusion process with infinitesimal mean parameter zero and infinitesimal diffusion parameter chosen by the player from [0, 1] at each time instant of play. To maximize the probability of leaving the interval (−G, G) in a limited playing time, the player should play boldly: always choose the largest possible variance increment in the discrete-time setting and the largest possible diffusion parameter in the continuous-time setting, until the interval is left. In the discrete-time setting, this result affirms a conjecture of Spencer. In the continuous-time setting, the value function of play is identified.
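
A minimal dynamic-programming sketch of the discrete-time problem (Python). The increment law is my own assumption, made to keep the state space on the integers: for a chosen variance v in [0, 1] the step is +1 or -1 with probability v/2 each and 0 otherwise, which has mean zero and variance v; the sizes G and N are hypothetical. Because the one-step objective is linear in v, the optimum is at v = 0 or v = 1, and the recursion lets us check numerically that bold play (v = 1) is never beaten:

```python
import numpy as np

G, N = 5, 20              # interval (-G, G) and playing-time horizon (hypothetical sizes)
xs = np.arange(-G, G + 1)

# u[n, i] = best probability of having left (-G, G) within n remaining steps when
# the current position is xs[i].  Each step the player picks a variance v in [0, 1];
# here the increment is +1 or -1 with probability v/2 each and 0 otherwise (one
# concrete mean-zero, variance-v increment, not the paper's general formulation).
u = np.zeros((N + 1, len(xs)))
u[:, 0] = u[:, -1] = 1.0                                 # -G and G are already "out"

for n in range(1, N + 1):
    for i in range(1, len(xs) - 1):
        stay = u[n - 1, i]                                # v = 0: do not move
        bold = 0.5 * (u[n - 1, i - 1] + u[n - 1, i + 1])  # v = 1: bold play
        # The objective is linear in v, so it is maximised at v = 0 or v = 1;
        # bold play never loses because the exit probability is convex in x.
        assert bold >= stay - 1e-12
        u[n, i] = max(stay, bold)

print("P(leave (-G, G) within N steps | start at 0) =", u[N, G])
```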


1996
Vol. 33 (3)
pp. 714–728
Author(s): Douglas W. McBeth, Ananda P. N. Weerasinghe

Consider the optimal control problem of leaving an interval (−a, a) in a limited playing time. In the discrete-time problem, a is a positive integer and the player's position is given by a simple random walk on the integers with initial position x. At each time instant the player chooses a coin from a control set, where the probability of heads may depend on the current position and the remaining playing time, and bets a unit stake on the toss: heads moves the position by +1 and tails by −1. We discuss the optimal strategy for this discrete-time game. In the continuous-time problem the player chooses infinitesimal mean and infinitesimal variance parameters from a control set which may depend upon the player's position. The problem is to find optimal mean and variance parameters that maximize the probability of leaving the interval (−a, a) within a finite time T > 0.
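
A backward-induction sketch of the discrete-time coin game (Python). The control set used here, a fixed interval [0.4, 0.6] of head-probabilities, is a hypothetical simplification; in the paper the set may vary with the position and the remaining time, which would only change the two bounds looked up inside the loop. The expected continuation value is linear in the head-probability, so the optimal coin sits at an endpoint of the control set, loaded toward the direction with the larger continuation value:

```python
import numpy as np

a, N = 5, 20                  # interval (-a, a) and time horizon (hypothetical sizes)
P_LO, P_HI = 0.4, 0.6         # hypothetical control set [P_LO, P_HI] of head-probabilities

xs = np.arange(-a, a + 1)
V = np.zeros((N + 1, len(xs)))        # V[n, i]: best exit probability with n tosses left
V[:, 0] = V[:, -1] = 1.0              # positions -a and a count as "out"
policy = np.full((N + 1, len(xs)), np.nan)

for n in range(1, N + 1):
    for i in range(1, len(xs) - 1):
        up, down = V[n - 1, i + 1], V[n - 1, i - 1]
        # p*up + (1-p)*down is linear in p, so the best coin is at an endpoint of
        # the control set: load it toward the better continuation value.
        p = P_HI if up >= down else P_LO
        policy[n, i] = p
        V[n, i] = p * up + (1 - p) * down

print("P(exit within N tosses | start at 0) =", V[N, a])
print("chosen head-probability at x = 1 with N tosses left:", policy[N, a + 1])
```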


1996
Vol. 28 (3)
pp. 763–783
Author(s): Terence Chan

The ‘Mabinogion sheep’ problem, originally due to D. Williams, is a nice illustration in discrete time of the martingale optimality principle and the use of local time in stochastic control. The use of singular controls involving local time is even more strikingly highlighted in the context of continuous time. This paper considers a class of diffusion versions of the discrete-time Mabinogion sheep problem. The stochastic version of the Bellman dynamic programming approach leads to a free boundary problem in each case. The most surprising feature in the continuous-time context is the existence of diffusion versions of the original discrete-time problem for which the optimal boundary is different from that in the discrete-time case; even when the optimal boundary is the same, the value functions can be very different.
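
For readers who have not met the discrete-time problem, here is a small simulation sketch (Python). The formulation below follows the usual statement of Williams's problem as I recall it, not this abstract, so treat the details as assumptions: a flock holds b black and w white sheep; at each step a uniformly chosen sheep bleats and one sheep of the opposite colour (if any) switches to the bleater's colour; the controller may then cull white sheep at no cost; the aim is to maximise the expected final number of black sheep, and the rule usually cited as optimal culls the white flock down to b - 1 whenever w >= b:

```python
import random

def final_black(b, w, controlled, rng):
    # One run of the (assumed) discrete-time dynamics.  The controlled strategy
    # applies the threshold rule: whenever the white flock is at least as large
    # as the black one, cull white sheep down to b - 1.
    def cull(w):
        return b - 1 if controlled and b > 0 and w >= b else w
    w = cull(w)
    while b > 0 and w > 0:
        if rng.random() < b / (b + w):
            b, w = b + 1, w - 1        # a black sheep bleats: one white turns black
        else:
            b, w = b - 1, w + 1        # a white sheep bleats: one black turns white
        w = cull(w)
    return b

rng = random.Random(1)
runs = 10_000
for controlled in (False, True):
    avg = sum(final_black(50, 50, controlled, rng) for _ in range(runs)) / runs
    print(f"controlled = {controlled}: average final number of black sheep ≈ {avg:.1f}")
```

Comparing the controlled and uncontrolled runs gives a feel for why a threshold (free) boundary appears; the paper's diffusion versions replace the culling by singular controls built from local time at that boundary.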


2020
Vol. 9 (2)
pp. 459–470
Author(s): Helin Wu, Yong Ren, Feng Hu

In this paper, we investigate a Dynkin game under g-expectation induced by a backward stochastic differential equation (BSDE for short). The lower and upper value functions $$\underline{V}_t=\mathop{\mathrm{ess\,sup}}_{\tau \in \mathcal{T}_t}\,\mathop{\mathrm{ess\,inf}}_{\sigma \in \mathcal{T}_t}\,\mathcal{E}^g_t[R(\tau,\sigma)]$$ and $$\overline{V}_t=\mathop{\mathrm{ess\,inf}}_{\sigma \in \mathcal{T}_t}\,\mathop{\mathrm{ess\,sup}}_{\tau \in \mathcal{T}_t}\,\mathcal{E}^g_t[R(\tau,\sigma)]$$ are defined, respectively. Under suitable assumptions, a pair of saddle points is obtained and the value function of the Dynkin game, $$V(t)=\underline{V}_t=\overline{V}_t$$, follows. Furthermore, we also consider the constrained case of the Dynkin game.
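
For orientation, here is the generic saddle-point argument behind the last statement, written in the notation above; it is standard for Dynkin games under a monotone (nonlinear) expectation and is not a reproduction of the paper's proof. A pair $$(\tau^*,\sigma^*)\in\mathcal{T}_t\times\mathcal{T}_t$$ is a saddle point if, for all $$\tau,\sigma\in\mathcal{T}_t$$,

$$\mathcal{E}^g_t[R(\tau,\sigma^*)]\;\le\;\mathcal{E}^g_t[R(\tau^*,\sigma^*)]\;\le\;\mathcal{E}^g_t[R(\tau^*,\sigma)].$$

Taking the essential supremum over $$\tau$$ on the left shows $$\overline{V}_t\le\mathop{\mathrm{ess\,sup}}_{\tau\in\mathcal{T}_t}\mathcal{E}^g_t[R(\tau,\sigma^*)]\le\mathcal{E}^g_t[R(\tau^*,\sigma^*)]$$, and taking the essential infimum over $$\sigma$$ on the right shows $$\mathcal{E}^g_t[R(\tau^*,\sigma^*)]\le\mathop{\mathrm{ess\,inf}}_{\sigma\in\mathcal{T}_t}\mathcal{E}^g_t[R(\tau^*,\sigma)]\le\underline{V}_t$$; since $$\underline{V}_t\le\overline{V}_t$$ always holds, all three quantities coincide and give the game value $$V(t)$$.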


2020
Vol. 10 (1)
pp. 235–259
Author(s): Katharina Bata, Hanspeter Schmidli

We consider a risk model in discrete time with dividends and capital injections. The goal is to maximise the value of a dividend strategy. We show that the optimal strategy is of barrier type: all capital above a certain threshold is paid as a dividend. A second problem adds tax on the dividends, but an injection leads to an exemption from tax. We show that the value function fulfils a Bellman equation. As a special case, we consider premia of size one. In this case we show that the optimal strategy is a two-barrier strategy: there is one barrier for the case in which the next dividend of size one can be paid tax-free and another for the case in which it will be taxed. In both models, we illustrate the findings with de Finetti's example.
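
The barrier structure is easy to see numerically in a stripped-down de Finetti-type model (a sketch under simplifying assumptions, not the paper's model: premium 1 per period, a claim of size 2 with probability 1 − p, discount factor β, no tax and no capital injections; all numerical values are hypothetical). Value iteration on the corresponding Bellman equation converges, and the optimal dividend at surplus x comes out as everything above a fixed barrier:

```python
import numpy as np

p, beta = 0.7, 0.95        # P(no claim in a period) and discount factor (hypothetical)
X_MAX = 60                 # crude truncation of the surplus grid

def q(V, x, d):
    # Value of paying dividend d from surplus x: the dividend now plus the
    # discounted continuation after receiving premium 1 and facing a claim of
    # size 0 (prob p) or 2 (prob 1 - p); a claim hitting surplus 0 means ruin.
    y = x - d
    up = V[min(y + 1, X_MAX)]
    down = V[y - 1] if y >= 1 else 0.0
    return d + beta * (p * up + (1 - p) * down)

V = np.zeros(X_MAX + 1)
for _ in range(500):       # value iteration on the Bellman equation V = max_d q(V, ., d)
    V = np.array([max(q(V, x, d) for d in range(x + 1)) for x in range(X_MAX + 1)])

opt_d = [max(range(x + 1), key=lambda d: q(V, x, d)) for x in range(X_MAX + 1)]
print("optimal dividend per surplus level:", opt_d[:15], "...")
# Expected shape: 0, ..., 0, 1, 2, 3, ...  i.e. pay out everything above a barrier.
```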


2004
Vol. 218 (9)
pp. 1033–1040
Author(s): M. Šolc, J. Hostomský

We present a numerical study of equilibrium composition fluctuations in a system in which the reaction X1 ⇔ X2, with equilibrium constant equal to 1, takes place. The total number of reacting particles is N. On a discrete time scale, the amplitude of a fluctuation with a lifetime of 2r reaction events is defined as the difference between the number of X1 particles in the microstate most distant from N/2 that is visited at least once during the fluctuation's lifetime and the equilibrium number of X1 particles, N/2. On the discrete time scale, the mean value of this amplitude, m̄(r̄), is calculated in the random-walk approximation. On a continuous time scale, the average amplitude of fluctuations chosen randomly and regardless of their lifetime from an ensemble of fluctuations occurring within the time interval (0, z), z → ∞, tends with increasing N to ~1.243·N^0.25. Introducing the fraction of the fluctuation lifetime during which the composition of the system stays below the mean amplitude m̄(r̄), we obtain a value of the mean amplitude of equilibrium fluctuations on the continuous time scale equal to ~1.19·√N. The results suggest that using the random-walk value m̄(r̄) and taking into account (a) the exponential density of fluctuation lifetimes and (b) the fact that the time sequence of reaction events is a Poisson process, we obtain fluctuation amplitudes which differ only slightly from those derived for the Ehrenfest model.
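
A Monte Carlo sketch of the amplitude notion used above (Python). It simulates a symmetric ±1 random walk as a stand-in for the paper's random-walk approximation, splits a long trajectory into fluctuations (excursions between successive returns to the equilibrium level), and records each one's lifetime 2r and amplitude; the step count and the lifetime buckets are arbitrary choices of mine. For a Brownian excursion of duration 2r the mean maximum is √(πr), so the ratio amplitude/√r should settle near √π ≈ 1.77 for long-lived fluctuations:

```python
import random
from math import pi, sqrt

rng = random.Random(0)
n_steps = 2_000_000
pos, peak, start, excursions = 0, 0, 0, []
for t in range(1, n_steps + 1):
    pos += rng.choice((-1, 1))
    peak = max(peak, abs(pos))                 # largest distance from equilibrium so far
    if pos == 0:                               # fluctuation ends on return to equilibrium
        excursions.append((t - start, peak))   # (lifetime, amplitude)
        start, peak = t, 0

for lo, hi in ((8, 32), (32, 128), (128, 512), (512, 2048)):
    ratios = [amp / sqrt(life / 2) for life, amp in excursions if lo <= life < hi]
    print(f"lifetime in [{lo}, {hi}): mean amplitude/sqrt(r) ≈ "
          f"{sum(ratios) / len(ratios):.2f}  ({len(ratios)} fluctuations)")
print("Brownian-excursion prediction for long lifetimes:", round(sqrt(pi), 2))
```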


1998
Vol. 7 (4)
pp. 397–401
Author(s): Olle Häggström

We consider continuous time random walks on a product graph G×H, where G is arbitrary and H consists of two vertices x and y linked by an edge. For any t>0 and any a, b∈V(G), we show that the random walk starting at (a, x) is more likely to have hit (b, x) than (b, y) by time t. This contrasts with the discrete time case and proves a conjecture of Bollobás and Brightwell. We also generalize the result to cases where H is either a complete graph on n vertices or a cycle on n vertices.
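
A Monte Carlo sketch of the statement (Python), with every concrete choice being hypothetical: G is taken as a path on the vertices 0, ..., 5, H = K2 has layers 0 ("x") and 1 ("y"), and the continuous-time walk is taken to jump along each incident edge at rate 1, i.e. it waits an Exp(degree) time and then moves to a uniformly chosen neighbour. Starting from (a, x), the estimated probability of having hit (b, x) by time t should dominate that of having hit (b, y):

```python
import random

G_MAX = 5                                # G = path graph on vertices 0..G_MAX (hypothetical)

def neighbours(v):
    g, h = v
    nbrs = [(g, 1 - h)]                  # the K2 edge: switch layer
    if g > 0:
        nbrs.append((g - 1, h))          # path edges within G
    if g < G_MAX:
        nbrs.append((g + 1, h))
    return nbrs

def hits_by_time(start, target, t_max, rng):
    # Continuous-time walk: each incident edge rings at rate 1, so the holding
    # time at v is Exp(degree of v) and the jump goes to a uniform neighbour.
    v, t = start, 0.0
    while v != target:
        nbrs = neighbours(v)
        t += rng.expovariate(len(nbrs))
        if t > t_max:
            return False
        v = rng.choice(nbrs)
    return True

rng = random.Random(0)
a, b, t_max, runs = 0, 4, 3.0, 50_000
p_bx = sum(hits_by_time((a, 0), (b, 0), t_max, rng) for _ in range(runs)) / runs
p_by = sum(hits_by_time((a, 0), (b, 1), t_max, rng) for _ in range(runs)) / runs
print(f"P(hit (b, x) by t) ≈ {p_bx:.3f},   P(hit (b, y) by t) ≈ {p_by:.3f}")
```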


2019
Vol. 6 (11)
pp. 191423
Author(s): Julia Stadlmann, Radek Erban

A shift-periodic map is a one-dimensional map from the real line to itself which is periodic up to a linear translation and allowed to have singularities. It is shown that iterative sequences x_{n+1} = F(x_n) generated by such maps display rich dynamical behaviour. The integer parts ⌊x_n⌋ give a discrete-time random walk for a suitable initial distribution of x_0 and converge in certain limits to Brownian motion or more general Lévy processes. Furthermore, for certain shift-periodic maps with small holes on [0, 1], convergence of trajectories to a continuous-time random walk is shown in a limit.
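
A quick numerical sketch of the random-walk behaviour of the integer parts (Python). The map F(x) = x + 2·frac(x) − 1 below is my own illustrative shift-periodic example, not necessarily one studied in the paper: it satisfies F(x + 1) = F(x) + 1, its fractional part evolves under the chaotic ternary shift y ↦ 3y mod 1, and its integer part jumps by −1, 0 or +1 with equal probability when x_0 is uniform on [0, 1), so ⌊x_n⌋ behaves like a mean-zero random walk with increment variance 2/3:

```python
import random
from fractions import Fraction
from math import floor

def F(x):
    # Hypothetical shift-periodic map F(x) = x + 2*frac(x) - 1, so F(x+1) = F(x)+1.
    return x + 2 * (x - floor(x)) - 1

def final_integer_part(n_steps, rng):
    # Exact rational arithmetic: the ternary shift is expanding, so ordinary
    # floating point would lose all information after ~30 iterations.  256 random
    # bits give more than enough entropy for 100 essentially uniform ternary digits.
    x = Fraction(rng.getrandbits(256), 2**256)      # random x_0 in [0, 1)
    for _ in range(n_steps):
        x = F(x)
    return floor(x)

rng = random.Random(0)
n_steps, n_traj = 100, 2000
samples = [final_integer_part(n_steps, rng) for _ in range(n_traj)]
mean = sum(samples) / n_traj
var = sum((s - mean) ** 2 for s in samples) / n_traj
print(f"mean of floor(x_n) = {mean:+.2f} (expect ~0), "
      f"variance / n = {var / n_steps:.3f} (expect ~2/3)")
```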

