Optimal strategies for discovering new species in continuous time

1989 ◽  
Vol 26 (4) ◽
pp. 695-706
Author(s):  
Gerold Alsmeyer ◽  
Albrecht Irle

Consider a population of distinct species Sj, j ∈ J, members of which are selected at different time points T1, T2, ···, one at each time. Assume linear costs per unit of time and that a reward is earned at each discovery epoch of a new species. We treat the problem of finding a selection rule which maximizes the expected payoff. As the times between successive selections are assumed to be continuous random variables, we are dealing with a continuous-time optimal stopping problem which is the natural generalization of the one investigated by Rasmussen and Starr (1979), namely the corresponding problem with fixed times between successive selections. However, in contrast to their discrete-time setting, the derivation of an optimal strategy appears to be much harder in our model, as generally we are no longer in the monotone case. This note gives a general point process formulation for this problem, leading in particular to an equivalent stopping problem via stochastic intensities which is easier to handle. We then present a formal derivation of the optimal stopping time under the stronger assumption of i.i.d. pairs (X1, A1), (X2, A2), ···, where Xn gives the label (j for Sj) of the species selected at Tn and An denotes the time between the (n – 1)th and nth selection, i.e. An = Tn – Tn–1. In the case where, moreover, Xn and An are independent and An has an IFR (increasing failure rate) distribution, an explicit solution for the optimal strategy is derived as a simple consequence.
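
For orientation, here is a minimal sketch of the objective described above, under the added assumptions that each newly discovered species earns a fixed reward r, the cost rate is c, and stopping is restricted to the selection epochs Tn (the symbols r, c and Dn are illustrative and not taken from the abstract):

\[
V^\ast \;=\; \sup_{\nu}\; \mathbb{E}\bigl[\, r\,D_\nu \;-\; c\,T_\nu \,\bigr],
\qquad
D_n \;:=\; \#\{\text{distinct labels among } X_1,\dots,X_n\},
\]

where the supremum is taken over stopping rules ν adapted to the history of selections.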


2012 ◽  
Vol 49 (3) ◽  
pp. 806-820
Author(s):  
Pieter C. Allaart

Let (Xt), 0 ≤ t ≤ T, be a one-dimensional stochastic process with independent and stationary increments, either in discrete or continuous time. In this paper we consider the problem of stopping the process (Xt) ‘as close as possible’ to its eventual supremum MT := sup0≤t≤T Xt, when the reward for stopping at time τ ≤ T is a nonincreasing convex function of MT − Xτ. Under fairly general conditions on the process (Xt), it is shown that the optimal stopping time τ takes a trivial form: it is either optimal to stop at time 0 or at time T. For the case of a random walk, the rule τ ≡ T is optimal if the steps of the walk stochastically dominate their opposites, and the rule τ ≡ 0 is optimal if the reverse relationship holds. An analogous result is proved for Lévy processes with finite Lévy measure. The result is then extended to some processes with nonfinite Lévy measure, including stable processes, CGMY processes, and processes whose jump component is of finite variation.
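
As a hedged illustration only (not code from the paper), the following Monte Carlo sketch compares the two candidate rules τ ≡ 0 and τ ≡ T for a Gaussian random walk whose positive drift makes the steps stochastically dominate their opposites; the drift, horizon and reward u(y) = exp(−y), a nonincreasing convex function of the shortfall, are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
n_paths, T = 100_000, 50
mu, sigma = 0.05, 1.0            # positive drift: steps stochastically dominate their opposites

steps = rng.normal(mu, sigma, size=(n_paths, T))
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)
M = X.max(axis=1)                # eventual supremum M_T of each path

u = lambda y: np.exp(-y)         # nonincreasing convex reward of the shortfall M_T - X_tau
stop_at_0 = u(M - X[:, 0]).mean()
stop_at_T = u(M - X[:, -1]).mean()
print(f"E[u(M_T - X_0)] = {stop_at_0:.4f},  E[u(M_T - X_T)] = {stop_at_T:.4f}")

With the drift sign reversed, the comparison flips in favour of τ ≡ 0, in line with the stated dominance condition.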


2002 ◽  
Vol 34 (1) ◽  
pp. 141-157
Author(s):  
Paul Dupuis ◽  
Hui Wang

We consider a class of optimal stopping problems where the ability to stop depends on an exogenous Poisson signal process: we can only stop at the Poisson jump times. Even though the time variable in these problems has a discrete aspect, a variational inequality can be obtained by considering an underlying continuous-time structure. Depending on whether stopping is allowed at t = 0, the value function exhibits different properties across the optimal exercise boundary. Indeed, the value function is only 𝒞^0 across the optimal boundary when stopping is allowed at t = 0 and 𝒞^2 otherwise, both contradicting the usual 𝒞^1 smoothness that is necessary and sufficient for the application of the principle of smooth fit. Also discussed is an equivalent stochastic control formulation for these stopping problems. Finally, we derive the asymptotic behaviour of the value functions and optimal exercise boundaries as the intensity of the Poisson process goes to infinity or, roughly speaking, as the problems converge to the classical continuous-time optimal stopping problems.
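
The following sketch (not the paper's model, just an illustration of the constraint) simulates the mechanism in which an exogenous Poisson process generates the only admissible stopping epochs: a hypothetical threshold rule on a Brownian path can be evaluated only at those epochs. The intensity, horizon and threshold b are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
T, lam, n_steps = 1.0, 20.0, 2_000       # horizon, Poisson intensity, time grid
dt = T / n_steps

# One Brownian path on a fine grid (an arbitrary underlying process for illustration).
t = np.linspace(0.0, T, n_steps + 1)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])

# Poisson signal times on [0, T]: stopping is only allowed at these epochs.
jump_times = np.sort(rng.uniform(0.0, T, rng.poisson(lam * T)))

b = 0.5                                  # hypothetical stopping threshold
stopped = False
for s in jump_times:
    w = W[np.searchsorted(t, s)]
    if w >= b:
        print(f"stop at the first allowed epoch t = {s:.3f}, where W = {w:.3f} >= b")
        stopped = True
        break
if not stopped:
    print("the threshold was never reached at an allowed epoch")

As the intensity grows, the allowed epochs become dense in [0, T] and the constrained rule approaches its unconstrained continuous-time counterpart, which is the asymptotic regime mentioned at the end of the abstract.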


2016 ◽  
Vol 53 (1) ◽  
pp. 91-105
Author(s):  
Fabián Crocce ◽  
Ernesto Mordecki

We provide an algorithm to find the value and an optimal strategy of the solitaire variant of the Ten Thousand dice game in the framework of Markov control processes. Once an optimal critical threshold is found, the set of nonstopping states of the game becomes finite, and the solution is obtained by a backward algorithm that gives the value of each of these states. The algorithm is finite and exact. The strategy for finding the critical threshold comes from the continuous pasting condition used in optimal stopping problems for continuous-time processes with jumps.
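
A toy sketch in the same spirit (a simplified single-turn dice game, not the actual Ten Thousand rules): fix a hypothetical critical threshold above which stopping is taken to be optimal; the nonstopping states are then finite and their values follow from a backward recursion, exactly the two-step structure described above.

# Toy turn game: state s = points accumulated so far this turn; stopping banks s;
# rolling adds the face of a fair die unless a 1 comes up, which ends the turn with 0.
THRESHOLD = 20                       # hypothetical critical threshold, not the paper's value

def values(threshold: int) -> dict[int, float]:
    V: dict[int, float] = {}
    max_state = threshold + 6        # one roll from threshold - 1 reaches at most this state
    for s in range(max_state, threshold - 1, -1):
        V[s] = float(s)              # stopping region: the value is the banked score
    for s in range(threshold - 1, -1, -1):                 # finite set of nonstopping states
        roll = sum(V[s + k] for k in range(2, 7)) / 6.0    # a rolled 1 contributes 0
        V[s] = max(float(s), roll)   # stop or roll, whichever is better
    return V

V = values(THRESHOLD)
print(V[0], V[15], V[19])

In the paper the threshold itself is pinned down first, via the continuous pasting condition; here it is simply assumed.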


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Lu Ye

This paper considers the optimal stopping problem for continuous-time Markov processes. We describe the methodology and solve the optimal stopping problem for a broad class of reward functions. Moreover, we illustrate the outcomes with some typical Markov processes, including diffusions and Lévy processes with jumps. For each of these processes, explicit formulae for the value function and the optimal stopping time are derived. Furthermore, we relate the derived optimal rules to some other optimal problems.
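
For context, the standard formulation of such a problem reads as follows (the discount rate q, reward g and stopping region S are generic symbols, not notation taken from the paper):

\[
V(x) \;=\; \sup_{\tau}\, \mathbb{E}_x\!\left[ e^{-q\tau}\, g(X_\tau) \right],
\qquad
\tau^\ast \;=\; \inf\{\, t \ge 0 : X_t \in S \,\},
\]

where the supremum is over stopping times of the Markov process X and S = {x : V(x) = g(x)} is the region in which the value coincides with the reward.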


2007 ◽  
Vol 39 (3) ◽
pp. 753-775
Author(s):  
Tze Leung Lai ◽  
Yi-Ching Yao ◽  
Farid Aitsahlia

Corrected random walk approximations to continuous-time optimal stopping boundaries for Brownian motion, first introduced by Chernoff and Petkau, have provided powerful computational tools in option pricing and sequential analysis. This paper develops the theory of these second-order approximations and describes some new applications.
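
As a minimal, hedged sketch of the kind of discrete approximation being corrected (parameters are arbitrary, and only the uncorrected first-order random walk approximation is shown, not the Chernoff-Petkau second-order correction itself), the following binomial-tree computation approximates the early-exercise boundary of an American put:

import numpy as np

S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 500
dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up-probability
disc = np.exp(-r * dt)

S = S0 * u ** np.arange(n, -n - 1, -2)    # terminal stock prices, highest first
V = np.maximum(K - S, 0.0)                # terminal payoff
boundary = []
for i in range(n - 1, -1, -1):            # backward induction over time steps
    S = S0 * u ** np.arange(i, -i - 1, -2)
    cont = disc * (p * V[:-1] + (1.0 - p) * V[1:])
    exercise = np.maximum(K - S, 0.0)
    V = np.maximum(cont, exercise)
    held = S[V > exercise + 1e-12]        # nodes where continuation strictly dominates
    boundary.append(held.min() if held.size else np.nan)

mid = n // 2
print(f"price ≈ {V[0]:.4f}; exercise boundary at t ≈ T/2 ≈ {boundary[n - 1 - mid]:.2f}")

A boundary extracted this way converges only slowly as the step size shrinks; the second-order corrections studied in the paper refine discrete approximations of this kind.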

