Optimal stopping of strong Markov processes

2013 · Vol. 123 (3) · pp. 1138-1159
Author(s): Sören Christensen, Paavo Salminen, Bao Quoc Ta

2015 · Vol. 26 (03) · Article 1550028
Author(s): Bao Quoc Ta

Recently, a new technique for solving optimal stopping problems for Hunt processes has been developed (see [S. Christensen, P. Salminen and B. Q. Ta, Optimal stopping of strong Markov processes, Stochastic Process. Appl. 123(3) (2013) 1138–1159]). The crucial feature of the approach is that it utilizes the representation of r-excessive functions as expected suprema. However, it seems difficult to apply the approach directly to some concrete cases, e.g. the one-sided problem for reflecting Brownian motion and the two-sided problem for Brownian motion. In this paper, we review and exploit the approach to find explicit solutions of the two problems above.
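For orientation, a brief sketch (ours, not taken verbatim from the cited papers): the underlying problem is the discounted optimal stopping problem

    V(x) = \sup_{\tau} \mathbb{E}_x\left[ e^{-r\tau} g(X_\tau) \right],

where the supremum runs over stopping times \tau of the Hunt process X and r > 0 is the discount rate. The value function V is the smallest r-excessive majorant of the reward g, and under standard regularity conditions the first entrance time \tau^* = \inf\{ t \ge 0 : V(X_t) = g(X_t) \} into the stopping region is optimal. The representation alluded to above writes the r-excessive candidate, schematically, as an expected supremum

    V(x) = \mathbb{E}_x\left[ \sup_{t \ge 0} e^{-rt} h(X_t) \right]

for a suitable auxiliary function h (a placeholder symbol here); identifying h and the stopping region is what becomes delicate in the reflecting and two-sided cases mentioned in the abstract.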


2020 · Vol. 91 (3) · pp. 559-583
Author(s): Jukka Lempa

We study optimal stopping of strong Markov processes under random implementation delay. By random implementation delay we mean the following: the payoff is not realised immediately when the process is stopped, but only after a random waiting period. The distribution of the random waiting period is assumed to be phase-type. We first prove a general result on the solvability of the problem. Then we study the case of the Coxian distribution, both in general and with scalar diffusion dynamics, in more detail. The study is concluded with two explicit examples.
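As a reading aid (our own sketch of the standard definitions, not a statement from the paper): a phase-type delay D is the absorption time of a continuous-time Markov chain on finitely many transient phases with initial distribution \alpha and sub-generator T, so that

    \mathbb{P}(D > t) = \alpha\, e^{T t}\, \mathbf{1}, \qquad t \ge 0.

The Coxian case is the special phase-type distribution in which the phases are visited in order: phase i has exponential rate \lambda_i and, on leaving it, the chain is absorbed with probability 1 - p_i or moves on to phase i + 1 with probability p_i. With such a delay, one natural formulation of the stopping problem (our convention; the paper's details may differ) is

    V(x) = \sup_{\tau} \mathbb{E}_x\left[ e^{-r(\tau + D)} g(X_{\tau + D}) \right],

where D is independent of X, so the payoff is collected only once the random waiting period has elapsed.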


1986 · Vol. 18 (03) · pp. 724-746
Author(s): W. J. R. Eplett

The theory of allocation indices for defining the optimal policy in multi-armed bandit problems, developed by Gittins, is presented in the continuous-time case, where the projects (or ‘arms’) are strong Markov processes. Complications peculiar to the continuous-time case are discussed. This motivates an investigation of whether approximating the continuous-time problems by discrete-time versions provides a valid technique, with convergent allocation indices and optimal expected rewards. Conditions under which this convergence holds are presented.
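For reference, one standard form of the continuous-time Gittins allocation index (our sketch, under the usual discounting assumptions): for a project in state x with reward rate r(\cdot) and discount rate \beta > 0,

    \nu(x) = \sup_{\tau > 0} \frac{ \mathbb{E}_x\left[ \int_0^{\tau} e^{-\beta t} r(X_t)\, dt \right] }{ \mathbb{E}_x\left[ \int_0^{\tau} e^{-\beta t}\, dt \right] },

and the index policy engages, at each instant, a project whose current index is maximal. The discrete-time approximation discussed above replaces each project by a time-discretized version (for instance, the process observed on a grid of mesh \Delta) and asks whether the corresponding indices and optimal expected rewards converge as \Delta \to 0.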


2008 · Vol. 47 (2) · pp. 684-702
Author(s): Erik Ekström, Goran Peskir
