Passing to the limit in optimal stopping problems for Markov processes

1973 ◽  
Vol 13 (1) ◽  
pp. 81-90 ◽  
Author(s):  
V. Matskyavichyus

1993 ◽  
Vol 25 (4) ◽  
pp. 825-846 ◽  
Author(s):  
Frans A. Boshuizen ◽  
José M. Gouweleeuw

In this paper, optimal stopping problems for semi-Markov processes are studied in a fairly general setting. In such a process transitions are made from state to state in accordance with a Markov chain, but the amount of time spent in each state is random. The times spent in each state follow a general renewal process. They may depend on the present state as well as on the state into which the next transition is made. Our goal is to maximize the expected net return, which is given as a function of the state at time t minus some cost function. Discounting may or may not be considered. The main theorems (Theorems 3.5 and 3.11) are expressions for the optimal stopping time in the undiscounted and discounted case. These theorems generalize results of Zuckerman [16] and Boshuizen and Gouweleeuw [3]. Applications are given in various special cases. The results developed in this paper can also be applied to semi-Markov shock models, as considered in Taylor [13], Feldman [6] and Zuckerman [15].
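To make the setup concrete, here is a minimal Monte Carlo sketch, not the paper's construction: it assumes a small embedded Markov chain, exponential sojourn times whose mean depends on both the current and the next state, an illustrative reward vector, a running cost rate, and a simple "stop on first entry to a target set" rule. All names and parameter values below are hypothetical choices made for illustration.

```python
# Hypothetical illustration (not the paper's construction): estimate by Monte
# Carlo the expected discounted net return of a fixed stopping rule for a
# semi-Markov process.  The transition matrix P, the (state, next-state)
# dependent exponential sojourn times, the reward vector, the cost rate and
# the "stop on entering state 2" rule are all assumed for this example.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.6, 0.3, 0.1],             # embedded Markov chain
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
mean_sojourn = np.array([[1.0, 2.0, 0.5],  # mean holding time given
                         [1.5, 1.0, 2.0],  # (current state, next state)
                         [0.5, 0.5, 1.0]])
reward = np.array([1.0, 3.0, 10.0])        # return from stopping in each state
cost_rate = 0.5                            # running cost per unit time
discount = 0.1                             # set to 0.0 for the undiscounted case
stop_set = {2}                             # stop on first entry to this set

def one_path_return(x0, max_jumps=200):
    """Net return of one path under the rule 'stop on first visit to stop_set'."""
    x, t = x0, 0.0
    for _ in range(max_jumps):
        if x in stop_set:
            break
        y = rng.choice(3, p=P[x])                      # next state of the chain
        t += rng.exponential(mean_sojourn[x, y])       # sojourn depends on (x, y)
        x = y
    if discount > 0.0:
        cost = cost_rate * (1.0 - np.exp(-discount * t)) / discount
    else:
        cost = cost_rate * t
    return np.exp(-discount * t) * reward[x] - cost

values = [one_path_return(0) for _ in range(20_000)]
print("estimated expected net return from state 0:", np.mean(values))
```

Such a simulation only evaluates one candidate rule; the paper's theorems characterize the optimal stopping time itself.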


2012 ◽  
Vol 22 (3) ◽  
pp. 1243-1265 ◽  
Author(s):  
Mamadou Cissé ◽  
Pierre Patie ◽  
Etienne Tanré

2012 ◽  
Vol 45 (2) ◽  
Author(s):  
Ł. Stettner

In the paper we use the penalty method to approximate a number of general stopping problems over a finite horizon. We consider optimal stopping of discrete-time or right-continuous stochastic processes, and show that a suitable version of the Snell envelope can be approximated by solutions to penalty equations. We then study the optimal stopping problem for Markov processes on a general Polish space, and again show that the optimal stopping value function can be approximated by a solution to a Markov version of the penalty equation.
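The idea can be illustrated with a small, assumed discrete-time example (a three-state chain with illustrative rewards, not the formulation used in the paper): the Snell envelope V_n = max(g, P V_{n+1}) is approximated by a penalized recursion whose one-step equation v = P v_{n+1} + beta (g - v)^+ has a closed-form solution, and the approximation improves as the penalty parameter beta grows.

```python
# Minimal sketch of a penalized approximation of the Snell envelope for a
# finite-horizon stopping problem on a finite-state Markov chain.  The chain,
# the reward g and the horizon are assumed for illustration; the recursion
#   v_beta(n, x) = (P v_beta(n+1, .))(x) + beta * (g(x) - v_beta(n, x))^+
# is solved in closed form at each step and increases to V_n = max(g, P V_{n+1})
# as beta grows.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],            # illustrative transition matrix
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
g = np.array([0.0, 1.0, 2.5])             # illustrative reward for stopping
N = 10                                    # finite horizon

def snell_envelope():
    V = g.copy()                          # V_N = g
    for _ in range(N):
        V = np.maximum(g, P @ V)          # V_n = max(g, P V_{n+1})
    return V

def penalized_value(beta):
    v = g.copy()                          # terminal condition v_N = g
    for _ in range(N):
        a = P @ v                         # continuation value
        # closed-form solution of v = a + beta * (g - v)^+ at each state
        v = np.maximum(a, (a + beta * g) / (1.0 + beta))
    return v

print("Snell envelope        :", snell_envelope())
for beta in (1.0, 10.0, 100.0):
    print(f"penalized (beta={beta:>5}):", penalized_value(beta))
```

Running the script shows the penalized values increasing toward the backward-induction values as beta increases, which is the discrete analogue of the approximation result described in the abstract.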


2015 ◽  
Vol 26 (03) ◽  
pp. 1550028 ◽
Author(s):  
Bao Quoc Ta

Recently, a new technique for solving optimal stopping problems for Hunt processes was developed (see [S. Christensen, P. Salminen and B. Q. Ta, Optimal stopping of strong Markov processes, Stochastic Process. Appl. 123(3) (2013) 1138–1159]). The crucial feature of the approach is to utilize the representation of r-excessive functions as expected suprema. However, it seems difficult to apply the approach directly to some concrete cases, e.g. the one-sided problem for reflecting Brownian motion and the two-sided problem for Brownian motion. In this paper, we review and exploit this approach to find explicit solutions to the two problems above.
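For orientation, the following sketch checks numerically a classical one-sided problem for standard Brownian motion (not the reflected or two-sided problems solved in the paper): with reward g(x) = max(x, 0) and discount rate r, the optimal rule is known to stop at the threshold x* = 1/sqrt(2r), with value x* exp(sqrt(2r)(x - x*)) below the threshold. The discount rate, time step and horizon in the code are assumptions made for the illustration.

```python
# Hedged numerical illustration of a classical one-sided stopping problem for
# standard Brownian motion with reward g(x) = max(x, 0) and discount rate r.
# The known threshold rule stops at x* = 1/sqrt(2r); the Monte Carlo below
# approximately cross-checks the closed-form value under a discretized path.
import numpy as np

rng = np.random.default_rng(1)
r = 0.5                                   # discount rate (assumed)
x_star = 1.0 / np.sqrt(2.0 * r)           # classical optimal threshold

def mc_value(x0, n_paths=20_000, t_max=20.0, dt=1e-2):
    """Estimate E_x0[exp(-r*tau) g(X_tau)] for tau = first hitting time of x*."""
    x = np.full(n_paths, float(x0))
    tau = np.full(n_paths, t_max)         # hitting times, capped at t_max
    hit = np.zeros(n_paths, dtype=bool)
    t = 0.0
    while t < t_max and not hit.all():
        newly = (~hit) & (x >= x_star)    # paths crossing the threshold now
        tau[newly] = t
        hit |= newly
        x[~hit] += np.sqrt(dt) * rng.standard_normal((~hit).sum())
        t += dt
    # paths that never hit the threshold are simply cut off at t_max
    payoff = np.exp(-r * tau) * np.maximum(x, 0.0)
    return payoff.mean()

x0 = 0.0
print("closed-form value :", x_star * np.exp(np.sqrt(2.0 * r) * (x0 - x_star)))
print("Monte Carlo value :", mc_value(x0))
```

The small upward bias of the estimate comes from discretization overshoot at the boundary; it is only meant to illustrate the threshold structure of one-sided solutions.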

