Optimal stopping of Markov chains and three abstract optimization problems

Stochastics ◽  
2011 ◽  
Vol 83 (4-6) ◽  
pp. 405-414 ◽  
Author(s):  
Isaac M. Sonin


2009 ◽  
Vol 46 (4) ◽  
pp. 1130-1145 ◽  
Author(s):  
G. Deligiannidis ◽  
H. Le ◽  
S. Utev

In this paper we present an explicit solution to the infinite-horizon optimal stopping problem for processes with stationary independent increments, where reward functions admit a certain representation in terms of the process at a random time. It is shown that it is optimal to stop at the first time the process crosses a level defined as the root of an equation obtained from the representation of the reward function. We obtain an explicit formula for the value function in terms of the infimum and supremum of the process, by making use of the Wiener–Hopf factorization. The main results are applied to several problems considered in the literature, to give a unified approach, and to new optimization problems from the finance industry.
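The optimal rule described above is a threshold rule: stop the first time the process crosses a fixed level. The following Monte Carlo sketch illustrates such a rule for a simple random walk; the level, reward function, and increment distribution are illustrative placeholders, not the paper's construction (which derives the level analytically from the representation of the reward function and the Wiener–Hopf factorization).

```python
import random

def first_crossing_reward(level, reward, step, beta=0.99,
                          n_paths=2000, horizon=500, seed=0):
    """Monte Carlo estimate of the expected discounted reward when a
    random walk started at 0 is stopped at the first time it crosses
    `level` (or at `horizon`, as a truncation of the infinite horizon)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = 0.0, 0
        while x < level and t < horizon:
            x += step(rng)
            t += 1
        total += beta ** t * reward(x)
    return total / n_paths

# Illustrative choices: symmetric +/-1 increments, reward equal to the state.
est = first_crossing_reward(level=3.0, reward=lambda x: x,
                            step=lambda rng: rng.choice([-1.0, 1.0]))
```

For this symmetric walk the estimate is close to the closed-form value 3 * phi**3 with phi = (1 - (1 - beta**2)**0.5) / beta, the discount factor accumulated while hitting one level up.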


Mathematics ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 123 ◽  
Author(s):  
Bernardo D’Auria ◽  
Alessandro Ferriero

In this paper, we study the optimal stopping-time problems related to a class of Itô diffusions, modeling for example an investment gain, for which the terminal value is known a priori. This could be the case with insider trading or with the pinning at expiration of stock options. We give the explicit solution to these optimization problems and, in particular, we provide a class of processes whose optimal barrier has the same form as that of the Brownian bridge. These processes may be a possible alternative to the Brownian bridge in practice, as they could better model real applications. Moreover, we discuss the existence of a process with a prescribed curve as its optimal barrier, for any given (decreasing) curve. This gives a modeling approach for the optimal liquidation time, i.e., the optimal time at which an investor should liquidate a position to maximize the gain.
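The benchmark case mentioned above is the standard Brownian bridge, whose optimal barrier is the square-root curve b(t) = alpha * sqrt(T - t) with alpha approximately 0.84 (a classical result going back to Shepp). The sketch below, with illustrative discretization and sample-size parameters, checks this numerically by comparing the simulated value of the square-root barrier rule at the near-optimal coefficient against a deliberately too-high one; it is not the paper's derivation.

```python
import math
import random

def bridge_barrier_value(alpha, T=1.0, n_steps=500, n_paths=4000, seed=1):
    """Monte Carlo value of stopping a standard Brownian bridge (pinned
    at 0 at time T) at the first time it reaches the square-root barrier
    b(t) = alpha * sqrt(T - t); unstopped paths are stopped at expiry."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x, stopped = 0.0, False
        for k in range(n_steps):
            t = k * dt
            if x >= alpha * math.sqrt(T - t):
                total += x
                stopped = True
                break
            # Euler step for the bridge SDE: dX = -X/(T-t) dt + dW.
            x += -x / (T - t) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if not stopped:
            total += x  # forced stop at expiry, where the bridge pins near 0
    return total / n_paths

v = bridge_barrier_value(0.8399)  # near the classical optimal coefficient
```

Raising the coefficient well above 0.84 (e.g. to 2.0) postpones the crossing to times where the barrier is already small, and the simulated value drops accordingly.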


2010 ◽  
Vol 42 (1) ◽  
pp. 158-182 ◽  
Author(s):  
Kurt Helmes ◽  
Richard H. Stockbridge

A new approach to the solution of optimal stopping problems for one-dimensional diffusions is developed. It arises by imbedding the stochastic problem in a linear programming problem over a space of measures. Optimizing over a smaller class of stopping rules provides a lower bound on the value of the original problem. Then the weak duality of a restricted form of the dual linear program provides an upper bound on the value. An explicit formula for the reward earned using a two-point hitting time stopping rule allows us to prove strong duality between these problems and, therefore, allows us to either optimize over these simpler stopping rules or to solve the restricted dual program. Each optimization problem is parameterized by the initial value of the diffusion and, thus, we are able to construct the value function by solving the family of optimization problems. This methodology requires little regularity of the terminal reward function. When the reward function is smooth, the optimal stopping locations are shown to satisfy the smooth pasting principle. The procedure is illustrated using two examples.
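A two-point hitting rule stops the diffusion at the first exit from an interval (a, b); for the simplest case of driftless Brownian motion without discounting, the exit happens at b with probability (x - a)/(b - a), so the expected reward has a closed form and optimizing over (a, b) recovers the smallest concave majorant of the reward. The grid search below is a sketch of that special case only; the paper's measure-theoretic linear programs and duality argument are far more general.

```python
def two_point_value(g, x, a, b):
    """Expected reward for driftless Brownian motion started at x in [a, b],
    stopped at the first exit: it exits at b with probability (x-a)/(b-a)."""
    p = (x - a) / (b - a)
    return (1 - p) * g(a) + p * g(b)

def value_by_two_point_rules(g, x, lo, hi, n=200):
    """Optimize over two-point rules a <= x < b on a grid; for driftless
    Brownian motion this approximates the smallest concave majorant of g
    at x, which is the value function of the stopping problem."""
    grid = [lo + (hi - lo) * i / n for i in range(n + 1)]
    best = g(x)  # stopping immediately is always available
    for a in (p for p in grid if p <= x):
        for b in (q for q in grid if q > x):
            best = max(best, two_point_value(g, x, a, b))
    return best

# Convex reward g(x) = x^2 on [0, 1]: the concave majorant is the chord
# y = x, so the value at x = 0.5 is 0.5 rather than g(0.5) = 0.25.
v = value_by_two_point_rules(lambda s: s * s, 0.5, 0.0, 1.0)
```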


2003 ◽  
Vol 35 (2) ◽  
pp. 449-476 ◽  
Author(s):  
G. Yin ◽  
Q. Zhang ◽  
G. Badowski

This work is devoted to asymptotic properties of singularly perturbed Markov chains in discrete time. The motivation stems from applications in discrete-time control and optimization problems, manufacturing and production planning, stochastic networks, and communication systems, in which finite-state Markov chains are used to model large-scale and complex systems. To reduce the complexity of the underlying system, the states in each recurrent class are aggregated into a single state. Although the aggregated process may not be Markovian, its continuous-time interpolation converges to a continuous-time Markov chain whose generator is a function determined by the invariant measures of the recurrent states. Sequences of occupation measures are defined. A mean square estimate on a sequence of unscaled occupation measures is obtained. Furthermore, it is proved that a suitably scaled sequence of occupation measures converges to a switching diffusion.
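A toy numerical illustration of the aggregation step: take a four-state chain with two recurrent classes, {0, 1} and {2, 3}, coupled only through transitions of probability eps, and sum the stationary mass within each class to get the distribution over aggregated states. The specific matrix below is a made-up example (here every state leaves its class with the same probability eps, so the aggregated distribution is exactly (1/2, 1/2)); it sketches the idea, not the paper's singular-perturbation asymptotics.

```python
def stationary(P, n_iter=10000):
    """Stationary distribution of a stochastic matrix by power iteration."""
    n = len(P)
    mu = [1.0 / n] * n
    for _ in range(n_iter):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

# Two weakly coupled recurrent classes: within-class transitions are O(1),
# between-class transitions are O(eps).
eps = 0.01
P = [[0.5 - eps, 0.5,       eps,       0.0],
     [0.3,       0.7 - eps, 0.0,       eps],
     [eps,       0.0,       0.2 - eps, 0.8],
     [0.0,       eps,       0.6,       0.4 - eps]]

mu = stationary(P)
# Aggregate each recurrent class into a single state.
aggregated = [mu[0] + mu[1], mu[2] + mu[3]]
```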


1968 ◽  
Vol 39 (6) ◽  
pp. 1905-1912 ◽  
Author(s):  
Alberto Ruiz-Moncayo


2015 ◽  
Vol 47 (2) ◽  
pp. 378-401 ◽  
Author(s):  
B. Eriksson ◽  
M. R. Pistorius

This paper is concerned with the solution of the optimal stopping problem associated to the value of American options driven by continuous-time Markov chains. The value function of an American option in this setting is characterised as the unique solution (in a distributional sense) of a system of variational inequalities. Furthermore, with continuous and smooth fit principles not applicable in this discrete state-space setting, a novel explicit characterisation is provided of the optimal stopping boundary in terms of the generator of the underlying Markov chain. Subsequently, an algorithm is presented for the valuation of American options under Markov chain models. By application to a suitably chosen sequence of Markov chains, the algorithm provides an approximate valuation of an American option under a class of Markov models that includes diffusion models, exponential Lévy models, and stochastic differential equations driven by Lévy processes. Numerical experiments for a range of different models suggest that the approximation algorithm is flexible and accurate. A proof of convergence is also provided.
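In discrete time, the variational-inequality characterisation reduces to the fixed-point equation V = max(payoff, beta * P V), which can be solved by value iteration; the stopping region is then the set where V equals the payoff. The chain, strike, and discount factor below are illustrative placeholders for a discrete-time analogue, not the paper's continuous-time algorithm.

```python
def american_value(P, payoff, beta=0.99, tol=1e-10, max_iter=100000):
    """Value of an optimal stopping problem on a finite Markov chain by
    fixed-point iteration of V = max(payoff, beta * P V), the discrete
    analogue of the variational-inequality characterisation."""
    n = len(P)
    V = list(payoff)
    for _ in range(max_iter):
        W = [max(payoff[i], beta * sum(P[i][j] * V[j] for j in range(n)))
             for i in range(n)]
        if max(abs(W[i] - V[i]) for i in range(n)) < tol:
            return W
        V = W
    return V

# Toy model: birth-death chain on prices {1,...,5} with reflecting ends,
# American-put payoff (K - s)+ with strike K = 3.
states = [1, 2, 3, 4, 5]
P = [[0.5 if j in (max(i - 1, 0), min(i + 1, 4)) else 0.0
      for j in range(5)] for i in range(5)]
payoff = [max(3 - s, 0) for s in states]
V = american_value(P, payoff)
```

At the lowest price the payoff (2) exceeds any discounted continuation value, so that state lies in the stopping region; at the strike itself the continuation value is strictly positive even though immediate exercise pays nothing.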

