Markov Policy
Recently Published Documents

Total documents: 5 (five years: 2)
H-index: 2 (five years: 1)

Author(s): Eugene A. Feinberg, Manasa Mandava, Albert N. Shiryaev

One of the basic facts known for discrete-time Markov decision processes is that, if the probability distribution of the initial state is fixed, then for every policy it is easy to construct a (randomized) Markov policy with the same marginal distributions of state-action pairs as the original policy. This equality of marginal distributions implies that the values of the major objective criteria, including expected discounted total costs and average rewards per unit time, are equal for the two policies. This paper investigates the validity of the analogous fact for continuous-time jump Markov decision processes (CTJMDPs). It is shown that the equality of marginal distributions holds for a CTJMDP if the corresponding Markov policy defines a nonexplosive jump Markov process. If this Markov process is explosive, then at each time instant the marginal probability that the state-action pair belongs to a given measurable set is no greater under the constructed Markov policy than under the original policy. These results are then applied to CTJMDPs with expected discounted total costs and with average costs per unit time. For both criteria it is shown that, if the initial state distribution is fixed, then for every policy there exists a Markov policy with the same or better value of the objective function.
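A minimal sketch of the discrete-time construction the abstract refers to (the classical argument, stated here in generic notation rather than the paper's): for an initial distribution $\mu$ and an arbitrary policy $\pi$, define the randomized Markov policy

$$\varphi_t(a \mid x) = \mathbb{P}^{\pi}_{\mu}(a_t = a \mid x_t = x), \qquad t = 0, 1, 2, \ldots$$

An induction on $t$ then gives the equality of state-action marginals,

$$\mathbb{P}^{\varphi}_{\mu}(x_t = x,\, a_t = a) = \mathbb{P}^{\pi}_{\mu}(x_t = x,\, a_t = a) \quad \text{for all } t,$$

which is exactly why criteria that depend only on these marginals, such as expected discounted total costs and average rewards per unit time, coincide for $\pi$ and $\varphi$.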


2019, Vol. 75(3), pp. 421-455
Author(s): Olli-Pekka Kuusela, Jussi Lintunen

Abstract: We examine the planner's dynamic regulation problem in an emission trading system (ETS) with allowance banking. The planner sets the emissions cap for the next period after the current-period allowance market has cleared, but before knowing the next period's abatement-cost realization. This creates a time-consistency problem when banking is possible. We examine two policies to overcome the consistency problem: a commitment solution and the Markov perfect solution. We show that the endogenous price floor generated by banking demand becomes an integral feature of both policies. Hence, they are best described as hybrid policies that combine elements of emissions taxes and tradable allowances. This reveals new welfare implications that bear on instrument choice in the traditional prices-versus-quantities setup. We compare the expected welfare outcomes of four policy instruments: the commitment policy, the Markov policy, a Pigouvian tax, and a no-banking ETS. We show that allowing banking can yield welfare gains relative to tax and quantity regulation, with or without commitment.
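One way to see the endogenous price floor the abstract mentions, in a generic banking model rather than the paper's specific setup (the notation $p_t$, $b_{t+1}$, $\beta$ below is an illustrative assumption): with a nonnegative allowance bank $b_{t+1} \ge 0$ carried into the next period and a discount factor $\beta$, intertemporal arbitrage by firms requires

$$p_t \ge \beta\, \mathbb{E}_t[p_{t+1}], \qquad \text{with equality whenever } b_{t+1} > 0,$$

so whenever banking is active the current allowance price cannot fall below the discounted expected future price. This is the floor that, per the abstract, becomes an integral feature of both the commitment and the Markov perfect policies.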


2011, Vol. 25(3), pp. 307-342
Author(s): Dinard van der Laan

In this article we study Markov decision process (MDP) problems with the restriction that, at each decision epoch, only a finite number of given Markov decision rules are admissible. For example, the set of admissible Markov decision rules ${\cal D}$ could consist of easily implementable decision rules. Moreover, many open-loop control problems can be modeled as an MDP with such a restriction on the admissible decision rules. Within this class of policies, optimal policies are generally nonstationary, and it is difficult to prove that a given policy is optimal. We give an example with two admissible decision rules, ${\cal D} = \{d_1, d_2\}$, for which we conjecture that the nonstationary periodic Markov policy determined by the period cycle $(d_1, d_1, d_2, d_1, d_2, d_1, d_2, d_1, d_2)$ is optimal. This conjecture is supported by results that we obtain on the structure of optimal ${\cal D}$-Markov policies in general. We also present numerical results that provide additional support for the conjecture in the particular example considered.
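A generic way to probe such a conjecture numerically (not the authors' code; the three-state chain, the transition matrices P, the reward vectors r, and the function average_reward below are hypothetical placeholders) is to push the state distribution forward through the cycle of decision rules and average the rewards collected along the way:

import numpy as np

# Hypothetical data: two admissible decision rules d1, d2 on a 3-state chain.
# P[d] is the transition matrix induced by rule d, r[d] the reward vector.
P = {
    "d1": np.array([[0.9, 0.1, 0.0],
                    [0.5, 0.4, 0.1],
                    [0.2, 0.3, 0.5]]),
    "d2": np.array([[0.6, 0.3, 0.1],
                    [0.1, 0.8, 0.1],
                    [0.0, 0.4, 0.6]]),
}
r = {"d1": np.array([1.0, 0.5, 0.0]),
     "d2": np.array([0.2, 1.2, 0.8])}

def average_reward(cycle, mu0, horizon=100_000):
    """Long-run average reward of the periodic Markov policy that applies
    the decision rules in `cycle` repeatedly, starting from distribution mu0."""
    mu, total = mu0.copy(), 0.0
    for t in range(horizon):
        d = cycle[t % len(cycle)]
        total += mu @ r[d]   # expected reward collected at epoch t
        mu = mu @ P[d]       # push the state distribution forward
    return total / horizon

mu0 = np.array([1.0, 0.0, 0.0])
# Compare the two stationary rules with the conjectured periodic cycle.
for cycle in (["d1"], ["d2"],
              ["d1", "d1", "d2", "d1", "d2", "d1", "d2", "d1", "d2"]):
    print(cycle, average_reward(cycle, mu0))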


1991, Vol. 28(2), pp. 480-486
Author(s): Richard H. Stockbridge

A Markov queueing system with heterogeneous servers is analyzed under a long-run average criterion. A direct proof of the optimality of a stationary Markov policy is given using martingale methods, and the problem is simultaneously reduced to a linear programming problem. Analysis of the LP for a system with finite queue length shows that the optimal policy is not always of threshold type.
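For orientation, the LP reduction referred to here takes, in the familiar discrete-time analogue with finite state and action spaces (a generic statement, not the paper's continuous-time formulation), the occupation-measure form

$$\min_{\rho \ge 0} \sum_{x,a} c(x,a)\,\rho(x,a) \quad \text{s.t.} \quad \sum_{a} \rho(y,a) = \sum_{x,a} p(y \mid x,a)\,\rho(x,a) \ \ \forall y, \qquad \sum_{x,a} \rho(x,a) = 1,$$

where $\rho(x,a)$ is the long-run fraction of time the system spends in state $x$ while action $a$ is used; a stationary Markov policy is then recovered by choosing action $a$ in state $x$ with probability $\rho(x,a) / \sum_{a'} \rho(x,a')$. The paper itself works in continuous time, where the analogous constraints involve the generator of the controlled process.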



