Nonzero-Sum Risk-Sensitive Stochastic Games on a Countable State Space

2018 ◽  
Vol 43 (2) ◽  
pp. 516-532 ◽  
Author(s):  
Arnab Basu ◽  
Mrinal K. Ghosh
2017 ◽  
Vol 32 (4) ◽  
pp. 626-639 ◽  
Author(s):  
Zhiyan Shi ◽  
Pingping Zhong ◽  
Yan Fan

In this paper, we give the definition of tree-indexed Markov chains in a random environment with a countable state space, and then study the realization of a Markov chain indexed by a tree in a random environment. Finally, we prove the strong law of large numbers and the Shannon–McMillan theorem for Markov chains indexed by a Cayley tree in a Markovian environment with a countable state space.
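
As an informal illustration of the object these theorems concern, the following sketch simulates a Markov chain indexed by a binary Cayley-type tree with a fixed (non-random) environment and evaluates the empirical entropy density whose almost-sure convergence the Shannon–McMillan theorem addresses. The kernel P, the initial law mu, the branching number and the depth are placeholder choices, not taken from the paper.

```python
# Minimal sketch, not the authors' construction: a tree-indexed Markov chain with a
# fixed environment, and the empirical entropy density of the generated configuration.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # transition kernel on the two-point state space {0, 1}
mu = np.array([0.5, 0.5])         # initial law at the root
n_branch, depth = 2, 12           # binary Cayley-type tree, 12 generations

root = rng.choice(2, p=mu)
log_lik = np.log(mu[root])
level, n_vertices = [root], 1

for _ in range(depth):
    nxt = []
    for s in level:               # every vertex spawns n_branch children
        for _ in range(n_branch):
            c = rng.choice(2, p=P[s])
            log_lik += np.log(P[s, c])
            nxt.append(c)
    level = nxt
    n_vertices += len(nxt)

# Empirical entropy density -(1/|T_n|) log P(X_{T_n}); the Shannon-McMillan theorem
# concerns the a.s. limit of this quantity (in the richer Markovian-environment setting).
print(-log_lik / n_vertices)
```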


1987 ◽  
Vol 24 (02) ◽  
pp. 347-354 ◽  
Author(s):  
Guy Fayolle ◽  
Rudolph Iasnogorodski

In this paper, we present some simple new criteria for the non-ergodicity of a stochastic process (Yn), n ≥ 0, in discrete time, when either the upward or the downward jumps are majorized by i.i.d. random variables. This situation arises in many practical settings in which the (Yn) are functionals of some Markov chain with countable state space. An application to the exponential back-off protocol is described.
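
The flavour of such a criterion can be seen in a toy simulation (not the paper's criteria, and not the back-off model itself): a process on the nonnegative integers whose downward jumps are bounded by 1, hence majorized by an i.i.d. constant, while the mean drift is positive away from zero, so the process cannot be ergodic. The rates below are invented for the demonstration.

```python
# Toy positive-drift example, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
lam, p_down = 0.6, 0.5            # mean upward jump vs. a downward jump of at most 1
Y, increments = 0, []

for _ in range(20_000):
    up = rng.poisson(lam)                              # unbounded upward jump
    down = int(rng.random() < p_down) if Y > 0 else 0  # downward jump majorized by 1
    Y_next = Y + up - down
    increments.append(Y_next - Y)
    Y = Y_next

# Mean drift lam - p_down = 0.1 > 0 away from 0: the path grows roughly linearly,
# which is the kind of behaviour non-ergodicity criteria are designed to detect.
print(Y, np.mean(increments))
```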


Mathematics ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 253 ◽  
Author(s):  
Alexander Zeifman ◽  
Victor Korolev ◽  
Yacov Satin

This paper is largely a review. It considers two main methods used to study stability and to obtain quantitative estimates of perturbations of (inhomogeneous) Markov chains with continuous time and a finite or countable state space. An approach to constructing perturbation estimates is described for the five main classes of such chains associated with queueing models. Several specific models are considered for which the limiting characteristics and perturbation bounds for admissible “perturbed” processes are calculated.
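
As a purely numerical companion to this (the review itself works with analytic bounds), one can solve the forward Kolmogorov equations for a truncated inhomogeneous birth–death chain and for a perturbed copy, then inspect the resulting gap in the state probabilities. The rates, truncation level and perturbation size below are placeholders, not taken from the paper.

```python
# Numerical illustration only: p'(t) = p(t) Q(t) for an inhomogeneous birth-death
# chain truncated at level K, compared with a perturbed copy of the same chain.
import numpy as np
from scipy.integrate import solve_ivp

K = 20                                   # truncation level of the state space

def Q(t, eps=0.0):
    lam = 1.0 + 0.5 * np.sin(t) + eps    # time-dependent birth rate (+ perturbation)
    mu = 1.5                             # death rate
    q = np.zeros((K + 1, K + 1))
    for i in range(K):
        q[i, i + 1] = lam
        q[i + 1, i] = mu
    np.fill_diagonal(q, -q.sum(axis=1))  # rows of a generator sum to zero
    return q

def rhs(t, p, eps):
    return p @ Q(t, eps)                 # forward Kolmogorov equations

p0 = np.zeros(K + 1); p0[0] = 1.0
T = np.linspace(0.0, 30.0, 301)
sol = solve_ivp(rhs, (0, 30), p0, t_eval=T, args=(0.0,), rtol=1e-8)
sol_pert = solve_ivp(rhs, (0, 30), p0, t_eval=T, args=(0.05,), rtol=1e-8)

# Total-variation-type gap between the original and the perturbed chain over time.
gap = 0.5 * np.abs(sol.y - sol_pert.y).sum(axis=0)
print(gap.max())
```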


1973 ◽  
Vol 73 (1) ◽  
pp. 119-138 ◽  
Author(s):  
Gerald S. Goodman ◽  
S. Johansen

1. Summary. We shall consider a non-stationary Markov chain on a countable state space E. The transition probabilities {P(s, t), 0 ≤ s ≤ t < t₀ ≤ ∞} are assumed to be continuous in (s, t), uniformly in the state i ∈ E.
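
For orientation, the standard two-parameter setting behind such a family (Chapman–Kolmogorov consistency, together with one natural reading of the continuity assumption quoted above) can be written as follows; the notation is ours and the display is not taken verbatim from the paper.

```latex
\[
  P(s,u) \;=\; P(s,t)\,P(t,u), \qquad 0 \le s \le t \le u < t_0, \qquad P(s,s) = I,
\]
\[
  \lim_{(s',t') \to (s,t)} \; \sup_{i \in E} \,
  \bigl| P_{ij}(s',t') - P_{ij}(s,t) \bigr| \;=\; 0
  \quad \text{for each } j \in E,
\]
% i.e. P(s,t) is continuous in (s,t) uniformly in the state i, as stated in the summary.
```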


1991 ◽  
Vol 5 (4) ◽  
pp. 463-475 ◽  
Author(s):  
Linn I. Sennott

A Markov decision chain with countable state space incurs two types of costs: an operating cost and a holding cost. The objective is to minimize the expected discounted operating cost, subject to a constraint on the expected discounted holding cost. The existence of an optimal randomized simple policy is proved. This is a policy that randomizes between two stationary policies that differ in at most one state. Several examples from the control of discrete-time queueing systems are discussed.
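
To make the policy class concrete, here is a hedged sketch of how a randomized simple policy acts (the object whose existence the paper proves, not its construction): two stationary policies that agree everywhere except at one state are mixed there with a fixed bias. The policies, the distinguished state and the mixing probability are invented for illustration.

```python
# Sketch of a randomized simple policy: mix two stationary policies at a single state.
import random

f = {0: "serve", 1: "serve", 2: "idle"}    # stationary policy 1 (state -> action)
g = {0: "serve", 1: "serve", 2: "serve"}   # stationary policy 2, differs only at state 2
s_star, q = 2, 0.3                         # the single differing state and the bias

def randomized_simple_policy(state):
    """Follow f and g where they agree; randomize with probability q at s_star."""
    if state != s_star:
        return f[state]                    # f and g coincide here
    return g[state] if random.random() < q else f[state]

# Example: the long-run frequency of "serve" in state 2 is roughly q = 0.3.
print(sum(randomized_simple_policy(2) == "serve" for _ in range(10_000)) / 10_000)
```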


1978 ◽  
Vol 10 (2) ◽  
pp. 452-471 ◽  
Author(s):  
A. Federgruen

This paper considers non-cooperative N-person stochastic games with a countable state space and compact metric action spaces. We concentrate on the average-return-per-unit-time criterion, for which the existence of an equilibrium policy is established under a number of recurrence conditions on the transition probability matrices associated with the stationary policies. These results are obtained by establishing the existence of total discounted return equilibrium policies for each discount factor α ∈ [0, 1), and by showing that, under each of the aforementioned recurrence conditions, average-return equilibrium policies arise as limits of sequences of discounted-return equilibrium policies with discount factor tending to one. Finally, we review and extend the results that are known for the case where both the state space and the action spaces are finite.
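
For reference, the two criteria mentioned above are, in standard notation (ours, not necessarily the paper's), the α-discounted return and the average return per unit time for player k under a policy tuple π:

```latex
\[
  V_{\alpha}^{k}(i,\pi) \;=\;
  \mathbb{E}_{i}^{\pi}\!\Bigl[\,\sum_{t=0}^{\infty} \alpha^{t}\, r_{k}(X_t, A_t)\Bigr],
  \qquad \alpha \in [0,1),
\]
\[
  \phi^{k}(i,\pi) \;=\; \liminf_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}_{i}^{\pi}\!\Bigl[\,\sum_{t=0}^{T-1} r_{k}(X_t, A_t)\Bigr],
\]
% r_k is player k's one-step reward; the paper obtains average-return equilibria as
% limits of alpha-discounted equilibria as alpha tends to one.
```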

