Optimal control of a truncated general immigration process through total catastrophes

1999 ◽  
Vol 36 (2) ◽  
pp. 461-472 ◽
Author(s):  
E. G. Kyriakidis

A Markov decision model is considered for the control of a truncated general immigration process, which represents a pest population, by the introduction of total catastrophes. The optimality criterion is that of minimizing the expected long-run average cost per unit time. Firstly, a necessary and sufficient condition is found under which the policy of never controlling is optimal. If this condition fails, a parametric analysis, in which a fictitious parameter is varied over the entire real line, is used to establish the optimality of a control-limit policy. Furthermore, an efficient Markov decision algorithm operating on the class of control-limit policies is developed for the computation of the optimal policy.
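The abstract does not reproduce the model's data, but the quantity being minimized can be illustrated numerically. The sketch below is purely illustrative: it evaluates the long-run average cost of each control-limit policy for a truncated immigration process with assumed single arrivals, an assumed catastrophe rate mu, holding-cost rates c(i) = i and a lump catastrophe cost K, and it picks the best threshold by enumeration rather than by the policy-improvement algorithm developed in the paper.

```python
# Minimal sketch (not the paper's algorithm): comparing control-limit policies
# for a truncated immigration process controlled by total catastrophes.
# All rates and costs below (lam, mu, K, c, N) are illustrative assumptions.
import numpy as np

N = 20          # truncation level of the pest population
lam = 1.0       # immigration rate (single arrivals assumed; the paper allows a general process)
mu = 2.0        # rate at which an introduced catastrophe takes effect
K = 15.0        # lump cost per catastrophe
c = np.arange(N + 1, dtype=float)   # assumed holding-cost rate c(i) = i

def average_cost(m):
    """Long-run average cost per unit time of the control-limit policy that
    introduces a total catastrophe (population -> 0) in every state i >= m."""
    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            Q[i, i + 1] += lam       # one more immigrant arrives
        if i >= m:
            Q[i, 0] += mu            # catastrophe wipes out the whole population
        Q[i, i] -= Q[i].sum()        # generator diagonal
    # stationary distribution: pi Q = 0 with pi summing to one
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi @ c + K * mu * pi[m:].sum()

best = min(range(1, N + 1), key=average_cost)
print("best control limit:", best, "average cost:", average_cost(best))
```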


2006 ◽  
Vol 2006 ◽  
pp. 1-12 ◽
Author(s):  
E. G. Kyriakidis

This paper is concerned with the problem of controlling a truncated general immigration process, which represents a population of harmful individuals, by the introduction of a predator. If the parameters of the model satisfy some mild conditions, the existence of a control-limit policy that is average-cost optimal is proved. The proof is based on the uniformization technique and on the variation of a fictitious parameter over the entire real line. Furthermore, an efficient Markov decision algorithm is developed that generates a sequence of improving control-limit policies converging to the optimal policy.
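The proof technique mentioned above relies on uniformization, i.e. converting the continuous-time decision model into an equivalent discrete-time one by adding fictitious self-transitions. The sketch below shows that transformation for generic transition rates and cost rates; the toy data are placeholders and the paper's predator model is not reproduced.

```python
# Sketch of the uniformization step: a continuous-time Markov decision model is
# turned into a discrete-time one by sampling at the events of a Poisson process
# with rate Lam that dominates every exit rate.  Data below are assumptions.
import numpy as np

def uniformize(rates, cost_rate, states, actions, Lam):
    """rates[(i, a)] is a dict {j: q_ij(a)} of transition rates out of i under a.
    Returns discrete-time transition probabilities P[(i, a)] and per-step costs;
    the average cost per unit time in the original model equals Lam times the
    average cost per step in the uniformized model."""
    P, step_cost = {}, {}
    for i in states:
        for a in actions:
            q = rates.get((i, a), {})
            total = sum(q.values())
            assert total <= Lam, "Lam must dominate every total exit rate"
            probs = {j: r / Lam for j, r in q.items()}
            probs[i] = probs.get(i, 0.0) + 1.0 - total / Lam   # fictitious self-loop
            P[(i, a)] = probs
            step_cost[(i, a)] = cost_rate(i, a) / Lam
    return P, step_cost

# toy two-state example with actions "wait" and "predator" (illustrative only)
states, actions = [0, 1], ["wait", "predator"]
rates = {(0, "wait"): {1: 1.0}, (0, "predator"): {1: 1.0},
         (1, "wait"): {}, (1, "predator"): {0: 3.0}}
P, step_cost = uniformize(rates, lambda i, a: i + (4.0 if a == "predator" else 0.0),
                          states, actions, Lam=5.0)
print(P, step_cost)
```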


1975 ◽  
Vol 12 (2) ◽  
pp. 298-305 ◽  
Author(s):  
Arie Hordijk ◽  
Paul J. Schweitzer ◽  
Henk Tijms

This paper considers the discrete time Markov decision model with a denumerable state space and finite action space. Under certain conditions it is proved that the minimal total expected cost for a planning horizon of n epochs minus n times the minimal long-run average expected cost per unit time has a finite limit as n → ∞ for each initial state.
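The result can be illustrated on a small example. The following sketch uses a randomly generated finite MDP (an assumption; the paper allows a denumerable state space), computes the minimal n-horizon costs V_n by backward induction, and shows that V_n(i) - n g settles down to a finite vector, where g is the minimal long-run average expected cost per unit time.

```python
# Numerical illustration (assumed finite, randomly generated MDP) of the result
# that V_n(i) - n * g has a finite limit as n grows, where V_n is the minimal
# n-horizon expected cost and g the minimal long-run average cost per unit time.
import numpy as np

rng = np.random.default_rng(0)
S, A = 6, 3
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[i, a] is a transition probability row
c = rng.uniform(0.0, 10.0, size=(S, A))      # one-step costs

V = np.zeros(S)
prev_V = np.zeros(S)
for n in range(1, 2001):
    # V_n(i) = min_a [ c(i,a) + sum_j P(i,a,j) V_{n-1}(j) ]
    prev_V, V = V, np.min(c + np.einsum('iaj,j->ia', P, V), axis=1)

g = (V - prev_V).mean()          # each component of V_n - V_{n-1} approaches g
relative_values = V - 2000 * g   # this vector stabilizes as the horizon grows
print("estimated minimal average cost g:", g)
print("V_n(i) - n*g:", relative_values)
```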


1970 ◽  
Vol 7 (3) ◽  
pp. 649-656 ◽  
Author(s):  
Sheldon M. Ross

The semi-Markov decision model is considered under the criterion of long-run average cost. A new criterion is introduced which, for any policy, considers the limit of the expected cost incurred during the first n transitions divided by the expected length of the first n transitions. Conditions guaranteeing that an optimal stationary (non-randomized) policy exists are then presented. It is also shown that the above criterion is equivalent to the usual one under certain conditions.
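For a fixed stationary policy in a finite model (finiteness is an assumption made purely for illustration), the ratio criterion reduces to the expected cost per transition divided by the expected sojourn time, both averaged under the stationary distribution of the embedded chain. The sketch below computes this ratio for assumed toy data.

```python
# Sketch of the ratio criterion for a fixed stationary policy in a finite
# semi-Markov decision model:
#   sum_i pi_i * c(i) / sum_i pi_i * tau(i),
# where pi is the stationary distribution of the embedded chain, c(i) the
# expected one-transition cost in state i and tau(i) the expected sojourn time.
# The embedded chain and the cost/sojourn data below are illustrative assumptions.
import numpy as np

def ratio_average_cost(P, cost, sojourn):
    """P[i, j]: embedded transition probabilities under the fixed policy;
    cost[i]: expected cost per transition out of state i;
    sojourn[i]: expected holding time in state i."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]   # stationary distribution of the embedded chain
    return pi @ cost / (pi @ sojourn)

P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
print(ratio_average_cost(P, cost=np.array([2.0, 5.0]), sojourn=np.array([1.0, 0.5])))
```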

