Testing the Semi-Markov Model Using Monte Carlo Simulation Method for Predicting the Network Traffic

Author(s): Shirin Kordnoori, Hamidreza Mostafaei, Shaghayegh Kordnoori, Mohammadmohsen Ostadrahimi

Semi-Markov processes can be considered a generalization of both Markov and renewal processes. One of their principal characteristics is that, in contrast to Markov models, they represent systems whose evolution depends not only on the last visited state but also on the time elapsed since entering that state: semi-Markov processes replace the exponential distribution of the time intervals with an arbitrary distribution. In this paper we give a statistical approach to testing the semi-Markov hypothesis. Moreover, we describe a Monte Carlo algorithm able to simulate the trajectories of the semi-Markov chain. This simulation method is used to test the semi-Markov model by comparing and analyzing the results against empirical data. We introduce the network traffic database to which the Monte Carlo algorithm is applied, and compare the statistical characteristics of the real data with those of the synthetic data generated by the models. The semi-Markov and Markov models are also compared by computing the autocorrelation functions and the probability density functions of the real and simulated network traffic data. All comparisons indicate that the Markovian hypothesis is rejected in favor of the more general semi-Markov one. Finally, the interval transition probabilities, which provide the future predictions of the network traffic, are given.
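
As a hedged illustration of the kind of Monte Carlo algorithm described above, the Python sketch below simulates trajectories of a semi-Markov chain by drawing successive states from an embedded Markov chain and sojourn times from non-exponential distributions. The two-state embedded chain, the Weibull sojourn times and all parameter values are invented placeholders, not the model fitted to the network traffic data in the paper.

# Minimal sketch of Monte Carlo simulation of a semi-Markov chain.
# The embedded transition matrix P and the Weibull sojourn-time parameters
# are illustrative assumptions, not values estimated in the paper.
import numpy as np

rng = np.random.default_rng(1)

# Embedded (jump) chain: which state is visited next.
P = np.array([[0.0, 1.0],
              [0.6, 0.4]])                 # hypothetical 2-state example

# Sojourn-time distributions: unlike a Markov model, these need not be
# exponential; here each (current, next) pair gets its own Weibull law.
shape = np.array([[1.0, 2.5],
                  [0.7, 1.3]])             # Weibull shape parameters (assumed)
scale = np.array([[1.0, 3.0],
                  [2.0, 1.5]])             # Weibull scale parameters (assumed)

def simulate_semi_markov(t_max, state=0):
    """Return (jump times, visited states) of one trajectory up to t_max."""
    t, times, states = 0.0, [0.0], [state]
    while t < t_max:
        nxt = rng.choice(len(P), p=P[state])                     # embedded chain step
        t += scale[state, nxt] * rng.weibull(shape[state, nxt])  # non-exponential sojourn
        times.append(t)
        states.append(nxt)
        state = nxt
    return np.array(times), np.array(states)

times, states = simulate_semi_markov(t_max=1000.0)
print("number of jumps:", len(states) - 1)
print("fraction of time in each state:",
      np.bincount(states[:-1], weights=np.diff(times)) / times[-1])

Trajectories generated this way can then be compared with the empirical trace through autocorrelation functions and marginal densities, which is how the paper contrasts the semi-Markov and Markov fits.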

1965, Vol. 2 (2), pp. 269-285
Author(s): George H. Weiss, Marvin Zelen

This paper applies the theory of semi-Markov processes to the construction of a stochastic model for interpreting data obtained from clinical trials. The model characterizes the patient as being in one of a finite number of states at any given time, with an arbitrary probability distribution describing the length of stay in a state. Transitions between states are assumed to be chosen according to a stationary finite Markov chain. Other attempts have been made to develop stochastic models of clinical trials; however, these have all been essentially Markovian with constant transition probabilities, which implies that the distribution of time spent during a visit to a state is exponential (or geometric for discrete Markov chains). Markov models also need to assume that transitions in the state of a patient depend only on absolute time, whereas the semi-Markov model assumes that transitions depend on time relative to the patient. Thus semi-Markov models are applicable to degenerative diseases (cancer, acute leukemia), while Markov models with time-dependent transition probabilities are applicable to colds and epidemic diseases. In this paper Laplace transforms are obtained for (i) the probability of being in a state at time t, (ii) the probability distribution of reaching an absorbing state and (iii) the probability distributions of the first passage times from initial states to transient or absorbing states, transient to transient, and transient to absorbing. The model is applied to a clinical study of acute leukemia in which patients were treated with methotrexate and 6-mercaptopurine. The agreement between the data and the model is very good.
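
For readers who want the mechanism behind item (i), the interval transition probabilities of a semi-Markov process satisfy a Markov renewal equation whose Laplace transform has a closed matrix form. The display below is the standard version in generic notation (kernel Q, sojourn distribution H); it is a sketch of the general result, not necessarily the authors' exact formulation.

% phi_{ij}(t): probability of being in state j at time t, given the process
% entered state i at time 0; Q_{ik}(t): semi-Markov kernel;
% H_i(t) = \sum_k Q_{ik}(t): sojourn-time distribution in state i.
\phi_{ij}(t) = \delta_{ij}\bigl[1 - H_i(t)\bigr]
             + \sum_{k} \int_0^{t} \phi_{kj}(t-u)\, dQ_{ik}(u).

% Transforms (ordinary Laplace for phi, Laplace-Stieltjes for Q and H) turn
% the convolution into a product, giving in matrix form
\Phi^{*}(s) = \bigl[I - \tilde{Q}(s)\bigr]^{-1}
              \operatorname{diag}\!\Bigl(\tfrac{1 - \tilde{H}_i(s)}{s}\Bigr),
\qquad \tilde{H}_i(s) = \sum_k \tilde{Q}_{ik}(s).

Inverting the transform, analytically or numerically, then yields the state occupancy probabilities at time t.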


2014, Vol. 51 (1), pp. 13-36
Author(s): Giovanni Masala, Giuseppina Cannas, Marco Micocci

In this paper we apply a parametric semi-Markov process to model the dynamic evolution of HIV-1 infected patients. The seriousness of the infection is represented by the CD4+ T-lymphocyte counts. For this purpose we introduce the main features of non-homogeneous semi-Markov models. After determining the transition probabilities and the waiting-time distributions in each state of the disease, we solve the evolution equations of the process in order to estimate the interval transition probabilities. These quantities are of fundamental importance for clinical predictions. We also estimate the survival probabilities for HIV-infected patients and compare them across categories such as gender, age group and type of antiretroviral therapy. Finally, we attach a reward structure to the aforementioned semi-Markov processes in order to estimate clinical costs. For this purpose we generate random trajectories from the semi-Markov processes through Monte Carlo simulation. The proposed model is then applied to a large database provided by ISS (Istituto Superiore di Sanità, Rome, Italy), and all the quantities of interest are computed.
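
The reward attachment mentioned at the end lends itself to a short Monte Carlo sketch. Everything below (state labels, sojourn distributions, cost rates) is a made-up illustration of the technique under a homogeneous simplification; it is not the non-homogeneous model fitted to the ISS database.

# Sketch: attaching a reward (cost) structure to simulated semi-Markov trajectories.
# States, sojourn distributions and cost rates are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(7)

states = ["CD4>500", "CD4 200-500", "CD4<200", "AIDS"]       # illustrative disease states
P = np.array([[0.0, 0.8, 0.15, 0.05],
              [0.3, 0.0, 0.5,  0.2 ],
              [0.1, 0.3, 0.0,  0.6 ],
              [0.0, 0.0, 0.0,  1.0 ]])       # last state treated as absorbing here
mean_sojourn = np.array([24.0, 18.0, 9.0, np.inf])   # months, gamma-distributed below
cost_rate = np.array([100.0, 250.0, 600.0, 1200.0])  # cost per month in each state (assumed)

def simulate_cost(horizon, start=0, shape=2.0):
    """Accumulate the cost of one trajectory over [0, horizon] months."""
    t, i, cost = 0.0, start, 0.0
    while t < horizon:
        if np.isinf(mean_sojourn[i]):                 # absorbing state: pay until horizon
            cost += cost_rate[i] * (horizon - t)
            break
        stay = rng.gamma(shape, mean_sojourn[i] / shape)   # non-exponential sojourn
        stay = min(stay, horizon - t)
        cost += cost_rate[i] * stay
        t += stay
        i = rng.choice(len(P), p=P[i])
    return cost

costs = np.array([simulate_cost(horizon=120.0) for _ in range(10_000)])
print(f"estimated 10-year cost: mean={costs.mean():.0f}, 95% quantile={np.quantile(costs, 0.95):.0f}")

Averaging such simulated rewards over many trajectories is the basic mechanism by which expected clinical costs are estimated from a fitted semi-Markov model.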


1999, Vol. 36 (2), pp. 415-432
Author(s): Frank Ball

In this paper, central limit theorems for multivariate semi-Markov sequences and processes are obtained, both as the number of jumps of the associated Markov chain tends to infinity and, if appropriate, as the time for which the process has been running tends to infinity. The theorems are widely applicable, since many functions defined on Markov or semi-Markov processes can be analysed by exploiting appropriate embedded multivariate semi-Markov sequences. An application to a problem in ion channel modelling is described in detail. Other applications, including multivariate stationary reward processes, counting processes associated with Markov renewal processes, the interpretation of Markov chain Monte Carlo runs, and statistical inference on semi-Markov models, are briefly outlined.
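
Stripped of the paper's regularity conditions, the first type of result has the generic shape below, where S_N collects the multivariate additive functionals accumulated over the first N jumps of the embedded chain; the symbols are placeholders rather than the paper's notation.

N^{-1/2}\bigl(S_N - N\mu\bigr) \;\xrightarrow{d}\; \mathcal{N}(0,\Sigma)
\qquad (N \to \infty),

with an analogous statement, t^{-1/2}\bigl(S(t) - t\nu\bigr) \xrightarrow{d} \mathcal{N}(0,\Sigma'), as the running time t tends to infinity; here \mu and \nu are mean drift vectors and \Sigma, \Sigma' covariance matrices determined by the underlying semi-Markov law.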


1972, Vol. 9 (4), pp. 789-802
Author(s): Choong K. Cheong, Jozef L. Teugels

Let {Zt, t ≧ 0} be an irreducible regular semi-Markov process with transition probabilities Pij(t). Let f(t) be non-negative and non-decreasing to infinity, and let λ ≧ 0. This paper identifies a large set of functions f(t) with the solidarity property that convergence of the integral ∫ eλt f(t) Pij(t) dt for a specific pair of states i and j implies convergence of the integral for all pairs of states. Similar results are derived for the Markov renewal functions Mij(t). Among others it is shown that f(t) can be taken regularly varying.
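
Written out, the solidarity property stated in the abstract takes the form

\int_0^{\infty} e^{\lambda t} f(t)\, P_{ij}(t)\, dt < \infty
\ \text{for some pair } (i,j)
\quad\Longrightarrow\quad
\int_0^{\infty} e^{\lambda t} f(t)\, P_{kl}(t)\, dt < \infty
\ \text{for every pair } (k,l),

and a similar statement holds with the transition probabilities replaced by the Markov renewal functions M_{ij}(t).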


1972, Vol. 4 (2), pp. 133-146
Author(s): G. Gilbert

This paper develops two mathematical models of housing turnover in a neighborhood. The first of these draws upon the theory of non-homogeneous Markov processes and includes the effects of present neighborhood composition upon future turnover probabilities. The second model considers the turnover process as a Markov renewal process and therefore allows the inclusion of length of occupancy as a determinant of transition probabilities. Example calculations for both models are included, and procedures for using the models are outlined.
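
The second model's key feature, length of occupancy entering the turnover probabilities, can be illustrated in a few lines. The Weibull occupancy-time distributions, the household types and the parameter values below are invented for illustration and are not taken from the paper.

# Sketch: Markov renewal view of housing turnover, where the chance of a move
# depends on how long the household has occupied the dwelling.
# The Weibull tenure distributions and household types are assumptions.
import numpy as np

rng = np.random.default_rng(3)

household_types = ["owner", "renter"]
P = np.array([[0.7, 0.3],        # type of the incoming household given the outgoing one
              [0.4, 0.6]])       # (hypothetical embedded chain)
shape = np.array([1.8, 0.9])     # Weibull shape: >1 means move risk rises with tenure
scale = np.array([9.0, 3.0])     # years

def occupancy_spells(n_turnovers, start=0):
    """Simulate successive occupancy spells for one dwelling."""
    spells, t = [], start
    for _ in range(n_turnovers):
        length = scale[t] * rng.weibull(shape[t])   # tenure length before a move
        spells.append((household_types[t], length))
        t = rng.choice(2, p=P[t])                   # type of the next occupant
    return spells

for who, years in occupancy_spells(5):
    print(f"{who:6s} stayed {years:4.1f} years")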


2002, Vol. 10 (3), pp. 241-251
Author(s): R.J. Boys, D.A. Henderson

This paper describes a Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain. It extends previous work on homogeneous Markov chains to more general and applicable hidden Markov models. The method we describe uses a Markov chain Monte Carlo algorithm to obtain samples from the (posterior) distribution of both the order of Markov dependence in the observed sequence and the other governing model parameters. These samples allow coherent inferences to be made straightforwardly, in contrast to approaches based on information criteria. The methods are illustrated by their application to both simulated and real data sets.
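
As background for the order-determination step, the homogeneous building block that the paper extends can be written in closed form: with independent Dirichlet priors on the transition rows of an order-r chain, the marginal likelihood of an observed sequence is a product of Dirichlet-multinomial terms. The sketch below computes posterior probabilities over the order for that simpler homogeneous case only; it is not the paper's MCMC scheme for the hidden Markov extension, and the prior settings are illustrative.

# Posterior over the order r of a homogeneous finite-state Markov chain,
# using Dirichlet(alpha) priors on each transition row (conjugate, closed form).
# This is the simpler building block, not the hidden-Markov MCMC of the paper.
import numpy as np
from scipy.special import gammaln
from collections import Counter

def log_marginal_likelihood(seq, r, n_states, alpha=1.0):
    """log p(sequence | order r), conditioning on the first r symbols."""
    counts = Counter()
    for t in range(r, len(seq)):
        counts[(tuple(seq[t - r:t]), seq[t])] += 1
    contexts = {}
    for (ctx, nxt), n in counts.items():
        contexts.setdefault(ctx, np.zeros(n_states))[nxt] += n
    logml = 0.0
    for n_vec in contexts.values():                 # Dirichlet-multinomial term per context
        logml += (gammaln(n_states * alpha) - gammaln(n_states * alpha + n_vec.sum())
                  + np.sum(gammaln(alpha + n_vec) - gammaln(alpha)))
    return logml

# Toy data: an order-2 binary chain, then posterior over r = 0..3 (uniform prior on r).
rng = np.random.default_rng(0)
seq = [0, 1]
for _ in range(2000):
    p1 = 0.9 if (seq[-2], seq[-1]) in {(0, 1), (1, 0)} else 0.2
    seq.append(int(rng.random() < p1))

logml = np.array([log_marginal_likelihood(seq, r, n_states=2) for r in range(4)])
post = np.exp(logml - logml.max()); post /= post.sum()
print("posterior over order r=0..3:", np.round(post, 3))

The conjugacy of the Dirichlet priors is what gives this closed form; the paper's hidden Markov extension requires MCMC because the modulating chain is unobserved.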


1976, Vol. 8 (3), pp. 531-547
Author(s): Esa Nummelin

In this paper the limit behaviour of α-recurrent Markov renewal processes and semi-Markov processes is studied by using the recent results on the concept of α-recurrence for Markov renewal processes. Section 1 contains the preliminary results, which are needed later in the paper. In Section 2 we consider the limit behaviour of the transition probabilities Pij (t) of an α-recurrent semi-Markov process. Section 4 deals with quasi-stationarity. Our results extend the results of Cheong (1968), (1970) and of Flaspohler and Holmes (1972) to the case in which the functions to be considered are directly Riemann integrable. We also try to correct the errors we have found in these papers. As a special case from our results we consider continuous-time Markov processes in Sections 3 and 5.
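
Two schematic displays indicate the kind of limits involved; they show only the general shape of such results and a standard definition, not the paper's precise hypotheses or constants. For an α-recurrent process the limit theorems are of the type

e^{\alpha t}\, P_{ij}(t) \;\longrightarrow\; c_{ij} \in (0, \infty)
\qquad (t \to \infty),

under direct Riemann integrability conditions, while quasi-stationarity refers, in the simplest formulation, to limiting conditional distributions

\pi_j = \lim_{t \to \infty} P\bigl(Z_t = j \mid T > t\bigr),

where T denotes the time of exit from the set of states under consideration.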

