The recursive estimation of a Markov chain

1974 ◽  
Vol 11 (2) ◽  
pp. 394-400
Author(s):  
B. J. N. Blight ◽  
J. L. Devore

For every hth member of a two-state Markov chain, the value of a random variable Y is observed, where the distribution of Y is conditional on the state of the corresponding member of the chain. A recursive set of equations is derived giving the posterior probabilities for both the observed and unobserved members. The use of this recursive solution to investigate the optimality of certain simple classification rules is discussed, and a “classification by runs” is also presented.
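
In modern terms, the recursion described is a forward filter for a two-state hidden Markov chain. The sketch below illustrates that idea only, not the authors' equations: the function names, the Gaussian observation model, and the particular transition matrix are assumptions, and for simplicity every member of the chain is observed (h = 1).

```python
import numpy as np

def forward_filter(y, P, emit, prior):
    """Posterior P(state | observations so far) for a two-state chain.

    y     : sequence of observed values
    P     : 2x2 transition matrix, P[i, j] = P(next = j | current = i)
    emit  : emit(y_t) -> length-2 array of densities f(y_t | state)
    prior : initial distribution over the two states
    """
    post = np.asarray(prior, dtype=float)
    history = []
    for yt in y:
        pred = post @ P          # one-step prediction of the hidden state
        post = pred * emit(yt)   # weight by the conditional density of Y
        post /= post.sum()       # normalise back to a probability vector
        history.append(post.copy())
    return np.array(history)

# Illustrative two-state example with Gaussian observations.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
means = np.array([0.0, 3.0])
emit = lambda yt: np.exp(-0.5 * (yt - means) ** 2) / np.sqrt(2 * np.pi)
posts = forward_filter([0.1, 2.8, 3.2], P, emit, prior=[0.5, 0.5])
print(posts[-1])  # posterior over the two states after three observations
```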


1970 ◽  
Vol 68 (1) ◽  
pp. 159-166 ◽  
Author(s):  
A. M. Kshirsagar ◽  
R. Wysocki

1. Introduction. A Markov Renewal Process (MRP) with m (< ∞) states is one which records, at each time t, the number of times a system visits each of the m states up to time t, if the system moves from state to state according to a Markov chain with transition probability matrix P_0 = [p_ij] and if the time required for each successive move is a random variable whose distribution function (d.f.) depends on the two states between which the move is made. Thus, if the system moves from state i to state j, the holding time in state i has F_ij(x) as its d.f. (i, j = 1, 2, …, m).
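
To make the definition concrete, the sketch below simulates one MRP path and records the visit counts; the exponential holding times and the particular matrix are illustrative assumptions, since the definition allows an arbitrary d.f. F_ij for each pair of states.

```python
import random

def simulate_mrp(P0, holding, start, t_max):
    """Simulate a Markov renewal process up to time t_max.

    P0      : transition matrix as nested lists, P0[i][j] = p_ij
    holding : holding(i, j) -> random holding time with d.f. F_ij
    Returns the number of visits to each state up to t_max.
    """
    m = len(P0)
    counts = [0] * m
    state, t = start, 0.0
    counts[state] += 1
    while True:
        nxt = random.choices(range(m), weights=P0[state])[0]
        t += holding(state, nxt)   # sojourn in state i before moving to j
        if t > t_max:
            return counts
        state = nxt
        counts[state] += 1

# Two states; exponential holding times whose rate depends on the pair (i, j).
P0 = [[0.3, 0.7], [0.6, 0.4]]
holding = lambda i, j: random.expovariate(1.0 + i + j)
print(simulate_mrp(P0, holding, start=0, t_max=100.0))
```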


2021 ◽  
Vol 58 (2) ◽  
pp. 372-393
Author(s):  
H. M. Jansen

Our aim is to find sufficient conditions for weak convergence of stochastic integrals with respect to the state occupation measure of a Markov chain. First, we study properties of the state indicator function and the state occupation measure of a Markov chain. In particular, we establish weak convergence of the state occupation measure under a scaling of the generator matrix. Then, relying on the connection between the state occupation measure and the Dynkin martingale, we provide sufficient conditions for weak convergence of stochastic integrals with respect to the state occupation measure. We apply our results to derive diffusion limits for the Markov-modulated Erlang loss model and the regime-switching Cox–Ingersoll–Ross process.
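
The central object here, the state occupation measure, records for each state i the total time the chain has spent in i up to time t. A minimal sketch under assumed names, simulating a continuous-time chain from its generator matrix and accumulating the occupation times:

```python
import numpy as np

rng = np.random.default_rng(0)

def occupation_measure(Q, start, t_max):
    """Time spent in each state of a continuous-time Markov chain on [0, t_max]."""
    m = Q.shape[0]
    occ = np.zeros(m)
    state, t = start, 0.0
    while t < t_max:
        rate = -Q[state, state]
        stay = rng.exponential(1.0 / rate)   # exponential sojourn time
        occ[state] += min(stay, t_max - t)   # truncate at the horizon
        t += stay
        if t < t_max:
            # jump according to the off-diagonal entries of the generator row
            probs = Q[state].copy()
            probs[state] = 0.0
            state = rng.choice(m, p=probs / rate)
    return occ

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(occupation_measure(Q, start=0, t_max=1000.0))  # roughly in ratio 2:1
```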


1982 ◽  
Vol 19 (2) ◽  
pp. 433-438 ◽  
Author(s):  
P.-C. G. Vassiliou

We study the limiting behaviour of a manpower system where the non-homogeneous Markov chain model proposed by Young and Vassiliou (1974) is applicable. This is done in the cases where the input is a time-homogeneous or a time-inhomogeneous Poisson random variable. It is also found that the numbers in the various grades are asymptotically mutually independent Poisson variates.
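
The model class can be illustrated by simulation. The sketch below is a deliberately simplified, time-homogeneous version with assumed parameters, not the Young and Vassiliou (1974) model itself: a Poisson number of recruits enters each period, and members are promoted or leave according to fixed probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_manpower(P, wastage, recruit_rate, entry, periods):
    """Grade sizes of a simple Markov-chain manpower system.

    P            : k x k promotion/transfer matrix between grades
    wastage      : per-grade probability of leaving the system each period
    recruit_rate : mean of the Poisson number of recruits per period
    entry        : distribution of new recruits over the grades
    """
    k = P.shape[0]
    n = np.zeros(k, dtype=int)
    for _ in range(periods):
        moved = np.zeros(k, dtype=int)
        for i in range(k):
            stay = rng.binomial(n[i], 1.0 - wastage[i])  # survivors of grade i
            moved += rng.multinomial(stay, P[i])         # redistribute them
        recruits = rng.multinomial(rng.poisson(recruit_rate), entry)
        n = moved + recruits
    return n

P = np.array([[0.8, 0.2], [0.0, 1.0]])  # promotion from grade 1 to grade 2
print(simulate_manpower(P, wastage=np.array([0.1, 0.15]),
                        recruit_rate=20.0, entry=np.array([1.0, 0.0]),
                        periods=500))
```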


1976 ◽  
Vol 8 (4) ◽  
pp. 737-771 ◽  
Author(s):  
R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).
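
For the countable case, the Foster (1953) positive-recurrence criterion that these results reduce to can be stated as follows (a standard textbook formulation, not a quotation from the paper):

```latex
% Foster's criterion: an irreducible chain on a countable state space S with
% transition matrix P = (p_{ij}) is positive recurrent if there exist a
% function V : S -> [0, \infty), a finite set F, and \varepsilon > 0 such that
\begin{align*}
\sum_{j \in S} p_{ij} V(j) &\le V(i) - \varepsilon, && i \notin F,\\
\sum_{j \in S} p_{ij} V(j) &< \infty, && i \in F.
\end{align*}
```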


Author(s):  
Qianmu Li ◽  
Yinhai Wang ◽  
Ziyuan Pu ◽  
Shuo Wang ◽  
Weibin Zhang

A robust, integrated and flexible charging network is essential for the growth and deployment of electric vehicles (EVs). The State Grid of China has developed a Smart Internet of Electric Vehicle Charging Network (SIEN). At present, there are three main ways to attack SIEN maliciously: distributed data tampering, distributed denial of service (DDoS), and forged command attacks. Network attacks are random, continuous, and closely tied to time. However, when analyzing the alarms raised by malicious attacks, the traditional Markov-chain-based model ignores the temporal association between alarm states, so its analysis and prediction of alarms do not match real situations. This paper analyzes the characteristics of the three attack types and proposes an association state analysis method on the time series. The method first analyzes alarm logs of different locations, levels, and types, and establishes the temporal association of scattered, isolated alarm information. It then tracks the transition trend of abnormal events across SIEN's main-station, channel, and sub-station layers, and identifies real attack behavior. The method not only predicts security risks but, more importantly, can accurately analyze the trend of SIEN security risks. Compared with the ordinary Markov chain model, it better smooths fluctuations in the processed values, with higher real-time performance, stronger robustness, and higher precision. The method has been applied in the State Grid of China.
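
The paper does not give its method at code level; for orientation, the sketch below shows only the ordinary Markov-chain baseline it is compared against, fitting maximum-likelihood transition probabilities between alarm states. The state names and the toy alarm sequence are assumptions.

```python
from collections import Counter, defaultdict

def fit_markov_baseline(alarm_seq):
    """Maximum-likelihood transition probabilities between alarm states."""
    counts = defaultdict(Counter)
    for a, b in zip(alarm_seq, alarm_seq[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def predict_next(model, current):
    """Most likely next alarm state under the fitted chain."""
    return max(model[current], key=model[current].get)

seq = ["normal", "ddos", "ddos", "tamper", "normal", "ddos", "tamper"]
model = fit_markov_baseline(seq)
print(predict_next(model, "ddos"))  # ignores any longer-range time structure
```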


1990 ◽  
Vol 4 (1) ◽  
pp. 89-116 ◽  
Author(s):  
Ushio Sumita ◽  
Maria Rieders

A novel algorithm is developed which computes the ergodic probability vector for large Markov chains. Decomposing the state space into lumps, the algorithm generates a replacement process on each lump, where any exit from a lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for a replacement process on one lump will be proportional to the ergodic probability vector of the original Markov chain restricted to that lump. Inverse matrices computed in the algorithm are of size (M − 1), where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
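
The replacement-process construction itself is involved; as a point of reference, the quantity the algorithm computes is the ergodic (stationary) probability vector, which for a small chain can be obtained directly. A minimal sketch, not the authors' algorithm:

```python
import numpy as np

def ergodic_vector(P):
    """Stationary distribution pi with pi P = pi for a small ergodic chain."""
    m = P.shape[0]
    # Solve pi (P - I) = 0 together with the normalisation sum(pi) = 1.
    A = np.vstack([P.T - np.eye(m), np.ones(m)])
    b = np.zeros(m + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
print(ergodic_vector(P))  # [0.25, 0.5, 0.25] for this birth-death chain
```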


1988 ◽  
Vol 25 (A) ◽  
pp. 335-346
Author(s):  
J. Gani

This paper considers a bivariate random walk model on a rectangular lattice for a particle injected into a fluid flowing in a tank. The numbers of jumps of the particle in the x and y directions in this particular model are correlated. It is shown that when the random walk forms a bivariate Markov chain in continuous time, it is possible to obtain the state probabilities p_xy(t) through their Laplace transforms. Two exit rules are considered and results for both of them are derived.
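
A minimal sketch of a correlated bivariate jump process of this general kind; the jump distribution, the jump rate, and the omission of the exit rules are illustrative simplifications.

```python
import random

def simulate_particle(rate, jumps, t_max):
    """Continuous-time bivariate random walk on a rectangular lattice.

    jumps : list of ((dx, dy), weight) pairs; the x and y jump counts are
            correlated because a single jump can move both coordinates.
    """
    x = y = 0
    t = 0.0
    moves, weights = zip(*jumps)
    while True:
        t += random.expovariate(rate)  # exponential time between jumps
        if t > t_max:
            return x, y
        dx, dy = random.choices(moves, weights=weights)[0]
        x, y = x + dx, y + dy

# The diagonal jump (1, 1) makes the two coordinates move together.
jumps = [((1, 0), 0.4), ((0, 1), 0.4), ((1, 1), 0.2)]
print(simulate_particle(rate=2.0, jumps=jumps, t_max=50.0))
```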


2004 ◽  
Vol 41 (4) ◽  
pp. 1237-1242 ◽  
Author(s):  
Offer Kella ◽  
Wolfgang Stadje

We consider a Brownian motion with time-reversible Markov-modulated speed and two reflecting barriers. A methodology depending on a certain multidimensional martingale together with some linear algebra is applied in order to explicitly compute the stationary distribution of the joint process of the content level and the state of the underlying Markov chain. It is shown that the stationary distribution is such that the two quantities are independent. The long-run average push at the two barriers at each of the states is also computed.
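
A minimal Euler-type sketch of the modulated two-barrier dynamics, usable for checking the stationary behaviour by simulation; the discretisation, the parameters, and the reflection step are illustrative assumptions, and the paper's martingale computation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_modulated_bm(Q, mu, sigma, lower, upper, dt, steps):
    """Brownian motion with drift/volatility driven by a background Markov
    chain, reflected at the two barriers lower and upper (Euler scheme)."""
    m = Q.shape[0]
    state = 0
    x = 0.5 * (lower + upper)
    path = np.empty(steps)
    for k in range(steps):
        jump = Q[state] * dt            # switching probabilities over dt
        jump[state] = 0.0
        jump[state] = 1.0 - jump.sum()  # probability of keeping the state
        state = rng.choice(m, p=jump)
        x += mu[state] * dt + sigma[state] * np.sqrt(dt) * rng.standard_normal()
        x = lower + abs(x - lower)      # reflect at the lower barrier
        x = upper - abs(upper - x)      # reflect at the upper barrier
        path[k] = x
    return path

Q = np.array([[-0.5, 0.5], [1.0, -1.0]])  # generator of the background chain
path = simulate_modulated_bm(Q, mu=[1.0, -1.0], sigma=[0.5, 1.0],
                             lower=0.0, upper=2.0, dt=0.001, steps=100_000)
print(path.mean())  # long-run average content level
```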

