Efficient Representation of Transition Matrix in the Markov Process Modeling of Computer Networks

Author(s): Piotr Pecka, Sebastian Deorowicz, Mateusz Nowak


2014, Vol. 754, pp. 365-414
Author(s): Eurika Kaiser, Bernd R. Noack, Laurent Cordier, Andreas Spohn, Marc Segond, ...

Abstract: We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger’s group (Burkardt, Gunzburger & Lee, Comput. Meth. Appl. Mech. Engng, vol. 196, 2006a, pp. 337–355) and transition matrix models introduced in fluid dynamics in Eckhardt’s group (Schneider, Eckhardt & Vollmer, Phys. Rev. E, vol. 75, 2007, art. 066313). CROM constitutes a potential alternative to POD models and generalises the Ulam–Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron–Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space in complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. using finite-time Lyapunov exponent (FTLE) and entropic methods. This CROM framework is applied to the Lorenz attractor (as illustrative example), to velocity fields of the spatially evolving incompressible mixing layer and the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics.
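
The two-step procedure described in the abstract can be sketched numerically: cluster the snapshots into centroids, then count transitions between consecutive snapshots' clusters to obtain the Markov transition matrix. The snippet below is only an illustration of that idea on synthetic data; the snapshot array, the number of clusters and the use of scikit-learn's KMeans are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the two CROM steps described above: (1) cluster snapshots
# into a few centroids, (2) estimate a Markov transition matrix between the clusters.
# Synthetic data and scikit-learn's KMeans are stand-ins, not the authors' code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 3))   # 500 time-resolved "snapshots" of a 3-variable state

# Step 1: partition the state space into K centroidal Voronoi cells.
K = 5
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(snapshots)

# Step 2: count transitions between consecutive snapshots and row-normalise,
# giving the cluster transition matrix P with P[i, j] = Prob(next cluster j | cluster i).
P = np.zeros((K, K))
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1.0
rows = P.sum(axis=1, keepdims=True)
P = P / np.where(rows == 0, 1.0, rows)

# Cluster probabilities: stationary distribution of P (left eigenvector for eigenvalue 1).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(P.round(2), pi.round(3))
```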


1973, Vol. 10 (01), pp. 84-99
Author(s): Richard L. Tweedie

The problem considered is that of estimating the limit probability distribution (equilibrium distribution) π of a denumerable continuous time Markov process using only the matrix Q of derivatives of transition functions at the origin. We utilise relationships between the limit vector π and invariant measures for the jump-chain of the process (whose transition matrix we write P∗), and apply truncation theorems from Tweedie (1971) to P∗. When Q is regular, we derive algorithms for estimating π from truncations of Q; these extend results in Tweedie (1971), Section 4, from q-bounded processes to arbitrary regular processes. Finally, we show that this method can be extended even to non-regular chains of a certain type.
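
As a rough numerical illustration of the truncation idea (not Tweedie's algorithm verbatim): take a finite truncation of a regular generator Q, form the jump chain P∗ with entries q_ij/q_i, compute its invariant measure μ, and recover π_j proportional to μ_j/q_j. The birth-death generator, its reflecting truncation and the rates below are assumptions made purely for illustration; for this example the truncated estimates settle towards the known geometric limit as the truncation grows.

```python
# Rough sketch of the truncation idea, assuming a regular Q: build the jump chain P*
# (P*_{ij} = q_{ij} / q_i), take its invariant measure mu, and set pi_j proportional
# to mu_j / q_j.  The birth-death generator and its reflecting truncation are
# illustrative assumptions, not the paper's construction.
import numpy as np

def birth_death_Q(n, lam=1.0, mu=2.0):
    """n x n truncation (reflected at the boundary) of an M/M/1-type generator."""
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = lam
        if i > 0:
            Q[i, i - 1] = mu
        Q[i, i] = -Q[i].sum()
    return Q

def pi_estimate(Q):
    q = -np.diag(Q)                      # holding rates q_i
    Pstar = Q / q[:, None]
    np.fill_diagonal(Pstar, 0.0)         # jump-chain transition matrix P*
    w, v = np.linalg.eig(Pstar.T)        # invariant measure of P*: left eigenvector at 1
    mu_vec = np.abs(np.real(v[:, np.argmax(np.real(w))]))
    pi = mu_vec / q                      # back to the continuous-time process
    return pi / pi.sum()

for n in (5, 10, 20):
    print(n, pi_estimate(birth_death_Q(n))[:4].round(4))
# estimates approach the known limit pi_j = (1/2) * (1/2) ** j as n grows
```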


1975, Vol. 12 (S1), pp. 217-224
Author(s): P. Whittle

It is well-known that the transition matrix of a reversible Markov process can have only real eigenvalues. An example is constructed which shows that the converse assertion does not hold. A generalised notion of reversibility is proposed, ‘dynamic reversibility’, which has many of the implications for the form of the transition matrix of the classical definition, but which does not exclude ‘circulation in state-space’ or, indeed, periodicity.
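
The first claim rests on the standard similarity argument: if P satisfies detailed balance π_i P_ij = π_j P_ji, then D^{1/2} P D^{-1/2} with D = diag(π) is symmetric, so the spectrum of P is real. The short check below verifies this on an ad hoc reversible 3-state chain of Metropolis type, chosen only for illustration; Whittle's counterexample to the converse is not reproduced here.

```python
# Numerical check of the first claim above: a transition matrix in detailed balance
# with pi is similar to the symmetric matrix D^{1/2} P D^{-1/2}, so its eigenvalues
# are real.  The 3-state Metropolis-type chain below is an ad hoc illustration.
import numpy as np

pi = np.array([0.5, 0.3, 0.2])
P = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            P[i, j] = (1.0 / 3.0) * min(1.0, pi[j] / pi[i])   # uniform proposal, Metropolis acceptance
    P[i, i] = 1.0 - P[i].sum()

F = pi[:, None] * P
assert np.allclose(F, F.T)                 # detailed balance: pi_i P_ij = pi_j P_ji

D = np.diag(np.sqrt(pi))
S = D @ P @ np.linalg.inv(D)
assert np.allclose(S, S.T)                 # similar symmetric matrix
print(np.linalg.eigvals(P))                # all real, as asserted
```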


2013, Vol. 330, pp. 1054-1058
Author(s): Abbas Karimi, B. Kiamanesh, Faraneh Zarafshan, S.A.R. Al-Haddad

The purpose of this paper is to calculate the availability of a wireless sensor network for a virtual grid using a Markov model. Since wireless sensor networks are constrained by energy, their energy consumption should be limited in a way that guarantees their quality of service. In this paper, the availability and quality of service of a wireless sensor network with a particular coverage are investigated using an energy-optimized algorithm and a Markov process.
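
Since the abstract only outlines the approach, the sketch below shows generically how a small continuous-time Markov model yields availability as the steady-state probability of the operational states. The three node states (active/sleep/failed) and all rate values are invented placeholders, not the authors' model or parameters.

```python
# Hedged sketch of an availability computation with a small continuous-time Markov
# model of a sensor node (active / sleep / failed).  States, rates and numbers are
# illustrative placeholders, not the model used in the paper.
import numpy as np

# States: 0 = active, 1 = sleep (still available), 2 = failed (battery depleted).
lam_sleep, lam_wake = 2.0, 5.0      # active <-> sleep switching rates (per hour, assumed)
lam_fail, lam_repair = 0.01, 0.5    # failure and recovery/replacement rates (assumed)

Q = np.array([
    [-(lam_sleep + lam_fail), lam_sleep,             lam_fail],
    [lam_wake,               -(lam_wake + lam_fail),  lam_fail],
    [lam_repair,              0.0,                   -lam_repair],
])

# Steady-state distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]        # probability the node is in an operational state
print(f"steady-state availability ≈ {availability:.4f}")
```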


Author(s): Dr Swapna Datta Khan

Markov processes are sequences of events that are related to each other stochastically. Such events, also known as states, may be such that the probability of an event occurring depends only on the immediately preceding event and not on any event before that; this is known as the memoryless property of Markov processes. Certain dynamic market conditions, especially in the Fast-Moving Consumer Goods and Telecommunications sectors, may enable the use of finite, time-homogeneous Markov processes to study the brand-switching tendency of the consumer and thus predict consumer loyalty. In this conceptual paper, we study how to predict brand-switching tendencies using finite, time-homogeneous Markov processes.
KEYWORDS: Markov Process, Brand Switching, Time-Homogeneous, Consumer Loyalty, Transition Matrix
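
A toy worked example of such a chain follows; the brands, transition probabilities and initial shares are invented for illustration. The rows of the transition matrix give period-to-period retention (loyalty) and switching probabilities, powers of the matrix give shares after several purchase cycles, and the left eigenvector at eigenvalue 1 gives the long-run shares.

```python
# Worked toy example of a brand-switching chain.  The three brands and the
# transition probabilities are invented for illustration only.
import numpy as np

# P[i, j] = probability a consumer of brand i buys brand j in the next period.
# Diagonal entries are the period-to-period retention (loyalty) rates.
P = np.array([
    [0.80, 0.15, 0.05],   # brand A
    [0.10, 0.75, 0.15],   # brand B
    [0.20, 0.20, 0.60],   # brand C
])
share0 = np.array([0.40, 0.35, 0.25])   # current market shares (assumed)

# Shares after n purchase cycles: share0 @ P^n (time-homogeneous chain).
share3 = share0 @ np.linalg.matrix_power(P, 3)

# Long-run (equilibrium) shares: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

print("shares after 3 periods:", share3.round(3))
print("long-run shares:       ", pi.round(3))
```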


MAUSAM, 2022, Vol. 45 (3), pp. 267-270
Author(s): A. MASCARENHAS, A. D. GOUVEIA, R. G. PRABHU DESAI

One application of the cumulative probability wind transition matrix is to determine the various probable wind-series that might occur during the period for which offshore oil-spill risk is to be analysed. During this analysis we have to generate different probable wind conditions at different instances of time. One of the methods to simulate the random wind behaviour through time is to use historical wind data presented in the form of a wind transition matrix. This paper highlights the methodology and use of the cumulative probability wind transition matrix in generating the different probable wind-series.
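
The generation step referred to above can be sketched as follows: each row of the wind transition matrix is converted to cumulative probabilities, and the next wind state is drawn by comparing a uniform random number against that cumulative row. The three wind classes and the matrix values below are synthetic placeholders, not the historical data used in the paper.

```python
# Sketch of generating a probable wind-series from a wind transition matrix by
# turning each row into cumulative probabilities and sampling the next wind state
# with a uniform random number.  The 3-state matrix is a synthetic placeholder.
import numpy as np

P = np.array([            # wind-class transition matrix (e.g. calm / moderate / strong)
    [0.7, 0.25, 0.05],
    [0.3, 0.50, 0.20],
    [0.1, 0.40, 0.50],
])
C = np.cumsum(P, axis=1)  # cumulative probability wind transition matrix

def simulate_wind_series(C, start, n_steps, rng):
    series = [start]
    for _ in range(n_steps):
        u = rng.random()
        series.append(int(np.searchsorted(C[series[-1]], u)))
    return series

rng = np.random.default_rng(1)
print(simulate_wind_series(C, start=0, n_steps=24, rng=rng))  # one probable 24-step wind-series
```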


COSMOS, 2005, Vol. 01 (01), pp. 87-94
Author(s): CHII-RUEY HWANG

Let π be a probability density proportional to exp(−U(x)) on S. A Markov process convergent to π(x) may be regarded as a "conceptual" algorithm. Assume that S is a finite set. Let X0, X1, …, Xn, … be a Markov chain with transition matrix P and invariant probability π. Under suitable conditions on P, it is known that the ergodic average (1/n)(f(X1) + ⋯ + f(Xn)) converges to π(f), and the corresponding asymptotic variance v(f, P) depends only on f and P. It is natural to consider criteria vw(P) and va(P), defined respectively by maximizing and averaging v(f, P) over f. Two families of transition matrices are considered. There are four problems to be investigated. Some results and conjectures are given. As for the continuum case, to accelerate the convergence, a family of diffusions with drift ∇U(x) + C(x), where div(C(x) exp(−U(x))) = 0, is considered.
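
For a finite ergodic chain, v(f, P) can be computed from the fundamental matrix Z = (I − P + Π)^(−1), where Π is the matrix whose rows all equal π, via v(f, P) = 2⟨f̄, Zf̄⟩_π − ⟨f̄, f̄⟩_π with f̄ = f − π(f); maximizing or averaging this quantity over f gives the criteria vw(P) and va(P) mentioned above. The 3-state chain and the test function in the sketch are arbitrary illustrations, not taken from the paper.

```python
# Sketch: asymptotic variance v(f, P) of the ergodic average for a finite ergodic chain,
# via the fundamental matrix Z = (I - P + Pi)^{-1} (rows of Pi all equal pi):
#     v(f, P) = 2 <fbar, Z fbar>_pi - <fbar, fbar>_pi,   fbar = f - pi(f).
# The chain P and the test function f below are arbitrary illustrative choices.
import numpy as np

P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])
w, v = np.linalg.eig(P.T)                      # stationary distribution pi of P
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def asymptotic_variance(f, P, pi):
    fbar = f - pi @ f
    Z = np.linalg.inv(np.eye(len(pi)) - P + np.outer(np.ones(len(pi)), pi))
    return 2 * pi @ (fbar * (Z @ fbar)) - pi @ (fbar * fbar)

f = np.array([1.0, 0.0, -1.0])
print(asymptotic_variance(f, P, pi))           # v(f, P); vw(P)/va(P) maximize/average over f
```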

