finite markov chain
Recently Published Documents


TOTAL DOCUMENTS

153
(FIVE YEARS 11)

H-INDEX

21
(FIVE YEARS 1)

Author(s):  
Chen Fang ◽  
Lirong Cui

Motivated by real-world applications, a new balanced system structure, a consecutive-k-out-of-m:F system with a symmetry line, is proposed in this paper. Depending on the number of states of a subsector, the new balanced system is analyzed in two situations: subsectors with binary states and subsectors with multiple states; multi-state balanced systems have not been studied in previous research. Two models, with corresponding assumptions, are developed for the two situations. Several methods, such as the finite Markov chain imbedding approach, the order-statistics technique, and phase-type distributions, are applied to these models. In addition to system reliability formulas, the means and variances of the system lifetimes under the two models are given. Finally, numerical examples illustrate the results obtained in this paper.
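The finite Markov chain imbedding approach mentioned in the abstract can be illustrated on the simplest related case: a plain (non-balanced) consecutive-k-out-of-n:F system with i.i.d. components. This is a minimal sketch, not the paper's balanced model; the function name and the i.i.d. assumption are illustrative.

```python
import numpy as np

def consecutive_k_out_of_n_F_reliability(n, k, q):
    """Reliability of a linear consecutive-k-out-of-n:F system with
    i.i.d. components (failure probability q), via finite Markov
    chain imbedding: the chain state is the current run length of
    consecutive failed components (0..k-1), plus one absorbing
    "system failed" state reached when the run hits k."""
    p = 1.0 - q
    m = np.zeros((k + 1, k + 1))
    for j in range(k):
        m[j, 0] = p        # a working component resets the run
        m[j, j + 1] = q    # a failed component extends the run
    m[k, k] = 1.0          # absorbing system-failure state
    dist = np.zeros(k + 1)
    dist[0] = 1.0          # start with no failures observed
    for _ in range(n):     # imbed one component per step
        dist = dist @ m
    return dist[:k].sum()  # mass not yet absorbed = system works

print(consecutive_k_out_of_n_F_reliability(3, 2, 0.5))  # 0.625 = 5/8
```

For n = 3, k = 2, q = 0.5, direct enumeration of the 8 equally likely component patterns shows 3 of them contain two consecutive failures, so the reliability 5/8 agrees with the imbedded chain.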


Author(s):  
E. A. Perepelkin ◽  

The problem of constructing a state estimate of an inhomogeneous finite Markov chain using a Luenberger observer is solved. Conditions for the existence of the observer are established, and an algorithm for synthesizing the observer is described.
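The observer structure can be sketched on a simplified, time-homogeneous chain (the paper treats the inhomogeneous case). The distribution dynamics are p' = Pᵀp, a linear output y = Cp is measured, and the observer corrects its prediction by a gain times the output error. The chain, the output matrix, and the gain choice below are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Luenberger-style observer for the distribution of a finite Markov
# chain (time-homogeneous here, for simplicity).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])  # row-stochastic transition matrix
A = P.T                          # distribution dynamics: p' = A p
C = np.array([[1.0, 0.0, 0.0]])  # we measure the first component

# Illustrative gain: L = A C^T zeroes the first column of A - L C,
# leaving a strictly sub-stochastic block, so the error dynamics
# e' = (A - L C) e are stable for this chain.
L = A @ C.T

p = np.array([1.0, 0.0, 0.0])      # true distribution
p_hat = np.array([0.0, 0.0, 1.0])  # observer's (wrong) initial guess
for _ in range(100):
    y = C @ p                      # measurement of the true state
    p_hat = A @ p_hat + L @ (y - C @ p_hat)  # predict + correct
    p = A @ p
print(np.abs(p - p_hat).max())     # estimation error has decayed to ~0
```

With this gain the error contracts geometrically, so the estimate converges to the true distribution regardless of the initial guess.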


2020 ◽  
Vol 1462 ◽  
pp. 012039
Author(s):  
M S Sinaga ◽  
O Purba ◽  
H Nasution

In this paper we consider a finite discrete Markov chain and derive a recurrence relation for calculating the return-time probability distribution. The mean recurrence time is also calculated. The return-time distribution helps identify the most frequently visited states and plays a vital role in the classification of Markov chains. These concepts are illustrated through an example.
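The abstract does not state the paper's exact recurrence, but a standard one relates the n-step return probabilities p_ii(n) to the first-return probabilities f_i(n) via p_ii(n) = Σₖ f_i(k) p_ii(n−k). A minimal sketch under that assumption, with the two-state chain chosen purely for illustration:

```python
import numpy as np

def return_time_distribution(P, i, n_max):
    """First-return-time probabilities f_i(1..n_max) for state i,
    from the standard renewal recurrence
        p_ii(n) = sum_{k=1}^{n} f_i(k) * p_ii(n-k),
    rearranged as
        f_i(n) = p_ii(n) - sum_{k=1}^{n-1} f_i(k) * p_ii(n-k)."""
    d = P.shape[0]
    p_ii = [1.0]                 # p_ii(0) = 1
    Pn = np.eye(d)
    for _ in range(n_max):
        Pn = Pn @ P
        p_ii.append(Pn[i, i])    # n-step return probability
    f = [0.0] * (n_max + 1)
    for n in range(1, n_max + 1):
        f[n] = p_ii[n] - sum(f[k] * p_ii[n - k] for k in range(1, n))
    return f

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
f = return_time_distribution(P, 0, 200)
mean_rt = sum(n * fn for n, fn in enumerate(f))
print(mean_rt)   # ≈ 1.75 = 1 / pi_0, since pi = (4/7, 3/7)
```

For a recurrent state the f_i(n) sum to 1, and the mean recurrence time equals 1/π_i, which the example confirms numerically.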


2019 ◽  
Vol 29 (08) ◽  
pp. 1431-1449
Author(s):  
John Rhodes ◽  
Anne Schilling

We show that the stationary distribution of a finite Markov chain can be expressed as the sum of certain normal distributions. These normal distributions are associated with planar graphs consisting of a straight line with attached loops, where each loop touches the straight line, or another attached loop, at exactly one vertex. Our analysis is based on our previous work, which derives the stationary distribution of a finite Markov chain using semaphore codes on the Karnofsky–Rhodes and McCammond expansion of the right Cayley graph of the finite semigroup underlying the Markov chain.
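The decomposition itself is algebraic, but the object it describes, the stationary distribution π with πP = π and Σπ = 1, can always be computed directly as a linear system. A generic numeric sketch (the chain is illustrative, and replacing one row by the normalization assumes the chain is ergodic):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic finite Markov chain,
    found by solving pi P = pi together with sum(pi) = 1."""
    d = P.shape[0]
    A = P.T - np.eye(d)   # (P^T - I) pi = 0
    A[-1, :] = 1.0        # replace one equation by normalization
    b = np.zeros(d)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(stationary_distribution(P))   # [4/7, 3/7] ≈ [0.5714, 0.4286]
```

For an ergodic chain P^T − I has rank d − 1, so overwriting one row with the normalization constraint yields a uniquely solvable system.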


Author(s):  
Steven Carr ◽  
Nils Jansen ◽  
Ralf Wimmer ◽  
Alexandru Serban ◽  
Bernd Becker ◽  
...  

We study strategy synthesis for partially observable Markov decision processes (POMDPs). The particular problem is to determine strategies that provably adhere to (probabilistic) temporal logic constraints. This problem is computationally intractable. We propose a novel method that combines techniques from machine learning and formal verification. First, we train a recurrent neural network (RNN) to encode POMDP strategies. The RNN accounts for memory-based decisions without the need to expand the full belief space of a POMDP. Second, we restrict the RNN-based strategy to represent a finite-memory strategy and implement it on a specific POMDP. For the resulting finite Markov chain, efficient formal verification techniques provide provable guarantees against temporal logic specifications. If the specification is not satisfied, counterexamples supply diagnostic information. We use this information to improve the strategy by iteratively training the RNN. Numerical experiments show that the proposed method improves the state of the art in POMDP solving by up to three orders of magnitude in terms of solving times and model sizes.
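The verification step on the induced finite Markov chain can be sketched in its simplest form: computing the probability of reaching a goal state, the core of checking a reachability specification such as P≥0.6 [F goal]. The three-state chain below is an illustrative stand-in, not a model from the paper.

```python
import numpy as np

# States: 0 = start, 1 = goal (absorbing), 2 = trap (absorbing).
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

x = np.array([0.0, 1.0, 0.0])   # x[s] = Pr(reach goal from s)
for _ in range(200):            # value iteration to the fixed point
    x = P @ x                   # one-step expectation: x = P x
    x[1], x[2] = 1.0, 0.0       # pin the absorbing boundary values
print(x[0])                     # 0.5 / (1 - 0.2) = 0.625
```

Here the fixed point satisfies x₀ = 0.2·x₀ + 0.5, giving 0.625; the synthesized strategy would be accepted or rejected by comparing such probabilities against the specification's threshold.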

