Performance evaluation of faulty iterative decoders using absorbing Markov chains

Author(s): Predrag Ivanis, Bane Vasic, David Declercq


1967, Vol 4 (1), pp. 192-196
Author(s): J. N. Darroch, E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
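In the discrete-time, finite-state setting the abstract refers to, the quasi-stationary distribution is the normalized left eigenvector of the transient-to-transient submatrix Q associated with its largest (Perron) eigenvalue. A minimal sketch with an illustrative two-state Q (the numbers are hypothetical, not from the paper):

```python
# Quasi-stationary distribution of a discrete-time absorbing Markov chain:
# the normalized left eigenvector of the transient submatrix Q for its
# largest eigenvalue. The chain below is illustrative only.
import numpy as np

# Transient-to-transient submatrix Q (rows sum to < 1; the deficit is
# the per-step absorption probability from that state).
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])

eigvals, left_vecs = np.linalg.eig(Q.T)   # left eigenvectors of Q
k = np.argmax(eigvals.real)               # Perron root of a nonnegative matrix
qsd = left_vecs[:, k].real
qsd = qsd / qsd.sum()                     # normalize to a probability vector

print(qsd)                # quasi-stationary distribution over transient states
print(eigvals[k].real)    # per-step survival factor: P(T > t) ~ const * rho^t
```

Started from the quasi-stationary distribution, the chain conditioned on survival stays in that distribution, which is the defining property the continuous-time analogue carries over.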


Author(s): Safaa K. Kadhem, Sadeq A. Kadhim

Recently, many modeling approaches have been proposed to describe the random movement of individuals in the context of COVID-19 infection. However, these models have not accounted for some key aspects of the disease, such as predicting the expected time a patient remains in a given health state before entering an absorbing state (i.e., leaving the system forever, as with death or recovery). We therefore propose a dynamical modeling approach based on absorbing Markov chains for analyzing COVID-19 infections. With this approach, we focus on predicting two absorbing states, recovery and death, as these two outcomes are important indicators in assessing the level of health. Based on the absorbing Markov model, the study suggests a gradual increase in the predicted number of deaths and a decrease in the number of recovered individuals.
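The two quantities the abstract highlights, expected time in the system before absorption and the probability of ending in each absorbing state, both follow from the fundamental matrix N = (I - Q)^-1 of an absorbing chain. A minimal sketch with hypothetical health states and transition probabilities (not the paper's fitted values):

```python
# Expected time before absorption and absorption probabilities
# (recovery vs. death) via the fundamental matrix N = (I - Q)^{-1}.
# State labels and transition numbers are hypothetical.
import numpy as np

# Transient states: 0 = mild, 1 = severe (illustrative labels).
Q = np.array([[0.6, 0.2],     # transient -> transient
              [0.1, 0.7]])
R = np.array([[0.15, 0.05],   # transient -> absorbing [recovery, death]
              [0.05, 0.15]])

N = np.linalg.inv(np.eye(2) - Q)   # expected visits to each transient state
t = N @ np.ones(2)                 # expected steps until absorption
B = N @ R                          # P(end in recovery / death) per start state

print(t)   # expected time a patient spends in the system, by start state
print(B)   # each row sums to 1: every patient is eventually absorbed
```

Each row of B sums to 1 because, in a finite absorbing chain with no other closed class, absorption is certain; the split between the recovery and death columns is the health indicator the abstract refers to.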


1999, Vol 10 (08), pp. 1483-1493
Author(s): M. A. Novotny

An overview of advanced dynamical algorithms capable of spanning the widely disparate time scales that govern the decay of metastable phases in discrete spin models is presented. The algorithms discussed include constrained transfer-matrix, Monte Carlo with Absorbing Markov Chains (MCAMC), and projective dynamics (PD) methods. The strengths and weaknesses of each of these algorithms are discussed, with particular emphasis on identifying the parameter regimes (system size, temperature, and field) in which each algorithm works best.
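The core trick MCAMC uses to span disparate time scales can be sketched in its simplest (s = 1) form: rather than simulating a long run of rejected moves in a metastable state, draw the waiting time until the first accepted exit move from the geometric distribution directly. This is an illustration of the idea only, not the algorithms or code from the overview:

```python
# Minimal sketch of the s = 1 MCAMC idea: sample the number of MC steps
# spent in the current state before an exit move is accepted, instead of
# simulating each rejected attempt. Setup is illustrative.
import math
import random

def mcamc_exit_time(p_exit, rng=random.random):
    """Waiting time (in MC steps) until exit, where p_exit is the total
    per-step probability of leaving the current state. Equivalent in
    distribution to step-by-step rejection sampling."""
    r = 1.0 - rng()                    # uniform on (0, 1]
    # Geometric waiting time: P(m > k) = (1 - p_exit)^k.
    return int(math.log(r) / math.log(1.0 - p_exit)) + 1

random.seed(0)
p = 1e-6                   # deep metastable state: tiny exit probability
print(mcamc_exit_time(p))  # one draw spans ~1/p steps in O(1) work
```

This is where the method's strength in the low-temperature/weak-field regime comes from: the cost per escape event is constant, however slow the decay of the metastable phase.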


1999, Vol 36 (01), pp. 268-272
Author(s): P. K. Pollett

Recently, Elmes et al. (see [2]) proposed a definition of a quasi-stationary distribution to accommodate absorbing Markov chains for which absorption occurs with probability less than 1. We show that the probabilistic interpretation pertaining to cases where absorption is certain (see [13]) does not hold in the present context. We prove that the state probabilities at time t, conditional on absorption taking place after t, generally depend on t. Conditions are derived under which there is no initial distribution such that the conditional state probabilities are stationary.
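The t-dependence the abstract proves can be seen numerically in a toy chain where absorption is not certain. The chain below is hypothetical and only illustrates the phenomenon: state 0 is the absorbing state of interest, while state 3 is an "escape" state from which absorption never occurs, so absorption has probability less than 1:

```python
# Toy illustration: when absorption is uncertain, the distribution of X_t
# conditioned on {absorption occurs after t} depends on t. Hypothetical chain.
import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],   # 0: absorbing state of interest
              [0.2, 0.5, 0.2, 0.1],   # 1: transient
              [0.1, 0.3, 0.4, 0.2],   # 2: transient
              [0.0, 0.0, 0.0, 1.0]])  # 3: escape (absorption now impossible)

def conditional_dist(t, start=1):
    """P(X_t = i | absorption after t), for i in states 1, 2, 3."""
    dist = np.linalg.matrix_power(P, t)[start]
    alive = dist[1:]                  # mass not yet absorbed in state 0
    return alive / alive.sum()

print(conditional_dist(1))
print(conditional_dist(10))           # different: the distribution moves with t
```

As t grows, the conditional mass drifts toward the escape state, so no single stationary conditional distribution describes all t, which is the contrast with the absorption-certain case.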


2018, Vol 14 (3), pp. 103-115
Author(s): Devyn Norman Woodfield, Gilbert W. Fellingham

A Bayesian model is used to evaluate the probability that a given skill performed in a specified area of the field will lead to a predetermined outcome by using discrete absorbing Markov chains. The transient states of the Markov process are defined by unique skill-area combinations. The absorbing states of the Markov process are defined by a shot, turnover, or bad turnover. Defining the states in this manner allows the probability of a transient state leading to an absorbing state to be derived. A non-informative prior specification of transition counts is used to permit the data to define the posterior distribution. A web application was created to collect play-by-play data from 34 Division 1 NCAA Women’s soccer matches for the 2013–2014 seasons. A prudent construction of updated transition probabilities facilitates a transformation through Monte Carlo simulation to obtain marginal probability estimates of each unique skill-area combination leading to an absorbing state. For each season, marginal probability estimates for given skills are compared both across and within areas to determine which skills and areas of the field are most advantageous.
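The general recipe the abstract describes can be sketched as follows: put a non-informative Dirichlet prior on each row of transition counts, draw transition matrices from the posterior, and for each draw compute the absorption probabilities B = (I - Q)^-1 R. Everything below is a small hypothetical example (two transient states, made-up counts), not the authors' code or data:

```python
# Sketch of the abstract's recipe: Dirichlet posterior draws over observed
# transition counts, then B = (I - Q)^{-1} R gives each transient state's
# probability of ending in each absorbing outcome. Counts are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

# Rows = transient (skill, area) states; columns =
# [transient 0, transient 1, shot, turnover, bad turnover].
counts = np.array([[30, 40, 15, 10, 5],
                   [25, 35, 10, 20, 10]])
prior = np.ones_like(counts)          # non-informative Dirichlet(1, ..., 1)

n_t, n_a = 2, 3                       # number of transient / absorbing states
draws = []
for _ in range(2000):
    # One posterior draw of each row of the full transition matrix.
    T = np.vstack([rng.dirichlet(row) for row in counts + prior])
    Q, R = T[:, :n_t], T[:, n_t:]
    B = np.linalg.inv(np.eye(n_t) - Q) @ R    # absorption probabilities
    draws.append(B)

B_mean = np.mean(draws, axis=0)       # marginal estimate per (state, outcome)
print(B_mean)
```

Averaging B over posterior draws, rather than plugging in posterior-mean probabilities, is what carries the posterior uncertainty in the counts through to the marginal absorption estimates.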

