Quantifying the probability of a shot in women’s collegiate soccer through absorbing Markov chains

2018 ◽  
Vol 14 (3) ◽  
pp. 103-115
Author(s):  
Devyn Norman Woodfield ◽  
Gilbert W. Fellingham

Abstract A Bayesian model based on discrete absorbing Markov chains is used to evaluate the probability that a given skill performed in a specified area of the field will lead to a predetermined outcome. The transient states of the Markov process are defined by unique skill-area combinations. The absorbing states of the Markov process are defined by a shot, turnover, or bad turnover. Defining the states in this manner allows the probability of a transient state leading to an absorbing state to be derived. A non-informative prior specification of transition counts is used to permit the data to define the posterior distribution. A web application was created to collect play-by-play data from 34 NCAA Division I women’s soccer matches from the 2013 and 2014 seasons. A prudent construction of updated transition probabilities facilitates a transformation through Monte Carlo simulation to obtain marginal probability estimates of each unique skill-area combination leading to an absorbing state. For each season, marginal probability estimates for given skills are compared both across and within areas to determine which skills and areas of the field are most advantageous.
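The quantity this abstract estimates, the probability that each transient skill-area state ends in a shot, turnover, or bad turnover, follows from the standard fundamental-matrix identity for absorbing chains. A minimal sketch, with made-up toy matrices rather than estimates from the article's data:

```python
import numpy as np

# Toy absorbing chain: 3 transient skill-area states and the 3 absorbing
# outcomes (shot, turnover, bad turnover). All numbers are illustrative.
Q = np.array([[0.2, 0.3, 0.1],   # transient -> transient
              [0.1, 0.2, 0.3],
              [0.3, 0.1, 0.2]])
R = np.array([[0.1, 0.2, 0.1],   # transient -> {shot, turnover, bad turnover}
              [0.2, 0.1, 0.1],
              [0.1, 0.1, 0.2]])

# Fundamental matrix N = (I - Q)^(-1); B[i, j] is the probability that a
# possession started in transient state i is eventually absorbed in outcome j.
N = np.linalg.inv(np.eye(3) - Q)
B = N @ R
print(B)
```

Each row of B sums to 1, since every possession is eventually absorbed in one of the three outcomes.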

2004 ◽  
Vol 36 (1) ◽  
pp. 57-77 ◽  
Author(s):  
A. Bobrowski

We consider a pair of Markov chains representing statistics of the Fisher-Wright-Moran model with mutations and drift. The chains have an absorbing state at 0 and are related by the fact that some random time τ ago they were identical, evolving as a single Markov chain with values in {0,1,…}; from that time on they began to evolve independently, conditional on the state at the time of the split, according to the same transition probabilities. The distribution of τ is a function of the deterministic effective population size 2N(·). We study the impact of demographic history on the shape of the quasi-stationary distribution, conditional on nonabsorption at the margin (where one of the chains is at 0), and on the speed with which the probability mass escapes to the margin.
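The quasi-stationary behaviour described here can be illustrated numerically by restricting the chain to its transient states, pushing a distribution forward, and renormalizing at each step, i.e. conditioning on nonabsorption. The neutral Moran-type chain below is an assumed toy stand-in, not the paper's exact Fisher-Wright-Moran process:

```python
import numpy as np

# Toy birth-death chain on {0, ..., n}; 0 (loss) and n (fixation) absorb.
# The per-step move probability i*(n - i)/n^2 is the neutral Moran rate.
n = 10
P = np.zeros((n + 1, n + 1))
P[0, 0] = P[n, n] = 1.0
for i in range(1, n):
    step = i * (n - i) / n**2
    P[i, i - 1] = P[i, i + 1] = step
    P[i, i] = 1.0 - 2.0 * step

# Condition on nonabsorption: restrict to transient states {1, ..., n-1},
# propagate, and renormalize. The limit of this iteration is the
# quasi-stationary distribution (the left Perron eigenvector of Q).
Q = P[1:n, 1:n]
p = np.zeros(n - 1)
p[4] = 1.0                      # start with 5 copies of the allele
for _ in range(5000):
    p = p @ Q
    p /= p.sum()
print(p)
```

After enough iterations the renormalized vector stops changing, which is exactly the quasi-stationarity property: one more conditioned step returns the same distribution.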


2001 ◽  
Vol 34 (4) ◽  
pp. 1611 ◽  
Author(s):  
T. M. TSAPANOS

The well-known stochastic model of Markov chains is applied to South America in order to search for a recurrence pattern of great earthquakes. The model defines a process in which successive state occupancies are governed by the transition probabilities pij of the Markov process, presented as a transition matrix P of dimension N×N. In the present study the predefined seismic zones of South America are taken as the states. The visits from zone to zone, that is, from state to state, carry with them the number of the zone in which they occurred. If these visits are interpreted as earthquake occurrences, their migration between the zones (states) can be inspected and their genesis estimated statistically through the transition probabilities. Attention is given to zones where very large earthquakes with Ms > 7.8 have occurred. The revealed pattern suggests a migration of these large shocks from south towards north. Monte Carlo simulation verifies this pattern.
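The transition matrix P underlying such a study is the maximum-likelihood estimate pij = nij / ni, the normalized count of observed zone-to-zone moves. The zone sequence below is invented for illustration; the study's actual states are its predefined South American seismic zones:

```python
import numpy as np

# Hypothetical sequence of earthquake zone labels (0, 1, 2), in time order.
zones = [0, 1, 1, 2, 0, 2, 2, 1, 0, 0, 1, 2, 1, 1, 0]
n_zones = 3

counts = np.zeros((n_zones, n_zones))
for a, b in zip(zones, zones[1:]):
    counts[a, b] += 1               # n_ij: observed moves from zone i to zone j

# p_ij = n_ij / n_i: each row of P is a conditional distribution.
P = counts / counts.sum(axis=1, keepdims=True)
print(P)
```

A sequence of 15 events yields 14 transitions, and each row of P sums to 1 provided every zone occurs at least once as a source.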


Genetics ◽  
1974 ◽  
Vol 76 (2) ◽  
pp. 367-377
Author(s):  
Takeo Maruyama

ABSTRACT A Markov process (chain) of gene frequency change is derived for a geographically-structured model of a population. The population consists of colonies which are connected by migration. Selection operates in each colony independently. It is shown that there exists a stochastic clock that transforms the originally complicated process of gene frequency change to a random walk which is independent of the geographical structure of the population. The time parameter is a local random time that is dependent on the sample path. In fact, if the alleles are selectively neutral, the time parameter is exactly equal to the sum of the average local genetic variation appearing in the population, and otherwise the two are approximately equal. The Kolmogorov forward and backward equations of the process are obtained. In the limit of large population size, a diffusion process is derived. The transition probabilities of the Markov chain and of the diffusion process are obtained explicitly. Certain quantities of biological interest are shown to be independent of the population structure. The quantities are the fixation probability of a mutant, the sum of the average local genetic variation and the variation summed over the generations in which the gene frequency in the whole population assumes a specified value.
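One of the structure-independent quantities, the fixation probability of a neutral mutant, can be checked numerically in the simplest unstructured case: for a neutral Wright-Fisher chain, solving the absorbing-chain linear system gives fixation probability i/2N from i initial copies. This sketch assumes a single panmictic colony, not Maruyama's geographically structured model:

```python
import numpy as np
from math import comb

# Neutral Wright-Fisher chain on {0, ..., 2N} gene copies; one generation
# is a binomial resampling of the current allele frequency.
N2 = 10                                   # 2N = 10 gene copies
P = np.zeros((N2 + 1, N2 + 1))
for i in range(N2 + 1):
    p = i / N2
    for j in range(N2 + 1):
        P[i, j] = comb(N2, j) * p**j * (1 - p)**(N2 - j)

# States 0 (loss) and 2N (fixation) are absorbing; solve the standard
# absorbing-chain system (I - Q) h = r for the fixation probabilities.
Q = P[1:N2, 1:N2]
r_fix = P[1:N2, N2]                       # one-step absorption into fixation
h = np.linalg.solve(np.eye(N2 - 1) - Q, r_fix)
print(h)                                  # neutral theory predicts h[i-1] = i / 2N
```

The computed h reproduces the martingale argument: the allele frequency has constant expectation under neutrality, so the fixation probability equals the initial frequency.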


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
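In the finite continuous-time setting this note treats, the quasi-stationary distribution is the normalized left eigenvector of the generator restricted to the transient states, for the eigenvalue with largest real part. A sketch with an assumed toy rate matrix:

```python
import numpy as np

# Restricted generator on transient states {1, 2}; the absorbing state
# receives the deficit in each row (rows of the full generator sum to 0).
A = np.array([[-3.0, 2.0],
              [1.0, -2.0]])

# Left eigenvectors of A are eigenvectors of A.T; pick the eigenvalue
# with largest real part (here -1) and normalize to a distribution.
vals, vecs = np.linalg.eig(A.T)
k = np.argmax(vals.real)
qsd = np.abs(vecs[:, k].real)
qsd /= qsd.sum()
print(qsd)                     # -> approximately [0.333, 0.667]
```

Started from the quasi-stationary distribution, the conditioned process stays in it, and absorption occurs at the exponential rate given by that leading eigenvalue.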


2004 ◽  
Vol 2004 (8) ◽  
pp. 421-429 ◽  
Author(s):  
Souad Assoudou ◽  
Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
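A much-simplified conjugate sketch of the same estimation task, assuming independent Jeffreys Beta(1/2, 1/2) priors on each row of the transition matrix (the note's joint Jeffreys prior makes the rows correlated and requires MCMC, which is not reproduced here):

```python
import numpy as np

# Hypothetical observed binary chain.
chain = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

counts = np.zeros((2, 2))
for a, b in zip(chain, chain[1:]):
    counts[a, b] += 1               # n_ij: observed i -> j transitions

# With a Beta(1/2, 1/2) prior per row, the posterior for row i is
# Beta(1/2 + n_i0, 1/2 + n_i1); below is its posterior mean for each p_ij.
post_mean = (counts + 0.5) / (counts.sum(axis=1, keepdims=True) + 1.0)
print(post_mean)
```

Conjugacy makes the posterior available in closed form here; the note's correlated prior is precisely what breaks this shortcut and motivates MCMC.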


Author(s):  
Peter L. Chesson

Abstract Random transition probability matrices with stationary independent factors define “white noise” environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities, and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have applications to the study of animal movements.
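The construction can be illustrated by simulation; the details below (Dirichlet rows as the random environment, two chains sharing one environment sequence) are assumptions for illustration, not Chesson's exact examples:

```python
import numpy as np

# An i.i.d. "white noise" environment draws a fresh random stochastic
# matrix at each step; two chains driven by the same environment sequence
# share identical one-step transition probabilities but are dependent
# through the common environment.
rng = np.random.default_rng(0)

def random_stochastic(n):
    # Each row is an independent Dirichlet draw, i.e. a random distribution.
    return rng.dirichlet(np.ones(n), size=n)

def step(state, P):
    return int(rng.choice(len(P), p=P[state]))

x, y = 0, 2
for _ in range(100):
    P_t = random_stochastic(3)   # shared environment at this time step
    x, y = step(x, P_t), step(y, P_t)
print(x, y)
```

Averaging over the environment gives each chain the same marginal transition law, while joint moves through a shared P_t create the dependence the abstract describes.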

