Chain-referral sampling on stochastic block models

2020 ◽  
Vol 24 ◽  
pp. 718-738
Author(s):  
Thi Phuong Thuy Vo

The discovery of the “hidden population”, whose size and membership are unknown, is made possible by assuming that its members are connected in a social network by their relationships. We explore these groups by a chain-referral sampling (CRS) method, in which participants recommend the people they know. This leads to the study of a Markov chain on a random graph where vertices represent individuals and edges describe the relationships between the corresponding people. We are interested in the CRS process on the stochastic block model (SBM), which extends the well-known Erdős-Rényi graphs to populations partitioned into communities. The SBM considered here is characterized by a number of vertices N, a number of communities (blocks) m, the proportions of the communities π = (π_1, …, π_m) and a pattern of connection between blocks P = (λ_{kl}/N), (k, l) ∈ {1, …, m}². In this paper, we give a precise description of the dynamics of the CRS process in discrete time on an SBM. The difficulty lies in handling the heterogeneity of the graph. We prove that when the population size is large, the normalized stochastic process of the referral chain behaves like a deterministic curve which is the unique solution of a system of ODEs.
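As a rough companion to this description, the sketch below simulates a chain-referral exploration on an SBM drawn with the parameters (N, m, π, P) above. The referral rule (each interviewed individual passes at most a fixed number of coupons to uncontacted neighbours), the parameter values and all function names are illustrative assumptions, not the paper's exact dynamics.

```python
import numpy as np

def sample_sbm(N, pi, lam, rng):
    """Draw an SBM: vertices get blocks with proportions pi, and an edge between
    a block-k and a block-l vertex appears with probability lam[k, l] / N."""
    blocks = rng.choice(len(pi), size=N, p=pi)
    p = lam[np.ix_(blocks, blocks)] / N
    adj = rng.random((N, N)) < p
    adj = np.triu(adj, 1)
    return blocks, adj | adj.T                    # undirected, no self-loops

def simulate_crs(adj, seeds, max_referrals, rng):
    """Discrete-time chain-referral sampling: at each step the next person in the
    queue is interviewed and refers up to `max_referrals` uncontacted neighbours."""
    contacted = np.zeros(adj.shape[0], dtype=bool)
    contacted[seeds] = True
    queue = list(seeds)
    trace = []
    while queue:
        i = queue.pop(0)
        nbrs = np.flatnonzero(adj[i] & ~contacted)
        refs = rng.permutation(nbrs)[:max_referrals]
        contacted[refs] = True
        queue.extend(refs.tolist())
        trace.append(contacted.sum())             # number of contacted people over time
    return np.array(trace)

rng = np.random.default_rng(0)
pi = np.array([0.6, 0.4])                         # two communities
lam = np.array([[8.0, 2.0], [2.0, 6.0]])          # intra-/inter-block intensities lambda_kl
blocks, adj = sample_sbm(2000, pi, lam, rng)
trace = simulate_crs(adj, seeds=[0], max_referrals=3, rng=rng)
print(trace[:10], trace[-1] / 2000)               # early growth and final coverage
```

Rescaling the trace by N and plotting it against time is the quantity whose large-N limit the paper identifies as the solution of a system of ODEs.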

1987 ◽  
Vol 24 (02) ◽  
pp. 347-354 ◽  
Author(s):  
Guy Fayolle ◽  
Rudolph Iasnogorodski

In this paper, we present some simple new criteria for the non-ergodicity of a stochastic process (Yn), n ≧ 0, in discrete time, when either the upward or the downward jumps are majorized by i.i.d. random variables. This situation is encountered in many practical settings where the (Yn) are functionals of some Markov chain with a countable state space. An application to the exponential back-off protocol is described.
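For flavour only, the toy simulation below illustrates the kind of situation such criteria address: a chain on the non-negative integers whose downward jumps are majorized by a constant but whose mean drift is positive, so it cannot be ergodic. The specific jump distribution is an assumption for illustration, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

def step(y, rng):
    # upward jumps ~ Poisson(1.2); downward jumps majorized by the constant 1
    return max(y + rng.poisson(1.2) - 1, 0)

y = 0
for _ in range(10_000):
    y = step(y, rng)
print("state after 10000 steps:", y)   # drifts off roughly linearly in time
```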


1985 ◽  
Vol 17 (4) ◽  
pp. 731-747
Author(s):  
Norman Kaplan ◽  
Thomas Darden

For each N ≧ 1, let {X_N(t, x), t ≧ 0} be a discrete-time stochastic process with X_N(0) = x. Let F_N(y) = E(X_N(t + 1) | X_N(t) = y), and define Y_N(t, x) = F_N(Y_N(t – 1, x)) for t ≧ 1 and Y_N(0, x) = x. Assume that in a neighborhood of the origin F_N(y) = m_N y(1 + O(y)) where m_N > 1, and define, for δ > 0 and x > 0, ν_N(δ, x) = inf{t : x m_N^t > δ}. Conditions are given under which, for θ > 0 and ε > 0, there exist constants δ > 0 and L < ∞, depending on ε and θ, such that … This result, together with a result of Kurtz (1970), (1971), shows that, under appropriate conditions, the time needed for the stochastic process {X_N(t, 1/N), t ≧ 0} to escape a δ-neighborhood of the origin is of order log(Nδ)/log m_N. To illustrate the results, the Wright-Fisher model with selection is considered.
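A rough numerical illustration of the escape-time claim, assuming the standard Wright-Fisher update with selection coefficient s (so that F_N(y) = (1 + s)y/(1 + sy) and m_N = 1 + s); runs in which the mutant dies out are simply discarded, which only crudely approximates the conditioning in the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
N, s, delta = 10_000, 0.05, 0.1                   # population size, selection, threshold
m_N = 1.0 + s                                     # slope of F_N at the origin

def escape_time(rng):
    """Steps until the mutant frequency leaves [0, delta), or None if it dies out."""
    x, t = 1.0 / N, 0
    while 0.0 < x < delta:
        p = (1 + s) * x / (1 + s * x)             # expected frequency after selection
        x = rng.binomial(N, p) / N                # Wright-Fisher resampling
        t += 1
    return t if x >= delta else None

times = [escape_time(rng) for _ in range(2000)]
times = [t for t in times if t is not None]       # keep only the runs that escape
print("empirical mean escape time:", np.mean(times))
print("log(N*delta)/log(m_N):     ", np.log(N * delta) / np.log(m_N))
```

The two printed numbers should agree to leading order, reflecting that the escape time is of order log(Nδ)/log m_N.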


Author(s):  
M. Saburov

A linear Markov chain is a discrete-time stochastic process whose transitions depend only on the current state of the process. A nonlinear Markov chain is a discrete-time stochastic process whose transitions may depend on both the current state and the current distribution of the process. Such processes arise naturally in the study of the limit behavior of a large number of weakly interacting Markov processes. Nonlinear Markov processes were introduced by McKean and have been extensively studied in the context of nonlinear Chapman-Kolmogorov equations as well as nonlinear Fokker-Planck equations. A nonlinear Markov chain over a finite state space can be identified with a continuous mapping (a nonlinear Markov operator) defined on the set of all probability distributions on the state space (a simplex), together with a family of transition matrices depending on the occupation probability distribution of the states. In particular, a linear Markov operator is a linear operator associated with a square stochastic matrix. It is well known that a linear Markov operator is a surjection of the simplex if and only if it is a bijection. The analogous problem was open for a nonlinear Markov operator associated with a stochastic hyper-matrix; we solve it in this paper. Namely, we show that a nonlinear Markov operator associated with a stochastic hyper-matrix is a surjection of the simplex if and only if it is a permutation of the Lotka-Volterra operator.
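A minimal sketch of the objects involved, under illustrative parameter choices: a quadratic (nonlinear Markov) operator built from a stochastic hyper-matrix, and a Lotka-Volterra operator defined by a skew-symmetric interaction matrix; both map the simplex into itself.

```python
import numpy as np

def quadratic_operator(P, x):
    """Nonlinear Markov operator V(x)_k = sum_{i,j} P[i, j, k] x_i x_j, where P is a
    stochastic hyper-matrix: P >= 0 and P[i, j, :] sums to 1 for every pair (i, j)."""
    return np.einsum('ijk,i,j->k', P, x, x)

def lotka_volterra(a, x):
    """Lotka-Volterra operator V(x)_k = x_k (1 + sum_j a[k, j] x_j) for a
    skew-symmetric matrix a with entries in [-1, 1]; it maps the simplex to itself."""
    return x * (1.0 + a @ x)

rng = np.random.default_rng(2)
m = 3
P = rng.random((m, m, m))
P /= P.sum(axis=2, keepdims=True)                 # normalise into a stochastic hyper-matrix
P = (P + P.transpose(1, 0, 2)) / 2                # symmetrise in (i, j)
a = np.array([[ 0.0,  0.5, -0.3],
              [-0.5,  0.0,  0.7],
              [ 0.3, -0.7,  0.0]])                # illustrative skew-symmetric interactions
x = rng.dirichlet(np.ones(m))                     # a point of the simplex
print(quadratic_operator(P, x), quadratic_operator(P, x).sum())   # image sums to 1
print(lotka_volterra(a, x), lotka_volterra(a, x).sum())           # image sums to 1
```

Skew-symmetry of a gives x·a·x = 0, which is why the Lotka-Volterra image always sums to 1.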


Author(s):  
Lyudmila A. Khalilova

A language cannot be a simple template of human activity; a language is the history and culture of a people, their long and thorny road to civilization. The informative value of a discourse will be insignificant if we take into consideration only the visible data of the text. The only viable way to carry out research on the mentality and behavior of the representatives of different cultures is to dig into the implications and the conceptual framework of the discourse. The author’s idea may be interpreted according to the background knowledge of the reader. Such an approach turns the text into a conglomerate of sense messages that reveal the power of the language and its inextricable link to the history, culture and civilization of the nation whose language the students learn. This notional “intervention” is akin to a chain reaction, and the language develops into a means of power over a human being. The conceptual approach to foreign-language material helps improve students’ cognitive and analytical skills, turns the educational process into a particular kind of innovative environment, and increases motivation in foreign-language instruction.


2019 ◽  
Author(s):  
Mohammad Dehghani ◽  
Akhondzadeh Shahin ◽  
Mesgarpour Bita ◽  
Ferdousi Reza

Iran has faced severe sanctions from some countries in recent years. Because the Iranian health industry depends on government payments, the health of the people in this country has suffered greatly. One way to reduce the impact of sanctions on health is to rely on domestic researchers, but researchers in Iran face problems of their own. One way to reduce researchers' problems is to use the National Academic Social Network. This article describes the process of setting up an academic social network in a developing country in four stages.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and in connecting it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability for a Markov chain to reach a set A before reaching a set B, generalizing this result to a sequence of sets A_1, A_2, …, A_k.
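As a small companion, the sketch below computes absorption probabilities and mean absorption times for a finite-state random walk by first-step analysis (the fundamental-matrix route rather than the generating-function argument used in the paper); the barrier positions and step probability are illustrative choices.

```python
import numpy as np

def absorption_stats(P, transient, absorbing):
    """First-step analysis: for a finite chain with transition matrix P, return the
    absorption probabilities into each absorbing state and the mean time to
    absorption, one row/entry per transient starting state."""
    Q = P[np.ix_(transient, transient)]            # transient -> transient block
    R = P[np.ix_(transient, absorbing)]            # transient -> absorbing block
    F = np.linalg.inv(np.eye(len(transient)) - Q)  # fundamental matrix (I - Q)^{-1}
    return F @ R, F.sum(axis=1)                    # B = F R, mean times t = F 1

# Random walk on {0, ..., 5}: step up with probability p, down with 1 - p,
# absorbing barriers at 0 and 5 (so "reach A = {0} before B = {5}").
p, n = 0.4, 5
P = np.zeros((n + 1, n + 1))
P[0, 0] = P[n, n] = 1.0
for i in range(1, n):
    P[i, i + 1], P[i, i - 1] = p, 1 - p
B, t = absorption_stats(P, transient=list(range(1, n)), absorbing=[0, n])
print("P(hit 0 before 5) from states 1..4:", B[:, 0])
print("mean time to absorption from 1..4:", t)
```

When absorption is not certain (for example, a transient walk on the non-negative integers), the finite-matrix route above no longer applies directly, which is where the generating-function estimates of the paper come in.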

