Stochastic majorization of random variables with proportional equilibrium rates

1987
Vol 19 (4)
pp. 854-872
Author(s):  
J. George Shanthikumar

The equilibrium rate rY of a random variable Y with support on the non-negative integers is defined by rY(0) = 0 and rY(n) = P[Y = n − 1]/P[Y = n] for n ≥ 1. Let Xij (j = 1, …, m; i = 1, 2) be 2m independent random variables that have proportional equilibrium rates with λij (j = 1, …, m; i = 1, 2) as the constants of proportionality. When the equilibrium rate is increasing and concave [convex], it is shown that (λ11, …, λ1m) majorizes (λ21, …, λ2m) implies E[φ(X11, …, X1m)] ≥ E[φ(X21, …, X2m)] for all increasing Schur-convex [Schur-concave] functions φ whenever the expectations exist. In addition, if … (i = 1, 2), then …
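
As a concrete illustration (not from the paper), the equilibrium rate of a Poisson(λ) variable works out to rY(n) = n/λ, which is increasing and linear, hence both concave and convex. A minimal Python sketch, with the Poisson choice and the parameter value as assumptions for illustration:

```python
import math

def equilibrium_rate(pmf, n):
    """Equilibrium rate r_Y(n) = P[Y = n-1] / P[Y = n], with r_Y(0) = 0."""
    return 0.0 if n == 0 else pmf(n - 1) / pmf(n)

lam = 2.5  # illustrative Poisson parameter, not taken from the paper
poisson_pmf = lambda k: math.exp(-lam) * lam**k / math.factorial(k)

for n in range(5):
    # For Poisson(lam) the ratio simplifies to n / lam: increasing and linear.
    print(n, equilibrium_rate(poisson_pmf, n), n / lam)
```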


1968
Vol 64 (2)
pp. 485-488
Author(s):  
V. K. Rohatgi

Let {Xn : n ≥ 1} be a sequence of independent random variables and write Sn = X1 + … + Xn. Suppose that the random variables Xn are uniformly bounded by a random variable X in the sense that Pr(|Xn| > x) ≤ Pr(|X| > x) for all x ≥ 0. Set qn(x) = Pr(|Xn| > x) and q(x) = Pr(|X| > x). If qn ≤ q and E|X|^r < ∞ with 0 < r < 2, then we have (see Loève (4), 242) (Sn − (a1 + … + an))/n^(1/r) → 0 a.s., where ak = 0 if 0 < r < 1, ak = EXk if 1 ≤ r < 2, and 'a.s.' stands for almost sure convergence. The purpose of this paper is to study the rates of convergence of Pr[|Sn − (a1 + … + an)| > εn^(1/r)] to zero for arbitrary ε > 0. We shall extend to the present context results of (3), where the case of identically distributed random variables was treated. The techniques used here are strongly related to those of (3).
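
A minimal Monte Carlo sketch of the quantity under study; the uniform distribution, r = 1, and ε = 0.1 are assumptions for illustration, not choices from the paper. It estimates Pr[|Sn − (a1 + … + an)| > εn^(1/r)] for a few n to show the decay towards zero:

```python
import random

def tail_prob(n, eps=0.1, r=1.0, trials=20000):
    """Estimate Pr[|S_n - (a_1 + ... + a_n)| > eps * n**(1/r)] by simulation."""
    hits = 0
    for _ in range(trials):
        s = sum(random.uniform(-1, 1) for _ in range(n))  # here a_k = E X_k = 0
        if abs(s) > eps * n ** (1 / r):
            hits += 1
    return hits / trials

for n in (10, 100, 1000):
    print(n, tail_prob(n))  # the estimated tail probability shrinks with n
```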


1970
Vol 7 (1)
pp. 89-98
Author(s):  
John Lamperti

In the first part of this paper, we will consider a class of Markov chains on the non-negative integers which resemble the Galton-Watson branching process, but with one major difference. If there are k individuals in the nth "generation", and Z1, …, Zk are independent random variables representing their respective numbers of offspring, then the (n + 1)th generation will contain max(Z1, …, Zk) individuals rather than Z1 + … + Zk as in the branching case. Equivalently, the transition matrices Pij of the chains we will study are to be of the form Pij = F(j)^i − F(j − 1)^i, (1) where F(·) is the probability distribution function of a non-negative, integer-valued random variable. The right-hand side of (1) is thus the probability that the maximum of i independent random variables distributed by F has the value j. Such a chain will be called a "maximal branching process".
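
A small numerical sketch of the transition law (1), with an assumed geometric offspring distribution chosen purely for illustration; it builds Pij = F(j)^i − F(j − 1)^i and checks that each row of the transition matrix sums to one:

```python
def F(j, p=0.5):
    """CDF of an illustrative geometric offspring law on {0, 1, 2, ...}."""
    return 0.0 if j < 0 else 1.0 - (1.0 - p) ** (j + 1)

def P(i, j):
    """P_ij = F(j)**i - F(j-1)**i: the max of i iid offspring counts equals j."""
    return F(j) ** i - F(j - 1) ** i

for i in range(1, 4):
    row_sum = sum(P(i, j) for j in range(200))
    print(i, row_sum)  # approximately 1; mass beyond j = 199 is negligible
```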


1959
Vol 55 (4)
pp. 333-337
Author(s):  
Harold Ruben

1. Introductory discussion and summary. Consider a sequence {ui} of independent real or complex-valued random variables such that E(ui) = 1, and a sequence of mutually exclusive events S1, S2, …, such that Sm depends only on u1, u2, …, um, with ΣP(Sm) = 1. Define the random variable n = n(u1, u2, …) = m when Sm occurs. We shall obtain the necessary and sufficient conditions under which E(u1u2 … un) = 1, referred to as the product theorem.
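
A quick Monte Carlo sanity check of the product theorem for one bounded stopping rule (the mean-1 distribution and the particular rule are assumptions for illustration): the partial products of independent mean-1 factors form a martingale, so stopping at a bounded time preserves expectation 1.

```python
import random

def stopped_product(max_steps=10):
    """Multiply iid mean-1 factors until some u_i > 1 occurs or max_steps is hit."""
    prod = 1.0
    for _ in range(max_steps):
        u = random.uniform(0.0, 2.0)  # E(u_i) = 1
        prod *= u
        if u > 1.0:  # the stopping event depends only on u_1, ..., u_m, as required
            break
    return prod

trials = 200000
print(sum(stopped_product() for _ in range(trials)) / trials)  # close to 1
```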


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
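
The three quantities discussed in the chapter have short closed forms; the numpy sketch below (with distributions chosen arbitrarily for illustration) computes the entropy H(X), the conditional entropy H(X|Y) via the identity H(X|Y) = H(X, Y) − H(Y), and the Kullback–Leibler divergence D(P‖Q):

```python
import numpy as np

def entropy(p):
    """H = -sum p log2 p, with the convention 0 log 0 = 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """D(P||Q) = sum p log2(p/q); requires q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Illustrative joint distribution of (X, Y): rows index x, columns index y.
pxy = np.array([[0.25, 0.25],
                [0.40, 0.10]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

print(entropy(px))                         # H(X)
print(entropy(pxy.ravel()) - entropy(py))  # H(X|Y) = H(X, Y) - H(Y)
print(kl_divergence(np.array([0.5, 0.5]), np.array([0.9, 0.1])))  # D(P||Q)
```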


1970
Vol 13 (1)
pp. 151-152
Author(s):  
J. C. Ahuja

Let X1, X2, …, Xn be n independent and identically distributed random variables having the positive binomial probability function f(x) = C(N, x) p^x (1 − p)^(N−x) / [1 − (1 − p)^N], x ∈ T, (1) where 0 < p < 1 and T = {1, 2, …, N}. Define their sum as Y = X1 + X2 + … + Xn. The distribution of the random variable Y has been obtained by Malik [2] using the inversion formula for characteristic functions. It appears that his result needs some correction. The purpose of this note is to give an alternative derivation of the distribution of Y by applying one of the results, established by Patil [3], for the generalized power series distribution.
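
A direct numerical cross-check of this setup, with parameters assumed for illustration: build the positive binomial probability function (1) and convolve it n times to obtain the exact distribution of Y = X1 + X2 + … + Xn.

```python
from math import comb

def positive_binomial_pmf(N, p):
    """f(x) = C(N, x) p**x (1-p)**(N-x) / (1 - (1-p)**N) on x = 1, ..., N."""
    norm = 1.0 - (1.0 - p) ** N
    return {x: comb(N, x) * p**x * (1 - p) ** (N - x) / norm
            for x in range(1, N + 1)}

def convolve(d1, d2):
    """Distribution of the sum of two independent discrete random variables."""
    out = {}
    for a, pa in d1.items():
        for b, pb in d2.items():
            out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

N, p, n = 4, 0.3, 3  # illustrative parameters, not from the note
dist_Y = positive_binomial_pmf(N, p)
for _ in range(n - 1):
    dist_Y = convolve(dist_Y, positive_binomial_pmf(N, p))
print(sum(dist_Y.values()))  # 1.0 up to rounding; dist_Y[y] = P[Y = y], y = n..nN
```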


1975
Vol 12 (4)
pp. 673-683
Author(s):  
G. R. Grimmett

I show that the sum Sn = X1 + … + Xn of independent random variables converges in distribution when suitably normalised, so long as the Xk satisfy the following two conditions: μ(n) = E|Xn| is comparable with E|Sn| for large n, and Xk/μ(k) converges in distribution. Also I consider the associated birth process X(t) = max{n : Sn ≤ t} when each Xk is positive, and I show that there exists a continuous increasing function v(t) such that … for some variable Y with specified distribution, and for almost all u. The function v satisfies v(t) = A(1 + o(1)) log t. The Markovian birth process with parameters λn = λ^n, where 0 < λ < 1, is an example of such a process.
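
A tiny sketch of the associated birth process as defined above; the exponential jump distribution is an assumption for illustration only. X(t) simply counts how many partial sums lie at or below t:

```python
import random
from bisect import bisect_right

def partial_sums(num_jumps=1000, seed=1):
    """Partial sums S_1 < S_2 < ... of positive X_k (exponential here)."""
    rng = random.Random(seed)
    sums, s = [], 0.0
    for _ in range(num_jumps):
        s += rng.expovariate(1.0)  # each X_k is positive
        sums.append(s)
    return sums

S = partial_sums()
for t in (1.0, 10.0, 100.0):
    # X(t) = max{n : S_n <= t}: the number of partial sums not exceeding t.
    print(t, bisect_right(S, t))
```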


Author(s):  
J. M. Hammersley

Let G be an infinite partially directed graph of finite outgoing degree. Thus G consists of an infinite set of vertices, together with a set of edges between certain prescribed pairs of vertices. Each edge may be directed or undirected, and the number of edges from (but not necessarily to) any given vertex is always finite (though possibly unbounded). A path on G from a vertex V1 to a vertex Vn (if such a path exists) is a finite sequence of alternate edges and vertices of the form E12, V2, E23, V3, …, En−1,n, Vn such that Ei,i+1 is an edge connecting Vi and Vi+1 (and in the direction from Vi to Vi+1 if that edge happens to be directed). In mixed Bernoulli percolation, each vertex Vi carries a random variable di, and each edge Eij carries a random variable dij. All these random variables di and dij are mutually independent, and take only the values 0 or 1; the di take the value 1 with probability p, while the dij take the value 1 with probability p′. A path is said to be open if and only if all the random variables carried by all its edges and all its vertices assume the value 1. Let S be a given finite set of vertices, called the source set; and let T be the set of all vertices such that there exists at least one open path from some vertex of S to each vertex of T. (We imagine that fluid, supplied to all the source vertices, can flow along any open path; thus T is the random set of vertices eventually wetted by the fluid.) The percolation probability is defined to be the probability that T is an infinite set.
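
A compact simulation of the wetting process described here, on an assumed finite square lattice; the graph, its size, and the two parameters are illustrative only. It runs a breadth-first search from the source set along open edges through open vertices:

```python
import random
from collections import deque

def wet_set(n=20, p_vertex=0.9, p_edge=0.6, seed=0):
    """Return T, the set of vertices wetted from the source on an n x n grid."""
    rng = random.Random(seed)
    vopen = {(x, y): rng.random() < p_vertex for x in range(n) for y in range(n)}
    eopen = {}

    def edge_open(a, b):
        key = (min(a, b), max(a, b))  # undirected edge, sampled once
        if key not in eopen:
            eopen[key] = rng.random() < p_edge
        return eopen[key]

    source = [(n // 2, n // 2)]
    T = set(v for v in source if vopen[v])  # a path's first vertex must be open too
    queue = deque(T)
    while queue:
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in vopen and nb not in T and vopen[nb] and edge_open((x, y), nb):
                T.add(nb)
                queue.append(nb)
    return T

print(len(wet_set()))  # size of the wetted set T grown from a single source vertex
```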


1999
Vol 31 (1)
pp. 178-198
Author(s):  
Frans A. Boshuizen
Robert P. Kertz

In this paper, in work strongly related with that of Coffman et al. [5], Bruss and Robertson [2], and Rhee and Talagrand [15], we focus our interest on an asymptotic distributional comparison between numbers of 'smallest' i.i.d. random variables selected by either on-line or off-line policies. Let X1, X2, … be a sequence of i.i.d. random variables with distribution function F(x), and let X1,n, …, Xn,n be the sequence of order statistics of X1, …, Xn. For a sequence (cn)n≥1 of positive constants, the smallest fit off-line counting random variable is defined by Ne(cn) := max{j ≤ n : X1,n + … + Xj,n ≤ cn}. The asymptotic joint distributional comparison is given between the off-line count Ne(cn) and on-line counts Nnτ for 'good' sequential (on-line) policies τ satisfying the sum constraint Σj≥1 Xτj I(τj ≤ n) ≤ cn. Specifically, for such policies τ, under appropriate conditions on the distribution function F(x) and the constants (cn)n≥1, we find sequences of positive constants (Bn)n≥1, (Δn)n≥1 and (Δ′n)n≥1 such that … for some non-degenerate random variables W and W′. The major tools used in the paper are convergence of point processes to Poisson random measure, continuous mapping theorems, strong approximation results of the normalized empirical process by Brownian bridges, and some renewal theory.
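
The off-line count has a direct computational meaning; the sketch below (the sample distribution and the naive sequential rule are assumptions for illustration) computes Ne(cn) by sorting and compares it with the count achieved by a greedy on-line policy that accepts an item whenever it still fits:

```python
import random

def offline_count(xs, c):
    """N_e(c) = max{j : sum of the j smallest values in xs <= c}."""
    total, count = 0.0, 0
    for x in sorted(xs):
        if total + x > c:
            break
        total, count = total + x, count + 1
    return count

def greedy_online_count(xs, c):
    """Accept each arriving item if it still fits within the budget c."""
    total, count = 0.0, 0
    for x in xs:
        if total + x <= c:
            total, count = total + x, count + 1
    return count

random.seed(3)
xs = [random.random() for _ in range(1000)]
print(offline_count(xs, 5.0), greedy_online_count(xs, 5.0))  # off-line >= on-line
```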

