Extensions of a renewal theorem

1955 ◽  
Vol 51 (4) ◽  
pp. 629-638 ◽  
Author(s):  
Walter L. Smith

Let X1, X2, X3,… be a sequence of independent, identically distributed, absolutely continuous random variables whose first moment is μ1. Let Sk = X1 + X2 + … + Xk, and let fk(x) be the frequency function of Sk, defined as the k-fold convolution of f1(x). When f1(x) has been defined for all x, fk(x) is uniquely defined for all k, x. Write
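As a rough numerical illustration of this setup, here is a minimal Python/NumPy sketch of building fk(x) as the k-fold convolution of f1(x). The choice of f1 as a standard exponential density is an assumption made purely for illustration, not something taken from the paper.

```python
import numpy as np
from scipy.stats import gamma

# Illustrative assumption: take f1 to be the standard exponential density,
# so that fk (the k-fold convolution of f1) is the Gamma(k, 1) density.
dx = 0.01
x = np.arange(0.0, 50.0, dx)
f1 = np.exp(-x)

def k_fold_convolution(f, k, dx):
    """Approximate the density of S_k = X_1 + ... + X_k by repeated discrete convolution."""
    fk = f.copy()
    for _ in range(k - 1):
        fk = np.convolve(fk, f)[: f.size] * dx   # truncate back to the original grid
    return fk

k = 5
fk = k_fold_convolution(f1, k, dx)
print("max abs error vs Gamma(5, 1):", np.abs(fk - gamma.pdf(x, a=k)).max())
```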

1973 ◽  
Vol 16 (3) ◽  
pp. 337-342 ◽  
Author(s):  
M. S. Srivastava

Let (X1, Y1), (X2, Y2), …, (Xn, Yn) be n mutually independent pairs of random variables with absolutely continuous (hereafter, a.c.) pdf given by (1), where f(ρ) denotes the conditional pdf of X given Y, g(y) the marginal pdf of Y, e(ρ) → 1 and b(ρ) → 0 as ρ → 0, and (2). We wish to test the hypothesis (3) against the alternative (4). For the two-sided alternative we take −∞ < b < ∞. A feature of the model (1) is that it covers both-sided alternatives, which have not been considered in the literature so far.


1975 ◽  
Vol 12 (2) ◽  
pp. 289-297
Author(s):  
Andrew D. Barbour

Let X(t) be a continuous time Markov process on the integers such that, if σ is a time at which X makes a jump, X(σ) − X(σ−) is distributed independently of X(σ−), and has finite mean μ and variance. Let q(j) denote the residence time parameter for the state j. If tn denotes the time of the nth jump and Xn ≡ X(tn), it is easy to deduce limit theorems for Xn from those for sums of independent identically distributed random variables. In this paper, it is shown how, for μ > 0 and for suitable q(·), these theorems can be translated into limit theorems for X(t), by using the continuous mapping theorem.
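A minimal simulation sketch of a process of this kind follows. The concrete choices (exponential residence times with rate q(j), a two-point jump distribution) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_path(T, q, jump_sampler, x0=0):
    """Simulate a continuous-time Markov process on the integers up to time T.

    Assumptions (illustrative): the residence time in state j is exponential
    with rate q(j), and each jump X(sigma) - X(sigma-) is drawn i.i.d. from
    jump_sampler, independently of the current state.
    """
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        hold = rng.exponential(1.0 / q(x))      # residence time in the current state
        if t + hold > T:
            break
        t += hold
        x += jump_sampler()                     # i.i.d. jump with mean mu
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

# Illustrative choices: constant q(j) = 1 and jumps in {-1, +2}, so mu = 0.5 > 0.
times, states = simulate_path(
    T=10_000.0, q=lambda j: 1.0, jump_sampler=lambda: int(rng.choice([-1, 2]))
)
# With these choices the jump times form a rate-1 Poisson process, so one
# expects X(t)/t to settle near q * mu = 0.5 for large t.
print("X(T)/T ≈", states[-1] / times[-1])
```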


1970 ◽  
Vol 7 (2) ◽  
pp. 432-439 ◽  
Author(s):  
William E. Strawderman ◽  
Paul T. Holmes

Let X1, X2, X3, ··· be independent, identically distributed random variables on a probability space (Ω, F, P), with a continuous distribution function. Let the sequence of indices {Vr} be defined as Also define The following theorem is due to Rényi [5].


1966 ◽  
Vol 3 (1) ◽  
pp. 272-273 ◽  
Author(s):  
H. Robbins ◽  
E. Samuel

We define a natural extension of the concept of expectation of a random variable y as follows: M(y) = a if there exists a constant −∞ ≦ a ≦ ∞ such that if y1, y2, … is a sequence of independent identically distributed (i.i.d.) random variables with the common distribution of y then
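The defining condition itself is not reproduced above. Purely as an assumption, read it as requiring the sample averages (y1 + … + yn)/n to converge to a; under that reading, a small Monte Carlo sketch (Python) of the two behaviours M(y) is meant to capture:

```python
import numpy as np

rng = np.random.default_rng(1)

def running_mean(sample):
    """Averages (y_1 + ... + y_n)/n for n = 1, ..., len(sample)."""
    return np.cumsum(sample) / np.arange(1, sample.size + 1)

checkpoints = [10**3 - 1, 10**4 - 1, 10**5 - 1, 10**6 - 1]

# Case 1: the ordinary expectation exists; the averages settle near E[y] = 2,
# so under the assumed reading M(y) = 2.
print("Exp(2):     ", running_mean(rng.exponential(scale=2.0, size=10**6))[checkpoints])

# Case 2: E[y] = +infinity (half-Cauchy); the averages drift upward without
# bound, which under the assumed reading would give M(y) = +infinity.
print("half-Cauchy:", running_mean(np.abs(rng.standard_cauchy(size=10**6)))[checkpoints])
```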


1980 ◽  
Vol 87 (1) ◽  
pp. 179-187 ◽  
Author(s):  
Sujit K. Basu ◽  
Makoto Maejima

Let {Xn} be a sequence of independent random variables each having a common d.f. V1. Suppose that V1 belongs to the domain of normal attraction of a stable d.f. V0 of index α (0 < α ≤ 2). Here we prove that, if the c.f. of X1 is absolutely integrable in rth power for some integer r > 1, then for all large n the d.f. of the normalized sum Zn of X1, X2, …, Xn is absolutely continuous with a p.d.f. vn such that as n → ∞, where v0 is the p.d.f. of V0.
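A numerical sketch of the α = 2 case (which the statement allows), with an illustrative summand chosen so that the density of the normalized sum is available in closed form: for Exp(1) summands, vn is a rescaled Gamma density and its L1 distance to the normal density v0 shrinks as n grows.

```python
import numpy as np
from scipy.stats import gamma, norm

# Sketch of the alpha = 2 case with an illustrative summand: X_i ~ Exp(1),
# so S_n ~ Gamma(n, 1) and Z_n = (S_n - n) / sqrt(n) has an exact density v_n.
# The L1 distance between v_n and the standard normal density v_0 should
# shrink towards 0 as n grows.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def v_n(x, n):
    """Exact density of Z_n = (S_n - n)/sqrt(n) when S_n ~ Gamma(n, 1)."""
    return np.sqrt(n) * gamma.pdf(np.sqrt(n) * x + n, a=n)

for n in (5, 50, 500):
    l1 = np.sum(np.abs(v_n(x, n) - norm.pdf(x))) * dx
    print(f"n = {n:4d}   L1 distance ~ {l1:.4f}")
```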


1969 ◽  
Vol 9 (1-2) ◽  
pp. 100-108
Author(s):  
A. M. Hasofer

By the geometric moving average of the independent, identically distributed random variables {Xn}, we mean the stochastic process , where a is a real number such that 0 ≦ a ≦ 1.
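The displayed definition of the process does not survive above. Purely as an assumption, taking the geometric moving average to mean the exponentially weighted sum Yn = Σ_{k=0}^{n} a^k X_{n−k}, a short computational sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def geometric_moving_average(x, a):
    """Hypothetical reading: Y_n = sum_{k=0}^{n} a**k * X_{n-k}, 0 <= a <= 1,
    computed via the recursion Y_n = a * Y_{n-1} + X_n."""
    y = np.empty(x.size)
    acc = 0.0
    for n, xn in enumerate(x):
        acc = a * acc + xn
        y[n] = acc
    return y

x = rng.normal(size=10)
print(geometric_moving_average(x, a=0.5))
```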


Author(s):  
D. J. H. Garling

1. Introduction. Révész(8) has shown that if (fn) is a sequence of random variables, bounded in L2, there exists a subsequence (fnk) and a random variable f in L2 such that converges almost surely whenever . Komlós(5) has shown that if (fn) is a sequence of random variables, bounded in L1, then there is a subsequence (fnk) with the property that the Cesàro averages of any subsequence converge almost surely. Subsequently Chatterji(2) showed that if (fn) is bounded in Lp (where 0 < p ≤ 2) then there is a subsequence (gk) = (fnk) and f in Lp such that almost surely for every sub-subsequence. All of these results are examples of subsequence principles: a sequence of random variables, satisfying an appropriate moment condition, has a subsequence which satisfies some property enjoyed by sequences of independent identically distributed random variables. Recently Aldous(1), using tightness arguments, has shown that for a general class of properties such a subsequence principle holds: in particular, the results listed above are all special cases of Aldous' principal result.


Author(s):  
C. W. Anderson

Let , where the Xi, i = 1, 2, … are independent identically distributed random variables. Classical extreme value theory, described for example in the books of de Haan(6) and Galambos(3), gives conditions under which there exist constants an > 0 and bn such that where G(x) is taken to be one of the extreme value distributions G1(x) = exp (− e−x), G2(x) = exp (− x−α) (x > 0, α > 0) and G3(x) = exp (−(− x)α) (x < 0, α > 0).
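As an illustration of the first limit type above (not of this paper's specific results): for i.i.d. standard exponential Xi one may take an = 1 and bn = log n, and the normalized maximum is then approximately Gumbel distributed, G1(x) = exp(−e−x). A quick simulation check:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustration of the first limit type only (not of this paper's specific
# results): for i.i.d. Exp(1) variables one may take a_n = 1 and b_n = log n,
# and the normalized maximum is then approximately Gumbel, G1(x) = exp(-e^{-x}).
n, reps = 1000, 10_000
normalized_max = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)

for x in (-1.0, 0.0, 1.0, 2.0):
    empirical = np.mean(normalized_max <= x)
    print(f"x = {x:+.1f}   empirical {empirical:.3f}   G1(x) {np.exp(-np.exp(-x)):.3f}")
```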

