On the Markov Transition Kernels for First Passage Percolation on the Ladder

2011 ◽  
Vol 48 (02) ◽  
pp. 366-388 ◽  
Author(s):  
Eckhard Schlemm

We consider the first passage percolation problem on the random graph with vertex set ℕ × {0, 1}, edges joining vertices at a Euclidean distance equal to unity, and independent exponential edge weights. We provide a central limit theorem for the first passage times l_n between the vertices (0, 0) and (n, 0), thus extending earlier results about the almost-sure convergence of l_n / n as n → ∞. We use generating function techniques to compute the n-step transition kernels of a closely related Markov chain which can be used to explicitly calculate the asymptotic variance in the central limit theorem.
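
For a quick numerical look at this setting, one can sample the ladder directly and estimate the time constant and the CLT-scale variance by Monte Carlo. The sketch below is only an illustration of the model, not the generating-function computation of the paper; the truncation buffer and the helper name ladder_passage_time are ad hoc choices.

import numpy as np
import networkx as nx

def ladder_passage_time(n, rng, buffer=20):
    """First passage time l_n from (0, 0) to (n, 0) on the ladder N x {0, 1}
    with i.i.d. Exp(1) edge weights, computed by Dijkstra on a truncation.
    The extra `buffer` columns allow geodesics that overshoot column n."""
    G = nx.Graph()
    for i in range(n + buffer):
        for row in (0, 1):                      # horizontal edges in both rows
            G.add_edge((i, row), (i + 1, row), weight=rng.exponential())
    for i in range(n + buffer + 1):             # vertical rungs
        G.add_edge((i, 0), (i, 1), weight=rng.exponential())
    return nx.dijkstra_path_length(G, (0, 0), (n, 0))

rng = np.random.default_rng(1)
n, reps = 200, 500
samples = np.array([ladder_passage_time(n, rng) for _ in range(reps)])
print("estimated time constant   l_n / n :", samples.mean() / n)
print("estimated CLT variance Var(l_n)/n :", samples.var(ddof=1) / n)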


1985 ◽  
Vol 22 (4) ◽  
pp. 766-775
Author(s):  
Norbert Herrndorf

We consider first-passage percolation in an infinite horizontal strip of finite height. Using methods from the theory of Markov chains, we prove a central limit theorem for first-passage times, and compute the time constants for some special cases.


1985 ◽  
Vol 22 (02) ◽  
pp. 280-287 ◽  
Author(s):  
Ştefan P. Niculescu ◽  
Edward Omey

Equivalence of rates of convergence in the central limit theorem for the vector of maximum sums and the corresponding first-passage variables is established. A similar result for the vector of partial sums and the corresponding renewal variables is also given. The results extend to several dimensions the bivariate results of Ahmad (1981).
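
For orientation, the mechanism behind such equivalences is the elementary duality between the two vectors: writing (in notation introduced here only for illustration) $S_k = X_1 + \cdots + X_k$, $M_n = \max_{1\leqslant k\leqslant n} S_k$ and $N(x) = \inf\{n \geqslant 1 : S_n > x\}$, one has $\{N(x) > n\} = \{M_n \leqslant x\}$, so a rate of convergence for the distribution of the maximum sums transfers to the first-passage variables and conversely.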


2019 ◽  
Vol 8 (4) ◽  
pp. 817-849
Author(s):  
Eustasio del Barrio ◽  
Paula Gordaliza ◽  
Jean-Michel Loubes

We provide a central limit theorem for the Monge–Kantorovich distance between two empirical distributions with sizes $n$ and $m$, $\mathcal{W}_p(P_n,Q_m), \ p\geqslant 1,$ for observations on the real line. In the case $p>1$ our assumptions are sharp in terms of moments and smoothness. We prove results dealing with the choice of centring constants. We provide a consistent estimate of the asymptotic variance, which enables us to build two-sample tests and confidence intervals to certify the similarity between two distributions. These are then used to assess a new criterion of data set fairness in classification.
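
Because the observations lie on the real line, the empirical Monge–Kantorovich cost reduces to a coupling of the empirical quantile functions, which is what makes it directly computable. A minimal sketch of that computation (the grid size, the choice p = 2 and the helper name empirical_wp are illustrative assumptions, not taken from the paper):

import numpy as np
from scipy.stats import wasserstein_distance

def empirical_wp(x, y, p=2.0, grid=1000):
    """Approximate W_p(P_n, Q_m) via the quantile representation
    W_p^p = int_0^1 |F_n^{-1}(t) - G_m^{-1}(t)|^p dt on a uniform grid."""
    t = (np.arange(grid) + 0.5) / grid
    qx = np.quantile(x, t)                 # empirical quantile function of x
    qy = np.quantile(y, t)                 # empirical quantile function of y
    return np.mean(np.abs(qx - qy) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)
y = rng.normal(0.3, 1.0, size=400)
print("W_2 (quantile grid):", empirical_wp(x, y, p=2.0))
print("W_1 (scipy check)  :", wasserstein_distance(x, y))   # p = 1 cross-check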


2019 ◽  
Vol 09 (02) ◽  
pp. 2050001
Author(s):  
Renjie Feng ◽  
Gang Tian ◽  
Dongyi Wei

In [Spectrum of SYK model, preprint (2018), arXiv:1801.10073], we proved the almost sure convergence of the eigenvalues of the SYK model, which can be viewed as a type of law of large numbers in probability theory; in [Spectrum of SYK model II: Central limit theorem, preprint (2018), arXiv:1806.05714], we proved that the linear statistics of the eigenvalues satisfy the central limit theorem. In this paper, we continue with another important theorem in probability theory, the concentration of measure theorem, especially for the Gaussian SYK model. We prove a large deviation principle (LDP) for the normalized empirical measure of the eigenvalues when [Formula: see text], in which case the eigenvalues can be expressed in terms of those of Gaussian random antisymmetric matrices. Such an LDP result is of independent interest in random matrix theory. For general [Formula: see text] we cannot prove the LDP; instead, we prove a concentration of measure theorem by estimating the Lipschitz norm of the Gaussian SYK model.
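
In the special case in which the eigenvalues reduce to those of Gaussian random antisymmetric matrices, the relevant spectral measure can be sampled directly. The sketch below only draws that antisymmetric ensemble, with an ad hoc 1/sqrt(n) normalization; it is not the SYK Hamiltonian itself and makes no claim about the paper's LDP argument.

import numpy as np

def antisymmetric_spectrum(n, rng):
    """Spectrum of a Gaussian real antisymmetric matrix A = (G - G^T)/sqrt(2).
    Since iA is Hermitian, eigvalsh returns real values, and the eigenvalues
    of A itself come in pairs +/- i*lambda.  The 1/sqrt(n) scaling is an
    ad hoc normalization so the empirical measure stays on a bounded scale."""
    G = rng.standard_normal((n, n))
    A = (G - G.T) / np.sqrt(2.0)
    return np.linalg.eigvalsh(1j * A) / np.sqrt(n)

rng = np.random.default_rng(0)
for _ in range(3):
    eigs = antisymmetric_spectrum(1000, rng)
    # Concentration in action: this empirical moment barely moves across draws.
    print("second moment of the empirical measure:", np.mean(eigs ** 2))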


1971 ◽  
Vol 8 (1) ◽  
pp. 52-59 ◽  
Author(s):  
C. C. Heyde

It is possible to interpret the classical central limit theorem for sums of independent random variables as a convergence rate result for the law of large numbers. For example, if $X_i$, $i = 1, 2, 3, \ldots$, are independent and identically distributed random variables with $EX_i = \mu$, $\operatorname{var} X_i = \sigma^2 < \infty$ and $\bar{X}_n = n^{-1}\sum_{i=1}^{n} X_i$, then the central limit theorem can be written in the form $\lim_{n\to\infty} P\big(n^{1/2}(\bar{X}_n - \mu) \leqslant x\sigma\big) = \Phi(x)$, $\Phi$ being the standard normal distribution function. This provides information on the rate of convergence in the strong law $\bar{X}_n \to \mu$ a.s. as $n \to \infty$. (“a.s.” denotes almost sure convergence.) It is our object in this paper to discuss analogues for the super-critical Galton-Watson process.
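
The opening remark is easy to check by simulation: the CLT says the deviation of $\bar{X}_n$ from $\mu$ lives on the $n^{-1/2}$ scale, so the frequency of normalized deviations beyond $x\sigma$ should match $2(1 - \Phi(x))$. A small Monte Carlo sketch (the exponential distribution and all constants are arbitrary choices made for illustration):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps = 1000, 5000
mu = sigma = 1.0                               # mean and s.d. of Exp(1)
means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (means - mu) / sigma          # normalized deviation in the LLN
x = 1.0
print("observed  P(n^{1/2}|mean - mu| > x sigma):", np.mean(np.abs(z) > x))
print("CLT value 2(1 - Phi(x))                  :", 2 * (1 - norm.cdf(x)))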


Fractals ◽  
2007 ◽  
Vol 15 (04) ◽  
pp. 301-313 ◽  
Author(s):  
E. MOULINES ◽  
F. ROUEFF ◽  
MURAD S. TAQQU

We consider a Gaussian time series, stationary or not, with long memory exponent d ∈ ℝ. The generalized spectral density function of the time series is characterized by d and by a function f*(λ) which specifies the short-range dependence structure. Our setting is semi-parametric in that both d and f* are unknown, and only the smoothness of f* around λ = 0 matters. The parameter d is the one of interest. It is estimated by regression using the wavelet coefficients of the time series, which are dependent when d ≠ 0. We establish a Central Limit Theorem (CLT) for the resulting estimator $\hat{d}$. We show that the deviation $\hat{d} - d$, adequately normalized, is asymptotically normal and specify the asymptotic variance.
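
A rough sketch of the log-scale regression idea behind such wavelet estimators: with an orthonormal discrete wavelet transform, the variance of the detail coefficients at scale $j$ grows roughly like a constant times $2^{2dj}$, so regressing $\log_2$ of the empirical variance on $j$ and halving the slope gives a crude estimate of $d$. The code below is a naive unweighted version with ad hoc choices of wavelet, depth and test series; it is not the estimator whose CLT is established in the paper.

import numpy as np
import pywt

def crude_wavelet_d(x, wavelet="db4", levels=8):
    """Naive memory-parameter estimate: regress log2 of the detail-coefficient
    variance on the scale index j and return half of the slope."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    details = coeffs[1:]                      # [cD_J, ..., cD_1], coarse to fine
    js = np.arange(levels, 0, -1)             # scale index j of each detail array
    log_var = np.array([np.log2(np.mean(c ** 2)) for c in details])
    slope, _ = np.polyfit(js, log_var, 1)
    return slope / 2.0

rng = np.random.default_rng(0)
noise = rng.standard_normal(2 ** 14)               # white noise, d = 0
walk = np.cumsum(rng.standard_normal(2 ** 14))     # integrated noise, d = 1
print("estimate for white noise  (true d = 0):", crude_wavelet_d(noise))
print("estimate for a random walk (true d = 1):", crude_wavelet_d(walk))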

