Best predictors in logarithmic distance between positive random variables

2019 ◽  
Vol 15 (2) ◽  
pp. 15-28
Author(s):  
H. Gzyl

Abstract The metric properties of the set in which random variables take their values lead to relevant probabilistic concepts. For example, the mean of a random variable is a best predictor in that it minimizes the L2 distance between a point and the random variable. Similarly, the median is the same concept when the distance is measured by the L1 norm. Also, a geodesic distance can be defined on the cone of strictly positive vectors in ℝn in such a way that the minimizer of the distance between a point and a collection of points is their geometric mean. That geodesic distance induces a distance on the class of strictly positive random variables, which in turn leads to interesting notions of conditional expectation (or best predictor) and their estimators. It also leads to different versions of the Law of Large Numbers and the Central Limit Theorem. For example, lognormal variables appear as the analogue of Gaussian variables in the version of the Central Limit Theorem for the logarithmic distance.
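
To make the minimization concrete, here is a small numerical sketch (not taken from the paper; the sample and grid search are purely illustrative): for a sample of strictly positive values, the minimizer of the summed squared L2 distance is the arithmetic mean, the minimizer of the summed L1 distance is the median, and the minimizer of the summed squared logarithmic distance |log x - log a| is the geometric mean.

```python
import numpy as np

# Illustrative only: compare the minimizers of three loss functions over a grid.
rng = np.random.default_rng(0)
a = rng.lognormal(mean=0.0, sigma=1.0, size=1000)   # strictly positive sample

grid = np.linspace(a.min(), a.max(), 2_000)

l2_loss  = [np.sum((x - a) ** 2) for x in grid]                   # -> arithmetic mean
l1_loss  = [np.sum(np.abs(x - a)) for x in grid]                  # -> median
log_loss = [np.sum((np.log(x) - np.log(a)) ** 2) for x in grid]   # -> geometric mean

print(grid[np.argmin(l2_loss)],  a.mean())
print(grid[np.argmin(l1_loss)],  np.median(a))
print(grid[np.argmin(log_loss)], np.exp(np.log(a).mean()))
```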

Author(s):  
Jean Walrand

Abstract Chapter 10.1007/978-3-030-49995-2_3 used the Central Limit Theorem to determine the number of users that can safely share a common cable or link. We saw that this result is also fundamental for calculating confidence intervals. In this section, we prove this theorem. A key tool is the characteristic function, which provides a simple way to study sums of independent random variables. Section 4.1 introduces the characteristic function and calculates it for a Gaussian random variable. Section 4.2 uses that function to prove the Central Limit Theorem. Section 4.3 uses the characteristic function to calculate the moments of a Gaussian random variable. The sum of squares of Gaussian random variables is a common model of noise in communication links. Section 4.4 proves a remarkable property of such a sum. Section 4.5 shows how to use characteristic functions to approximate binomial and geometric random variables. The error function arises in the calculation of the probability of errors in transmission systems and also in decisions based on random observations. Section 4.6 derives useful approximations of that function. Section 4.7 concludes the chapter with a discussion of an adaptive multiple access protocol similar to one used in WiFi networks.
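
As a quick numerical sanity check of the key tool named above (a sketch under simple assumed distributions, not the book's proof): for independent random variables, the characteristic function of a sum factors into the product of the individual characteristic functions, which is what makes characteristic functions convenient for studying sums.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=200_000)   # any independent X, Y would do; uniforms here
y = rng.uniform(-1.0, 1.0, size=200_000)

def ecf(sample, t):
    """Empirical characteristic function: a Monte Carlo estimate of E[exp(i t X)]."""
    return np.mean(np.exp(1j * t * sample))

for t in (0.5, 1.0, 2.0):
    lhs = ecf(x + y, t)            # characteristic function of the sum
    rhs = ecf(x, t) * ecf(y, t)    # product of the individual ones
    print(t, abs(lhs - rhs))       # differs only by Monte Carlo error
```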


2021 ◽  
Vol 36 (2) ◽  
pp. 243-255
Author(s):  
Wei Liu ◽  
Yong Zhang

Abstract In this paper, we investigate the central limit theorem and the invariance principle for linear processes generated by a new notion of independent and identically distributed (IID) random variables under the sub-linear expectations initiated by Peng [19]. It turns out that these theorems are natural and fairly neat extensions of Kolmogorov's classical central limit theorem and invariance principle to the case where probability measures are no longer additive.
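
The sub-linear expectation framework itself does not lend itself to a quick simulation, but the classical statement being extended can be illustrated (a sketch under assumed IID Gaussian innovations and hypothetical coefficients psi, not the paper's setting): partial sums of a causal linear process, normalized by sqrt(n), have standard deviation close to |sum_j psi_j| * sigma for large n.

```python
import numpy as np

# Classical illustration only (Peng's sub-linear expectations are not modeled here).
rng = np.random.default_rng(2)
psi = np.array([1.0, 0.5, 0.25, 0.125])        # hypothetical coefficients
n, reps = 2000, 2000

eps = rng.standard_normal((reps, n + len(psi) - 1))              # IID innovations, sigma = 1
x = np.stack([np.convolve(e, psi, mode="valid") for e in eps])   # linear-process paths
s = x.sum(axis=1) / np.sqrt(n)                  # normalized partial sums, one per path

print(s.std(), abs(psi.sum()))                  # both close to 1.875
```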


1972 ◽  
Vol 12 (4) ◽  
pp. 183-194
Author(s):  
V. Paulauskas

The abstracts (in two languages) can be found in the PDF file of the article. Original author name and title in Russian and Lithuanian: В. Паулаускас, Оценка скорости сходимости в центральной предельной теореме для разнораспределенных слагаемых; V. Paulauskas, Konvergavimo greičio įvertinimas centrinėje ribinėje teoremoje nevienodai pasiskirsčiusiems dėmenims (in English: V. Paulauskas, An estimate of the rate of convergence in the central limit theorem for non-identically distributed summands).


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Mingzhou Xu ◽  
Kun Cheng

Using an inequality for partial sums and the uniform convergence in the central limit theorem under sublinear expectations, we establish precise asymptotics in the law of the iterated logarithm for independent and identically distributed random variables under sublinear expectations.
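
For orientation, here is a classical-probability sketch of the scaling involved (the sublinear-expectation refinement established in the paper is not reproduced): the law of the iterated logarithm says that limsup_n S_n / sqrt(2 n log log n) equals sigma almost surely, so for a long IID walk with sigma = 1 the ratio below should stay of order one.

```python
import numpy as np

# Illustrative only: track S_n / sqrt(2 n log log n) along one long IID random walk.
rng = np.random.default_rng(3)
n = 1_000_000
steps = rng.choice([-1.0, 1.0], size=n)           # mean 0, variance 1
s = np.cumsum(steps)

idx = np.arange(3, n)                             # start where log(log(n)) > 0
ratio = s[idx] / np.sqrt(2 * idx * np.log(np.log(idx)))
print(ratio.max(), ratio.min())                   # roughly of order one
```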


1969 ◽  
Vol 10 (1-2) ◽  
pp. 219-230
Author(s):  
C. R. Heathcote

Let X1, X2, … be independent and identically distributed non-lattice random variables with zero mean, variance σ2 < ∞, and partial sums Sn = X1 + X2 + … + Xn.

