Statistical Approximating Distributions Under Differential Privacy

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Yue Wang ◽  
Daniel Kifer ◽  
Jaewoo Lee ◽  
Vishesh Karwa

Statistics computed from data are viewed as random variables. When they are used for tasks like hypothesis testing and confidence intervals, their true finite sample distributions are often replaced by approximating distributions that are easier to work with (for example, the Gaussian, which results from using approximations justified by the Central Limit Theorem). When data are perturbed by differential privacy, the approximating distributions also need to be modified. Prior work provided various competing methods for creating such approximating distributions with little formal justification beyond the fact that they worked well empirically. In this paper, we study the question of how to generate statistical approximating distributions for differentially private statistics and provide finite sample guarantees for the quality of the approximations.
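As a loose illustration of why the usual CLT-based approximation needs adjusting under differential privacy (this is my own sketch, not the construction from the paper), the following Python snippet compares the spread of a Laplace-perturbed sample mean with the naive Gaussian approximation that ignores the privacy noise; the data range [0, 1], the sample size, and ε are arbitrary assumptions made for the demonstration.

```python
# Illustrative sketch (not the paper's method): compare the empirical spread of a
# Laplace-perturbed sample mean with the naive Gaussian CLT approximation.
import numpy as np

rng = np.random.default_rng(0)
n, epsilon, trials = 200, 0.5, 20_000
sensitivity = 1.0 / n          # sensitivity of the mean for data in [0, 1]
scale = sensitivity / epsilon  # Laplace scale giving epsilon-DP (assumed setup)

# Data distribution: Bernoulli(0.3), so values are already in [0, 1].
data = rng.binomial(1, 0.3, size=(trials, n))
private_means = data.mean(axis=1) + rng.laplace(0.0, scale, size=trials)

# Naive CLT approximation ignores the Laplace noise; a corrected approximation
# would account for the extra Laplace variance (2 * scale^2).
clt_sd = np.sqrt(0.3 * 0.7 / n)
total_sd = np.sqrt(clt_sd**2 + 2 * scale**2)

print("empirical sd:", private_means.std())
print("naive CLT sd:", clt_sd, " corrected sd:", total_sd)
```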

2018 ◽  
Vol 111 (6) ◽  
pp. 466-469
Author(s):  
Anne Quinn

While looking for an inexpensive Web application to illustrate the Central Limit theorem, I found the Rossman/Chance Applet Collection, a group of free Web-based statistics apps. In addition to illustrating the Central Limit theorem, the apps could be used to cover many classic statistics concepts, including confidence intervals, regression, and a virtual version of the popular Reese's® Pieces problem. The apps allow users to investigate concepts using either preprogrammed or original data.


2016 ◽  
Vol 109 (9) ◽  
pp. 708-711 ◽  
Author(s):  
Anne Quinn

StatKey, a free Web-based app, supplies real data to help with the central limit theorem, confidence intervals, and much more.


2008 ◽  
Vol 102 (2) ◽  
pp. 151-153
Author(s):  
Todd O. Moyer ◽  
Edward Gambler

The central limit theorem, the basis for confidence intervals and hypothesis testing, is a critical theorem in statistics. Instructors can approach this topic through lecture or activity. In the lecture method, the instructor tells students about the central limit theorem. Typically, students are informed that a sampling distribution of means for even an obviously skewed distribution will approach normality as the sample sizes used approach 30. Consequently, students may be able to use the theorem, but they may not necessarily understand the theorem.
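A short simulation in the spirit of the activity-based approach described above (my own sketch, not the authors' classroom materials) lets students see the effect directly: sample means drawn from a clearly skewed exponential distribution look increasingly normal as the sample size approaches 30.

```python
# Sampling-distribution simulation: means of samples from a skewed distribution
# become approximately normal as the sample size n grows.
import numpy as np

rng = np.random.default_rng(1)
for n in (2, 5, 30):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    # Skewness of the sample-mean distribution shrinks roughly like 1/sqrt(n).
    skew = ((means - means.mean())**3).mean() / means.std()**3
    print(f"n={n:>2}  mean={means.mean():.3f}  sd={means.std():.3f}  skewness={skew:.3f}")
```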


2020 ◽  
Vol 19 ◽  

Confidence intervals for the ratio of means for large paired and unpaired samples with finite variance, obtained by applying the central limit theorem and the Cramér–Wold device, are given. These intervals for ratios are also obtained under infinite variance and for independent populations by using stable distributions. Numerical illustrations based on problems typically encountered in practice are provided.
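One standard CLT-based route to such an interval in the finite-variance, paired case is the delta method; the sketch below is a hedged illustration, not the paper's exact construction, and the ratio_ci_paired helper and simulated data are assumptions made for demonstration.

```python
# Hedged sketch: large-sample delta-method confidence interval for the ratio of
# means mu_X / mu_Y with paired observations (finite-variance case).
import numpy as np
from scipy import stats

def ratio_ci_paired(x, y, level=0.95):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = x.mean() / y.mean()
    cov = np.cov(x, y, ddof=1)  # [[var_x, cov_xy], [cov_xy, var_y]]
    # Delta-method variance of x_bar / y_bar for paired data.
    var_r = (cov[0, 0] - 2 * r * cov[0, 1] + r**2 * cov[1, 1]) / (n * y.mean()**2)
    z = stats.norm.ppf(0.5 + level / 2)
    half = z * np.sqrt(var_r)
    return r - half, r + half

rng = np.random.default_rng(2)
y = rng.gamma(4.0, 1.0, size=500)
x = 1.5 * y + rng.normal(0.0, 0.5, size=500)  # true ratio of means = 1.5
print(ratio_ci_paired(x, y))
```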


2019 ◽  
Vol 2019 (2) ◽  
pp. 245-269
Author(s):  
David M. Sommer ◽  
Sebastian Meiser ◽  
Esfandiar Mohammadi

Abstract Quantifying the privacy loss of a privacy-preserving mechanism on potentially sensitive data is a complex and well-researched topic; the de-facto standard privacy measures are ε-differential privacy (DP) and its versatile relaxation, (ε, δ)-approximate differential privacy (ADP). Recently, novel variants of (A)DP have focused on giving tighter privacy bounds under continual observation. In this paper we unify many previous works via the privacy loss distribution (PLD) of a mechanism. We show that for non-adaptive mechanisms, the privacy loss under sequential composition undergoes a convolution and converges to a Gauss distribution (the central limit theorem for DP). We derive several relevant insights: we can now characterize mechanisms by their privacy loss class, i.e., by the Gauss distribution to which their PLD converges, which allows us to give novel ADP bounds for mechanisms based on their privacy loss class; and we derive exact analytical guarantees for the approximate randomized response mechanism and an exact analytical closed formula for the Gauss mechanism that, given ε, calculates the δ such that the mechanism is (ε, δ)-ADP (not an over-approximating bound).
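To make the Gauss-mechanism statement concrete, the sketch below evaluates the closed-form expression for the tight δ at a given ε that is known from the analytic Gaussian mechanism literature; the function name and the choice of sensitivity Δ = 1 are assumptions, and this is an illustration rather than the paper's own derivation.

```python
# Tight delta for the Gaussian mechanism at a given epsilon, using the known
# closed form based on the privacy loss distribution N(mu^2/2, mu^2), mu = Delta/sigma.
from math import exp
from scipy.stats import norm

def gauss_mechanism_delta(epsilon: float, sigma: float, delta_sens: float = 1.0) -> float:
    """Smallest delta such that the Gaussian mechanism is (epsilon, delta)-ADP."""
    mu = delta_sens / sigma
    return norm.cdf(mu / 2 - epsilon / mu) - exp(epsilon) * norm.cdf(-mu / 2 - epsilon / mu)

print(gauss_mechanism_delta(epsilon=1.0, sigma=1.0))  # approx. 0.127
```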


2021 ◽  
Vol 36 (2) ◽  
pp. 243-255
Author(s):  
Wei Liu ◽  
Yong Zhang

Abstract In this paper, we investigate the central limit theorem and the invariance principle for linear processes generated by a new notion of independently and identically distributed (IID) random variables for sub-linear expectations initiated by Peng [19]. It turns out that these theorems are natural and fairly neat extensions of the classical Kolmogorov's central limit theorem and invariance principle to the case where probability measures are no longer additive.
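For orientation, the classical Kolmogorov central limit theorem that these results extend (stated here for reference, not as part of the paper's formulation) can be written as:

```latex
% Classical CLT under an additive probability measure: for i.i.d. X_1, X_2, ...
% with E[X_1] = \mu and Var(X_1) = \sigma^2 < \infty,
\[
  \frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} \bigl(X_i - \mu\bigr)
  \xrightarrow{\ d\ } \mathcal{N}(0,1), \qquad n \to \infty.
\]
```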


Author(s):  
Felix Herold ◽  
Daniel Hug ◽  
Christoph Thäle

Abstract Poisson processes in the space of $$(d-1)$$-dimensional totally geodesic subspaces (hyperplanes) in a d-dimensional hyperbolic space of constant curvature $$-1$$ are studied. The k-dimensional Hausdorff measure of their k-skeleton is considered. Explicit formulas for first- and second-order quantities restricted to bounded observation windows are obtained. The central limit problem for the k-dimensional Hausdorff measure of the k-skeleton is approached in two different set-ups: (i) for a fixed window and growing intensities, and (ii) for fixed intensity and growing spherical windows. While in case (i) the central limit theorem is valid for all $$d\ge 2$$, it is shown that in case (ii) the central limit theorem holds for $$d\in \{2,3\}$$ and fails if $$d\ge 4$$ and $$k=d-1$$, or if $$d\ge 7$$ for general k. Rates of convergence are also studied and multivariate central limit theorems are obtained. Moreover, the situation in which the intensity and the spherical window are growing simultaneously is discussed. In the background are the Malliavin–Stein method for normal approximation and the combinatorial moment structure of Poisson U-statistics, as well as tools from hyperbolic integral geometry.

