concentration inequality
Recently Published Documents

TOTAL DOCUMENTS: 60 (five years: 14)
H-INDEX: 9 (five years: 1)

2021 ◽  
Vol 58 (4) ◽  
pp. 890-908
Author(s):  
Caio Alves ◽  
Rodrigo Ribeiro ◽  
Rémy Sanchis

Abstract We prove concentration inequality results for geometric graph properties of an instance of the Cooper–Frieze [5] preferential attachment model with edge-steps. More precisely, we investigate a random graph model in which, at each time $t \in \mathbb{N}$, a new vertex is added with probability $p$ (a vertex-step occurs) or, with probability $1-p$, an edge connecting two existing vertices is added (an edge-step occurs). We prove concentration results for the global clustering coefficient as well as the clique number. More formally, we prove that the global clustering coefficient, with high probability, decays as $t^{-\gamma(p)}$ for a positive function $\gamma$ of $p$, whereas the clique number of these graphs is, up to subpolynomially small factors, of order $t^{(1-p)/(2-p)}$.
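The vertex-step/edge-step dynamics described in the abstract can be sketched as a small simulation. This is a minimal degree-biased urn model meant only to convey the flavour of the process; it is not the exact Cooper–Frieze construction analysed in the paper, and all names are illustrative:

```python
import random

def simulate(p, steps, seed=0):
    """Toy preferential-attachment graph with edge-steps.

    At each step, with probability p a new vertex arrives and attaches
    to an existing vertex chosen proportionally to degree (vertex-step);
    otherwise an edge is added between two degree-biased existing
    vertices (edge-step).  Sketch only, not the paper's exact model.
    """
    rng = random.Random(seed)
    edges = [(0, 0)]        # start from a single vertex with a loop
    ball = [0, 0]           # multiset of edge endpoints: a degree-biased urn
    n_vertices = 1
    for _ in range(steps):
        if rng.random() < p:                     # vertex-step
            u = rng.choice(ball)
            edges.append((n_vertices, u))
            ball.extend([n_vertices, u])
            n_vertices += 1
        else:                                    # edge-step
            u, v = rng.choice(ball), rng.choice(ball)
            edges.append((u, v))
            ball.extend([u, v])
    return n_vertices, edges

n_vertices, edges = simulate(p=0.5, steps=1000)
```

After `steps` iterations the graph has `steps + 1` edges, and roughly `1 + p * steps` vertices, matching the growth rates the abstract's exponents are phrased in.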


Author(s):  
Asaf Ferber ◽  
Vishesh Jain ◽  
Yufei Zhao

Abstract Many problems in combinatorial linear algebra require upper bounds on the number of solutions to an underdetermined system of linear equations $Ax = b$, where the coordinates of the vector $x$ are restricted to take values in some small subset (e.g. $\{\pm 1\}$) of the underlying field. The classical ways of bounding this quantity are to use either a rank bound observation due to Odlyzko or a vector anti-concentration inequality due to Halász. The former gives a stronger conclusion except when the number of equations is significantly smaller than the number of variables; even in such situations, the hypotheses of Halász’s inequality are quite hard to verify in practice. In this paper, using a novel approach to the anti-concentration problem for vector sums, we obtain new Halász-type inequalities that beat the Odlyzko bound even in settings where the number of equations is comparable to the number of variables. In addition to being stronger, our inequalities have hypotheses that are considerably easier to verify. We present two applications of our inequalities to combinatorial (random) matrix theory: (i) we obtain the first non-trivial upper bound on the number of $n \times n$ Hadamard matrices and (ii) we improve a recent bound of Deneanu and Vu on the probability of normality of a random $\{\pm 1\}$ matrix.
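The Odlyzko rank bound mentioned above is easy to see on a small instance: the $\{\pm 1\}$ solutions of $Ax = b$ lie in an affine subspace of dimension $n - \mathrm{rank}(A)$, and such a subspace contains at most $2^{n - \mathrm{rank}(A)}$ sign vectors. A brute-force illustration (not from the paper):

```python
import itertools
import numpy as np

def count_pm1_solutions(A, b):
    """Brute-force count of x in {-1, +1}^n with A @ x == b."""
    n = A.shape[1]
    return sum(
        1 for x in itertools.product((-1, 1), repeat=n)
        if np.array_equal(A @ np.array(x), b)
    )

# One equation in four +/-1 variables: x1 + x2 + x3 + x4 = 0.
A = np.array([[1, 1, 1, 1]])
b = np.array([0])
count = count_pm1_solutions(A, b)      # solutions have exactly two +1s
odlyzko = 2 ** (A.shape[1] - np.linalg.matrix_rank(A))
```

Here `count` is $\binom{4}{2} = 6$, safely below the Odlyzko bound $2^{4-1} = 8$; the Halász-type inequalities of the paper improve on this kind of bound when the number of equations grows.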


Author(s):  
Mark A. Burgess ◽  
Archie C. Chapman

The Shapley value is a well-recognised method for dividing the value of joint effort in cooperative games. However, computing the Shapley value is known to be computationally hard, so stratified sample-based estimation is sometimes used. For this task, we provide two contributions to the state of the art. First, we derive a novel concentration inequality that is tailored to stratified Shapley value estimation using sample variance information. Second, by sequentially choosing samples to minimize our inequality, we develop a new and more efficient method of sampling to estimate the Shapley value. We evaluate our sampling method on a suite of test cooperative games, and our results demonstrate that it outperforms or is competitive with existing stratified sample-based approaches to estimating the Shapley value.
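Stratified Shapley value estimation, the setting this paper works in, groups marginal-contribution samples by coalition size and averages the strata. A minimal sketch of a generic stratified Monte Carlo estimator follows; it uses fixed per-stratum sample counts rather than the paper's variance-adaptive, inequality-driven allocation, and the toy game is illustrative:

```python
import random

def stratified_shapley(value, players, samples_per_stratum=200, seed=0):
    """Stratified Monte Carlo estimate of Shapley values.

    For each player i and coalition size k, average the marginal
    contribution value(S | {i}) - value(S) over random coalitions S of
    size k drawn from the other players, then average over strata.
    Generic sketch, not the paper's adaptive sampling scheme.
    """
    rng = random.Random(seed)
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        stratum_means = []
        for k in range(n):                       # one stratum per size
            total = 0.0
            for _ in range(samples_per_stratum):
                S = frozenset(rng.sample(others, k))
                total += value(S | {i}) - value(S)
            stratum_means.append(total / samples_per_stratum)
        phi[i] = sum(stratum_means) / n          # average the strata
    return phi

# Toy game: a coalition's value is its size (every player contributes 1).
players = [0, 1, 2]
est = stratified_shapley(lambda S: len(S), players)
```

In the toy game every marginal contribution equals 1, so each estimate is exactly 1 and the estimates sum to the grand-coalition value, as Shapley values must.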


Author(s):  
Moritz Moeller ◽  
Tino Ullrich

Abstract In this paper we study $L_2$-norm sampling discretization and sampling recovery of complex-valued functions in RKHS on $D \subset \mathbb{R}^d$ based on random function samples. We only assume the finite trace of the kernel (Hilbert–Schmidt embedding into $L_2$) and provide several concrete estimates with precise constants for the corresponding worst-case errors. In general, our analysis does not need any additional assumptions and also covers the case of non-Mercer kernels and non-separable RKHS. The fail probability is controlled and decays polynomially in $n$, the number of samples. Under the mild additional assumption of separability we observe improved rates of convergence related to the decay of the singular values. Our main tool is a spectral norm concentration inequality for infinite complex random matrices with independent rows, complementing earlier results by Rudelson, Mendelson, Pajor, Oliveira and Rauhut.
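The recovery-from-random-samples setup can be illustrated in a finite-dimensional stand-in: draw random sample points, then recover a function by least squares onto a small basis. This is only an illustration of the sampling-recovery mechanism (the paper's setting is a general RKHS with kernel-dependent constants); the target and basis below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Recover f(x) = sin(2 pi x) on [0, 1] from n random samples by least
# squares onto the first few trigonometric basis functions -- a
# finite-dimensional stand-in for the RKHS setting of the paper.
n = 200
x = rng.uniform(0.0, 1.0, n)          # random sample points
y = np.sin(2 * np.pi * x)             # noiseless function samples

# design matrix: 1, cos(2 pi k x), sin(2 pi k x) for k = 1..3
cols = [np.ones_like(x)]
for k in range(1, 4):
    cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = np.linalg.norm(A @ coef - y)
```

Since the target lies in the span of the basis, the least-squares fit recovers it exactly up to rounding (the coefficient of the $\sin(2\pi x)$ column is 1, the rest vanish); the paper's concentration inequality controls how many random samples make such a recovery stable in the worst case.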


Author(s):  
Franck Barthe ◽  
Michał Strzelecki

Abstract Probability measures satisfying a Poincaré inequality are known to enjoy a dimension-free concentration inequality with exponential rate. A celebrated result of Bobkov and Ledoux shows that a Poincaré inequality automatically implies a modified logarithmic Sobolev inequality. As a consequence the Poincaré inequality ensures a stronger dimension-free concentration property, known as two-level concentration. We show that a similar phenomenon occurs for the Latała–Oleszkiewicz inequalities, which were devised to uncover dimension-free concentration with rate between exponential and Gaussian. Motivated by the search for counterexamples to related questions, we also develop analytic techniques to study functional inequalities for probability measures on the line with wild potentials.


Author(s):  
JACOB FOX ◽  
MATTHEW KWAN ◽  
LISA SAUERMANN

Abstract We prove several different anti-concentration inequalities for functions of independent Bernoulli-distributed random variables. First, motivated by a conjecture of Alon, Hefetz, Krivelevich and Tyomkyn, we prove some “Poisson-type” anti-concentration theorems that give bounds of the form 1/e + o(1) for the point probabilities of certain polynomials. Second, we prove an anti-concentration inequality for polynomials with nonnegative coefficients which extends the classical Erdős–Littlewood–Offord theorem and improves a theorem of Meka, Nguyen and Vu for polynomials of this type. As an application, we prove some new anti-concentration bounds for subgraph counts in random graphs.
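The classical Erdős–Littlewood–Offord theorem extended here bounds the point probabilities of $\sum_i \varepsilon_i a_i$ (independent uniform signs $\varepsilon_i$, nonzero coefficients $a_i$) by $\binom{n}{\lfloor n/2 \rfloor} 2^{-n}$. A brute-force check on a small instance (illustrative only, not related to the paper's polynomial extensions):

```python
from itertools import product
from math import comb

def max_point_probability(a):
    """sup_x P(sum_i eps_i * a_i = x) over independent uniform signs,
    computed by exhaustive enumeration (feasible for small n)."""
    counts = {}
    for eps in product((-1, 1), repeat=len(a)):
        s = sum(e * ai for e, ai in zip(eps, a))
        counts[s] = counts.get(s, 0) + 1
    return max(counts.values()) / 2 ** len(a)

# Erdos-Littlewood-Offord bound: at most C(n, n//2) / 2^n for any
# nonzero coefficients; it is attained when all a_i are equal.
a = [1, 2, 3, 4, 5, 6, 7, 8]
n = len(a)
elo_bound = comb(n, n // 2) / 2 ** n
p_max = max_point_probability(a)
```

Distinct coefficients spread the sum over more values, so `p_max` falls well below `elo_bound`, while `max_point_probability([1] * 8)` attains the bound exactly; the paper's results play the analogous game for polynomials rather than linear forms.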


2020 ◽  
Vol 36 (4) ◽  
pp. 658-706 ◽  
Author(s):  
Andrii Babii

Abstract This article develops inferential methods for a very general class of ill-posed models in econometrics, encompassing the nonparametric instrumental variable regression, various functional regressions, and density deconvolution. We focus on uniform confidence sets for the parameter of interest estimated with Tikhonov regularization, as in Darolles et al. (2011, Econometrica 79, 1541–1565). Since it is impossible to base inference on the central limit theorem in this setting, we develop two alternative approaches relying on concentration inequalities and bootstrap approximations. We show that the expected diameters and coverage properties of the resulting sets are uniformly valid over a large class of models; that is, the constructed confidence sets are honest. Monte Carlo experiments illustrate that the introduced confidence sets have reasonable width and coverage properties. Using U.S. data, we provide uniform confidence sets for Engel curves for various commodities.
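Tikhonov regularization, the estimator the article builds inference around, replaces the unstable inverse of an ill-posed operator $K$ by $f_\alpha = (K^\top K + \alpha I)^{-1} K^\top g$. A generic numerical sketch on a classic ill-conditioned system (not the article's estimator; the Hilbert-matrix example and parameter values are illustrative):

```python
import numpy as np

n = 12
# Hilbert matrix: a classic severely ill-conditioned linear operator
K = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
f_true = np.ones(n)
rng = np.random.default_rng(0)
g = K @ f_true + 1e-8 * rng.standard_normal(n)   # tiny observation noise

def tikhonov(K, g, alpha):
    """f_alpha = argmin ||K f - g||^2 + alpha ||f||^2."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ g)

f_naive = np.linalg.solve(K, g)       # unregularized: noise is amplified
f_reg = tikhonov(K, g, alpha=1e-10)   # regularized: amplification capped

err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
```

Even noise of size $10^{-8}$ destroys the naive solution because the condition number of the Hilbert matrix is astronomical, while the regularized solution stays close to the truth; the article's concentration and bootstrap machinery quantifies that stability uniformly to build honest confidence sets.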


Author(s):  
F. Baudier ◽  
G. Lancien ◽  
P. Motakis ◽  
Th. Schlumprecht

We prove that the class of reflexive asymptotic-$c_0$ Banach spaces is coarsely rigid, meaning that if a Banach space $X$ coarsely embeds into a reflexive asymptotic-$c_0$ space $Y$, then $X$ is also reflexive and asymptotic-$c_0$. In order to achieve this result, we provide a purely metric characterization of this class of Banach spaces. This metric characterization takes the form of a concentration inequality for Lipschitz maps on the Hamming graphs, which is rigid under coarse embeddings. Using an example of a quasi-reflexive asymptotic-$c_0$ space, we show that this concentration inequality is not equivalent to the non-equi-coarse embeddability of the Hamming graphs.


Entropy ◽  
2019 ◽  
Vol 21 (12) ◽  
pp. 1144
Author(s):  
Salimeh Yasaei Sekeh ◽  
Morteza Noshad ◽  
Kevin R. Moon ◽  
Alfred O. Hero

Bounding the best achievable error probability for binary classification problems is relevant to many applications, including machine learning, signal processing, and information theory. Many bounds on the Bayes binary classification error rate depend on information divergences between the pair of class distributions. Recently, the Henze–Penrose (HP) divergence has been proposed for bounding classification error probability. We consider the problem of empirically estimating the HP-divergence from random samples. We derive a bound on the convergence rate for the Friedman–Rafsky (FR) estimator of the HP-divergence, which is related to a multivariate runs statistic for testing between two distributions. The FR estimator is derived from a multicolored Euclidean minimal spanning tree (MST) that spans the merged samples, and we obtain a concentration inequality for it. We validate our results experimentally and illustrate their application to real datasets.
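The Friedman–Rafsky statistic at the heart of this estimator counts the edges of the pooled-sample Euclidean MST that join points from different samples: few cross-edges indicate well-separated distributions. A pure-Python sketch computing just this cross-edge count via Prim's algorithm (illustrative; it omits the normalization that turns the count into an HP-divergence estimate):

```python
import random

def friedman_rafsky(X, Y):
    """Number of edges in the Euclidean MST of the pooled sample that
    join a point of X to a point of Y (the Friedman-Rafsky statistic)."""
    pts = X + Y
    labels = [0] * len(X) + [1] * len(Y)
    n = len(pts)

    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    # Prim's algorithm on the complete Euclidean graph, O(n^2)
    in_tree = [False] * n
    best = [float("inf")] * n      # cheapest connection cost into the tree
    parent = [-1] * n
    best[0] = 0.0
    cross = 0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0 and labels[u] != labels[parent[u]]:
            cross += 1             # MST edge (u, parent[u]) crosses samples
        for v in range(n):
            if not in_tree[v] and d2(pts[u], pts[v]) < best[v]:
                best[v], parent[v] = d2(pts[u], pts[v]), u
    return cross

rng = random.Random(0)
# two well-separated Gaussian clouds: expect very few cross-edges
X = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(30)]
Y = [(rng.gauss(10, 1), rng.gauss(10, 1)) for _ in range(30)]
fr = friedman_rafsky(X, Y)
```

For overlapping samples the cross-edge count grows toward its null expectation, which is what the multivariate runs test and the HP-divergence estimator exploit; the paper's concentration inequality controls the fluctuations of exactly this count.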

