Concentration Inequalities
Recently Published Documents

TOTAL DOCUMENTS: 162 (five years: 44)
H-INDEX: 18 (five years: 2)

2021, pp. 109298
Author(s): M. Ashraf Bhat, G. Sankara Raju Kosuru

Author(s): Holger Sambale, Arthur Sinulis

Abstract: We present concentration inequalities on the multislice which are based on (modified) log-Sobolev inequalities. These include bounds for convex functions and multilinear polynomials. As an application, we show concentration results for the triangle count in the G(n, M) Erdős–Rényi model that resemble known bounds in the G(n, p) case. Moreover, we give a proof of Talagrand’s convex distance inequality for the multislice. Interpreting the multislice in a sampling-without-replacement context, we furthermore present concentration results for n out of N sampling without replacement. Based on a bounded difference inequality involving the finite-sampling correction factor $$1 - (n/N)$$, we present an easy proof of Serfling’s inequality with a slightly worse factor in the exponent, as well as a sub-Gaussian right tail for the Kolmogorov distance between the empirical measure and the true distribution of the sample.
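For context, Serfling’s inequality, which this abstract reproves with a slightly worse factor in the exponent, controls the sample mean under sampling without replacement. One standard form (exact constants vary slightly across statements) reads: if $$X_1, \dots, X_n$$ are drawn without replacement from a population of $$N$$ values in $$[a, b]$$ with mean $$\mu$$, then

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n} X_i - \mu \ge t\right) \le \exp\left(-\frac{2nt^2}{\left(1 - \frac{n-1}{N}\right)(b-a)^2}\right),$$

where $$1 - \frac{n-1}{N}$$ is the finite-sampling correction factor; the bounded difference inequality mentioned above involves the closely related factor $$1 - (n/N)$$.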


Author(s): Jacob Fox, Matthew Kwan, Lisa Sauermann

Abstract: We prove several different anti-concentration inequalities for functions of independent Bernoulli-distributed random variables. First, motivated by a conjecture of Alon, Hefetz, Krivelevich and Tyomkyn, we prove some “Poisson-type” anti-concentration theorems that give bounds of the form $$1/e + o(1)$$ for the point probabilities of certain polynomials. Second, we prove an anti-concentration inequality for polynomials with nonnegative coefficients which extends the classical Erdős–Littlewood–Offord theorem and improves a theorem of Meka, Nguyen and Vu for polynomials of this type. As an application, we prove some new anti-concentration bounds for subgraph counts in random graphs.
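The classical Erdős–Littlewood–Offord theorem extended here can be stated as follows: if $$a_1, \dots, a_n$$ are nonzero reals and $$\varepsilon_1, \dots, \varepsilon_n$$ are independent uniform random signs, then

$$\sup_{x \in \mathbb{R}} \mathbb{P}\left(\sum_{i=1}^{n} \varepsilon_i a_i = x\right) \le \binom{n}{\lfloor n/2 \rfloor} 2^{-n} = O\left(n^{-1/2}\right).$$

In other words, a linear form in independent signs places at most roughly $$n^{-1/2}$$ probability on any single point; the result above gives an analogous statement for polynomials with nonnegative coefficients.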


Author(s): Yuansi Chen

Abstract: We prove an almost constant lower bound on the isoperimetric coefficient in the KLS conjecture. The lower bound has the dimension dependency $$d^{-o_d(1)}$$. When the dimension is large enough, our lower bound is tighter than the previous best bound, which has the dimension dependency $$d^{-1/4}$$. Improving the current best lower bound on the isoperimetric coefficient in the KLS conjecture has many implications, including improvements of the current best bounds in Bourgain’s slicing conjecture and in the thin-shell conjecture, better concentration inequalities for Lipschitz functions of log-concave measures, and better mixing time bounds for MCMC sampling algorithms on log-concave measures.
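For reference, the isoperimetric (Cheeger) coefficient of a probability measure $$\mu$$ on $$\mathbb{R}^d$$ may be written as

$$\psi_\mu = \inf_{A \subseteq \mathbb{R}^d} \frac{\mu^{+}(\partial A)}{\min\{\mu(A),\, 1 - \mu(A)\}},$$

where $$\mu^{+}(\partial A)$$ denotes the boundary measure of $$A$$. The KLS conjecture asserts that $$\psi_\mu$$ is bounded below by a universal constant, independent of $$d$$, over all isotropic log-concave measures; the result above establishes $$\psi_\mu \ge d^{-o_d(1)}$$ in this setting.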

