sparse vector
Recently Published Documents

TOTAL DOCUMENTS: 119 (FIVE YEARS: 32)
H-INDEX: 16 (FIVE YEARS: 2)

Author(s): Ganyu Qin, Hongyang Chen, Xuewan Zhang, Takuro Sato, Di Zhang

Entropy, 2021, Vol. 23(8), pp. 961
Author(s): Mijung Park, Margarita Vinaroz, Wittawat Jitkrittum

We develop a novel approximate Bayesian computation (ABC) framework, ABCDP, which produces differentially private (DP) approximate posterior samples. Our framework takes advantage of the sparse vector technique (SVT), widely studied in the differential privacy literature. SVT incurs a privacy cost only when a condition (whether a quantity of interest is above or below a threshold) is met. If the condition is met sparsely during the repeated queries, SVT can drastically reduce the cumulative privacy loss, unlike the usual case in which every query incurs a privacy loss. In ABC, the quantity of interest is the distance between the observed and simulated data, and only when the distance falls below a threshold is the corresponding prior sample accepted as a posterior sample. Hence, applying SVT to ABC is an organic way to transform an ABC algorithm into a privacy-preserving variant with minimal modification, while yielding posterior samples with a high privacy level. We theoretically analyze the interplay between the noise added for privacy and the accuracy of the posterior samples, apply ABCDP to several data simulators, and show the efficacy of the proposed framework.
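To make the SVT-plus-ABC mechanism concrete, the following minimal Python sketch implements rejection ABC with an SVT-style noisy threshold test. It is an illustrative reconstruction under stated assumptions, not the authors' ABCDP implementation: the names (abcdp_rejection, simulate, prior_sampler, distance), the Laplace noise calibration, and the unit-sensitivity assumption on the distance are all ours, and ABC's below-threshold test mirrors SVT's usual above-threshold form.

import numpy as np

def abcdp_rejection(observed, simulate, prior_sampler, distance,
                    threshold, epsilon, max_accepts, num_queries, seed=None):
    """Rejection ABC with an SVT-style noisy threshold test (sketch).

    Noise scales follow one common calibration of the sparse vector
    technique for up to `max_accepts` threshold answers, and the
    distance is assumed to have sensitivity 1; both are assumptions.
    """
    rng = np.random.default_rng(seed)
    c = max_accepts
    noisy_threshold = threshold + rng.laplace(scale=2.0 * c / epsilon)
    posterior_samples = []
    for _ in range(num_queries):
        theta = prior_sampler(rng)            # candidate from the prior
        simulated = simulate(theta, rng)      # forward-simulate data
        d = distance(observed, simulated)     # discrepancy to observed data
        noisy_d = d + rng.laplace(scale=4.0 * c / epsilon)
        if noisy_d <= noisy_threshold:        # the sparse "accept" event
            posterior_samples.append(theta)   # keep as a posterior sample
            if len(posterior_samples) >= c:   # accept budget exhausted
                break
            # Refresh the threshold noise after each accept, as in the
            # multi-answer ("Sparse") variant of SVT.
            noisy_threshold = threshold + rng.laplace(scale=2.0 * c / epsilon)
    return posterior_samples

The point the abstract makes is visible in the loop: the many rejected candidates consume no additional privacy budget; only the sparse acceptance events count against epsilon.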


Entropy, 2021, Vol. 23(6), pp. 722
Author(s): Xin Li, Dongya Wu

In this paper, the high-dimensional linear regression model is considered, where the covariates are measured with additive noise. Unlike most existing methods, which assume that the true covariates are fully observed, the results in this paper require only that the corrupted covariate matrix is observed. Using information-theoretic arguments, the minimax rates of convergence for estimation are investigated in terms of the ℓp (1 ≤ p < ∞) losses, under a general sparsity assumption on the underlying regression parameter and some regularity conditions on the observed covariate matrix. The established lower and upper bounds on the minimax risks agree up to constant factors when p = 2, and together they provide the information-theoretic limits of estimating a sparse vector in the high-dimensional linear errors-in-variables model. An estimator of the underlying parameter is also proposed and shown to be minimax optimal in the ℓ2-loss.
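Concretely, the errors-in-variables setting described above can be written as follows; the notation (y, X, Z, W, θ*) is a generic reconstruction rather than the authors' exact symbols, and the precise form of the ℓp risk may differ from the paper's.

% High-dimensional linear errors-in-variables model, generic notation:
% the true design X is never observed; the estimator sees only y and Z.
\begin{aligned}
  y &= X\theta^{*} + \varepsilon, \qquad \theta^{*} \in \mathbb{R}^{d}\ \text{sparse},\\
  Z &= X + W, \qquad W\ \text{additive measurement noise on the covariates},
\end{aligned}
% and the quantity bounded above and below is the minimax risk in \ell_p:
\inf_{\hat{\theta}}\ \sup_{\theta^{*}}\ \mathbb{E}\,\bigl\lVert \hat{\theta}(y, Z) - \theta^{*} \bigr\rVert_{p}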


2021, Vol. 49(3)
Author(s): L. Comminges, O. Collier, M. Ndaoud, A. B. Tsybakov

2021, pp. 1-13
Author(s): Ning Bi, Jun Tan, Wai-Shing Tang

In this paper, we provide a necessary condition and a sufficient condition under which any k-sparse vector x can be recovered from y = Ax via [Formula: see text] local minimization. Moreover, we further verify that the sufficient condition holds automatically when the restricted isometry constant of the measurement matrix A satisfies [Formula: see text]. Compared with the existing [Formula: see text] local recoverability condition [Formula: see text], this result shows that [Formula: see text] local recoverability admits a larger class of measurement matrices.
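For reference, the restricted isometry constant invoked above has a standard definition, recalled here in generic notation (the specific numerical bound proved in the paper is lost behind the [Formula: see text] placeholders): δ_k is the smallest constant such that

(1 - \delta_k)\,\lVert x \rVert_2^2 \;\le\; \lVert A x \rVert_2^2 \;\le\; (1 + \delta_k)\,\lVert x \rVert_2^2
\qquad \text{for every } k\text{-sparse } x \in \mathbb{R}^{n}.

Smaller δ_k means A acts more nearly as an isometry on sparse vectors, so a weaker (larger) admissible bound on δ_k certifies recovery for a larger class of measurement matrices, which is the sense of the final claim above.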

