Sample Complexity Bounds on Differentially Private Learning via Communication Complexity

2015 ◽  
Vol 44 (6) ◽  
pp. 1740-1764 ◽  
Author(s):  
Vitaly Feldman ◽  
David Xiao


Author(s):
Philipp Trunschke ◽  
Martin Eigel ◽  
Reinhold Schneider

We consider best approximation problems in a nonlinear subset $\mathcal{M}$ of a Banach space of functions $(\mathcal{V}, \|\cdot\|)$. The norm is assumed to be a generalization of the $L^2$-norm for which only a weighted Monte Carlo estimate $\|\cdot\|_n$ can be computed. The objective is to obtain an approximation $v \in \mathcal{M}$ of an unknown function $u \in \mathcal{V}$ by minimizing the empirical norm $\|u - v\|_n$. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the nonlinear least squares setting. Several model classes are examined where analytical statements can be made about the RIP, and the results are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
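As a rough illustration of the kind of problem described above (a minimal sketch under assumed choices, not the authors' method), the following Python snippet draws samples from an assumed sampling density, forms the weighted Monte Carlo estimate of the squared error, and minimizes it over a small nonlinear model class. The target function, the Gaussian-bump model class, the uniform sampling density, and the sample size are all invented for illustration.

```python
# Illustrative sketch (not the authors' code): minimize a weighted Monte Carlo
# estimate of the L2 error over a simple nonlinear model class.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def target(x):
    # Stand-in for the unknown function u (assumed here for illustration).
    return np.sin(np.pi * x) * np.exp(-x**2)

# Sampling density rho on [-1, 1]; here uniform, so the weights w_i = 1/rho(x_i) are constant.
n = 200
x = rng.uniform(-1.0, 1.0, size=n)
w = np.full(n, 2.0)            # 1 / (uniform density 0.5 on [-1, 1])
u = target(x)

def model(theta, x):
    # Nonlinear model class: a sum of two Gaussian bumps with free centers, widths, and weights.
    a1, c1, s1, a2, c2, s2 = theta
    return a1 * np.exp(-((x - c1) / s1) ** 2) + a2 * np.exp(-((x - c2) / s2) ** 2)

def empirical_error(theta):
    # Weighted Monte Carlo estimate of ||u - v||^2: (1/n) * sum_i w_i * (u(x_i) - v(x_i))^2
    r = u - model(theta, x)
    return np.mean(w * r**2)

theta0 = np.array([1.0, -0.5, 0.5, 1.0, 0.5, 0.5])
res = minimize(empirical_error, theta0, method="L-BFGS-B")
print("empirical best-approximation error:", res.fun)
```

A non-uniform sampling density would only change how the points x are drawn and how the weights w are computed; that is where the optimal-sampling discussion in the abstract enters.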


2020 ◽  
Vol 9 (2) ◽  
pp. 473-504 ◽  
Author(s):  
Noah Golowich ◽  
Alexander Rakhlin ◽  
Ohad Shamir

Abstract We study the sample complexity of learning neural networks by providing new bounds on their Rademacher complexity, assuming norm constraints on the parameter matrix of each layer. Compared to previous work, these complexity bounds have improved dependence on the network depth and, under some additional assumptions, are fully independent of the network size (both depth and width). These results are derived using some novel techniques, which may be of independent interest.
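For orientation, a standard norm-based bound of the kind this line of work improves on (a generic textbook-style form given here for illustration, not the paper's result) controls the Rademacher complexity of depth-$d$ networks with 1-Lipschitz activations, Frobenius-norm bounds $M_j$ on the layer matrices, and inputs of norm at most $B$ roughly as

$$\mathcal{R}_n(\mathcal{H}) \;\lesssim\; \frac{B \prod_{j=1}^{d} M_j}{\sqrt{n}} \cdot \mathrm{poly}(d),$$

where $n$ is the sample size. The results above replace the $\mathrm{poly}(d)$ factor with a much milder dependence on depth and, under additional assumptions, remove the dependence on depth and width altogether.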


2002 ◽  
Vol 11 (04) ◽  
pp. 499-511 ◽  
Author(s):  
ARTURO HERNÁNDEZ-AGUIRRE ◽  
CRIS KOUTSOUGERAS ◽  
BILL BUCKLES

We find new sample complexity bounds for real-valued function learning tasks under the uniform distribution by means of linear neural networks. These bounds, tighter than the distribution-free ones reported elsewhere in the literature, apply to simple functional link networks and radial basis neural networks.


2020 ◽  
Vol 67 (6) ◽  
pp. 1-42
Author(s):  
Hassan Ashtiani ◽  
Shai Ben-David ◽  
Nicholas J. A. Harvey ◽  
Christopher Liaw ◽  
Abbas Mehrabian ◽  
...  

2012 ◽  
Vol 38 (3) ◽  
pp. 479-526 ◽  
Author(s):  
Shay B. Cohen ◽  
Noah A. Smith

Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk.
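As a concrete, hedged illustration of the supervised case (a generic maximum-likelihood sketch, not the algorithm from the paper): when derivations are fully observed, minimizing the empirical log-loss over rule probabilities has the familiar count-and-normalize closed form.

```python
# Illustrative sketch: supervised ERM under the log-loss for a PCFG reduces to
# maximum likelihood, i.e. count-and-normalize. The toy treebank is invented.
import math
from collections import Counter, defaultdict

# Each observed derivation is the list of rules (lhs, rhs) it uses.
treebank = [
    [("S", ("NP", "VP")), ("NP", ("she",)), ("VP", ("V", "NP")),
     ("V", ("saw",)), ("NP", ("stars",))],
    [("S", ("NP", "VP")), ("NP", ("stars",)), ("VP", ("V",)), ("V", ("fell",))],
]

rule_counts = Counter(rule for tree in treebank for rule in tree)
lhs_totals = defaultdict(int)
for (lhs, _), c in rule_counts.items():
    lhs_totals[lhs] += c

# Rule probabilities that minimize the empirical log-loss on fully observed trees.
probs = {rule: c / lhs_totals[rule[0]] for rule, c in rule_counts.items()}

# Average log-loss (empirical risk) of the fitted grammar on the same treebank.
risk = -sum(math.log(probs[r]) for tree in treebank for r in tree) / len(treebank)
print(probs)
print("empirical log-loss:", round(risk, 3))
```

In the unsupervised setting the derivations are latent, the closed form disappears, and that is where the NP-hardness result and the EM-style approximation mentioned in the abstract come in.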

