Generalization Error Bounds Using Unlabeled Data

Author(s):  
Matti Kääriäinen
Author(s):  
Tingran Gao ◽  
Shahab Asoodeh ◽  
Yi Huang ◽  
James Evans

Inspired by recent interest in developing machine learning and data mining algorithms on hypergraphs, we investigate in this paper a semi-supervised learning algorithm that propagates "soft labels" (e.g., probability distributions, class membership scores) over hypergraphs by means of optimal transport. Borrowing insights from Wasserstein propagation on graphs [Solomon et al. 2014], we re-formulate the label propagation procedure as a message-passing algorithm, which lends itself naturally to a generalization applicable to hypergraphs through Wasserstein barycenters. Furthermore, in a PAC learning framework, we provide generalization error bounds for propagating one-dimensional distributions on graphs and hypergraphs using the 2-Wasserstein distance, by establishing the algorithmic stability of the proposed semi-supervised learning algorithm. These theoretical results also shed new light on a deeper understanding of Wasserstein propagation on graphs.
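To make the barycentric message-passing idea concrete, below is a minimal numpy sketch (not the authors' code) of propagating one-dimensional soft labels over a hypergraph under the 2-Wasserstein metric. It relies only on the standard fact that the 2-Wasserstein barycenter of one-dimensional distributions is obtained by averaging their quantile functions; the toy hypergraph, the grid resolution, the clamping of labeled vertices, and the fixed-point update schedule are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

# sampled quantile functions on a shared probability grid
grid = np.linspace(0.01, 0.99, 32)

def uniform_qf(a, b):
    """Quantile function of Uniform(a, b) sampled on the grid."""
    return a + (b - a) * grid

def w2_barycenter(quantile_rows):
    """2-Wasserstein barycenter of 1-D distributions = average of quantile functions."""
    return np.mean(quantile_rows, axis=0)

def propagate(hyperedges, labeled, q, n_iter=100):
    """Fixed-point iteration: hyperedges emit barycenters, vertices average incident messages."""
    q = q.copy()
    for _ in range(n_iter):
        messages = [w2_barycenter(q[list(e)]) for e in hyperedges]
        new_q = q.copy()
        for v in range(q.shape[0]):
            if v in labeled:
                continue                        # labeled vertices stay clamped
            incident = [m for e, m in zip(hyperedges, messages) if v in e]
            if incident:
                new_q[v] = w2_barycenter(np.stack(incident))
        q = new_q
    return q

# toy hypergraph: 5 vertices, two overlapping hyperedges
hyperedges = [(0, 1, 2), (2, 3, 4)]
labeled = {0: uniform_qf(0.0, 1.0), 4: uniform_qf(2.0, 3.0)}
q0 = np.tile(uniform_qf(0.0, 3.0), (5, 1))      # uninformative initial soft labels
for v, qf in labeled.items():
    q0[v] = qf

q_final = propagate(hyperedges, labeled, q0)
print(q_final.mean(axis=1))                     # means interpolate between 0.5 and 2.5
```

In this toy run the unlabeled vertices converge to Wasserstein interpolants of the two clamped soft labels, which is the qualitative behavior the abstract describes for graph Wasserstein propagation lifted to hyperedges.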


2020 ◽  
Vol 27 ◽  
pp. 326-330 ◽  
Author(s):  
Pere Gimenez-Febrer ◽  
Alba Pages-Zamora ◽  
Georgios B. Giannakis

2020 ◽  
Vol 34 (04) ◽  
pp. 3349-3356
Author(s):  
Yuan Cao ◽  
Quanquan Gu

Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work has shown theoretically that, with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing generalization bounds is that they are based on uniform convergence and are independent of the training algorithm. In this work, we derive an algorithm-dependent generalization error bound for deep ReLU networks, and show that under certain assumptions on the data distribution, gradient descent (GD) with proper random initialization is able to train a sufficiently over-parameterized DNN to achieve arbitrarily small generalization error. Our work sheds light on the good generalization performance of over-parameterized deep neural networks.
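As a companion illustration of the phenomenon (not the paper's construction or proof setting), the following numpy sketch trains a heavily over-parameterized two-layer ReLU network by gradient descent from random initialization on a simple noiseless target and reports train and test mean squared error. The width, step size, data model, and the choice to train only the hidden layer are assumptions made for the sake of a small runnable toy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_test, d, m = 200, 1000, 10, 4096            # train size, test size, input dim, width

# synthetic data: inputs on the unit sphere, noiseless linear target
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)

def sample(k):
    X = rng.standard_normal((k, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X, X @ beta

X, y = sample(n)
X_test, y_test = sample(n_test)

# random initialization; only the hidden weights W are trained, output signs a are fixed
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

def predict(W, X):
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

lr = 2.0
for step in range(2000):
    Z = X @ W.T                                  # pre-activations, shape (n, m)
    r = np.maximum(Z, 0.0) @ a / np.sqrt(m) - y  # residuals f(x_i) - y_i
    # gradient of the mean squared loss (1/2n) * sum r_i^2 with respect to W
    G = ((Z > 0) * (r[:, None] * a[None, :])).T @ X / (n * np.sqrt(m))
    W -= lr * G

print("train MSE:", np.mean((predict(W, X) - y) ** 2))
print("test  MSE:", np.mean((predict(W, X_test) - y_test) ** 2))
```

On this well-structured target, both printed errors are expected to be small despite the network having far more parameters than training samples, which is the qualitative behavior the abstract's theory addresses.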


Author(s):  
Amedeo Roberto Esposito ◽  
Michael Gastpar ◽  
Ibrahim Issa
