Evaluating and Diagnosing Convergence for Stochastic Gradient Langevin Dynamics

Author(s): Sergio Hernandez ◽ Juan Luis Lopez

Bernoulli ◽ 2021 ◽ Vol 27 (1) ◽ pp. 1-33
Author(s): M. Barkhagen ◽ N.H. Chau ◽ É. Moulines ◽ M. Rásonyi ◽ S. Sabanis ◽ ...

2021 ◽ Vol 3 (3) ◽ pp. 959-986
Author(s): Ngoc Huy Chau ◽ Éric Moulines ◽ Miklós Rásonyi ◽ Sotirios Sabanis ◽ Ying Zhang

2020 ◽ Vol 34 (04) ◽ pp. 6372-6379
Author(s): Bingzhe Wu ◽ Chaochao Chen ◽ Shiwan Zhao ◽ Cen Chen ◽ Yuan Yao ◽ ...

Bayesian deep learning has recently been regarded as a natural way to characterize the weight uncertainty of deep neural networks (DNNs). Stochastic Gradient Langevin Dynamics (SGLD) is an effective method for enabling Bayesian deep learning on large-scale datasets. Previous theoretical studies have established various appealing properties of SGLD, ranging from convergence guarantees to generalization bounds. In this paper, we study the properties of SGLD from the novel perspective of membership privacy protection, i.e., resistance to membership attacks. The membership attack, which aims to determine whether a specific sample was used to train a given DNN model, has emerged as a common threat against deep learning algorithms. To this end, we build a theoretical framework for analyzing the information leakage (w.r.t. the training dataset) of a model trained using SGLD. Based on this framework, we demonstrate that SGLD can prevent information leakage about the training dataset to a certain extent. Moreover, our theoretical analysis extends naturally to other Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) methods. Empirical results on different datasets and models verify our theoretical findings and suggest that SGLD not only reduces information leakage but also improves the generalization ability of DNN models in real-world applications.
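To make the algorithm behind these results concrete, here is a minimal, self-contained sketch of the standard SGLD update (not the paper's implementation): each step takes a half-step along an unbiased minibatch estimate of the log-posterior gradient, with the minibatch likelihood term rescaled by N/n, and then injects Gaussian noise with variance equal to the step size. The toy target, a one-dimensional Gaussian mean with a Gaussian prior, is chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations from a Gaussian with unknown mean.
N, true_mean, obs_var = 1000, 2.0, 1.0
data = rng.normal(true_mean, np.sqrt(obs_var), size=N)

def sgld_sample(data, n_steps=5000, batch_size=32, step_size=1e-3,
                prior_var=10.0, obs_var=1.0):
    """Draw approximate posterior samples of the mean via SGLD."""
    n_data = len(data)
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        # Unbiased stochastic gradient of the log posterior:
        # prior term plus minibatch likelihood term rescaled by N/n.
        grad_log_prior = -theta / prior_var
        grad_log_lik = (n_data / batch_size) * np.sum(batch - theta) / obs_var
        grad = grad_log_prior + grad_log_lik
        # Langevin update: half-step along the gradient plus injected
        # Gaussian noise with variance equal to the step size.
        theta += 0.5 * step_size * grad + rng.normal(0.0, np.sqrt(step_size))
        samples.append(theta)
    return np.array(samples)

samples = sgld_sample(data)
posterior_mean = samples[2000:].mean()  # discard burn-in
```

The injected noise is what distinguishes SGLD from plain SGD: without it the iterates collapse to a point estimate, while with it they form an approximate sample from the posterior, which is also the mechanism the abstract credits for limiting information leakage about individual training points.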

