Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset

Author(s): Zhenyu Wu, Haotao Wang, Zhaowen Wang, Hailin Jin, Zhangyang Wang
2021, Vol. 13 (4), pp. 94
Author(s): Haokun Fang, Quan Qian

Privacy protection has become an important concern alongside the great success of machine learning. This paper proposes PFMLP, a multi-party privacy-preserving machine learning framework based on partially homomorphic encryption and federated learning. The core idea is that the learning parties exchange only gradients encrypted under homomorphic encryption. In experiments, the model trained with PFMLP achieves almost the same accuracy as conventionally trained models, with a deviation of less than 1%. To address the computational overhead of homomorphic encryption, we use an improved Paillier algorithm that speeds up training by 25–28%. The paper also examines in detail the effects of encryption key length, the learning network structure, the number of learning clients, and other factors.
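To make the aggregation step concrete, the following is a minimal Python sketch, not the authors' PFMLP code and not their improved Paillier variant: a toy Paillier cryptosystem whose additive homomorphism lets an aggregator sum the clients' encrypted gradients without ever decrypting them. The key sizes and the integer quantization of gradients are illustrative assumptions only.

```python
# Toy sketch of Paillier-style encrypted gradient aggregation.
# NOT secure: key sizes are far too small for real use; it only
# illustrates that multiplying ciphertexts adds the plaintexts.
import math
import random

def keygen(p=2_147_483_647, q=2_147_483_629):  # toy primes, NOT secure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard choice; makes encryption cheap
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)     # fresh randomness per ciphertext
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

# Each client encrypts its (integer-quantized) gradient; the aggregator
# multiplies the ciphertexts, which corresponds to adding the plaintexts.
pub, priv = keygen()
grads = [12, 7, 30]                      # three clients' quantized gradients
agg = 1
for g_i in grads:
    agg = (agg * encrypt(pub, g_i)) % (pub[0] ** 2)
assert decrypt(priv, agg) == sum(grads)  # server never saw 12, 7, or 30
```

In a real deployment, floating-point gradients would be quantized to integers before encryption and the decrypted sum rescaled afterward; the quantization scheme here is left implicit for brevity.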


Author(s): Yang Zhao, Jianyi Zhang, Changyou Chen

Scalable Bayesian sampling plays an important role in modern machine learning, especially in fast-developing unsupervised (deep) learning models. While tremendous progress has been made with scalable Bayesian samplers such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD), the samples they generate are typically highly correlated, and their sample-generation processes are often criticized as inefficient. In this paper, we propose a novel self-adversarial learning framework that automatically learns a conditional generator to mimic the behavior of a Markov (transition) kernel. High-quality samples can then be generated efficiently by direct forward passes through the learned generator. Most importantly, the learning process adopts a self-learning paradigm, requiring no information about existing Markov kernels, e.g., knowledge of how to draw samples from them. Specifically, the framework learns to use current samples, drawn either from the generator or from pre-provided training data, to update the generator so that the generated samples progressively approach a target distribution; hence the name self-learning. Experiments on both synthetic and real datasets verify the advantages of the framework, which outperforms related methods in both sampling efficiency and sample quality.
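For context on the baselines named above, here is a minimal NumPy sketch of a Stein variational gradient descent (SVGD) update with an RBF kernel and the median bandwidth heuristic. It is not the paper's self-adversarial method; it illustrates the iterative, particle-based sampling that the proposed one-pass generator is designed to replace.

```python
# Minimal SVGD sketch: transport a set of particles toward a target
# distribution p(x) using only its score function grad log p(x).
import numpy as np

def svgd_step(x, grad_logp, step=0.05):
    """One SVGD update on the particle set x of shape (n, d)."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]          # diff[i, j] = x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)               # pairwise squared distances
    h = np.median(sq) / np.log(n + 1) + 1e-8      # median-heuristic bandwidth
    k = np.exp(-sq / h)                           # RBF kernel matrix k[i, j]
    glp = grad_logp(x)                            # target score at each particle
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j)      (attraction)
    #                          + grad_{x_j} k(x_j, x_i) ]       (repulsion)
    phi = (k @ glp + (2.0 / h) * np.sum(k[:, :, None] * diff, axis=1)) / n
    return x + step * phi

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2)) * 3 + 5             # start far from the target
for _ in range(500):
    x = svgd_step(x, lambda z: -z)                # target N(0, I): score is -x
print(x.mean(axis=0), x.std(axis=0))              # should approach ~[0 0], ~[1 1]
```

Note that producing each batch of samples requires many such iterative updates over the whole particle set, which is exactly the per-sample cost that a learned one-pass generator avoids.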

