A Robust Game-theoretical Federated Learning Framework with Joint Differential Privacy

Author(s):  
Lefeng Zhang ◽  
Tianqing Zhu ◽  
Ping Xiong ◽  
Wanlei Zhou ◽  
Philip Yu


2020 ◽  
Vol 16 (5) ◽  
pp. 155014772091969
Author(s):  
Hui Cao ◽  
Shubo Liu ◽  
Renfang Zhao ◽  
Xingxing Xiong

Wireless sensor network technology is increasingly popular and is applied across a wide range of Internet of Things systems. The Power Internet of Things, in particular, is an important and rapidly growing segment of the Internet of Things that benefits from wireless sensor networks for fine-grained information collection. At the same time, privacy risks are gradually being exposed, which is a widespread concern for electricity consumers. Non-intrusive load monitoring, notably, is a technique that recovers the state of appliances from energy consumption data alone, enabling an adversary to infer the behavioral privacy of residents. For electricity customers, applying local differential privacy in the local setting is undoubtedly more trustworthy than a centralized approach. Although traditional local differential privacy obfuscation mechanisms make it hard to control risk and balance privacy against utility, some existing obfuscation mechanisms based on artificial intelligence, called advanced obfuscation mechanisms, can achieve this trade-off. However, the large computing resources required to train the machine learning model are unaffordable for most Power Internet of Things terminals. To solve this problem, this article proposes IFed, a novel federated learning framework that lets the electricity provider, which normally has adequate computing resources, assist Power Internet of Things users. First, an optimized framework is proposed that incorporates the trade-off among local differential privacy, data utility, and resource consumption. Concurrently, the consequent problem of preserving privacy while transporting the machine learning model between the electricity provider and customers is noted and resolved. Last, users are categorized by level of privacy requirement, and a stronger privacy guarantee is provided for sensitive users. Formal local differential privacy analysis and experiments demonstrate that IFed can fulfill the privacy requirements of Power Internet of Things users.
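As a hedged illustration of the kind of traditional local differential privacy obfuscation the abstract contrasts with learned mechanisms, the sketch below perturbs a single bounded power reading with the Laplace mechanism. The reading range, wattage values, and epsilon levels are assumptions for the example, not parameters from the IFed paper.

```python
import numpy as np

def laplace_ldp_perturb(reading, epsilon, lower=0.0, upper=5000.0):
    """Report a bounded power reading (watts) under epsilon-LDP by
    adding Laplace noise scaled to the range width, which is the
    sensitivity of a value known to lie in [lower, upper]."""
    sensitivity = upper - lower
    noisy = reading + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    # Clipping is post-processing, so it does not weaken the guarantee.
    return float(np.clip(noisy, lower, upper))

# Tiered privacy levels: sensitive users get a smaller epsilon (more noise).
print(laplace_ldp_perturb(1200.0, epsilon=0.5))  # sensitive user
print(laplace_ldp_perturb(1200.0, epsilon=4.0))  # regular user
```

The clipped Laplace report illustrates why the trade-off is hard to control here: at small epsilon the noise scale dwarfs the signal, which is exactly the utility loss that motivates the learned obfuscation mechanisms the paper builds on.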


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Jianzhe Zhao ◽  
Keming Mao ◽  
Chenxi Huang ◽  
Yuyang Zeng

Secure and trusted cross-platform knowledge sharing is significant for modern intelligent data analysis. To address the trade-off between privacy and utility in complex federated learning, a novel differentially private federated learning framework is proposed. First, the impact of participants' data heterogeneity on global model accuracy is analyzed quantitatively using the 1-Wasserstein distance. Then, a multilevel, multiparticipant dynamic privacy-budget allocation method is designed to reduce the injected noise and efficiently improve utility. Finally, the two components are integrated into a novel adaptive differentially private federated learning algorithm (A-DPFL). Comprehensive experiments on redefined non-I.I.D. MNIST and CIFAR-10 datasets demonstrate the algorithm's superiority in model accuracy, convergence, and robustness.
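A minimal sketch of the heterogeneity-measurement step, assuming label distributions are the compared quantities: it computes the 1-Wasserstein distance between a client's empirical label histogram and the global one with scipy, then applies a purely hypothetical allocation rule (not the paper's A-DPFL schedule) that gives less heterogeneous clients a larger share of the privacy budget, and hence less noise.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def label_heterogeneity(client_labels, global_labels, num_classes=10):
    """1-Wasserstein distance between a client's empirical label
    distribution and the global label distribution."""
    support = np.arange(num_classes)
    c_hist = np.bincount(client_labels, minlength=num_classes) / len(client_labels)
    g_hist = np.bincount(global_labels, minlength=num_classes) / len(global_labels)
    return wasserstein_distance(support, support, c_hist, g_hist)

def allocate_budget(distances, total_epsilon):
    """Hypothetical rule: clients closer to the global distribution
    receive a larger budget share (less injected noise)."""
    weights = 1.0 / (1.0 + np.asarray(distances, dtype=float))
    return total_epsilon * weights / weights.sum()

# Two clients: one balanced, one skewed toward a single class.
rng = np.random.default_rng(0)
global_y = rng.integers(0, 10, 10_000)
balanced = rng.integers(0, 10, 1_000)
skewed = np.repeat(7, 1_000)
d = [label_heterogeneity(c, global_y) for c in (balanced, skewed)]
print(allocate_budget(d, total_epsilon=8.0))  # balanced client gets more
```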


2021 ◽  
Author(s):  
Shivam Kalra ◽  
Junfeng Wen ◽  
Jesse Cresswell ◽  
Maksims Volkovs ◽  
Hamid Tizhoosh

Institutions in highly regulated domains such as finance and healthcare often have restrictive rules around data sharing. Federated learning is a distributed learning framework that enables multi-institutional collaboration on decentralized data with improved protection for each collaborator's data privacy. In this paper, we propose a communication-efficient scheme for decentralized federated learning called ProxyFL, or proxy-based federated learning. Each participant in ProxyFL maintains two models: a private model and a publicly shared proxy model designed to protect the participant's privacy. Proxy models allow efficient information exchange among participants via the PushSum method, without the need for a centralized server. The proposed method eliminates a significant limitation of canonical federated learning by allowing model heterogeneity; each participant's private model can have any architecture. Furthermore, our protocol for communication by proxy leads to stronger privacy guarantees under differential privacy analysis. Experiments on popular image datasets and on a pan-cancer diagnostic problem using more than 30,000 high-quality gigapixel histology whole-slide images show that ProxyFL can outperform existing alternatives with far less communication overhead and stronger privacy.
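The PushSum aggregation that ProxyFL relies on can be illustrated on scalars; a minimal sketch follows, assuming a directed ring topology. In the actual protocol the quantities being mixed would be proxy-model parameters rather than single numbers.

```python
import numpy as np

def push_sum(values, adjacency, rounds=50):
    """Minimal PushSum gossip averaging on a directed graph.

    Each node i holds a value x_i and a weight w_i (initially 1).
    Every round it splits (x_i, w_i) equally among itself and its
    out-neighbors; the ratio x_i / w_i converges to the global mean."""
    n = len(values)
    x = np.array(values, dtype=float)
    w = np.ones(n)
    for _ in range(rounds):
        x_new, w_new = np.zeros(n), np.zeros(n)
        for i in range(n):
            targets = [i] + [j for j in range(n) if adjacency[i][j]]
            share_x, share_w = x[i] / len(targets), w[i] / len(targets)
            for j in targets:
                x_new[j] += share_x
                w_new[j] += share_w
        x, w = x_new, w_new
    return x / w  # each entry approaches mean(values)

# Directed ring of 4 nodes: every node's estimate converges to 2.5.
ring = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
print(push_sum([1.0, 2.0, 3.0, 4.0], ring))
```

Because PushSum only needs each node to send to its out-neighbors, no node ever has to collect the whole network's state, which is what removes the centralized server from the design.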


Author(s):  
Hongbin Liu ◽  
Jinyuan Jia ◽  
Neil Zhenqiang Gong

Differentially private machine learning trains models while protecting the privacy of sensitive training data. The key to obtaining differentially private models is to introduce noise/randomness into the training process. In particular, existing differentially private machine learning methods add noise to the training data, the gradients, the loss function, and/or the model itself. Bagging, a popular ensemble learning framework, randomly creates subsamples of the training data, trains a base model on each subsample using a base learner, and takes a majority vote among the base models when making predictions. Because it creates subsamples at random, Bagging has intrinsic randomness in its training process. Our major theoretical results show that this intrinsic randomness already makes Bagging differentially private, without the need for additional noise. Moreover, we prove that when no assumptions are made about the base learner, our derived privacy guarantees are tight. We empirically evaluate Bagging on MNIST and CIFAR10. Our experimental results demonstrate that Bagging achieves significantly higher accuracies than state-of-the-art differentially private machine learning methods with the same privacy budgets.
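A minimal sketch of the Bagging pipeline the analysis studies: random subsamples, one base model per subsample, and a majority vote at prediction time. The decision-tree base learner, the digits dataset, and the subsample size are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.tree import DecisionTreeClassifier

def train_bagging(X, y, n_models=10, subsample_size=200, seed=0):
    """Each base model sees only a random subsample; that sampling is
    the intrinsic randomness the privacy analysis exploits."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=subsample_size, replace=True)
        models.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))
    return models

def predict_majority(models, X):
    """Majority vote over the base models' predictions."""
    votes = np.stack([m.predict(X) for m in models]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

X, y = load_digits(return_X_y=True)
ensemble = train_bagging(X[:1500], y[:1500])
print((predict_majority(ensemble, X[1500:]) == y[1500:]).mean())
```

The intuition behind the privacy claim is visible in the structure: changing one training example can affect a base model only if that example is drawn into its subsample, so the vote of the ensemble is already a randomized function of any single record.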


1969 ◽  
Vol 12 (1) ◽  
pp. 185-192 ◽  
Author(s):  
John L. Locke

Ten children with high scores on an auditory memory span task were significantly better at imitating three non-English phones than 10 children with low auditory memory span scores. An additional 10 children with high scores on an oral stereognosis task were significantly better at imitating two of the three phones than 10 children with low oral stereognosis scores. Auditory memory span and oral stereognosis appear to be important subskills in the learning of new articulations, perhaps explaining their appearance in the literature as “etiologies” of disordered articulation. Although articulation development and the experimental acquisition of non-English phones have certain obvious differences, they seem to share some common processes, suggesting that the sound learning framework may be an efficacious technique for revealing otherwise inaccessible information.

