Privacy-preserving Streaming Truth Discovery in Crowdsourcing with Differential Privacy

Author(s):  
Dan Wang ◽  
Ju Ren ◽  
Zhibo Wang ◽  
Xiaoyi Pang ◽  
Yaoxue Zhang ◽  
...  
2021 ◽  
Vol 18 (11) ◽  
pp. 42-60

Author(s):  
Ting Bao ◽  
Lei Xu ◽  
Liehuang Zhu ◽  
Lihong Wang ◽  
Ruiguang Li ◽  
...  

Author(s):  
Shushu Liu ◽  
An Liu ◽  
Zhixu Li ◽  
Guanfeng Liu ◽  
Jiajie Xu ◽  
...  

2021 ◽  
Author(s):  
Jude TCHAYE-KONDI ◽  
Yanlong Zhai ◽  
Liehuang Zhu

<div>We address privacy and latency issues in the edge/cloud computing environment while training a centralized AI model. In our setting, the edge devices are the only data source for the model trained on the central server. Current solutions for preserving privacy and reducing network latency rely on a pre-trained feature extractor deployed on the devices to extract only the important features from the sensitive dataset. However, finding a pre-trained model or a public dataset from which to build a feature extractor for certain tasks can be very challenging. With the large amount of data generated by edge devices, the edge environment does not lack data; rather, improper access to that data raises privacy concerns. In this paper, we present DeepGuess, a new privacy-preserving and latency-aware deep-learning framework. DeepGuess uses a new learning mechanism enabled by the AutoEncoder (AE) architecture, called Inductive Learning, which makes it possible to train a central neural network on the data produced by end devices while preserving their privacy. With inductive learning, sensitive data remains on the devices and is never explicitly involved in any backpropagation process. The AE's Encoder is deployed on the devices to extract and transfer important features to the server. To further enhance privacy, we propose a new locally differentially private algorithm that allows the edge devices to apply random noise to the features extracted from their sensitive data before they are transferred to an untrusted server. The experimental evaluation of DeepGuess demonstrates its effectiveness and ability to converge across a series of experiments.</div>
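The abstract above describes edge devices applying random noise to extracted features before uploading them. A minimal sketch of such a local-DP step using the Laplace mechanism follows; the function name, parameters, and bounded-sensitivity assumption are illustrative, not the paper's actual algorithm:

```python
import numpy as np

def privatize_features(features, epsilon, sensitivity=1.0, rng=None):
    """Perturb a feature vector with Laplace noise before upload.

    Assumes each feature's contribution is bounded by `sensitivity`
    (e.g. enforced by clipping); scale = sensitivity / epsilon is the
    standard Laplace-mechanism calibration.
    """
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=np.shape(features))
    return np.asarray(features) + noise

# A device perturbs its encoder output locally, then uploads only the
# noisy features to the untrusted server.
feats = np.array([0.2, -0.5, 1.3])
noisy_feats = privatize_features(feats, epsilon=1.0)
```

A smaller ϵ gives stronger privacy but noisier features; in practice the encoder outputs would be clipped so the sensitivity bound actually holds.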


2019 ◽  
Vol 90 ◽  
pp. 158-174 ◽  
Author(s):  
Chunhui Piao ◽  
Yajuan Shi ◽  
Jiaqi Yan ◽  
Changyou Zhang ◽  
Liping Liu

Author(s):  
J. Andrew Onesimu ◽  
Karthikeyan J. ◽  
D. Samuel Joshua Viswas ◽  
Robin D Sebastian

Deep learning is the buzzword of recent research owing to its advantages in fields such as healthcare, medicine, and automobiles. A huge amount of data is required for deep learning to achieve good accuracy; it is therefore important to protect that data from security and privacy breaches. In this chapter, a comprehensive survey of security and privacy challenges in deep learning is presented. Security attacks such as poisoning attacks, evasion attacks, and black-box attacks are explored along with their prevention and defence techniques. A comparative analysis of the various techniques for defending data against such security attacks is provided. Privacy is another major challenge in deep learning. The authors present an in-depth survey of privacy-preserving techniques for deep learning, including differential privacy, homomorphic encryption, secret sharing, and secure multi-party computation. A detailed table comparing these privacy-preserving techniques and approaches is also presented.
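Among the privacy techniques the chapter surveys, differential privacy is the most widely deployed. As an illustrative sketch (not taken from the chapter), the Laplace mechanism answers a counting query, whose sensitivity is 1, by adding noise with scale 1/ϵ:

```python
import numpy as np

def dp_count(records, predicate, epsilon, rng=None):
    """epsilon-DP count via the Laplace mechanism (counting queries have
    sensitivity 1: adding or removing one record changes the count by 1)."""
    if rng is None:
        rng = np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

# Hypothetical toy data: a private count of records with age >= 40.
ages = [23, 35, 47, 52, 61, 29]
noisy_count = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

The noisy answer can be released publicly: any single individual's presence changes the distribution of the output by at most a factor of e^ϵ.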


2019 ◽  
Vol 1 (1) ◽  
pp. 483-491 ◽  
Author(s):  
Makhamisa Senekane

The ubiquity of data, including multimedia data such as images, makes mining and analysing such data easy. However, such analysis may involve sensitive data such as medical records (including radiological images) and financial records. Privacy-preserving machine learning aims to analyse such data without compromising privacy. There are various privacy-preserving data-analysis approaches, such as k-anonymity, l-diversity, t-closeness, and Differential Privacy (DP). DP is currently the gold standard of privacy-preserving data analysis owing to its robustness against background-knowledge attacks. In this paper, we report a scheme for privacy-preserving image classification using a Support Vector Machine (SVM) and DP. SVM is chosen as the classification algorithm because, unlike variants of artificial neural networks, it converges to a global optimum. The SVM kernels used are linear and Radial Basis Function (RBF), while ϵ-differential privacy is the DP framework used. The proposed scheme achieves an accuracy of up to 98%. The results underline the utility of combining SVM and DP for privacy-preserving image classification.
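The abstract above pairs SVM classification with ϵ-DP. One simple way to combine the two is input perturbation: add Laplace noise to the training features, then fit an ordinary SVM. The sketch below uses a dependency-free linear SVM trained by subgradient descent on synthetic blobs standing in for image features; the paper's actual mechanism, kernels, and data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 2-D stand-in for image features: two well-separated blobs.
X = np.vstack([rng.normal(-2.0, 0.3, size=(50, 2)),
               rng.normal(+2.0, 0.3, size=(50, 2))])
y = np.array([-1.0] * 50 + [1.0] * 50)

# Input perturbation: Laplace noise calibrated to sensitivity / epsilon
# (assumes features are bounded so the sensitivity holds).
epsilon, sensitivity = 2.0, 1.0
X_priv = X + rng.laplace(0.0, sensitivity / epsilon, size=X.shape)

# Minimal linear SVM fitted by subgradient descent on the hinge loss.
w, b = np.zeros(2), 0.0
lam, lr, n = 0.01, 0.05, len(y)
for _ in range(1000):
    viol = y * (X_priv @ w + b) < 1          # margin violators
    w -= lr * (lam * w - (y[viol, None] * X_priv[viol]).sum(axis=0) / n)
    b -= lr * (-y[viol].sum() / n)

train_acc = float((np.sign(X_priv @ w + b) == np.sign(y)).mean())
```

Because the noise is injected once into the inputs, everything downstream (training and prediction) is post-processing and inherits the same ϵ-DP guarantee.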

