Hiding Numerical Vectors in Local Private and Shuffled Messages

Author(s):  
Shaowei Wang ◽  
Jin Li ◽  
Yuqiu Qian ◽  
Jiachun Du ◽  
Wenqing Lin ◽  
...  

Numerical vector aggregation has numerous applications in privacy-sensitive scenarios, such as distributed gradient estimation in federated learning and statistical analysis on key-value data. Within the framework of local differential privacy, this work gives tight minimax error bounds of O(d s/(n epsilon^2)), where d is the dimension of the numerical vector and s is the number of non-zero entries. A mechanism attaining this bound is then designed, improving on existing approaches that suffer error rates of O(d^2/(n epsilon^2)) or O(d s^2/(n epsilon^2)). To break the error barrier of local privacy, this work further considers privacy amplification in the shuffle model with anonymous channels, and shows that the mechanism satisfies centralized ((14 ln(2/delta) (s e^epsilon + 2s - 1)/(n - 1))^0.5, delta)-differential privacy, a guarantee that is domain independent and thus scales to federated learning of large models. We experimentally validate the mechanism against existing approaches and demonstrate its significant error reduction.
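
As a quick sanity check on how the quoted bound behaves, here is a minimal sketch (Python) that evaluates the amplified central privacy parameter from the abstract; the function name and the example values of n, s, epsilon, and delta are illustrative, not from the paper:

```python
import math

def amplified_epsilon(local_eps: float, s: int, n: int, delta: float) -> float:
    """Central epsilon after shuffling, per the bound quoted in the abstract:
    sqrt(14 * ln(2/delta) * (s * e^eps + 2s - 1) / (n - 1))."""
    return math.sqrt(
        14 * math.log(2 / delta)
        * (s * math.exp(local_eps) + 2 * s - 1) / (n - 1)
    )

# Illustrative parameters: 100k users, s = 16 non-zero entries,
# local epsilon = 1, delta = 1e-6.
print(amplified_epsilon(1.0, 16, 100_000, 1e-6))
```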

2019 ◽  
Vol 2019 (3) ◽  
pp. 170-190
Author(s):  
Archita Agarwal ◽  
Maurice Herlihy ◽  
Seny Kamara ◽  
Tarik Moataz

Abstract The problem of privatizing statistical databases is a well-studied topic that has culminated with the notion of differential privacy. The complementary problem of securing these differentially private databases, however, has—as far as we know—not been considered in the past. While the security of private databases is in theory orthogonal to the problem of private statistical analysis (e.g., in the central model of differential privacy the curator is trusted) the recent real-world deployments of differentially-private systems suggest that it will become a problem of increasing importance. In this work, we consider the problem of designing encrypted databases (EDB) that support differentially-private statistical queries. More precisely, these EDBs should support a set of encrypted operations with which a curator can securely query and manage its data, and a set of private operations with which an analyst can privately analyze the data. Using such an EDB, a curator can securely outsource its database to an untrusted server (e.g., on-premise or in the cloud) while still allowing an analyst to privately query it. We show how to design an EDB that supports private histogram queries. As a building block, we introduce a differentially-private encrypted counter based on the binary mechanism of Chan et al. (ICALP, 2010). We then carefully combine multiple instances of this counter with a standard encrypted database scheme to support differentially-private histogram queries.
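
The binary-mechanism counter that serves as the building block can be sketched in plain Python as follows. This omits the encryption layer entirely, so it is a sketch of the Chan et al. (ICALP, 2010) continual counter under eps-differential privacy, not of the paper's encrypted construction; all names are illustrative:

```python
import math
import random

def laplace(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

class BinaryCounter:
    """Continual counter via the binary mechanism: every dyadic partial
    sum ("p-sum") is released once with Laplace noise, and the count at
    time t combines the O(log t) noisy p-sums on t's binary decomposition."""

    def __init__(self, eps: float, T: int):
        self.levels = max(1, math.ceil(math.log2(T + 1)))
        self.scale = self.levels / eps          # each item touches <= levels p-sums
        self.exact = [0] * (self.levels + 1)    # p-sums being accumulated
        self.noisy = [0.0] * (self.levels + 1)  # released noisy p-sums
        self.t = 0

    def update(self, x: int) -> float:
        """Append one bit x in {0, 1}; return the current noisy count."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1  # index of lowest set bit of t
        self.exact[i] = sum(self.exact[:i]) + x  # merge lower-level p-sums
        for j in range(i):
            self.exact[j], self.noisy[j] = 0, 0.0
        self.noisy[i] = self.exact[i] + laplace(self.scale)
        return sum(self.noisy[j]
                   for j in range(self.levels + 1) if (self.t >> j) & 1)
```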


1996 ◽  
Vol 82 (1) ◽  
pp. 43-48 ◽  
Author(s):  
Waldemar W. Koczkodaj

A statistical experiment was designed to check whether the pairwise comparisons method, introduced by Thurstone in 1927, can really improve the accuracy of estimation of stimuli. The method was compared with direct rating. The experiment was designed and implemented to minimize statistical bias; randomly generated bars were used, since everyone is an expert at estimating lengths. The statistical analysis favoured the pairwise comparisons method. The obtained results are decisive: more than a 300% improvement in accuracy was gained, with at least 10 level of confidence.
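
For readers unfamiliar with the method, here is a minimal sketch of one standard way to turn a pairwise-comparison ratio matrix into stimulus estimates, the geometric-mean (row) estimator. The abstract does not say which estimation procedure was used, so this is illustrative only:

```python
import math

def estimate_from_pairwise(M):
    """Estimate relative stimulus values from an n x n pairwise ratio
    matrix M, where M[i][j] approximates value_i / value_j, using the
    geometric mean of each row; results are normalized to sum to 1."""
    n = len(M)
    w = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(w)
    return [wi / total for wi in w]

# Illustrative example: three bars with true relative lengths 1 : 2 : 4.
M = [[1.0, 0.5, 0.25],
     [2.0, 1.0, 0.5],
     [4.0, 2.0, 1.0]]
print(estimate_from_pairwise(M))  # ~[0.143, 0.286, 0.571]
```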


Author(s):  
Oluwaseyi Feyisetan ◽  
Abhinav Aggarwal ◽  
Zekun Xu ◽  
Nathanael Teissier

Accurately learning from user data while ensuring quantifiable privacy guarantees provides an opportunity to build better ML models while maintaining user trust. Recent literature has demonstrated the applicability of a generalized form of differential privacy to provide guarantees over text queries. Such mechanisms add privacy-preserving noise to high-dimensional vectorial representations of text and return a text-based projection of the noisy vectors. However, these mechanisms are sub-optimal in their trade-off between privacy and utility. In this proposal paper, we describe some challenges in balancing this trade-off. At a high level, we make two proposals: (1) a framework called LAC, which defers some of the noise to a privacy amplification step, and (2) an additional suite of three different techniques for calibrating the noise based on the local region around a word. Our objective in this paper is not to evaluate a single solution but to further the conversation on these challenges and chart pathways for building better mechanisms.
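
The class of mechanisms under discussion can be sketched as follows: perturb a word's embedding with noise whose density decays as exp(-eps * ||z||), then project back to the nearest vocabulary word. The vocabulary, embeddings, and function name below are illustrative placeholders, and the paper's proposed calibration techniques are not reproduced here:

```python
import numpy as np

def noisy_word_projection(word, vocab, emb, eps, rng=None):
    """Sketch of a metric-DP-style text mechanism: add noise with density
    proportional to exp(-eps * ||z||) to the word's embedding, then return
    the vocabulary word nearest to the noisy vector."""
    rng = rng or np.random.default_rng()
    d = emb.shape[1]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=1.0 / eps)  # ||z|| ~ Gamma(d, 1/eps)
    noisy = emb[vocab.index(word)] + radius * direction
    dists = np.linalg.norm(emb - noisy, axis=1)
    return vocab[int(np.argmin(dists))]

# Illustrative toy vocabulary with random embeddings.
vocab = ["cat", "dog", "car", "bus"]
emb = np.random.default_rng(0).normal(size=(len(vocab), 8))
print(noisy_word_projection("cat", vocab, emb, eps=5.0))
```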


2006 ◽  
Vol 130 (5) ◽  
pp. 630-632
Author(s):  
Raouf E. Nakhleh

Abstract Context.—Because of its complex nature, surgical pathology practice is inherently error prone. Currently, there is pressure to reduce errors in medicine, including pathology. Objective.—To review factors that contribute to errors and to discuss error-reduction strategies. Design.—Literature review. Results.—Multiple factors contribute to errors in medicine, including variable input, complexity, inconsistency, tight coupling, human intervention, time constraints, and a hierarchical culture. Strategies that may reduce errors include reducing reliance on memory, improving information access, error-proofing processes, decreasing reliance on vigilance, standardizing tasks and language, reducing the number of handoffs, simplifying processes, adjusting work schedules and environment, providing adequate training, and placing the correct people in the correct jobs. Conclusions.—Surgical pathology is a complex system with ample opportunity for error. Significant error reduction is unlikely to occur without a sustained comprehensive program of quality control and quality assurance. Incremental adoption of information technology and automation along with improved training in patient safety and quality management can help reduce errors.


Author(s):  
George Leal Jamil ◽  
Alexis Rocha da Silva

Users' personal, highly sensitive data, such as photos and voice recordings, are kept indefinitely by the companies that collect them. Users can neither delete these data nor restrict the purposes for which they are used. By learning how to build machine learning systems that protect privacy, we can make a meaningful difference on many social problems, such as curing disease. Deep neural networks are susceptible to various inference attacks because they memorize information about their training data. In this chapter, the authors introduce differential privacy, which ensures that different kinds of statistical analysis do not compromise privacy, and federated learning, which trains a machine learning model on data to which we do not have access.
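
As a concrete illustration of training a model on data one cannot access, here is a minimal federated-averaging sketch; it is illustrative only, and the chapter does not prescribe this particular algorithm or code:

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local update: a few epochs of least-squares gradient
    descent. The raw data X, y never leave the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(w, clients):
    """Server step: collect locally trained models and average them."""
    updates = [local_sgd(w, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Illustrative run: four clients with private synthetic datasets.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):          # communication rounds
    w = federated_average(w, clients)
print(w)
```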


2012 ◽  
Vol 220-223 ◽  
pp. 1062-1065
Author(s):  
Chun Hua Wei ◽  
Jin Hai Wang ◽  
Yu Zheng

To determine the location of an ingestible electronic pill inside the human body, a new positioning system for micro medical devices inside the human body is designed. CC2430 communication chips combined with ZigBee protocol software are used as the radio frequency transceivers of the system. When the transmission power is very low, significant bit errors are generated at the receiving terminal, so the electronic pill can be located by analyzing the packet error rate. Trilateration is adopted to build a set of nonlinear equations relating the inter-node distances to the spatial location of the pill; the equations are then solved numerically. The method is found to be feasible.
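
The trilateration step amounts to a nonlinear least-squares problem: given receiver positions and the inter-node distances inferred from the packet-error-rate analysis, solve ||p - a_i|| = d_i for the pill position p. A minimal sketch with illustrative anchor positions and distances (not taken from the paper):

```python
import numpy as np
from scipy.optimize import least_squares

# Known receiver (anchor) positions on the body surface, in cm (illustrative).
anchors = np.array([[0.0, 0.0, 0.0],
                    [30.0, 0.0, 0.0],
                    [0.0, 30.0, 0.0],
                    [0.0, 0.0, 30.0]])
# Distances to the pill inferred from packet-error-rate analysis (illustrative).
distances = np.array([18.0, 22.0, 20.0, 25.0])

def residuals(p):
    """Residuals of the trilateration equations ||p - a_i|| = d_i."""
    return np.linalg.norm(anchors - p, axis=1) - distances

# Solve the nonlinear system numerically from an initial guess.
sol = least_squares(residuals, x0=np.array([10.0, 10.0, 10.0]))
print("estimated pill position:", sol.x)
```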


2006 ◽  
Vol 04 (06) ◽  
pp. 935-946 ◽  
Author(s):  
SHUN WATANABE ◽  
RYUTAROH MATSUMOTO ◽  
TOMOHIKO UYEMATSU

This paper shows that random privacy amplification is secure at a higher key rate than Mayers' evaluation gives for the same error rate in the BB84 protocol with one-way or two-way classical communications. Mayers' evaluation had been the only evaluation of the secure key rate with random privacy amplification applicable to the BB84 protocol with two-way classical communications. Our result improves the secure key rate of random privacy amplification in the BB84 protocol with two-way classical communications.
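
For orientation, the generic asymptotic shape of a one-way BB84 secret-key rate after error correction and privacy amplification is the standard Shor-Preskill form below; this is background context only, not the improved two-way bound derived in the paper:

\[
R \;=\; 1 - 2\,h(e), \qquad h(x) \;=\; -x\log_2 x - (1-x)\log_2(1-x),
\]

where e is the quantum bit error rate; one h(e) term is consumed by error correction and the other by privacy amplification.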


Author(s):  
Moushira Abdallah Mohamed Ahmed ◽  
Shuhui Wu ◽  
Laure Deveriane Dushime ◽  
Yuanhong Tao

The emerging shuffle model has attracted considerable attention owing to its unique properties in solving privacy problems in federated learning, specifically the trade-off between privacy and utility in the central and local models. The central model relies on a trusted server, which collects users' raw data and then perturbs it, while in the local model all users perturb their data locally and send the perturbed data to the server. Both models have pros and cons: the server in the central model enjoys high accuracy but users suffer from insufficient privacy; in contrast, the local model provides sufficient privacy on the users' side but the server suffers from limited accuracy. The shuffle model has the advantageous property of hiding the position of input messages by permuting them with a random permutation π. Researchers have therefore proposed placing a shuffler between users and server so that the server can be untrusted: users communicate with the server through the shuffler, and privacy is boosted by the permutation π of users' messages without increasing the noise level. Consequently, differentially private federated learning combined with the shuffle model explores the gap between privacy and accuracy in both models, and this new model has attracted many researchers in recent work. In this review, we initiate an analytic study of the shuffled model for distributed differentially private mechanisms. We focus on the role of the shuffle model in resolving the tension between privacy and accuracy by summarizing recent research on the shuffle model and its practical results. Furthermore, we present two types of shuffling, a single shuffle and m shuffles, with a statistical analysis of how each boosts users' privacy amplification at the same level of accuracy, reasoning from the practical results of recent papers.
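
To make the single-shuffle pipeline concrete, here is a minimal sketch with binary randomized response as the local randomizer (an illustrative choice; the surveyed papers cover richer mechanisms): each user perturbs locally, the shuffler applies a uniformly random permutation, and the server debiases the anonymous aggregate:

```python
import math
import random

def randomized_response(bit: int, eps0: float) -> int:
    """Local eps0-DP: report the true bit with prob. e^eps0 / (e^eps0 + 1)."""
    p = math.exp(eps0) / (math.exp(eps0) + 1.0)
    return bit if random.random() < p else 1 - bit

def shuffle(reports):
    """The shuffler: a uniformly random permutation hides which user
    produced which message."""
    reports = list(reports)
    random.shuffle(reports)
    return reports

def server_estimate(reports, eps0: float) -> float:
    """Debias the randomized-response counts to estimate the true mean."""
    p = math.exp(eps0) / (math.exp(eps0) + 1.0)
    mean = sum(reports) / len(reports)
    return (mean - (1 - p)) / (2 * p - 1)

bits = [random.random() < 0.3 for _ in range(10_000)]       # users' true bits
reports = shuffle(randomized_response(int(b), 1.0) for b in bits)
print(server_estimate(reports, 1.0))                        # ~0.3
```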

