A Differential Privacy Mechanism that Accounts for Network Effects for Crowdsourcing Systems

2020 · Vol. 69 · pp. 1127-1164
Author(s): Yuan Luo, Nicholas R. Jennings

In crowdsourcing systems, it is important for the crowdsourcing campaign initiator to incentivize users to share their data to produce results of the desired computational accuracy. This problem becomes especially challenging when users are concerned about the privacy of their data. To overcome this challenge, existing work often aims to provide users with differential privacy guarantees to incentivize privacy-sensitive users to share their data. However, this work neglects the network effect whereby a user enjoys greater privacy protection when he aligns his participation behaviour with that of other users. To explore this network effect, we formulate the interaction among users regarding their participation decisions as a population game, because a user’s welfare from the interaction depends not only on his own participation decision but also on the distribution of others’ decisions. We show that the Nash equilibrium of this game consists of a threshold strategy, where all users whose privacy sensitivity is below a certain threshold will participate and the remaining users will not. We characterize the existence and uniqueness of this equilibrium, which depends on the privacy guarantee, the reward provided by the initiator and the population size. Based on this equilibrium analysis, we design the PINE (Privacy Incentivization with Network Effects) mechanism and prove that it maximizes the initiator’s payoff while providing participating users with a guaranteed degree of privacy protection. Numerical simulations, on both real and synthetic data, show that (i) PINE improves the initiator’s expected payoff by up to 75%, compared to state-of-the-art mechanisms that do not consider this effect; (ii) the performance gain from exploiting the network effect is particularly large when the majority of users are flexible in their privacy attitudes and when there are a large number of low-quality task performers.
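To make the threshold-equilibrium idea concrete, the minimal sketch below computes a participation threshold by fixed-point iteration under a simple assumed model (linear privacy cost, uniform sensitivities, and an effective privacy loss that shrinks with the number of participants); it illustrates the concept only and is not the paper's PINE mechanism.

```python
import numpy as np

# Illustrative sketch only: the linear cost model, the uniform sensitivity
# distribution, and the way the effective privacy loss shrinks with the number
# of participants are our own assumptions, not the paper's PINE formulation.

rng = np.random.default_rng(0)
N = 10_000                                  # population size
sensitivity = rng.uniform(0.0, 2.0, N)      # each user's privacy sensitivity
R = 0.01                                    # reward offered by the initiator
eps0 = 1.0                                  # privacy loss for a lone participant

def effective_eps(k: int) -> float:
    """Assumed network effect: individual privacy loss shrinks as more users join."""
    return eps0 / np.sqrt(max(k, 1))

# A user with sensitivity c participates iff R >= c * effective_eps(k), i.e. iff
# c <= theta := R / effective_eps(k). Iterating this best-response condition to
# a fixed point yields the threshold equilibrium.
theta = R / eps0
for _ in range(200):
    k = int(np.sum(sensitivity <= theta))
    new_theta = R / effective_eps(k)
    if abs(new_theta - theta) < 1e-12:
        break
    theta = new_theta

print(f"equilibrium threshold: {theta:.3f}, participants: {k} of {N}")
```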

2020 · Vol. 16 (5) · pp. 800-821
Author(s): E.V. Popov, K.A. Semyachkov

Subject. The article addresses the economic relations that form in various areas where digital platforms are applied. The focus of the research is the modern economy of digital platforms across different economic activities. Objectives. The aim is to systematize the principles of sharing economy formation in the context of digital society development. Methods. We employ general scientific methods of research. Results. The study shows that the development of digital platforms is one of the most important trends in the modern economy. We classify the characteristic features of modern digital platforms and analyze the principles of their creation. The paper emphasizes that the network effects achieved through the use of digital platforms are an important factor in the development of the sharing economy. The network effect describes how the number of platform users affects the value created for each of them. The paper also considers the differences between the organization of traditional companies and companies built on the digital platform model, reveals the specifics of changes in socio-economic systems caused by the development of digital platforms, and systematizes the principles of sharing economy formation in the context of digital society development. Conclusions. The analyzed principles of sharing economy development on the basis of digital platforms can be applied to create models for forecasting the transformation of economic activity in post-industrial society.
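As a rough illustration of the network-effect definition given above (per-user value grows with the number of platform users), the toy model below uses a Metcalfe-style linear form; the functional form and parameters are our assumption, not a model from the article.

```python
# Toy illustration of a network effect: the value each user derives grows with
# the number of platform users. The linear-in-n (Metcalfe-style) form below is
# an assumption for illustration, not a model proposed in the article.

def per_user_value(n_users: int, a: float = 0.01) -> float:
    """Value a single user derives when n_users are on the platform."""
    return a * (n_users - 1)      # each user can interact with n_users - 1 others

def total_platform_value(n_users: int, a: float = 0.01) -> float:
    """Aggregate value, roughly a * n^2 for large n (Metcalfe's law)."""
    return n_users * per_user_value(n_users, a)

for n in (10, 100, 1000):
    print(n, per_user_value(n), total_platform_value(n))
```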


2014 · Vol. 8 (2) · pp. 13-24
Author(s): Arkadiusz Liber

Introduction: Medical documentation ought to be accessible while preserving its integrity and protecting personal data. One way of protecting it against disclosure is anonymization. Contemporary methods ensure anonymity without the possibility of controlling access to sensitive data; it seems that the future of sensitive data processing systems belongs to personalized methods. In the first part of the paper, the k-Anonymity, (X,Y)-Anonymity, (α,k)-Anonymity, and (k,e)-Anonymity methods were discussed. These are well-known elementary methods that are the subject of a significant number of publications. The works of Samarati, Sweeney, Wang, Wong and Zhang were credited as sources for this part. The selection of these publications is justified by the wider research reviews led, for instance, by Fung, Wang, Fu and Yu. However, it should be noted that anonymization methods derive from the methods of statistical database protection of the 1970s. Due to their interrelated content and literature references, the first and second parts of this article constitute an integral whole. Aim of the study: The analysis of anonymization methods, the analysis of methods for protecting anonymized data, and the study of a new type of privacy protection enabling the data subject to control the disclosure of sensitive data. Material and methods: Analytical methods, algebraic methods. Results: Material supporting the choice and analysis of methods for the anonymization of medical data, and a new privacy protection solution enabling the control of sensitive data by the entities the data concern. Conclusions: The paper analyzes solutions for data anonymization that ensure privacy protection in medical data sets. The methods of k-Anonymity, (X,Y)-Anonymity, (α,k)-Anonymity, (k,e)-Anonymity, (X,Y)-Privacy, LKC-Privacy, l-Diversity, (X,Y)-Linkability, t-Closeness, Confidence Bounding and Personalized Privacy are described, explained and analyzed. Solutions for the control of sensitive data by their owner are also analyzed. Apart from the existing anonymization methods, the analysis covers methods for protecting anonymized data, in particular δ-Presence, ε-Differential Privacy, (d,γ)-Privacy, (α,β)-Distributing Privacy and protection against (c,t)-isolation. Moreover, the author introduces a new solution for the controlled protection of privacy, based on marking a protected field and the multi-key encryption of the sensitive value. The suggested way of marking the fields conforms to the XML standard. For the encryption, an (n,p) multiple-key cipher was selected, in which p of the n keys are required to decipher the content. The proposed solution enables entirely new methods of controlling the disclosure of sensitive data.
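As a minimal illustration of the simplest of the listed methods, k-Anonymity, the sketch below checks whether a set of records satisfies k-anonymity over chosen quasi-identifier columns; the records and column names are hypothetical and not taken from the paper.

```python
from collections import Counter

# Minimal k-anonymity check over quasi-identifier columns (illustrative only;
# the records and column names below are hypothetical).

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff every combination of quasi-identifier values occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

patients = [
    {"zip": "532**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "532**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "532**", "age": "40-49", "diagnosis": "flu"},
    {"zip": "532**", "age": "40-49", "diagnosis": "diabetes"},
]

print(is_k_anonymous(patients, ["zip", "age"], k=2))   # True
print(is_k_anonymous(patients, ["zip", "age"], k=3))   # False
```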


2021 · Vol. 2021 · pp. 1-10
Author(s): Jing Zhao, Shubo Liu, Xingxing Xiong, Zhaohui Cai

Privacy protection is one of the major obstacles to data sharing. Time-series data are autocorrelated, continuous, and large in scale. Existing research on time-series data publication largely ignores the correlation within the data, which leaves a gap in privacy protection. In this paper, we study the problem of publishing correlated time-series data and propose a sliding window-based autocorrelated time-series data publication algorithm, called SW-ATS. Instead of the global sensitivity used in traditional differential privacy mechanisms, we propose periodic sensitivity to provide a stronger degree of privacy guarantee. SW-ATS introduces a sliding window mechanism, with the correlation between the noise-added sequence and the original time-series data guaranteed by sequence indistinguishability, to protect the privacy of the latest data. We prove that SW-ATS satisfies ε-differential privacy. Compared with the state-of-the-art algorithm, SW-ATS reduces the mean absolute error (MAE) by about 25%, improving the utility of the data while providing stronger privacy protection.
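For orientation, the sketch below shows the generic idea of releasing a time series under differential privacy with Laplace noise and a per-window budget split; the window size, sensitivity, and budget are placeholder values, and this is not a reimplementation of SW-ATS or its periodic sensitivity.

```python
import numpy as np

# Generic sketch of releasing a time series under differential privacy with a
# per-window budget split (w-event style). Window size, sensitivity, and epsilon
# are placeholders; this is NOT the authors' SW-ATS algorithm.

rng = np.random.default_rng(1)

def publish_stream(series, w=20, sensitivity=1.0, epsilon=1.0):
    """Add Laplace noise to every point, splitting the budget across any window of w points."""
    per_point_eps = epsilon / w               # naive uniform split of the budget
    scale = sensitivity / per_point_eps       # Laplace scale = sensitivity / epsilon
    noise = rng.laplace(0.0, scale, size=len(series))
    return np.asarray(series) + noise

true_series = 10.0 * np.sin(np.linspace(0, 6 * np.pi, 200))
noisy_series = publish_stream(true_series)
print("MAE:", np.mean(np.abs(noisy_series - true_series)))
```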


Author(s): Poushali Sengupta, Sudipta Paul, Subhankar Mishra

The leakage of data can have an extreme effect at the personal level if the data contain sensitive information. Common prevention methods such as encryption/decryption, endpoint protection, and intrusion detection systems remain prone to leakage. Differential privacy comes to the rescue with a formal promise of protection against leakage: by applying a randomized response technique at data collection time, it provides strong privacy with good utility. Differential privacy allows one to see the forest of the data, by describing patterns of groups, without disclosing any individual tree. The current adoption of differential privacy by leading tech companies and academia encourages the authors to explore the topic in detail. The different aspects of differential privacy, its applications to privacy protection and information leakage, a comparative discussion of current research approaches in this field, its real-world utility, and the associated trade-offs are discussed.
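Since the abstract points to randomized response as the collection-time technique behind differential privacy, here is a minimal sketch of the classic coin-flip variant; the truth probability and the unbiased estimator are standard textbook choices, not parameters from the surveyed work.

```python
import math
import random

# Classic randomized response for a yes/no question (illustrative parameters;
# not tied to any specific mechanism from the surveyed work).

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth, otherwise a uniformly random answer."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

# This mechanism satisfies epsilon-differential privacy with
# epsilon = ln( (p_truth + (1 - p_truth)/2) / ((1 - p_truth)/2) ).
p = 0.75
epsilon = math.log((p + (1 - p) / 2) / ((1 - p) / 2))
print(f"epsilon = {epsilon:.3f}")

# Unbiased estimate of the true "yes" rate from the noisy reports.
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t, p) for t in truths]
observed_yes = sum(reports) / len(reports)
estimated_yes = (observed_yes - (1 - p) / 2) / p
print(f"estimated yes-rate: {estimated_yes:.3f}")
```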

