ADT

2021 ◽  
Vol 15 (3) ◽  
pp. 83-105
Author(s):  
Vartika Puri ◽  
Parmeet Kaur ◽  
Shelly Sachdeva

Data anonymization is commonly utilized for the protection of an individual's identity when his personal or sensitive data is published. A well-known anonymization model to define the privacy of transactional data is the km-anonymity model. This model ensures that an adversary who knows up to m items of an individual cannot determine which record in the dataset corresponds to the individual with a probability greater than 1/k. However, the existing techniques generally rely on the presence of similarity between items in the dataset tuples to achieve km-anonymization and are not suitable when transactional data contains tuples without many common values. The authors refer to this type of transactional data as diverse transactional data and propose an algorithm, anonymization of diverse transactional data (ADT). ADT is based on slicing and generalization to achieve km-anonymity for diverse transactional data. ADT has been experimentally evaluated on two datasets, and it has been found that ADT yields higher privacy protection and causes a lower loss in data utility as compared to existing methods.
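The km-anonymity guarantee described above can be verified directly by counting itemset supports: every itemset of up to m items that occurs in the data must appear in at least k records. A minimal sketch in Python (the toy transactions and function name are illustrative, not from the paper):

```python
from itertools import combinations
from collections import Counter

def is_km_anonymous(records, k, m):
    """Check km-anonymity: every itemset of size <= m that occurs in the
    data must be contained in at least k records."""
    for size in range(1, m + 1):
        support = Counter()
        for rec in records:
            for combo in combinations(sorted(set(rec)), size):
                support[combo] += 1
        if any(count < k for count in support.values()):
            return False
    return True

transactions = [
    {"bread", "milk", "beer"},
    {"bread", "milk", "beer"},
    {"bread", "milk"},
]
```

With these transactions, every single item and every pair is shared by at least two records, so the data is 2²-anonymous, but "beer" appears in only two records, so it is not 3¹-anonymous.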

2014 ◽  
Vol 8 (2) ◽  
pp. 13-24 ◽  
Author(s):  
Arkadiusz Liber

Introduction: Medical documentation ought to be accessible while preserving its integrity and protecting personal data. One manner of protecting it against disclosure is anonymization. Contemporary methods ensure anonymity without the possibility of controlling access to the sensitive data; it seems that the future of sensitive data processing systems belongs to personalized methods. In the first part of the paper, the k-Anonymity, (X,Y)-Anonymity, (α,k)-Anonymity, and (k,e)-Anonymity methods were discussed. These are well-known elementary methods which are the subject of a significant number of publications. As source papers for this part, the works of Samarati, Sweeney, Wang, Wong, and Zhang were credited; the selection of these publications is justified by the wider research reviews led, for instance, by Fung, Wang, Fu, and Yu. However, it should be noted that the methods of anonymization derive from the methods of statistical database protection of the 1970s. Due to their interrelated content and literature references, the first and second parts of this article constitute an integral whole.

Aim of the study: The analysis of methods of anonymization, the analysis of methods of protecting anonymized data, and the study of a new type of privacy-enabling security mechanism for controlling the disclosure of sensitive data by the entity the data concerns.

Material and methods: Analytical and algebraic methods.

Results: Material supporting the choice and analysis of ways of anonymizing medical data, and a new privacy protection solution enabling the control of sensitive data by the entities the data concerns.

Conclusions: In the paper, an analysis of solutions for data anonymization to ensure privacy protection in medical data sets was conducted. The methods of k-Anonymity, (X,Y)-Anonymity, (α,k)-Anonymity, (k,e)-Anonymity, (X,Y)-Privacy, LKC-Privacy, l-Diversity, (X,Y)-Linkability, t-Closeness, Confidence Bounding, and Personalized Privacy were described, explained, and analyzed. An analysis of solutions for controlling sensitive data by their owner was also conducted. Apart from the existing methods of anonymization, an analysis of methods for protecting anonymized data was included; in particular, the methods of δ-Presence, ε-Differential Privacy, (d,γ)-Privacy, (α,β)-Distributing Privacy, and protection against (c,t)-isolation were analyzed. Moreover, the author introduced a new solution for the controlled protection of privacy, based on marking a protected field and multi-key encryption of the sensitive value. The suggested way of marking the fields conforms to the XML standard. For the encryption, an (n,p) different-keys cipher was selected: to decipher the content, p of the n keys are used. The proposed solution enables brand-new methods of controlling the privacy of disclosed sensitive data.
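Two of the elementary guarantees surveyed above, k-Anonymity and l-Diversity, reduce to simple group checks over quasi-identifier combinations. A minimal sketch in Python (the patient table and attribute names are illustrative, not from the paper):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """k-Anonymity: every quasi-identifier combination is shared by >= k rows."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

def is_l_diverse(rows, quasi_identifiers, sensitive, l):
    """l-Diversity: each group contains >= l distinct sensitive values."""
    groups = {}
    for row in rows:
        key = tuple(row[q] for q in quasi_identifiers)
        groups.setdefault(key, set()).add(row[sensitive])
    return all(len(vals) >= l for vals in groups.values())

patients = [
    {"zip": "481**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "481**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "481**", "age": "40-49", "diagnosis": "flu"},
    {"zip": "481**", "age": "40-49", "diagnosis": "flu"},
]
```

This table is 2-anonymous over (zip, age), but the "40-49" group carries a single diagnosis, so it fails 2-diversity — the classic homogeneity gap that l-Diversity closes over k-Anonymity.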


2021 ◽  
Vol 11 (5) ◽  
pp. 529-535
Author(s):  
Jihane El Mokhtari ◽  
Anas Abou El Kalam ◽  
Siham Benhaddou ◽  
Jean-Philippe Leroy

This article is devoted to the topic of coupling access and inference controls in security policies. Coupling these two mechanisms is necessary to strengthen the protection of the privacy of users of complex systems. Although the PrivOrBAC access control model covers several privacy protection requirements, the risk of inferring sensitive data may remain: the accumulation of several pieces of data to which access is authorized can create an inference. This work proposes an inference control mechanism implemented through multidimensional analysis. The analysis takes into account several elements, such as the history of accesses to the data that may create an inference, as well as their influence on the inference. The idea is that this mechanism delivers metrics reflecting the level of inference risk. These measures are considered in the access control rules and participate in the decision to refuse or authorize access, with or without obligation. This is how the coupling of access and inference controls is applied. The coupling is implemented via multidimensional OLAP databases queried by the Policy Information Point, the XACML gateway to external data sources, which routes the inference measurements to the decision point.
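The idea of feeding an inference-risk metric into an access decision can be sketched as follows. This is a hypothetical illustration: the risk formula, the attribute names, and the threshold are assumptions for exposition, not the PrivOrBAC/OLAP mechanism described in the article.

```python
def inference_risk(history, requested, correlated_sets):
    """Hypothetical metric: the largest fraction of any known correlated
    attribute set that would be covered once the request is granted."""
    disclosed = set(history) | {requested}
    risks = [len(disclosed & s) / len(s)
             for s in correlated_sets if requested in s]
    return max(risks, default=0.0)

def decide(policy_permits, history, requested, correlated_sets, threshold=0.8):
    """Couple the base access decision with the inference-risk metric."""
    if not policy_permits:
        return "deny"
    risk = inference_risk(history, requested, correlated_sets)
    return "deny" if risk >= threshold else "permit"

# Attributes that, taken together, are assumed to identify an individual.
correlated = [{"zip", "birthdate", "gender"}]
```

A requester who has already read zip and birthdate is denied gender (risk 1.0), while a requester who has only read zip is permitted (risk 2/3, below the threshold).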


Author(s):  
Maria N. Koukovini ◽  
Eugenia I. Papagiannakopoulou ◽  
Georgios V. Lioudakis ◽  
Nikolaos L. Dellas ◽  
Dimitra I. Kaklamani ◽  
...  

Workflow management systems are used to run day-to-day applications in numerous domains, often including exchange and processing of sensitive data. Their native “leakage-proneness,” being the consequence of their distributed and collaborative nature, calls for sophisticated mechanisms able to guarantee proper enforcement of the necessary privacy protection measures. Motivated by the principles of Privacy by Design and its potential for workflow environments, this chapter investigates the associated issues, challenges, and requirements. With the legal and regulatory provisions regarding privacy in information systems as a baseline, the chapter elaborates on the challenges and derived requirements in the context of workflow environments, taking into account the particular needs and implications of the latter. Further, it highlights important aspects that need to be considered regarding, on the one hand, the incorporation of privacy-enhancing features in the workflow models themselves and, on the other, the evaluation of the latter against privacy provisions.


Author(s):  
Barbara Sandfuchs

To fight the risks caused by excessive self-disclosure, especially of sensitive data such as genetic data, it might be desirable to prevent certain disclosures. When doing so, regulators have traditionally compelled protection, for example by prohibiting the collection and/or use of genetic data even if citizens would like to share these data. This chapter provides an introduction to an alternative approach which has recently received increased scholarly attention: privacy protection through the use of nudges. Such nudges may in the future provide an alternative to compelled protection of genetic data or complement the traditional approach. The chapter first describes behavioral psychology's finding that citizens sometimes act irrationally. This finding is then refined by the insight that these irrationalities are often predictable. Thus, a solution might be to correct them through the use of nudges.


Author(s):  
Wei Chang ◽  
Jie Wu

Many smartphone-based applications need microdata, but publishing a microdata table may leak respondents' privacy. Conventional research on privacy-preserving data publishing focuses on providing identical privacy protection to all data requesters. In reality, however, information rarely stays within a small coterie; it usually propagates from friend to friend. The authors study the privacy-preserving data publishing problem on a mobile social network. Along a propagation path, a series of tables is locally created at each participant, and the tables' privacy levels should be gradually strengthened. However, the tradeoff between these tables' overall utility and their individual privacy requirements is not trivial: any inappropriate sanitization operation under a lower privacy requirement may cause dramatic utility loss on the subsequent tables. To solve the problem, the authors propose an approximation algorithm that previews the future privacy requirements. Extensive results show that this approach successfully increases the overall data utility while meeting the strengthening privacy requirements.
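The notion of privacy levels that grow along a propagation path can be illustrated with a toy generalization hierarchy: each hop re-sanitizes the table at a stricter level before forwarding. This is a sketch under assumed hierarchy levels, not the authors' approximation algorithm.

```python
def generalize_age(age, level):
    """Toy hierarchy: 0 = exact, 1 = 10-year band, 2 = 25-year band, 3 = suppressed."""
    if level == 0:
        return str(age)
    if level == 1:
        lo = age // 10 * 10
        return f"{lo}-{lo + 9}"
    if level == 2:
        lo = age // 25 * 25
        return f"{lo}-{lo + 24}"
    return "*"

def publish_along_path(table, levels):
    """Each participant on the path releases a further-generalized copy;
    privacy (generalization level) only increases hop by hop."""
    return [
        [{**row, "age": generalize_age(row["age"], level)} for row in table]
        for level in levels
    ]

table = [{"id": 1, "age": 37}, {"id": 2, "age": 42}]
releases = publish_along_path(table, levels=[1, 2, 3])
```

The first hop sees 10-year bands, the second 25-year bands, and the third only suppressed values; the tension the abstract describes is that choosing level 1 too coarsely at hop one wastes utility needed at later hops.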


1970 ◽  
Vol 30 (3) ◽  
pp. 767-773 ◽  
Author(s):  
Russell Eisenman ◽  
Rachel T. Hare

Ss from a Quaker secondary school ( N = 121), a high-prestige liberal arts college ( N = 32), and a nursing school ( N = 70) were given five personality tests, and their responses were examined whenever any one group agreed 80% or more. High commonality was found, and the values of the groups were inferred from their responses. Common values pose a problem for the assessment of the individual, since his responses to tests may reflect over-all cultural values rather than his unique opinions.


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2307 ◽  
Author(s):  
Yancheng Shi ◽  
Zhenjiang Zhang ◽  
Han-Chieh Chao ◽  
Bo Shen

With the rapid development of information technology, large-scale personal data, including data collected by sensors or IoT devices, is stored in the cloud or in data centers. In some cases, the owners of the cloud or data centers need to publish the data. Therefore, how to make the best use of the data while limiting the risk of personal information leakage has become a popular research topic. The most common method of data privacy protection is data anonymization, which has two main problems: (1) the availability of information after clustering is reduced and cannot be flexibly adjusted; (2) most methods are static, so when the data is released multiple times, personal privacy can leak. To solve these problems, this article makes two contributions. The first is a new method based on micro-aggregation to complete the clustering process; in this way, data availability and privacy protection can be adjusted flexibly by considering the concepts of distance and information entropy. The second is a dynamic update mechanism that guarantees that individual privacy is not compromised after multiple releases of the data, while minimizing the loss of information. At the end of the article, the algorithm is simulated with real data sets. The availability and advantages of the method are demonstrated by measuring the running time, the average information loss, and the amount of forged data.
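The micro-aggregation idea underlying the clustering step can be sketched for one-dimensional numeric data: records are grouped into clusters of at least k elements, and each value is replaced by its cluster mean, so no individual value is ever released. This is a simplified, MDAV-flavored illustration (fixed-size groups by sort order), not the authors' distance-and-entropy method.

```python
def microaggregate(points, k):
    """Sort, slice into groups of >= k consecutive values, and publish
    each group's mean in place of its members."""
    data = sorted(points)
    n = len(data)
    groups, i = [], 0
    while n - i >= 2 * k:          # leave at least k values for the last group
        groups.append(data[i:i + k])
        i += k
    groups.append(data[i:])        # last group absorbs the remainder
    released = []
    for g in groups:
        mean = sum(g) / len(g)
        released.extend([mean] * len(g))
    return released
```

Larger k widens the groups (more privacy, more information loss); smaller k keeps the published means closer to the originals, which is exactly the dial the abstract says should be flexibly adjustable.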


A smart grid is an electrical grid that can be electronically controlled and that connects power generation, transmission, distribution, and consumers using communication and/or information technologies. Bi-directional information flow between the utility provider and the electricity consumer is one of the key characteristics of the smart grid. This two-way interaction permits electricity to be generated in real time based on consumers' demands and requests for power. As a result, client privacy becomes a vital concern when energy usage data is collected through the adoption and deployment of smart grid technologies, and privacy protection mechanisms become imperative for protecting such sensitive consumer information. This paper presents an analysis of recently proposed privacy mechanisms and solutions for the smart grid, identifying their strengths and weaknesses in terms of efficiency, implementation complexity, simplicity, and robustness.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Kun Niu ◽  
Changgen Peng ◽  
Weijie Tan ◽  
Zhou Zhou ◽  
Yi Xu

Benefiting from the development of smart urban computing, the mobile crowd sensing (MCS) network has emerged as an important communication technology to sense and collect data. Users upload data for specific sensing tasks, and the server completes the aggregation analysis and submits the results to the sensing platform. However, users' privacy may be disclosed, and aggregate results may be unreliable. These are challenges for trust computation and privacy protection, especially for sensitive data aggregation with spatial information. To address these problems, a verifiable location-encrypted spatial aggregation computing (LeSAC) scheme is proposed for MCS privacy protection. To solve distributed ciphertext computation over the spatial domain, we first propose an enhanced distance-based interpolation calculation scheme, executed by a delegated evaluator based on Paillier homomorphic encryption. We then use an aggregation signature over the sensing data to ensure the integrity and security of the data. In addition, security analysis indicates that LeSAC achieves IND-CPA semantic security. The efficiency analysis and simulation results demonstrate the communication and computation overhead of LeSAC. Meanwhile, we use real environmental sensing data sets to verify the availability of the proposed scheme; the loss of accuracy (global RMSE) is less than 5%, which meets the application requirements.
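The property that lets a delegated evaluator aggregate ciphertexts without seeing the plaintexts is Paillier's additive homomorphism: multiplying two ciphertexts yields an encryption of the sum. A toy demonstration follows (the primes are far too small for real use, and this shows only the homomorphism, not the LeSAC scheme itself):

```python
import math
import random

def paillier_keygen(p=293, q=433):
    """Toy Paillier key generation; real deployments need >= 2048-bit moduli."""
    n = p * q
    g = n + 1                      # standard simple choice of generator
    lam = math.lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^{-1} mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pk, sk = paillier_keygen()
c1, c2 = encrypt(pk, 41), encrypt(pk, 17)
aggregate = (c1 * c2) % (pk[0] ** 2)   # ciphertext product = plaintext sum
```

The evaluator only ever handles `c1`, `c2`, and their product; the key holder decrypts the aggregate to 41 + 17 = 58 without learning which user contributed which reading.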

