Personalized trajectory privacy-preserving method based on sensitive attribute generalization and location perturbation

2021 ◽  
Vol 25 (5) ◽  
pp. 1247-1271
Author(s):  
Chuanming Chen ◽  
Wenshi Lin ◽  
Shuanggui Zhang ◽  
Zitong Ye ◽  
Qingying Yu ◽  
...  

Trajectory data may include the user’s occupation, medical records, and other similar information. However, attackers can use specific background knowledge to analyze published trajectory data and access a user’s private information. Different users have different requirements regarding the anonymity of sensitive information. To satisfy personalized privacy protection requirements and minimize data loss, we propose a novel trajectory privacy preservation method based on sensitive attribute generalization and trajectory perturbation. The proposed method can prevent an attacker who has a large amount of background knowledge and has exchanged information with other attackers from stealing private user information. First, a trajectory dataset is clustered and frequent patterns are mined according to the clustering results. Thereafter, the sensitive attributes found within the frequent patterns are generalized according to the user requirements. Finally, the trajectory locations are perturbed to achieve trajectory privacy protection. The results of theoretical analyses and experimental evaluations demonstrate the effectiveness of the proposed method in preserving personalized privacy in published trajectory data.
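The pipeline above (generalize the sensitive attribute to the level the user requests, then perturb the trajectory locations) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the hierarchy values, the uniform perturbation, and all function names are assumptions, and the clustering/frequent-pattern mining step is omitted.

```python
import random

# Illustrative generalization hierarchy for one sensitive attribute
# (levels 0..3, most specific to fully suppressed); values are invented.
HIERARCHY = {
    "cardiologist": ["cardiologist", "physician", "medical staff", "*"],
    "nurse":        ["nurse",        "caregiver", "medical staff", "*"],
    "teacher":      ["teacher",      "educator",  "public sector", "*"],
}

def generalize(value, level):
    """Replace a sensitive value with its ancestor at the user-chosen level."""
    chain = HIERARCHY[value]
    return chain[min(level, len(chain) - 1)]

def perturb(point, radius, rng):
    """Shift a (lat, lon) point uniformly within +/- radius on each axis."""
    lat, lon = point
    return (lat + rng.uniform(-radius, radius),
            lon + rng.uniform(-radius, radius))

def anonymize(trajectory, sensitive_value, level, radius, seed=0):
    """Generalize the sensitive attribute, then perturb every location."""
    rng = random.Random(seed)
    return {
        "sensitive": generalize(sensitive_value, level),
        "points": [perturb(p, radius, rng) for p in trajectory],
    }

released = anonymize([(31.20, 121.44), (31.21, 121.45)],
                     "cardiologist", level=2, radius=0.01)
```

A user with stricter requirements would simply request a higher `level` (coarser generalization) or a larger `radius` (stronger perturbation), trading utility for privacy.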

Symmetry ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2347
Author(s):  
Fandi Aditya Putra ◽  
Kalamullah Ramli ◽  
Nur Hayati ◽  
Teddy Surya Gunawan

Over recent years, the incidence of data breaches and cyberattacks has increased significantly. This has highlighted the need for sectoral organizations to share information about such events so that lessons can be learned to mitigate the prevalence and severity of cyber incidents against other organizations. Sectoral organizations embody a governance relationship between cross-sector public and private entities, called public-private partnerships (PPPs). However, organizations are hesitant to share such information due to a lack of trust and business-critical confidentiality issues. This problem arises from the absence of protocols that guarantee privacy protection and protect sensitive information. To address this issue, this paper proposes a novel protocol, Putra-Ramli Secure Cyber-incident Information Sharing (PURA-SCIS), to secure cyber incident information sharing. PURA-SCIS has been designed to offer exceptional data and privacy protection and to run on the cloud services of sectoral organizations. The relationship between organizations in PURA-SCIS is symmetrical, in that the entities must collectively maintain the security of classified cyber incident information. Furthermore, the organizations must be legitimate entities in the PURA-SCIS protocol. The Scyther tool was used for protocol verification of PURA-SCIS. The experimental results showed that the proposed PURA-SCIS protocol provided good security properties, including public verifiability for all entities, blockless verification, data privacy preservation, identity privacy preservation and traceability, and private information sharing. PURA-SCIS also provided a high degree of confidentiality to protect the security and integrity of cyber-incident-related information exchanged among sectoral organizations via cloud services.


Author(s):  
Nandkishor P. Karlekar ◽  
N. Gomathi

Due to the widespread growth of cloud technology, a virtual server deployed on a cloud platform may collect useful data from a client and then jointly disclose the client's sensitive data without permission. Hence, from the perspective of cloud clients, it is very important to take sound technical measures to defend their privacy on the client side. Accordingly, different privacy protection techniques have been presented in the literature for safeguarding the original data. This paper presents a technique for privacy preservation of cloud data using the Kronecker product and Bat-algorithm-based coefficient generation. Overall, the proposed privacy preservation method comprises two main steps. In the first step, the PU coefficient is found optimally using the PUBAT algorithm with a new objective function. In the second step, the input data and the PU coefficient are used to compute the privacy-protected data for subsequent publishing in the cloud environment. For the performance analysis, experiments were carried out on three datasets, namely Cleveland, Switzerland, and Hungarian, and evaluation was performed using accuracy and DBDR. The proposed algorithm achieved an accuracy of 94.28%, whereas the existing algorithm achieved only 83.64%, demonstrating utility. On the other hand, the proposed algorithm achieved a DBDR of 35.28%, whereas the existing algorithm achieved only 12.89%, demonstrating the privacy measure.
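To make the Kronecker-product step concrete, the sketch below publishes the Kronecker product of a coefficient matrix with the input data instead of the data itself. This is only an illustration of the matrix operation: the fixed `coeff` matrix stands in for the PU coefficient that the paper finds via the PUBAT optimizer, and the exact role of that coefficient in the published output is an assumption here.

```python
def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    rows_a, cols_a = len(a), len(a[0])
    rows_b, cols_b = len(b), len(b[0])
    return [[a[i][j] * b[k][l]
             for j in range(cols_a) for l in range(cols_b)]
            for i in range(rows_a) for k in range(rows_b)]

# Hypothetical fixed coefficient matrix; in the paper this would be
# produced by the Bat-based PUBAT optimization.
coeff = [[2, -1],
         [1,  3]]
data = [[5, 7],
        [1, 4]]
protected = kron(coeff, data)  # 4x4 matrix published in place of `data`
```

The published matrix mixes every data entry with every coefficient entry, so recovering the original values requires knowledge of the coefficient matrix.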


2013 ◽  
Vol 433-435 ◽  
pp. 1689-1692 ◽  
Author(s):  
Xiangmin Ren ◽  
Boxuan Jia ◽  
Kechao Wang

Uncertain data management has become an important and active research direction. This paper proposes a UDAK-anonymity algorithm via anatomy for relational uncertain data. An uncertain-data influence matrix based on background knowledge is built to describe the degree of influence of the sensitive attribute and the quasi-identifier (QI) attributes. We use generalization and BK(L,K)-clustering to form equivalence classes, where L enforces sensitive-attribute diversity within each equivalence class. Experimental results show that the UDAK-anonymity algorithm is useful, effective, and efficient, and that the anonymized uncertain data can effectively resist background-knowledge and homogeneity attacks.
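The core requirement described above (at least K records per QI equivalence class, with at least L distinct sensitive values so that homogeneity attacks fail) can be checked with a short sketch. This is the standard k-anonymity/l-diversity condition, not the full UDAK algorithm; the table values and function names are invented for illustration.

```python
from collections import defaultdict

def satisfies_k_l(records, qi_keys, sensitive_key, k, l):
    """Group records into equivalence classes on the QI attributes, then
    require >= k records and >= l distinct sensitive values per class."""
    classes = defaultdict(list)
    for rec in records:
        classes[tuple(rec[q] for q in qi_keys)].append(rec[sensitive_key])
    return all(len(vals) >= k and len(set(vals)) >= l
               for vals in classes.values())

# Generalized QI values ("3*", "451**") with a diverse sensitive column.
table = [
    {"age": "3*", "zip": "451**", "disease": "flu"},
    {"age": "3*", "zip": "451**", "disease": "gastritis"},
    {"age": "4*", "zip": "452**", "disease": "flu"},
    {"age": "4*", "zip": "452**", "disease": "cancer"},
]
```

A class where every record shares the same disease would pass plain k-anonymity but fail this check, which is exactly the homogeneity attack the L parameter guards against.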


2021 ◽  
Vol 15 (2) ◽  
pp. 68-86
Author(s):  
Sowmyarani C. N. ◽  
Veena Gadad ◽  
Dayananda P.

Privacy preservation is a major concern in current technology, where enormous amounts of data are collected and published for analysis. These data may contain sensitive information related to the individuals who own them. If the data are published in their original form, they may lead to privacy disclosure, which threatens privacy requirements. Hence, the data should be anonymized before publishing so that it becomes challenging for intruders to obtain sensitive information by means of any privacy attack model. Popular data anonymization techniques such as k-anonymity, l-diversity, p-sensitive k-anonymity, (l, m, d)-anonymity, and t-closeness are vulnerable to the different privacy attacks discussed in this paper. The proposed technique, called (p+, α, t)-anonymity, aims to anonymize the data in such a way that even an intruder with sufficient background knowledge on the target individual will not be able to infer anything or breach private information. The anonymized data also provide sufficient utility by allowing various data analytics to be performed.
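Of the techniques listed above, t-closeness is the one that constrains distributions rather than counts: each equivalence class's sensitive-value distribution must stay within distance t of the overall distribution. The sketch below checks that condition, but substitutes total-variation distance for the Earth Mover's Distance used in the original t-closeness definition, purely to keep the example short; it says nothing about the proposed (p+, α, t)-anonymity technique itself.

```python
from collections import Counter

def distribution(values):
    """Empirical distribution of a list of sensitive values."""
    total = len(values)
    return {v: c / total for v, c in Counter(values).items()}

def tv_distance(p, q):
    """Total-variation distance between two discrete distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def t_close(classes, t):
    """True if every equivalence class's sensitive-value distribution is
    within distance t of the overall (whole-table) distribution."""
    overall = distribution([v for cls in classes for v in cls])
    return all(tv_distance(distribution(cls), overall) <= t
               for cls in classes)
```

Two perfectly homogeneous classes fail the check even when the table as a whole looks balanced, which is why t-closeness resists skewness attacks that l-diversity misses.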


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Tong Yi ◽  
Minyong Shi

At present, most studies on data publishing consider only a single sensitive attribute, and works on multiple sensitive attributes remain few. Moreover, almost all existing studies on multiple sensitive attributes have not taken the inherent relationships between sensitive attributes into account, so an adversary can use background knowledge about these relationships to attack users' privacy. This paper presents an attack model based on the association rules between the sensitive attributes and, accordingly, presents a data publication method for multiple sensitive attributes. Through proof and analysis, the new model is shown to prevent an adversary from using background knowledge about association rules to attack privacy, while still producing high-quality released information. Finally, this paper verifies the above conclusions with experiments.
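The background knowledge such an adversary exploits is an association rule like "anyone prescribed an antiviral almost certainly has the flu". A minimal rule miner over sensitive-value sets, using standard support and confidence thresholds, can be sketched as follows; the example data and thresholds are invented, and this illustrates the attack model's input, not the paper's publication method.

```python
from collections import Counter
from itertools import combinations

def mine_rules(rows, min_sup, min_conf):
    """Mine 1 -> 1 association rules between sensitive values.
    rows: one set of sensitive values per individual."""
    n = len(rows)
    single = Counter(v for row in rows for v in row)
    pairs = Counter(frozenset(p) for row in rows
                    for p in combinations(sorted(row), 2))
    rules = []
    for pair, count in pairs.items():
        if count / n < min_sup:       # prune infrequent pairs
            continue
        a, b = tuple(pair)
        for x, y in ((a, b), (b, a)):  # try both rule directions
            if count / single[x] >= min_conf:
                rules.append((x, y, count / single[x]))
    return rules

rows = [{"flu", "antiviral"}, {"flu", "antiviral"},
        {"flu", "rest"}, {"diabetes", "insulin"}]
```

Here `antiviral -> flu` holds with confidence 1.0, so a release that hides the disease but reveals the prescription still leaks the disease; a safe publication must break such high-confidence rules.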


2021 ◽  
Author(s):  
Fengmei Jin ◽  
Wen Hua ◽  
Matteo Francia ◽  
Pingfu Chao ◽  
Maria Orlowska ◽  
...  

Trajectory data has become ubiquitous nowadays and can benefit various real-world applications such as traffic management and location-based services. However, trajectories may disclose highly sensitive information about an individual, including mobility patterns, personal profiles and gazetteers, and social relationships, making it indispensable to consider privacy protection when releasing trajectory data. Ensuring privacy on trajectories demands more than hiding single locations, since trajectories are intrinsically sparse and high-dimensional and require the protection of multi-scale correlations. To this end, extensive research has been conducted to design effective techniques for privacy-preserving trajectory data publishing. Furthermore, protecting privacy requires carefully balancing two metrics: privacy and utility. In other words, it is necessary to protect as much privacy as possible while guaranteeing the usefulness of the released trajectories for data analysis. In this survey, we provide a comprehensive study and systematic summarization of the existing protection models and the privacy and utility metrics for trajectories developed in the literature. We also conduct extensive experiments on a real-life public trajectory dataset to evaluate the performance of several representative privacy protection models, demonstrate the trade-off between privacy and utility, and guide the choice of the right privacy model for trajectory publishing given certain privacy and utility desiderata.
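The utility side of the privacy-utility trade-off discussed above is often quantified by how far the released trajectory drifts from the original. One simple proxy, assumed here for illustration and not taken from the survey's metric catalogue, is mean point displacement:

```python
import math

def avg_displacement(original, released):
    """Mean Euclidean displacement between matched points of an original
    and a released trajectory -- a simple utility-loss proxy
    (lower means more utility retained)."""
    assert len(original) == len(released)
    return sum(math.dist(p, q)
               for p, q in zip(original, released)) / len(original)
```

A stronger perturbation model drives this value up while improving privacy, which is exactly the trade-off such surveys measure experimentally.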


