Privacy Models for RFID Schemes

Author(s):  
Serge Vaudenay
2022 ◽ Vol 412 ◽ pp. 126546
Author(s):  
Jesse Laeuchli ◽  
Yunior Ramírez-Cruz ◽  
Rolando Trujillo-Rasua

Author(s):  
Seungho Jeon ◽  
Jeongeun Seo ◽  
Sukyoung Kim ◽  
Jeongmoon Lee ◽  
Jong-Ho Kim ◽  
...  

BACKGROUND
De-identifying personal information is critical when personal health data are used for secondary research. The Observational Medical Outcomes Partnership Common Data Model (CDM), defined by the nonprofit organization Observational Health Data Sciences and Informatics, has been gaining attention for the analysis of patient-level clinical data obtained from various medical institutions. When such data are analyzed in a public environment such as a cloud-computing system, an appropriate de-identification strategy is required to protect patient privacy.

OBJECTIVE
This study proposes and evaluates a de-identification strategy comprising several rules together with privacy models such as k-anonymity, l-diversity, and t-closeness. The proposed strategy was evaluated on an actual CDM database.

METHODS
The CDM database used in this study was constructed by the Anam Hospital of Korea University. Analysis and evaluation were performed using the ARX anonymization framework in combination with the k-anonymity, l-diversity, and t-closeness privacy models.

RESULTS
The CDM database, constructed according to the rules established by Observational Health Data Sciences and Informatics, exhibited a low risk of re-identification: the highest re-identifiable record rate (11.3%) in the dataset was exhibited by the DRUG_EXPOSURE table, with a re-identification success rate of 0.03%. However, because every table includes at least one "highest risk" value of 100%, suitable anonymization techniques are required; moreover, the CDM database preserves the "source values" (raw data), combinations of which could increase the risk of re-identification. Therefore, this study proposes an enhanced strategy that de-identifies the source values, significantly reducing not only the highest risk under the k-anonymity, l-diversity, and t-closeness privacy models but also the overall possibility of re-identification.

CONCLUSIONS
Our proposed de-identification strategy effectively enhanced the privacy of the CDM database, thereby encouraging clinical research involving multiple centers.
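The privacy models named in the abstract can be illustrated with a minimal sketch. This is not the authors' ARX pipeline; the column names and toy records below are hypothetical, and only two of the three models (k-anonymity and l-diversity) are shown. k-anonymity is the size of the smallest equivalence class over the quasi-identifier columns; l-diversity is the smallest number of distinct sensitive values within any such class.

```python
from collections import Counter, defaultdict

def k_anonymity(records, quasi_identifiers):
    """k = size of the smallest equivalence class over the quasi-identifiers."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

def l_diversity(records, quasi_identifiers, sensitive):
    """l = fewest distinct sensitive values found in any equivalence class."""
    groups = defaultdict(set)
    for r in records:
        groups[tuple(r[q] for q in quasi_identifiers)].add(r[sensitive])
    return min(len(values) for values in groups.values())

# Hypothetical toy records (not taken from the CDM database).
records = [
    {"age_band": "40-49", "zip3": "021", "drug": "A"},
    {"age_band": "40-49", "zip3": "021", "drug": "B"},
    {"age_band": "50-59", "zip3": "021", "drug": "A"},
]

print(k_anonymity(records, ["age_band", "zip3"]))            # → 1
print(l_diversity(records, ["age_band", "zip3"], "drug"))    # → 1
```

A k of 1 means at least one record is unique on its quasi-identifiers, i.e., the "highest risk" value of 100% mentioned in the results; generalizing or suppressing quasi-identifier values raises both measures.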


Author(s):  
Haoti Zhong ◽  
Anna Squicciarini ◽  
David Miller ◽  
Cornelia Caragea

We address machine prediction of an individual's label (private or public) for a given image. This problem is difficult due to user subjectivity and a lack of labeled examples for training individual, personalized models; training a separate classifier for each user is also time- and space-consuming. We propose a Group-Based Personalized Model for image privacy classification on online social media sites, which learns a set of archetypical privacy models (groups) and associates a given user with one of these groups. Our system can be used to provide accurate "early warnings" with respect to a user's privacy awareness level.
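The group-association step can be sketched as a nearest-centroid assignment. This is an assumption about one plausible realization, not the paper's actual model: here each user is represented by a hypothetical vector of private (1) / public (0) labels on a shared set of probe images, and is assigned to the archetypical group whose centroid is closest.

```python
import math

def nearest_group(user_vector, group_centroids):
    """Return the index of the archetypical group whose centroid is
    closest to the user's label vector in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(group_centroids)),
               key=lambda g: dist(user_vector, group_centroids[g]))

# Two hypothetical archetypes: a "mostly private" and a "mostly public" labeler.
centroids = [[0.9, 0.8, 0.9], [0.1, 0.2, 0.1]]

print(nearest_group([1, 1, 0], centroids))  # → 0 ("mostly private" archetype)
print(nearest_group([0, 0, 0], centroids))  # → 1 ("mostly public" archetype)
```

Once a new user is mapped to a group, the group's classifier can predict labels for that user without training a per-user model, which is what makes early warnings cheap to produce.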

