privacy models
Recently Published Documents


TOTAL DOCUMENTS: 48 (FIVE YEARS: 15)

H-INDEX: 7 (FIVE YEARS: 2)

2022 ◽  
Author(s):  
Ying Zhao ◽  
Jinjun Chen

Huge amounts of unstructured data, including images, video, audio, and text, are ubiquitously generated and shared, and it is a challenge to protect the sensitive personal information they contain, such as human faces, voiceprints, and authorship. Differential privacy is the standard privacy protection technology that provides rigorous privacy guarantees for various types of data. This survey summarizes and analyzes differential privacy solutions for protecting unstructured data content before it is shared with untrusted parties. These differential privacy methods obfuscate unstructured data after representing it as vectors, and then reconstruct the data from the obfuscated vectors. We summarize the specific privacy models and mechanisms involved, together with their open challenges. We also review their privacy guarantees against AI attacks and their utility losses. Finally, we discuss several possible directions for future research.
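As an illustration of the obfuscate-then-reconstruct pattern this survey describes, the following is a minimal sketch (not taken from the survey itself) that perturbs a vector representation with the Laplace mechanism; the embedding dimension, sensitivity, and privacy budget epsilon are assumed values chosen for the example.

```python
import numpy as np

def laplace_obfuscate(embedding: np.ndarray, sensitivity: float, epsilon: float) -> np.ndarray:
    """Perturb a vector representation with Laplace noise calibrated to
    sensitivity / epsilon -- the standard epsilon-DP Laplace mechanism."""
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=embedding.shape)
    return embedding + noise

# Toy usage: a 128-dimensional face embedding (values assumed normalized to [-1, 1]).
face_embedding = np.random.uniform(-1, 1, size=128)
obfuscated = laplace_obfuscate(face_embedding, sensitivity=2.0, epsilon=1.0)
# The obfuscated vector, not the raw one, would then be used to
# reconstruct a shareable version of the image, audio, or text.
```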


2022 ◽  
Vol 412 ◽  
pp. 126546
Author(s):  
Jesse Laeuchli ◽  
Yunior Ramírez-Cruz ◽  
Rolando Trujillo-Rasua

2021 ◽  
Author(s):  
R Sudha ◽  
G Pooja ◽  
V Revathy ◽  
S Dilip Kumar

The use of online net banking official sites has been rapidly increased now a days. In online transaction attackers need only little information to steal the private information of bank users and can do any kind of fraudulent activities. One of the major drawbacks of commercial losses in online banking is fraud detected by credit card fraud detection system, which has a significant impact on clients. Fraudulent transactions will be discovered after the transaction is completed in the existing novel privacy models. As a result, in this paper, three level server systems are implemented to partition the intermediate gateway with better security. User details, transaction details and account details are considered as sensitive attributes and stored in separate database. And also data suppression scheme to replace the string and numerical characters into special symbols to overcome the traditional cryptography schemes is implemented. The Quasi-Identifiers are hidden by using Anonymization algorithm so that the transactions can be done efficiently.
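To make the suppression idea concrete, here is a minimal sketch (an assumed illustration of character-level suppression in general, not the paper's actual implementation) that masks letters and digits in sensitive attribute values with special symbols while leaving field separators intact.

```python
import re

def suppress(value: str) -> str:
    """Replace alphabetic characters with '*' and digits with '#',
    keeping separators so the field format stays recognizable."""
    masked = re.sub(r"[A-Za-z]", "*", value)
    return re.sub(r"[0-9]", "#", masked)

print(suppress("John Doe"))             # **** ***
print(suppress("4111-1111-1111-1111"))  # ####-####-####-####
```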


Author(s):  
Chuhao Wu ◽  
Tianhao Wang ◽  
Robert W. Proctor ◽  
Ninghui Li ◽  
Jeremiah Blocki ◽  
...  

2021 ◽  
Author(s):  
Vikas Thammanna Gowda

Although k-Anonymity is a good way to publish microdata for research purposes, it still suffers from various attacks. Hence, many refinements of k-Anonymity have been proposed, such as l-diversity and t-Closeness, with t-Closeness being one of the strictest privacy models. Satisfying t-Closeness for a lower value of t may yield equivalence classes with a high number of records, which results in greater information loss. For a higher value of t, equivalence classes remain prone to homogeneity, skewness, and similarity attacks, because equivalence classes can be formed with fewer distinct sensitive attribute values and still satisfy the constraint t. In this paper, we introduce a new algorithm that overcomes the limitations of k-Anonymity and l-Diversity and yields equivalence classes of size k with greater diversity, in which the frequencies of any sensitive attribute (SA) value across all equivalence classes differ by at most one.
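One natural way to satisfy the "frequencies differ by at most one" property is to deal records out round-robin per sensitive attribute value; the sketch below is a hypothetical illustration of that idea, not the algorithm proposed in the paper.

```python
from collections import defaultdict

def distribute(records, sa_key, num_classes):
    """Deal records into equivalence classes round-robin, grouped by SA
    value, so each value's count across classes differs by at most one."""
    by_value = defaultdict(list)
    for rec in records:
        by_value[rec[sa_key]].append(rec)
    classes = [[] for _ in range(num_classes)]
    i = 0  # shared counter also keeps total class sizes balanced
    for value_records in by_value.values():
        for rec in value_records:
            classes[i % num_classes].append(rec)
            i += 1
    return classes
```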


2021 ◽  
Author(s):  
Noura Al Moubayed ◽  
Zheming Zuo ◽  
Matthew Watson ◽  
Robert Hall ◽  
Chris Kennelly ◽  
...  

BACKGROUND Data science offers an unparalleled opportunity to identify new insights into many aspects of human life, with recent advances in healthcare. Using data science in digital health raises significant challenges in data privacy, transparency, and trustworthiness. Recent regulations, such as the General Data Protection Regulation (GDPR) and the UK Data Protection Act (DPA) 2018, enforce the need for a clear legal basis to collect, process, and share data. For healthcare providers, the legal basis for using the electronic health record (EHR) is strictly clinical care. Any other use of the data requires careful consideration of the legal context and direct patient consent. Identifiable personal and sensitive information must be sufficiently anonymized. Raw data are commonly anonymized for research purposes, with risk assessment for re-identification and utility. Whilst healthcare organizations have internal information governance policies, there is a significant lack of practical tools and intuitive guidance on the use of data for research and modelling. Off-the-shelf data anonymization tools are developed frequently, but their privacy-related functionalities are often not comparable across problem domains. Additionally, tools exist to measure the re-identification risk of anonymized data against its usefulness, but their efficacy can be unclear. OBJECTIVE In this systematic literature mapping (SLM) study, we aim to alleviate these issues by reviewing the landscape of data anonymization for digital healthcare. METHODS We used Google Scholar, Web of Science, Elsevier Scopus, and PubMed to retrieve academic studies published in English up to June 2020. Grey literature was also used to initialize the search. We focus on review questions covering five bottom-up aspects: 1) basic anonymization operations; 2) privacy models; 3) re-identification risk and usability metrics; 4) off-the-shelf anonymization tools; and 5) the lawful basis for EHR data anonymization. RESULTS We identified 239 eligible studies: 60 articles provide general background; 16 papers cover seven basic anonymization operations; 104 studies cover 72 conventional and machine-learning-based privacy models; seven re-identification risk metrics and 15 usability metrics are covered in four and 19 papers, respectively; and 20 data anonymization software tools are explored in 36 publications. In addition, we evaluate the practical feasibility of anonymizing EHR data with reference to its usability for medical decision-making. Furthermore, we summarize the lawful basis to deliver guidance for practical EHR anonymization. CONCLUSIONS This SLM study indicates that data anonymization of EHRs is theoretically achievable, but practical implementations require more research effort to balance privacy preservation against usability and thus ensure more reliable healthcare applications.
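As context for the re-identification risk metrics this mapping covers, the following is a minimal sketch (an illustrative assumption, not taken from any reviewed tool) of the common prosecutor-style risk estimate, where each record's risk is the reciprocal of the size of its quasi-identifier equivalence class.

```python
from collections import Counter

def reidentification_risks(records, quasi_identifiers):
    """Per-record prosecutor risk = 1 / size of the record's
    equivalence class over the chosen quasi-identifiers."""
    keys = [tuple(rec[q] for q in quasi_identifiers) for rec in records]
    class_sizes = Counter(keys)
    return [1.0 / class_sizes[k] for k in keys]

rows = [
    {"age": 34, "zip": "10001", "disease": "flu"},
    {"age": 34, "zip": "10001", "disease": "asthma"},
    {"age": 61, "zip": "94103", "disease": "diabetes"},
]
risks = reidentification_risks(rows, ["age", "zip"])
print(max(risks))  # 1.0: the unique (61, "94103") record is fully re-identifiable
```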


Author(s):  
Rajwinder Kaur ◽  
Karan Verma ◽  
Shelendra Kumar Jain ◽  
Nishtha Kesswani

The Internet of Things is a paradigm that has expanded very swiftly, with a high degree of heterogeneity and functionality. Security and privacy have become prime concerns in the Internet of Things because of the unsecured nature of wireless communication. Over such an unsecured network, it is easy for invaders to trace and find the position of nodes during communication and leak the information. Issues related to location information include the sharing, storage, sensing, and processing of information, which can be used by external entities in different contexts: technical, legal, and social. These issues make privacy a major concern. This article presents the notions behind existing privacy models and techniques that amplify them using a random path. The article then describes possible solutions to preserve the location of nodes with less transmission time. Results for the proposed scheme show that the approach behaves effectively.
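To illustrate the random-path idea in this abstract, here is a minimal, hypothetical sketch (the node graph and hop budget are assumptions for the example, not the article's scheme) in which a message takes a few random intermediate hops before heading to the sink, obscuring the source node's position from a traffic observer.

```python
import random

def random_path(graph, source, sink, random_hops=3):
    """Take a few random hops away from the source, then go to the sink;
    a real scheme would route from the last random hop to the sink."""
    path = [source]
    current = source
    for _ in range(random_hops):
        neighbors = [n for n in graph[current] if n != sink]
        if not neighbors:
            break
        current = random.choice(neighbors)
        path.append(current)
    path.append(sink)
    return path

# Toy adjacency list of sensor nodes; "sink" is the base station.
graph = {
    "a": ["b", "c"], "b": ["a", "c", "sink"],
    "c": ["a", "b", "sink"], "sink": ["b", "c"],
}
print(random_path(graph, "a", "sink"))
```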


10.2196/19597 ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. e19597
Author(s):  
Seungho Jeon ◽  
Jeongeun Seo ◽  
Sukyoung Kim ◽  
Jeongmoon Lee ◽  
Jong-Ho Kim ◽  
...  

Background De-identifying personal information is critical when using personal health data for secondary research. The Observational Medical Outcomes Partnership Common Data Model (CDM), defined by the nonprofit organization Observational Health Data Sciences and Informatics, has been gaining attention for its use in the analysis of patient-level clinical data obtained from various medical institutions. When analyzing such data in a public environment such as a cloud-computing system, an appropriate de-identification strategy is required to protect patient privacy. Objective This study proposes and evaluates a de-identification strategy that comprises several rules along with privacy models such as k-anonymity, l-diversity, and t-closeness. The proposed strategy was evaluated using an actual CDM database. Methods The CDM database used in this study was constructed by the Anam Hospital of Korea University. Analysis and evaluation were performed using the ARX anonymizing framework in combination with the k-anonymity, l-diversity, and t-closeness privacy models. Results The CDM database, which was constructed according to the rules established by Observational Health Data Sciences and Informatics, exhibited a low risk of re-identification: the highest re-identifiable record rate (11.3%) in the dataset was exhibited by the DRUG_EXPOSURE table, with a re-identification success rate of 0.03%. However, because all tables include at least one "highest risk" value of 100%, suitable anonymizing techniques are required; moreover, the CDM database preserves the "source values" (raw data), a combination of which could increase the risk of re-identification. Therefore, this study proposes an enhanced strategy to de-identify the source values and thereby significantly reduce not only the highest risk under the k-anonymity, l-diversity, and t-closeness privacy models but also the overall possibility of re-identification. Conclusions Our proposed de-identification strategy effectively enhanced the privacy of the CDM database, thereby encouraging clinical research involving multiple centers.
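For readers unfamiliar with the privacy models named here, the sketch below (illustrative only; the study itself used the ARX framework) checks k-anonymity and distinct l-diversity for a toy table over chosen quasi-identifiers.

```python
from collections import defaultdict

def check_k_anonymity_l_diversity(records, quasi_ids, sensitive, k, l):
    """True iff every quasi-identifier equivalence class has at least k
    records (k-anonymity) and at least l distinct sensitive values
    (distinct l-diversity)."""
    classes = defaultdict(list)
    for rec in records:
        classes[tuple(rec[q] for q in quasi_ids)].append(rec[sensitive])
    return all(len(vals) >= k and len(set(vals)) >= l
               for vals in classes.values())

rows = [
    {"age_band": "30-39", "sex": "F", "diagnosis": "flu"},
    {"age_band": "30-39", "sex": "F", "diagnosis": "asthma"},
    {"age_band": "30-39", "sex": "F", "diagnosis": "flu"},
]
print(check_k_anonymity_l_diversity(rows, ["age_band", "sex"],
                                    "diagnosis", k=3, l=2))  # True
```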


2020 ◽  
Author(s):  
Ferucio Laurenţiu Ţiplea ◽  
Cristian Andriesei ◽  
Cristian Hristea

The last decade has shown an increasing interest in the use of physically unclonable function (PUF) technology in the design of radio frequency identification (RFID) systems. PUFs can bring extra security and privacy at the physical level that cannot currently be obtained by symmetric or asymmetric cryptography. However, many PUF-based RFID schemes proposed in recent years do not achieve even the lowest privacy level in reputable security and privacy models, such as Vaudenay's model, whereas the lowest privacy level in this model can be achieved by standard RFID schemes that use only symmetric cryptography. The purpose of this chapter is to analyze this aspect. We emphasize the need to use formal models in the study of the security and privacy of (PUF-based) RFID schemes. We broadly discuss the tag corruption oracle and highlight some aspects that can lead to schemes without security or privacy. We also insist on the need to treat the cryptographic properties of PUFs formally in order to obtain security and privacy proofs. In the end, we point out a significant benefit of using PUF technology in RFID, namely obtaining schemes that offer destructive privacy in Vaudenay's model.
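As a toy illustration of why PUF behavior needs formal treatment, the following sketch (a simplifying assumption for exposition, not a scheme from the chapter) models a noisy PUF as a fixed secret response flipped with a small bit-error probability, with verification by Hamming distance; security and privacy proofs must account for exactly this kind of noise and for the unclonability assumption.

```python
import random

SECRET = [random.getrandbits(1) for _ in range(64)]  # the tag's ideal PUF response

def puf_response(bit_error=0.05):
    """Re-evaluating a real PUF returns the ideal response with a few
    random bit flips; this toy model captures only that noise."""
    return [b ^ (random.random() < bit_error) for b in SECRET]

def verify(response, reference, threshold=8):
    """Accept if the Hamming distance is within the noise threshold."""
    return sum(a != b for a, b in zip(response, reference)) <= threshold

print(verify(puf_response(), SECRET))  # True with high probability
```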

