Automating Privacy Compliance Using Policy Integrated Blockchain

Cryptography ◽  
2019 ◽  
Vol 3 (1) ◽  
pp. 7 ◽  
Author(s):  
Karuna Pande Joshi ◽  
Agniva Banerjee

An essential requirement of any information management system is to protect data and resources against breach or improper modification, while at the same time ensuring data access to legitimate users. Systems handling personal data are mandated to track its flow to comply with data protection regulations. We have built a novel framework that integrates a semantically rich data privacy knowledge graph with Hyperledger Fabric blockchain technology to develop an automated access-control and audit mechanism that enforces users' data privacy policies when their data is shared with third parties. Our blockchain-based data-sharing solution addresses two of the most critical challenges: transaction verification and permissioned data obfuscation. It ensures accountability for data sharing in the cloud by incorporating a secure and efficient system for end-to-end provenance. In this paper, we describe this framework along with the comprehensive, semantically rich knowledge graph we have developed to capture rules embedded in data privacy policy documents. Organizations can use our framework to automate compliance for their cloud datasets.
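The access-control idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual API: policy rules are held as subject-action-recipient triples (standing in for the knowledge graph), and an access request is granted only if a matching rule exists; each decision would then be appended to the ledger for audit.

```python
# Illustrative sketch of policy-driven access control. All names here are
# assumptions for the example, not identifiers from the paper's framework.

policy_graph = {
    # (data_category, action, recipient_role) triples extracted from a policy
    ("contact_info", "share", "payment_processor"),
    ("usage_stats", "share", "analytics_partner"),
}

def access_permitted(data_category: str, action: str, recipient_role: str) -> bool:
    """Grant the request only if the policy graph contains a permitting rule."""
    return (data_category, action, recipient_role) in policy_graph

# Each decision would also be written to the blockchain as an audit record.
decision = access_permitted("contact_info", "share", "advertiser")  # not permitted
```

In the real framework the rules are mined from privacy-policy documents into a semantic knowledge graph rather than hand-written as literals.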

2019 ◽  
Vol 9 (1) ◽  
pp. 80-91 ◽  
Author(s):  
Md Mehedi Hassan Onik ◽  
Chul-Soo Kim ◽  
Nam-Yong Lee ◽  
Jinhong Yang

Abstract Secure data distribution is critical for data accountability. Surveillance-related privacy-breach incidents have already called existing personal data collection techniques into question. Organizations assemble huge amounts of personally identifiable information (PII) for data-driven market analysis and prediction. However, the limitations of data-tracking tools restrict the detection of exact data-breach points. Blockchain technology, an 'immutable' distributed ledger, can be leveraged to establish a transparent data-auditing platform. However, Art. 17 of the General Data Protection Regulation (GDPR) mandates the 'right to erasure' ('right to be forgotten') of personal information, which conflicts with the immutability of blockchain technology. This paper proposes a GDPR-compliant, decentralized, and trusted PII sharing and tracking scheme. The proposed blockchain-based personally identifiable information management system (BcPIIMS) demonstrates data movement among the GDPR entities (user, controller, and processor). Considering GDPR limitations, BcPIIMS uses an off-chain data-storing architecture. A prototype was created to validate the proposed architecture using MultiChain. The use of off-chain storage reduces individual block size. Additionally, the private blockchain limits personal data leakage by requiring fast approval from a restricted set of peers. This study presents personal data sharing, deletion, modification, and tracking features to verify the privacy of the proposed system.
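The off-chain pattern that reconciles erasure with immutability can be sketched as follows. This is a minimal illustration under assumed names, not BcPIIMS code: only a hash of each PII record goes on the append-only chain, while the record itself sits in mutable off-chain storage, so deleting the off-chain copy honours an erasure request while the on-chain hash alone cannot reconstruct the data.

```python
# Toy sketch of on-chain hash / off-chain data storage (illustrative only).
import hashlib

chain = []        # append-only ledger holding (record_id, hash) entries
off_chain = {}    # mutable PII store keyed by record id

def store_pii(record_id: str, pii: bytes) -> None:
    """Keep the data off-chain; anchor only its digest on the chain."""
    off_chain[record_id] = pii
    chain.append((record_id, hashlib.sha256(pii).hexdigest()))

def erase_pii(record_id: str) -> None:
    """GDPR erasure: drop the off-chain data; the chain keeps only the hash."""
    off_chain.pop(record_id, None)

def verify_pii(record_id: str) -> bool:
    """Audit check: does the stored record still match its on-chain digest?"""
    digest = dict(chain).get(record_id)
    pii = off_chain.get(record_id)
    return pii is not None and hashlib.sha256(pii).hexdigest() == digest

store_pii("user-42", b"alice@example.com")
```

After `erase_pii("user-42")`, the ledger entry survives for audit purposes but the personal data itself is gone.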


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Nabil Rifi ◽  
Nazim Agoulmine ◽  
Nada Chendeb Taher ◽  
Elie Rachkidi

In the past few years, the number of wireless devices connected to the Internet has grown to a figure that could reach billions in the next few years. While cloud computing is seen as the solution for processing this data, security challenges cannot be addressed solely with this technology. Security problems will continue to increase under such a model, especially for private and sensitive data such as personal and medical data collected by ever smarter connected devices constituting the so-called Internet of Things. As a consequence, there is an urgent need for a fully decentralized, peer-to-peer, and secure technology to overcome these problems. Blockchain technology is a promising solution that brings the required properties to the field. However, there are still challenges to address before using it in the context of IoT. This paper discusses these challenges and proposes a secure IoT architecture for medical data based on blockchain technology. The solution introduces a protocol for data access, smart contracts, and a publisher-subscriber mechanism for notification. A simple analytical model is also presented to highlight the performance of the system, and an implementation of the solution is presented as a proof of concept.


Author(s):  
Sue Milton

The proliferation of data exposure via social media implies privacy and security are a lost cause. Regulation counters this through personal data usage compliance. Organizations must also keep non-personal data safe from competitors, criminals, and nation states. The chapter introduces leaders to the two data governance fundamentals: data privacy and data security. The chapter argues that data security cannot be achieved until data privacy issues have been addressed. Simply put, data privacy is fundamental to any data usage policy and data security to the data access policy. The fundamentals are then discussed more broadly, covering data and information management, cyber security, governance, and innovations in IT service provisioning. The chapter clarifies the complementary fundamentals and how they reduce data abuse. The link between privacy and security also demystifies the high resource costs in implementing and maintaining security practices and explains why leaders must provide strong IT leadership to ensure IT investment is defined and implemented wisely.


Author(s):  
Tore Hoel ◽  
Weiqin Chen ◽  
Jan M. Pawlowski

Abstract There is a gap between people's online sharing of personal data and their concerns about privacy. Until now, this gap has been addressed by attempting to match individual privacy preferences with service providers' options for data handling. This approach has ignored the role different contexts play in data sharing. This paper aims to give privacy engineering a new direction, putting context centre stage and exploiting the affordances of machine learning in handling contexts and negotiating data sharing policies. The research is explorative and conceptual, representing the first development cycle of a design science research project in privacy engineering. The paper offers a concise understanding of data privacy as a foundation for design, extending the seminal contextual integrity theory of Helen Nissenbaum. That theory started out as a normative theory describing the moral appropriateness of data transfers. In our work, the contextual integrity model is extended to a socio-technical theory that could have practical impact in the era of artificial intelligence. New conceptual constructs such as 'context trigger', 'data sharing policy' and 'data sharing smart contract' are defined, and their application is discussed at organisational and technical levels. The constructs and design are validated through expert interviews; contributions to design science research are discussed, and the paper concludes by presenting a framework for further privacy engineering development cycles.


Author(s):  
Shaveta Malik ◽  
Archana Mire ◽  
Amit Kumar Tyagi ◽  
Arathi Boyanapalli

Clinical research depends on participation from patients, and patient enrolment often raises concerns. Hence, it faces various challenges related to personal data, such as data sharing, privacy, and reproducibility. Patients and researchers need to follow a set plan called a study protocol, which spans various stages such as registration, collection and analysis of data, report generation, and, finally, publication of findings. Blockchain technology has emerged as one possible solution to these challenges. It has the potential to address the problems associated with clinical research, providing a basis for building transparent, secure services without relying on a trusted third party. The technology enables sharing control of the data, its security, and its parameters with a single patient, a group of patients, or any other stakeholders of a clinical trial. This chapter addresses the use of blockchain in the execution of secure and trusted clinical trials.


2021 ◽  
Vol 3 ◽  
Author(s):  
Deborah Lupton

Self-tracking technologies and practices offer ways of generating vast reams of personal details, raising questions about how these data are revealed or exposed to others. In this article, I report on findings from an interview-based study of long-term Australian self-trackers who were collecting and reviewing personal information about their bodies and other aspects of their everyday lives. The discussion focuses on the participants' understandings and practices related to sharing their personal data and to data privacy. The contextual elements of self-tracked sharing and privacy concerns were evident in the participants' accounts and were strongly related to ideas about why and how these details should be accessed by others. Sharing personal information from self-tracking was largely viewed as an intimate social experience. The value of self-tracked data for contributing to close face-to-face relationships was recognized, and related aspects of social privacy were identified. However, most participants did not consider the possibility that their personal information could be distributed well beyond these relationships by third parties for commercial purposes (or what has been termed "institutional privacy"). These findings contribute to a more-than-digital approach to personal data sharing and privacy practices that recognizes the interplay between digital and non-digital practices and contexts. They also highlight the relational and social dimensions of self-tracking and concepts of data privacy.


2019 ◽  
Vol 40 (1) ◽  
pp. 48-52 ◽  
Author(s):  
Peter Buell Hirsch

Purpose: This viewpoint is intended to examine the issue of the monetization of personal data and the risks to companies that fail to understand this trend.
Design/methodology/approach: This paper reviews the recent literature on the use and abuse of personal data to identify relevant trends and issues.
Findings: It is likely, whether through blockchain technology or some other means, that individual consumers will be able to monetize their data.
Research limitations/implications: As a review of secondary sources rather than original sources, the findings are anecdotal and not comprehensive.
Practical implications: In the rapidly changing environment of data privacy and security, one should anticipate that the findings may become outdated by sudden events such as a new global data privacy breach.
Social implications: Ownership of personal data and its use or abuse is one of the single most important social issues in today's world, with profound implications for civil society.
Originality/value: While there have been numerous studies cataloguing attempts to create monetization platforms for consumer data, there are not many studies on the reputational risks for companies in handling data from the Internet of Things.


2017 ◽  
Vol 2017 (4) ◽  
pp. 232-250 ◽  
Author(s):  
David Froelicher ◽  
Patricia Egger ◽  
João Sá Sousa ◽  
Jean Louis Raisaro ◽  
Zhicong Huang ◽  
...  

Abstract Current solutions for privacy-preserving data sharing among multiple parties either depend on a centralized authority that must be trusted and provides only weakest-link security (e.g., the entity that manages private/secret cryptographic keys), or leverage decentralized but impractical approaches (e.g., secure multi-party computation). When the data to be shared are of a sensitive nature and the number of data providers is high, these solutions are not appropriate. Therefore, we present UnLynx, a new decentralized system for efficient privacy-preserving data sharing. We consider m servers that constitute a collective authority whose goal is to verifiably compute on data sent from n data providers. UnLynx guarantees confidentiality, unlinkability between data providers and their data, privacy of the end result, and correctness of the computations performed by the servers. Furthermore, to support differentially private queries, UnLynx can collectively add noise under encryption. All of this is achieved through a combination of new distributed and secure protocols based on homomorphic cryptography, verifiable shuffling, and zero-knowledge proofs. UnLynx is highly parallelizable and modular by design, as it enables multiple security/privacy vs. runtime tradeoffs. Our evaluation shows that UnLynx can execute a secure survey on 400,000 personal data records containing 5 encrypted attributes, distributed over 20 independent databases, for a total of 2,000,000 ciphertexts, in 24 minutes.
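UnLynx itself combines homomorphic encryption, verifiable shuffling, and zero-knowledge proofs; the toy sketch below illustrates only the underlying decentralized-aggregation idea, using additive secret sharing as a stand-in. Each provider splits its value into m random shares, one per server, so no single server learns any individual value, yet summing the servers' share totals recovers the collective result.

```python
# Toy additive secret sharing (a simplification, NOT UnLynx's actual protocol).
import random

P = 2**61 - 1  # public modulus; an illustrative choice

def split(value: int, m: int) -> list[int]:
    """Split a value into m shares that sum to the value mod P."""
    shares = [random.randrange(P) for _ in range(m - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares: list[list[int]]) -> int:
    """Each server i sums the i-th shares; combining the totals yields the sum."""
    m = len(all_shares[0])
    server_totals = [sum(s[i] for s in all_shares) % P for i in range(m)]
    return sum(server_totals) % P

values = [23, 7, 12]                              # one value per data provider
total = aggregate([split(v, 3) for v in values])  # -> 42
```

No server ever sees 23, 7, or 12 in the clear; each holds only uniformly random shares. UnLynx achieves the analogous property with ElGamal-based homomorphic encryption, and additionally proves the servers computed correctly.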


10.2196/18087 ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. e18087
Author(s):  
Christine Suver ◽  
Adrian Thorogood ◽  
Megan Doerr ◽  
John Wilbanks ◽  
Bartha Knoppers

Developing or independently evaluating algorithms in biomedical research is difficult because of restrictions on access to clinical data. Access is restricted because of privacy concerns, the proprietary treatment of data by institutions (fueled in part by the cost of data hosting, curation, and distribution), concerns over misuse, and the complexities of applicable regulatory frameworks. The use of cloud technology and services can address many of the barriers to data sharing. For example, researchers can access data in high performance, secure, and auditable cloud computing environments without the need for copying or downloading. An alternative path to accessing data sets requiring additional protection is the model-to-data approach. In model-to-data, researchers submit algorithms to run on secure data sets that remain hidden. Model-to-data is designed to enhance security and local control while enabling communities of researchers to generate new knowledge from sequestered data. Model-to-data has not yet been widely implemented, but pilots have demonstrated its utility when technical or legal constraints preclude other methods of sharing. We argue that model-to-data can make a valuable addition to our data sharing arsenal, with 2 caveats. First, model-to-data should only be adopted where necessary to supplement rather than replace existing data-sharing approaches given that it requires significant resource commitments from data stewards and limits scientific freedom, reproducibility, and scalability. Second, although model-to-data reduces concerns over data privacy and loss of local control when sharing clinical data, it is not an ethical panacea. Data stewards will remain hesitant to adopt model-to-data approaches without guidance on how to do so responsibly. To address this gap, we explored how commitments to open science, reproducibility, security, respect for data subjects, and research ethics oversight must be re-evaluated in a model-to-data context.
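The model-to-data contract described above can be sketched minimally. This is a hypothetical illustration with assumed names, not an actual platform API: the researcher submits a function, and the steward runs it inside a harness that releases only an aggregate metric, never the records themselves.

```python
# Hypothetical model-to-data harness (illustrative only; field names assumed).

HIDDEN_DATA = [  # stays on the data steward's infrastructure, never shared
    {"age": 54, "outcome": 1},
    {"age": 61, "outcome": 0},
    {"age": 47, "outcome": 1},
]

def run_submission(model, min_cohort: int = 3) -> float:
    """Run a submitted model on the hidden data; release only a scalar score."""
    if len(HIDDEN_DATA) < min_cohort:  # crude disclosure guard on cohort size
        raise ValueError("cohort too small to release results")
    predictions = [model(record) for record in HIDDEN_DATA]
    correct = sum(p == r["outcome"] for p, r in zip(predictions, HIDDEN_DATA))
    return correct / len(HIDDEN_DATA)

# The researcher sees only the accuracy, not any record.
score = run_submission(lambda record: 1 if record["age"] < 60 else 0)
```

Real deployments add sandboxing, output review, and audit logging around this core loop; the sketch shows only why the data can remain sequestered while knowledge still flows out.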



