Clouded data: Privacy and the promise of encryption

2019 ◽  
Vol 6 (1) ◽  
Article 205395171984878
Author(s):  
Luke Munn ◽  
Tsvetelina Hristova ◽  
Liam Magee

Personal data is highly vulnerable to security exploits, spurring moves to lock it down through encryption, to cryptographically ‘cloud’ it. But personal data is also highly valuable to corporations and states, triggering moves to unlock its insights by relocating it in the cloud. We characterise this twinned condition as ‘clouded data’. Clouded data constructs a political and technological notion of privacy that operates through the intersection of corporate power, computational resources and the ability to obfuscate, gain insights from and valorise a dependency between public and private. First, we survey prominent clouded data approaches (blockchain, multiparty computation, differential privacy, and homomorphic encryption), suggesting their particular affordances produce distinctive versions of privacy. Next, we perform two notional code-based experiments using synthetic datasets. In the field of health, we submit a patient’s blood pressure to a notional cloud-based diagnostics service; in education, we construct a student survey that enables aggregate reporting without individual identification. We argue that these technical affordances legitimate new political claims to capture and commodify personal data. The final section broadens the discussion to consider the political force of clouded data and its reconstitution of traditional notions such as the public and the private.
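The education experiment the authors describe (aggregate reporting without individual identification) can be sketched in a few lines. The following is a hypothetical reconstruction, not the authors' code: a synthetic yes/no student survey is aggregated under the Laplace mechanism of differential privacy, so the reported count reveals no individual response. All names and parameter choices are illustrative.

```python
import math
import random

random.seed(42)

# Synthetic survey: 200 students answer a yes/no question.
answers = [random.random() < 0.3 for _ in range(200)]
true_count = sum(answers)

def laplace(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

epsilon = 1.0      # privacy budget; smaller means noisier and more private
sensitivity = 1    # one student changing their answer shifts the count by at most 1
noisy_count = true_count + laplace(sensitivity / epsilon)

print(f"true count: {true_count}, reported count: {noisy_count:.1f}")
```

The reported count is useful in aggregate, while any single response is hidden inside the noise, which is the political point the article examines: the statistic becomes shareable and valorisable precisely because the individual is obfuscated.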

2021 ◽  
Author(s):  
Kai Rannenberg ◽  
Sebastian Pape ◽  
Frédéric Tronnier ◽  
Sascha Löbner

The aim of this study was to identify and evaluate de-identification techniques that may be used in several mobility-related use cases. To do so, four use cases were defined in cooperation with a project partner focused on the legal aspects of the project, as well as with the VDA/FAT working group. Each use case raises different legal and technical issues with regard to the data and information gathered, used and transferred in the specific scenario. The use cases therefore differ in the type and frequency of the data gathered, as well as in the level of privacy and the speed of computation required. After identifying the use cases, a systematic literature review was performed to identify suitable de-identification techniques for providing data privacy. External databases were also considered, since data expected to be anonymous might be re-identified by combining existing data with such external data. For each use case, requirements and possible attack scenarios were created to illustrate where exactly privacy-related issues could occur and how they could impact data subjects, data processors or data controllers. Suitable de-identification techniques should be able to withstand these attack scenarios. Based on a series of additional criteria, de-identification techniques are then analyzed for each use case, and possible solutions are discussed individually in chapters 6.1 - 6.2. It is evident that no one-size-fits-all approach to protecting privacy in the mobility domain exists. While all of the techniques analyzed in detail in this report (homomorphic encryption, differential privacy, secure multiparty computation and federated learning) can successfully protect user privacy in certain instances, their overall effectiveness differs depending on the specifics of each use case.
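The re-identification risk via external databases that the report guards against can be illustrated with a toy linkage check: records that look anonymous become linkable when their quasi-identifiers are unique. The records, field names, and the helper below are all hypothetical, not taken from the study.

```python
from collections import Counter

# Made-up "anonymized" mobility records. "area" and "vehicle" are
# quasi-identifiers: joined with an external registry (e.g. vehicle
# registrations per postcode), a unique combination pinpoints a person.
mobility_records = [
    {"area": "10115", "vehicle": "SUV", "trip_km": 12.3},
    {"area": "10115", "vehicle": "SUV", "trip_km": 4.1},
    {"area": "10245", "vehicle": "van", "trip_km": 30.0},
]

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

k = k_anonymity(mobility_records, ["area", "vehicle"])
print(f"k-anonymity = {k}")  # k = 1: at least one record is unique, hence linkable
```

A k of 1 means at least one record is uniquely identified by its quasi-identifiers alone, which is exactly the attack-scenario class the report's per-use-case analysis must withstand.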


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Jinbo Xiong ◽  
Rong Ma ◽  
Lei Chen ◽  
Youliang Tian ◽  
Li Lin ◽  
...  

Mobile crowdsensing, a novel service schema for the Internet of Things (IoT), provides an innovative way to implement ubiquitous social sensing. How to establish an effective mechanism that improves the participation of sensing users and the authenticity of sensing data, protects users' data privacy, and prevents malicious users from providing false data is among the urgent problems in mobile crowdsensing services in IoT. These issues pose a significant challenge to the further development of mobile crowdsensing. To tackle them, this paper proposes a reliable hybrid incentive mechanism that enhances crowdsensing participation by encouraging and stimulating sensing users with both reputation and service returns in mobile crowdsensing tasks. Moreover, we propose a privacy-preserving data aggregation scheme in which the mediator and/or sensing users may not be fully trusted. In this scheme, a differential privacy mechanism allows individual sensing users to add noise to their data, homomorphic encryption then protects the sensing data, and the ciphertext is finally uploaded to the mediator, who obtains the aggregate of the encrypted sensing data without actual decryption. Even in the case of partial sensing data leakage, the differential privacy mechanism can still ensure the security of the sensing user's privacy. Finally, we introduce a novel secure multiparty auction mechanism based on auction game theory and secure multiparty computation, which effectively solves the prisoner's dilemma arising in sensing data transactions between the service provider and the mediator. Security analysis and performance evaluation demonstrate that the proposed scheme is secure and efficient.
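The aggregation step, in which the mediator combines ciphertexts and obtains the encrypted sum without decrypting any individual reading, relies on an additively homomorphic scheme. The paper's exact construction is not given in this abstract; the sketch below uses a toy Paillier cryptosystem (tiny demo primes, never production-safe) with a simple random increment standing in for the differential privacy noise.

```python
import math
import random

random.seed(7)

# Toy Paillier keypair. Real deployments use ~2048-bit moduli and a vetted
# library; these small primes only make the homomorphism visible.
p, q = 1789, 1861
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

readings = [61, 58, 65]                               # synthetic sensing values
noisy = [m + random.randint(0, 2) for m in readings]  # stand-in for DP noise

# Mediator side: multiplying ciphertexts yields an encryption of the sum,
# so the aggregate is recovered without decrypting any single reading.
agg = 1
for c in (encrypt(m) for m in noisy):
    agg = (agg * c) % n2

print("decrypted aggregate:", decrypt(agg))
```

The multiplicative combination of ciphertexts mapping to addition of plaintexts is the property that lets an untrusted mediator compute aggregates while each user's reading stays encrypted end to end.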


Author(s):  
Ioannis Chrysakis ◽  
Giorgos Flouris ◽  
George Ioannidis ◽  
Maria Makridaki ◽  
Theodore Patkos ◽  
...  

The utilisation of personal data by mobile apps is often hidden behind vague Privacy Policy documents, which are typically lengthy, difficult to read (containing legal terms and definitions) and frequently changing. This paper discusses a suite of tools developed in the context of the CAP-A project, which aims to harness the collective power of users to improve their privacy awareness and to promote privacy-friendly behaviour by mobile apps. Through crowdsourcing techniques, users can evaluate the privacy friendliness of apps, annotate and understand Privacy Policy documents, and help other users become aware of privacy-related aspects of mobile apps and their implications, while developers and policy makers can identify trends and the general stance of the public on privacy-related matters. The tools are available for public use at https://cap-a.eu/tools/.


2021 ◽  
Vol 4 (3) ◽  
pp. 16-26
Author(s):  
V. V. Zotov

Digital network platforms are built on sociotechnical interaction among actors. The creation and development of new public services based on digital platforms inevitably leads to a transformation of the relationship between the state and citizens. The attractiveness of state digital platforms for citizens increases when the contradiction is resolved between the possibilities of new forms of social interaction and the threat of misuse of personal data, with its risk of harm or persecution. The article presents the results of an analysis of the boundaries of the public and the private in the interaction of the state with citizens on digital network platforms. The research method is comparative analysis based on the dichotomy of public and private, as reflected in H. Arendt's concept of the private and the public, J. Habermas's concept of the public sphere, and R. Gavison's regulatory and legal concept of privacy. The empirical base consists of a sociological study conducted to obtain information about the boundaries of privacy and publicity of personal data in the digital network space (n = 1 000, population over 18 years old living in metropolitan megacities and in regions at the median level of informatization, 2020) and the results of Kaspersky Lab surveys conducted in 2019–2020. The research shows that almost two-thirds of citizens have encountered the misuse of confidential information on the Internet. Most respondents are aware that websites, social networks and search engines can collect data for web analytics. At the same time, citizens consider it acceptable to transfer personal data to the authorities in generalized form for making managerial decisions, and half of the surveyed population does not object to digital control over the actions and movements of citizens.
Thus, despite the existing negative experience, obvious resistance to the collection of personal information on digital network platforms is unlikely.


Author(s):  
P Alison Paprica ◽  
Kimberlyn McGrail ◽  
Michael J Schull

Population data science [1] researchers are not alone in recognizing the value of health and health-related data. In the era of big data, and with the advent of machine learning and other artificial intelligence methods, organizations around the world are actively working to turn data into knowledge and, in some cases, profit. The media and members of the public have taken notice, with high-profile news stories about data breaches and privacy concerns [2-4] alongside some stories that call for increased use of data [5,6]. In response, public and private sector data-holding organizations and jurisdictions are turning their attention to policies, processes and regulations intended to ensure that personal data are used in ways that the public supports. In some cases, these efforts include involving “publics” in decisions about data, such as using patient and lay person advice and other inputs to help shape policies [7-10].


Author(s):  
Ana Serrano Tellería

Mobile communication and devices have raised a series of challenges concerning the delimitation of the public and private, intimate and personal spheres. Because of their close connection to the nervous system and to emotions, these devices offer a wide variety of affordances while also posing, across the broad scope of those dimensions, a series of worrying risks arising from the same relationship and interdependence between users' rational and sensorial sides. An international state-of-the-art review is therefore discussed, and the results and conclusions of the European FEDER project 'Public and Private in Mobile Communications' are presented. A range of quantitative and qualitative methodologies was applied: surveys about general use and habits, personal data and images; focus groups; interviews in person and by telephone; content analysis with a special focus on social media; and an observational ethnography and digital ethnography.


2018 ◽  
Vol 0 (7/2018) ◽  
pp. 11-18
Author(s):  
Aleksandra Horubała ◽  
Daniel Waszkiewicz ◽  
Michał Andrzejczak ◽  
Piotr Sapiecha

Cloud services are gaining interest and are a very attractive option for public administration. However, there is considerable concern about the security and privacy of storing personal data in the cloud. In this work, mathematical tools for securing data and hiding computations are presented. Data privacy is obtained by using homomorphic encryption schemes, while computation hiding is achieved through cryptographic obfuscation of algorithms. Both primitives are presented, and their application to public administration is discussed.


Author(s):  
Kyoohyung Han ◽  
Seungwan Hong ◽  
Jung Hee Cheon ◽  
Daejun Park

Machine learning on (homomorphically) encrypted data is a cryptographic method for analyzing private and/or sensitive data while preserving privacy. In the training phase, it takes encrypted training data as input and outputs an encrypted model without ever decrypting. In the prediction phase, it uses the encrypted model to predict results on new encrypted data. In neither phase is a decryption key needed, so data privacy is ultimately guaranteed. It has many applications in areas with sensitive private data, such as finance, education, genomics, and medicine. While several studies have been reported on the prediction phase, few have been conducted on the training phase. In this paper, we present an efficient algorithm for logistic regression on homomorphically encrypted data, and we evaluate it on real financial data consisting of 422,108 samples over 200 features. Our experiment shows that an encrypted model with a sufficient Kolmogorov–Smirnov statistic value can be obtained in ∼17 hours on a single machine. We also evaluate our algorithm on the public MNIST dataset, where it takes ∼2 hours to learn an encrypted model with 96.4% accuracy. Considering the inefficiency of homomorphic encryption, our result is encouraging and demonstrates, to the best of our knowledge for the first time, the practical feasibility of logistic regression training on large encrypted data.
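Homomorphic encryption schemes evaluate only additions and multiplications, so training logistic regression under encryption requires replacing the sigmoid with a low-degree polynomial. The abstract does not give the authors' exact method; the sketch below is a plaintext simulation of that idea using the degree-3 Taylor expansion of the logistic function, with a tiny synthetic dataset and illustrative parameters throughout.

```python
def poly_sigmoid(x):
    # Degree-3 Taylor expansion of the logistic function around 0:
    # sigma(x) ~ 1/2 + x/4 - x^3/48 (accurate only for small |x|).
    return 0.5 + x / 4 - x ** 3 / 48

def gradient_step(w, X, y, lr=0.1):
    """One batch gradient-descent step for logistic regression, written
    using only +, -, and * (plus division by plaintext constants), the
    operation set an HE scheme could evaluate on ciphertexts."""
    n = len(X)
    grad = [0.0] * len(w)
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        err = poly_sigmoid(z) - yi
        for j, xj in enumerate(xi):
            grad[j] += err * xj / n
    return [wj - lr * gj for wj, gj in zip(w, grad)]

# Tiny synthetic dataset: label 1 iff the feature sum is positive.
X = [[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]]
y = [1, 1, 0, 0]
w = [0.0, 0.0]
for _ in range(10):
    w = gradient_step(w, X, y)
print("weights:", w)
```

The iteration count must stay small here because the polynomial approximation is only valid near zero; managing that approximation range (and the multiplicative depth it implies) is the central engineering difficulty that makes encrypted training so much slower than plaintext training.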


Data & Policy ◽  
2021 ◽  
Vol 3 ◽  
Author(s):  
Veronica Qin Ting Li ◽  
Masaru Yarime

Abstract Contemporary data tools such as online dashboards have been instrumental in monitoring the spread of the COVID-19 pandemic. These real-time interactive platforms allow citizens to understand the local, regional, and global spread of COVID-19 in a consolidated and intuitive manner. Despite this, little research has been conducted on how citizens respond to the data on the dashboards in terms of the pandemic and data governance issues such as privacy. In this paper, we seek to answer the research question: how can governments use data tools, such as dashboards, to balance the trade-offs between safeguarding public health and protecting data privacy during a public health crisis? This study used surveys and semi-structured interviews to understand the perspectives of the developers and users of COVID-19 dashboards in Hong Kong. A typology was also developed to assess how Hong Kong’s dashboards navigated trade-offs between data disclosure and privacy at a time of crisis compared to dashboards in other jurisdictions. Results reveal that two key factors were present in the design and improvement of COVID-19 dashboards in Hong Kong: informed actions based on open COVID-19 case data, and significant public trust built on data transparency. Finally, this study argues that norms surrounding reporting on COVID-19 cases, as well as cases for future pandemics, should be co-constructed among citizens and governments so that policies founded on such norms can be acknowledged as salient, credible, and legitimate.


2019 ◽  
Vol 9 (2) ◽  
Author(s):  
Cynthia Dwork ◽  
Nitin Kohli ◽  
Deirdre Mulligan

Differential privacy is at a turning point. Implementations have been successfully leveraged in private industry, the public sector, and academia in a wide variety of applications, allowing scientists, engineers, and researchers to learn about populations of interest without learning about specific individuals. Because differential privacy allows us to quantify cumulative privacy loss, these differentially private systems will, for the first time, allow us to measure and compare the total privacy loss due to these personal data-intensive activities. Appropriately leveraged, this could be a watershed moment for privacy. Like other technologies and techniques that allow for a range of instantiations, implementation details matter. When meaningfully implemented, differential privacy supports deep data-driven insights with minimal worst-case privacy loss. When not meaningfully implemented, it delivers privacy mostly in name. Using differential privacy to maximize learning while providing a meaningful degree of privacy requires judicious choices with respect to the privacy parameter epsilon, among other factors. However, there is little understanding of what the optimal value of epsilon is for a given system, class of systems, purpose, or dataset, or how to go about determining it. To understand current differential privacy implementations and how organizations make these key choices in practice, we conducted interviews with practitioners to learn from their experiences of implementing differential privacy. We found no clear consensus on how to choose epsilon, nor agreement on how to approach this and other key implementation decisions. Given the importance of these implementation details, there is a need for shared learning amongst the differential privacy community.
To serve these purposes, we propose the creation of the Epsilon Registry—a publicly available communal body of knowledge about differential privacy implementations that can be used by various stakeholders to drive the identification and adoption of judicious differentially private implementations.
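The cumulative privacy loss the authors refer to follows from the basic sequential-composition property of differential privacy: releasing k results at privacy parameters eps_1, ..., eps_k consumes at most eps_1 + ... + eps_k of the total budget. The sketch below shows the kind of bookkeeping an Epsilon Registry entry might record; the class name, API, and budget values are all hypothetical.

```python
class EpsilonLedger:
    """Tracks privacy budget consumed by successive differentially
    private releases, using the sequential-composition bound."""

    def __init__(self, total_budget):
        self.total_budget = total_budget
        self.spent = []  # list of (epsilon, purpose) pairs

    def charge(self, epsilon, purpose):
        if sum(e for e, _ in self.spent) + epsilon > self.total_budget:
            raise RuntimeError("privacy budget exhausted")
        self.spent.append((epsilon, purpose))

    def remaining(self):
        return self.total_budget - sum(e for e, _ in self.spent)

ledger = EpsilonLedger(total_budget=1.0)
ledger.charge(0.3, "mean commute time")
ledger.charge(0.5, "histogram of trip counts")
print(f"remaining budget: {ledger.remaining():.1f}")  # prints 0.2
```

Making such ledgers public, per deployment, is precisely the kind of shared, comparable record the proposed Epsilon Registry would accumulate.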

