Proceedings on Privacy Enhancing Technologies
Latest Publications


TOTAL DOCUMENTS: 444 (FIVE YEARS: 269)

H-INDEX: 22 (FIVE YEARS: 9)

Published by Walter de Gruyter GmbH

ISSN: 2299-0984

2021 · Vol 2022 (1) · pp. 586-607
Author(s): Maximilian Zinkus, Tushar M. Jois, Matthew Green

Abstract: Mobile devices have become an indispensable component of modern life. Their high storage capacity gives these devices the capability to store vast amounts of sensitive personal data, which makes them a high-value target: these devices are routinely stolen by criminals for data theft, and are increasingly viewed by law enforcement agencies as a valuable source of forensic data. Over the past several years, providers have deployed a number of advanced cryptographic features intended to protect data on mobile devices, even in the strong setting where an attacker has physical access to a device. Many of these techniques draw from the research literature, but have been adapted to this entirely new problem setting. This involves a number of novel challenges, which are incompletely addressed in the literature. In this work, we outline those challenges, and systematize the known approaches to securing user data against extraction attacks. Our work proposes a methodology that researchers can use to analyze cryptographic data confidentiality for mobile devices. We evaluate the existing literature for securing devices against data extraction adversaries with powerful capabilities including access to devices and to the cloud services they rely on. We then analyze existing mobile device confidentiality measures to identify research areas that have not received proper attention from the community and represent opportunities for future research.


2021 · Vol 2022 (1) · pp. 28-48
Author(s): Jiafan Wang, Sherman S. M. Chow

Abstract: Dynamic searchable symmetric encryption (DSSE) allows a client to query or update an outsourced encrypted database. Range queries are commonly needed. Previous range-searchable schemes either do not support updates natively (SIGMOD’16) or use file indexes of many long bit-vectors for distinct keywords, which only support toggling updates via homomorphically flipping the presence bit (ESORICS’18). We propose a generic upgrade of any (inverted-index) DSSE to support range queries (a.k.a. range DSSE), without homomorphic encryption, and a specific instantiation with a new trade-off reducing client-side storage. Our schemes achieve forward security, an important property that mitigates file injection attacks. Moreover, we identify a variant of injection attacks against the first somewhat dynamic scheme (ESORICS’18). We also extend the definition of backward security to range DSSE and show that our schemes are compatible with a generic upgrade of backward security (CCS’17). We comprehensively analyze the computation and communication overheads, including implementation details of client-side index-related operations omitted by prior schemes. We show high empirical efficiency for million-scale databases over a million-scale keyword space.
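
To make the generic upgrade concrete: the folklore way to reduce range queries to keyword queries is to index each value under the O(log N) dyadic intervals that contain it, so that any range decomposes into a small set of interval "keywords". The sketch below illustrates only this underlying reduction, with illustrative names (dyadic_keywords, range_cover); it is not the paper's construction, which additionally handles forward security and the client-storage trade-off.

```python
# Folklore reduction: index each value under its dyadic-interval labels,
# so any inverted-index DSSE answers a range query as a small keyword
# disjunction. Illustrative only; NOT the paper's scheme.

def dyadic_keywords(value: int, bits: int) -> list[str]:
    """All dyadic-interval labels covering a single value (one per level)."""
    return [f"{level}:{value >> level}" for level in range(bits + 1)]

def range_cover(lo: int, hi: int) -> list[str]:
    """Canonical minimal dyadic cover of the integer range [lo, hi]."""
    cover, level = [], 0
    while lo <= hi:
        if lo & 1:                       # lo is a right child: take it alone
            cover.append(f"{level}:{lo}")
            lo += 1
        if not hi & 1:                   # hi is a left child: take it alone
            cover.append(f"{level}:{hi}")
            hi -= 1
        lo, hi, level = lo >> 1, hi >> 1, level + 1
    return cover

# Update: a file with value 5 is indexed under dyadic_keywords(5, bits=8).
# Query: the range [2, 7] becomes the keyword queries range_cover(2, 7)
# == ['1:1', '2:1'], i.e. the intervals [2,3] and [4,7].
assert set(range_cover(2, 7)) & set(dyadic_keywords(5, 8))
```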


2021 · Vol 2022 (1) · pp. 373-395
Author(s): Badih Ghazi, Ben Kreuter, Ravi Kumar, Pasin Manurangsi, Jiayu Peng, ...

Abstract: Consider the setting where multiple parties each hold a multiset of users and the task is to estimate the reach (i.e., the number of distinct users appearing across all parties) and the frequency histogram (i.e., the fraction of users appearing a given number of times across all parties). In this work we introduce a new sketch for this task, based on an exponentially distributed counting Bloom filter. We combine this sketch with a communication-efficient multi-party protocol to solve the task in the multi-worker setting. Our protocol exhibits both differential privacy and security guarantees in the honest-but-curious model and in the presence of large subsets of colluding workers; furthermore, its reach and frequency histogram estimates have a provably small error. Finally, we show the practicality of the protocol by evaluating it on internet-scale audiences.
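
A minimal sketch of the kind of data structure the abstract describes, under simplifying assumptions: each user hashes to a single register, register indices follow a truncated exponential distribution, registers count impressions, and a fingerprint marks registers touched by more than one distinct user (whose counts would otherwise corrupt the frequency histogram). All names and parameters are illustrative; the paper's sketch, its estimators, and the multi-party protocol on top are more involved.

```python
# Toy stand-in for an exponentially distributed counting Bloom filter.
# Simplified design: one register per user, exponentially skewed indices,
# collision fingerprints. Illustrative only; not the paper's construction.

import hashlib
import math

class ExpCountingSketch:
    def __init__(self, m: int = 1 << 14, a: float = 10.0):
        self.m, self.a = m, a              # number of registers, decay rate
        self.counts = [0] * m              # impressions per register
        self.owner = [None] * m            # fingerprint; detects collisions

    def _register(self, user_id: str) -> int:
        h = hashlib.sha256(user_id.encode()).digest()
        u = int.from_bytes(h[:8], "big") / 2**64       # uniform in [0, 1)
        # inverse CDF of an exponential distribution truncated to [0, 1)
        x = -math.log(1 - u * (1 - math.exp(-self.a))) / self.a
        return min(int(x * self.m), self.m - 1)

    def add(self, user_id: str, impressions: int = 1) -> None:
        r = self._register(user_id)
        self.counts[r] += impressions
        fp = hashlib.sha256(b"fp:" + user_id.encode()).digest()[:4]
        self.owner[r] = fp if self.owner[r] in (None, fp) else b"!"

    def frequency_histogram(self) -> dict:
        """Histogram over collision-free registers only; reach would be
        estimated by inverting E[#active registers] (omitted here)."""
        hist = {}
        for c, fp in zip(self.counts, self.owner):
            if fp not in (None, b"!"):
                hist[c] = hist.get(c, 0) + 1
        return hist
```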


2021 · Vol 2022 (1) · pp. 460-480
Author(s): Bogdan Kulynych, Mohammad Yaghini, Giovanni Cherubin, Michael Veale, Carmela Troncoso

Abstract: A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: the unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional generalization. Second, we derive connections of disparate vulnerability to algorithmic fairness and to differential privacy. We show that fairness can only prevent disparate vulnerability against limited classes of adversaries. Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model. We show that estimating disparate vulnerability by naïvely applying existing attacks can lead to overestimation. We then establish which attacks are suitable for estimating disparate vulnerability, and provide a statistical framework for doing so reliably. We conduct experiments on synthetic and real-world data, finding significant evidence of disparate vulnerability in realistic settings.
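
As a concrete reading of "disparate vulnerability", the hedged sketch below runs a simple loss-threshold membership inference attack and reports the attack's advantage over random guessing separately per subgroup. This is exactly the kind of naive attack-based estimate the paper warns can overestimate disparity, so treat it as an illustration of the quantity being studied, not as the paper's estimator; all data here is synthetic.

```python
# Per-subgroup advantage of a loss-threshold MIA (toy illustration).

import numpy as np

def loss_threshold_mia(losses: np.ndarray, tau: float) -> np.ndarray:
    """Guess 'member' whenever the model's loss on a record is below tau."""
    return losses < tau

def disparate_vulnerability(losses, is_member, group, tau):
    """Attack advantage (accuracy minus the 0.5 random-guess baseline)
    per subgroup; unequal values across groups indicate disparity."""
    preds = loss_threshold_mia(losses, tau)
    return {int(g): float(np.mean(preds[group == g] == is_member[group == g])) - 0.5
            for g in np.unique(group)}

# Toy usage: members tend to have lower loss; group 1 "overfits" more,
# so the attack should show a larger advantage on group 1.
rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)
is_member = rng.integers(0, 2, n).astype(bool)
gap = np.where(group == 1, 1.0, 0.3)          # bigger train/test gap for g=1
losses = rng.exponential(1.0, n) - gap * is_member
print(disparate_vulnerability(losses, is_member, group, tau=np.median(losses)))
```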


2021 · Vol 2022 (1) · pp. 291-316
Author(s): Théo Ryffel, Pierre Tholoniat, David Pointcheval, Francis Bach

Abstract: We propose AriaNN, a low-interaction privacy-preserving framework for private neural network training and inference on sensitive data. Our semi-honest 2-party computation protocol (with a trusted dealer) leverages function secret sharing, a recent lightweight cryptographic protocol that allows us to achieve an efficient online phase. We design optimized primitives for the building blocks of neural networks such as ReLU, MaxPool and BatchNorm. For instance, we perform private comparison for ReLU operations with a single message of the size of the input during the online phase, and with preprocessing keys close to 4× smaller than previous work. Lastly, we propose an extension to support n-party private federated learning. We implement our framework as an extensible system on top of PyTorch that leverages CPU and GPU hardware acceleration for cryptographic and machine learning operations. We evaluate our end-to-end system for private inference between distant servers on standard neural networks such as AlexNet, VGG16, or ResNet18, and for private training on smaller networks like LeNet. We show that computation rather than communication is the main bottleneck and that using GPUs together with reduced key size is a promising solution to overcome this barrier.
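
A toy sketch of the online flow behind FSS-based private comparison, under strong simplifications: values are additively shared over Z_{2^32}, the dealer's random mask turns the secret input into a single public masked value (the "single message" of the abstract), and the local FSS key evaluation, which in the real protocol hides the mask inside the keys, is emulated by the dealer so the sketch runs. Names are illustrative, not AriaNN's API.

```python
# Toy online phase of an FSS-style private ReLU over additive shares.
# The dealer stands in for Eval(k_b, y); real FSS keys make that step
# local to each party and hide the mask r. Illustrative only.

import secrets

P = 2**32                              # arithmetic ring Z_{2^32}

def share(v):                          # additive 2-party shares of v mod P
    s0 = secrets.randbelow(P)
    return s0, (v - s0) % P

def signed(v):                         # ring element -> signed integer
    return v - P if v >= P // 2 else v

# Offline: the dealer samples a mask r and (in real FSS) distributes keys.
r = secrets.randbelow(P)
r0, r1 = share(r)

# Online: parties hold shares of x, locally add shares of r, open x + r.
x = (-5) % P                           # secret input (here: -5)
x0, x1 = share(x)
y = (x0 + r0 + x1 + r1) % P            # single public masked value

# Real FSS: bit_b = Eval(k_b, y), computed locally, with
# bit_0 + bit_1 = 1{x >= 0}; the dealer emulates that output here.
bit = 1 if signed((y - r) % P) >= 0 else 0
b0, b1 = share(bit)

# ReLU(x) = 1{x >= 0} * x; the share product would need a Beaver triple
# in a real protocol, so we simply reconstruct to check the toy.
assert (b0 + b1) % P == (1 if signed(x) >= 0 else 0)
print(signed(x) * ((b0 + b1) % P))     # prints 0, since x = -5 < 0
```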


2021 · Vol 2022 (1) · pp. 166-186
Author(s): Mahsa Saeidi, McKenzie Calvert, Audrey W. Au, Anita Sarma, Rakesh B. Bobba

Abstract: End users are increasingly using trigger-action platforms like If-This-Then-That (IFTTT) to create applets to connect smart-home devices and services. However, there are inherent implicit risks in using such applets—even non-malicious ones—as sensitive information may leak through their use in certain contexts (e.g., where the device is located, who can observe the resultant action). This work aims to understand to what extent end users can assess this implicit risk. More importantly, we explore whether usage context makes a difference in end-users’ perception of such risks. Our work complements prior work that has identified the impact of usage context on expert evaluation of risks in IFTTT by focusing on the impact of usage context on end-users’ risk perception. Through a Mechanical Turk survey of 386 participants on 49 smart-home IFTTT applets, we found that participants have a nuanced view of contextual factors and that different values for contextual factors impact end-users’ risk perception differently. Further, our findings show that nudging the participants to think about different usage contexts led them to think more deeply about the associated risks and raised their concern scores.


2021 · Vol 2022 (1) · pp. 207-226
Author(s): Ruben Recabarren, Bogdan Carbunar

Abstract: Providing unrestricted access to sensitive content such as news and software is difficult in the presence of adaptive and resourceful surveillance and censoring adversaries. In this paper we leverage the distributed and resilient nature of commercial Satoshi blockchains to develop the first provably secure, censorship-resistant, cost-efficient storage system with anonymous and private access, built on top of commercial cryptocurrency transactions. We introduce max-rate transactions, a practical construct to persist data of arbitrary size entirely in a Satoshi blockchain. We leverage max-rate transactions to develop UWeb, a blockchain-based storage system that charges publishers to self-sustain its decentralized infrastructure. UWeb organizes blockchain-stored content for easy retrieval, and enables clients to store and access content with provable anonymity, privacy and censorship resistance properties. We present results from UWeb experiments with writing 268.21 MB of data into the live Litecoin blockchain, including 4.5 months of live-feed BBC articles and 41 censorship-resistant tools. The max-rate writing throughput (183 KB/s) and blockchain utilization (88%) exceed those of state-of-the-art solutions by 2-3 orders of magnitude, and our experiments broke Litecoin’s record for daily average block size. Our simulations with up to 3,000 concurrent UWeb writers confirm that UWeb does not impact the confirmation delays of financial transactions.
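
The general pattern behind chain-stored content, sketched below via the simple OP_RETURN route (at most 80 bytes of data per output under standard relay policy): chunk the document, tag each chunk with a sequence number, embed one chunk per output, and reassemble after reading the chain back. UWeb's max-rate transactions pack data far more densely than this, which is where the reported throughput gains come from; this sketch shows only the chunk-and-reassemble skeleton, with illustrative names.

```python
# Chunk-and-reassemble skeleton for blockchain-persisted documents.
# Uses the OP_RETURN size budget purely for illustration; NOT UWeb's
# max-rate transaction format.

OP_RETURN_LIMIT = 80   # bytes of data per output under standard policy
HEADER = 4             # bytes reserved for the chunk sequence number

def chunk_payload(data: bytes) -> list:
    """Split data into self-ordering chunks that each fit one output."""
    body = OP_RETURN_LIMIT - HEADER
    return [seq.to_bytes(HEADER, "big") + data[off:off + body]
            for seq, off in enumerate(range(0, len(data), body))]

def reassemble(chunks: list) -> bytes:
    """Recover the document from chunks read back off the chain,
    regardless of the order transactions were confirmed in."""
    ordered = sorted(chunks, key=lambda c: int.from_bytes(c[:HEADER], "big"))
    return b"".join(c[HEADER:] for c in ordered)

article = b"live-feed article bytes ... " * 20
assert reassemble(chunk_payload(article)) == article
```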


2021 · Vol 2022 (1) · pp. 544-564
Author(s): Shihui Fu, Guang Gong

Abstract: We present a new zero-knowledge succinct argument of knowledge (zkSNARK) scheme for Rank-1 Constraint Satisfaction (R1CS), a widely deployed NP-complete language that generalizes arithmetic circuit satisfiability. By instantiating it with different polynomial commitment schemes, we obtain several zkSNARKs in which the verifier’s costs and the proof size range from O(log² N) to O(√N) for an N-gate arithmetic circuit, depending on the underlying commitment scheme. None of these schemes requires a trusted setup, and the construction is plausibly post-quantum secure when instantiated with a secure collision-resistant hash function. We report on experiments evaluating the performance of our proposed system. For instance, for verifying a SHA-256 preimage (fewer than 23k AND gates) in zero-knowledge with 128-bit security, the proof size is less than 150 kB and the verification time is less than 11 ms, both competitive with existing systems.
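
For readers unfamiliar with R1CS: an instance is three matrices (A, B, C) over a field, satisfied by a witness vector z when (A_i · z)(B_i · z) = C_i · z for every row i. The toy checker below encodes the textbook example x³ + x + 5 = 35 (so x = 3); it is unrelated to the paper's circuits, and the point of a zkSNARK is to prove that such a z exists, succinctly and without revealing it.

```python
# Toy R1CS satisfiability checker over a prime field.
# Instance (textbook example, not from the paper): prove knowledge of x
# with x^3 + x + 5 = 35, flattened into four rank-1 constraints.

P = 2**61 - 1  # toy prime field

def dot(row, z):
    return sum(a * b for a, b in zip(row, z)) % P

def r1cs_satisfied(A, B, C, z):
    """Check (A_i . z) * (B_i . z) == (C_i . z) for every constraint i."""
    return all(dot(a, z) * dot(b, z) % P == dot(c, z)
               for a, b, c in zip(A, B, C))

# Variable layout: z = [1, x, out, u, v, w]
#   u = x*x,  v = u*x,  w = v + x,  out = w + 5
A = [[0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1]]
B = [[0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0]]
C = [[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1], [-5, 0, 1, 0, 0, 0]]

x = 3
z = [1, x, 35, x * x, x**3, x**3 + x]
assert r1cs_satisfied(A, B, C, z)
```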


2021 · Vol 2022 (1) · pp. 148-165
Author(s): Thomas Cilloni, Wei Wang, Charles Walter, Charles Fleming

Abstract: Facial recognition tools are becoming exceptionally accurate in identifying people from images. However, this comes at the cost of privacy for users of online services with photo management (e.g., social media platforms). Particularly troubling is the ability to leverage unsupervised learning to recognize faces even when the user has not labeled their images. In this paper we propose Ulixes, a strategy to generate visually non-invasive facial noise masks that yield adversarial examples, preventing the formation of identifiable user clusters in the embedding space of facial encoders. This is applicable even when a user is unmasked and labeled images are available online. We demonstrate the effectiveness of Ulixes by showing that various classification and clustering methods cannot reliably label the adversarial examples we generate. We also study the effects of Ulixes in various black-box settings and compare it to the current state of the art in adversarial machine learning. Finally, we challenge the effectiveness of Ulixes against adversarially trained models and show that it is robust to countermeasures.
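
A hedged sketch of the general embedding-evasion idea: a PGD-style, L_inf-bounded perturbation that pushes a face's embedding away from the user's cluster centroid, so clustering in the encoder's embedding space fails to group the images. Here `encoder`, the centroid, and the distance loss are placeholders; Ulixes's actual mask construction differs in its details.

```python
# PGD-style embedding evasion (generic technique, not Ulixes itself).

import torch

def evasion_mask(encoder, image, centroid, eps=4/255, alpha=1/255, steps=40):
    """L_inf-bounded additive mask maximizing embedding distance from
    the user's cluster centroid. Toy gradient ascent; illustrative only."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = encoder((image + delta).clamp(0, 1))  # keep pixels valid
        loss = -torch.norm(emb - centroid)          # descend -distance
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()      # signed gradient step
            delta.clamp_(-eps, eps)                 # keep mask invisible
            delta.grad.zero_()
    return delta.detach()
```

In practice one would average such a mask over augmentations and multiple photos of the same user so it survives resizing and compression; the small eps bound is what keeps the mask "visually non-invasive".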


2021 · Vol 2022 (1) · pp. 629-648
Author(s): Moses Namara, Henry Sloan, Bart P. Knijnenburg

Abstract: Research finds that the users of Social Networking Sites (SNSs) often fail to comprehensively engage with the plethora of available privacy features, arguably due to their sheer number and the fact that they are often hidden from sight. As different users are likely interested in engaging with different subsets of privacy features, an SNS could improve privacy management practices by adapting its interface in a way that proactively assists, guides, or prompts users to engage with the subset of privacy features they are most likely to benefit from. Whereas recent work presents algorithmic implementations of such privacy adaptation methods, this study investigates the optimal user interface mechanism to present such adaptations. In particular, we tested three proposed “adaptation methods” (automation, suggestions, highlights) in an online between-subjects user experiment in which 406 participants used a carefully controlled SNS prototype. We systematically evaluate the effect of these adaptation methods on participants’ engagement with the privacy features, their tendency to set stricter settings (protection), and their subjective evaluation of the assigned adaptation method. We find that the automation of privacy features afforded users the most privacy protection, while giving privacy suggestions caused the highest level of engagement with the features and the highest subjective ratings (as long as awkward suggestions are avoided). We discuss the practical implications of these findings for the effectiveness of adaptations in improving user awareness of, and engagement with, privacy features on social media.

