Private identity agreement for private set functionalities

2021 ◽  
pp. 1-29
Author(s):  
Ben Kreuter ◽  
Sarvar Patel ◽  
Ben Terner

Private set intersection and related functionalities are among the most prominent real-world applications of secure multiparty computation. While such protocols have attracted significant attention from the research community, other functionalities are often required to support a PSI application in practice. For example, in order for two parties to run PSI over the unique users contained in their databases, they might first invoke a support functionality to agree on the primary keys that represent their users. This paper studies a secure approach to agreeing on primary keys. We introduce and realize a functionality that computes a common set of identifiers based on incomplete information held by two parties, which we refer to as private identity agreement, and we prove the security of our protocol in the honest-but-curious model. We explain the subtleties that arise in designing such a functionality from the privacy requirements of composing securely with PSI protocols. We also argue that the cost of invoking this functionality can be amortized over a large number of PSI sessions, and that for applications requiring many repeated PSI executions, this represents an improvement over a PSI protocol that directly uses incomplete or fuzzy matches.
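To make the underlying matching problem concrete, the sketch below shows a plaintext (non-private) version of identity agreement: two parties hold overlapping but incomplete identifiers for the same users and derive a shared primary key for each matched pair. The record fields, the matching rule, and the key derivation are hypothetical illustrations, not the paper's protocol, which performs this computation privately.

```python
# Illustrative only: a plaintext (non-private) version of the identity-agreement
# problem. In the paper's setting this computation must be done securely, so that
# neither party learns the other's unmatched identifiers. All field names and the
# matching rule are hypothetical.
import hashlib

def canonical_key(identifiers):
    """Derive a deterministic primary key from whichever identifiers are known."""
    material = "|".join(sorted(v for v in identifiers if v))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

# Each party holds an incomplete view of the same users.
party_a = [{"email": "alice@example.com", "phone": None},
           {"email": None, "phone": "+1-555-0100"}]
party_b = [{"email": "alice@example.com", "phone": "+1-555-0199"},
           {"email": "bob@example.com", "phone": "+1-555-0100"}]

# A record pair "matches" if any non-empty identifier coincides (hypothetical rule).
def matches(r1, r2):
    return any(r1[f] and r1[f] == r2[f] for f in ("email", "phone"))

# Agree on one key per matched pair by merging both parties' identifiers.
agreed_keys = []
for ra in party_a:
    for rb in party_b:
        if matches(ra, rb):
            merged = [ra["email"] or rb["email"], ra["phone"] or rb["phone"]]
            agreed_keys.append(canonical_key(merged))

print(agreed_keys)  # both parties could then use these keys as PSI inputs
```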

2019 ◽  
Vol 2019 (3) ◽  
pp. 6-25 ◽  
Author(s):  
Adam Groce ◽  
Peter Rindal ◽  
Mike Rosulek

In this work we demonstrate that allowing differentially private leakage can significantly improve the concrete performance of secure 2-party computation (2PC) protocols. Specifically, we focus on the private set intersection (PSI) protocol of Rindal and Rosulek (CCS 2017), which is the fastest PSI protocol with security against malicious participants. We show that if differentially private leakage is allowed, the cost of the protocol can be reduced by up to 63%, depending on the desired level of differential privacy. On the technical side, we introduce a security model for differentially-private leakage in malicious-secure 2PC. We also introduce two new and improved mechanisms for “differentially private histogram overestimates,” the main technical challenge for differentially-private PSI.
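As a rough sketch of what a "differentially private histogram overestimate" can look like, the snippet below adds Laplace noise to each bin for privacy and then shifts the result upward so that, with high probability, no released count underestimates the true count. The noise scale and shift are generic textbook choices for illustration only, not the improved mechanisms proposed in the paper.

```python
# Illustrative sketch of a differentially private histogram overestimate:
# add Laplace noise (for privacy), then shift upward so that each released count
# falls below the true count with probability at most delta. Parameter choices
# here are generic and are NOT the paper's improved mechanisms.
import math
import random

def dp_histogram_overestimate(counts, eps, delta):
    scale = 1.0 / eps                      # Laplace scale for sensitivity-1 counts
    shift = scale * math.log(1.0 / delta)  # P[Laplace(0, scale) < -shift] <= delta
    noisy = []
    for c in counts:
        # Laplace(0, scale) as the difference of two exponentials
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        noisy.append(max(0, round(c + noise + shift)))
    return noisy

true_counts = [5, 0, 12, 3]
print(dp_histogram_overestimate(true_counts, eps=0.5, delta=1e-3))
```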


Author(s):  
Roman Bresson ◽  
Johanne Cohen ◽  
Eyke Hüllermeier ◽  
Christophe Labreuche ◽  
Michèle Sebag

Multi-Criteria Decision Making (MCDM) aims at modelling expert preferences and assisting decision makers in identifying the options that best accommodate expert criteria. One widely used instance of an MCDM model is the Choquet integral, valued in real-world applications for its ability to capture interactions between criteria while retaining interpretability. Aiming at better scalability and modularity, hierarchical Choquet integrals introduce intermediate aggregations of the interacting criteria, at the cost of a more complex elicitation. This paper presents a machine-learning approach for the automatic identification of hierarchical MCDM models, composed of 2-additive Choquet integral aggregators and marginal utility functions on the raw features, learned from data reflecting expert preferences. The proposed NEUR-HCI framework relies on a specific neural architecture that enforces the Choquet model constraints by design and supports end-to-end training. The empirical validation of NEUR-HCI on real-world and artificial benchmarks demonstrates the merits of the approach compared to state-of-the-art baselines.
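For readers unfamiliar with the aggregator involved, the snippet below evaluates a 2-additive Choquet integral in its Möbius representation, where the output is a weighted sum of individual criteria plus pairwise min-interaction terms. The weights are made up for illustration; in NEUR-HCI they (and the marginal utilities) are learned end-to-end under the model's constraints, and this is not the NEUR-HCI architecture itself.

```python
# Minimal sketch: evaluating a 2-additive Choquet integral via its Moebius
# representation. For a 2-additive capacity, the aggregation reduces to
#   C(x) = sum_i m[i] * x_i  +  sum_{i<j} m[i,j] * min(x_i, x_j).
# Weights below are illustrative only.
def choquet_2additive(x, singleton_weights, pair_weights):
    value = sum(singleton_weights[i] * x[i] for i in range(len(x)))
    value += sum(w * min(x[i], x[j]) for (i, j), w in pair_weights.items())
    return value

# Three criteria already normalized to [0, 1] (e.g., by marginal utilities).
x = [0.9, 0.4, 0.7]
singleton_weights = {0: 0.3, 1: 0.2, 2: 0.2}   # individual importance
pair_weights = {(0, 1): 0.2, (1, 2): 0.1}      # positive pairwise synergies
# The weights sum to 1, so the aggregate stays in [0, 1].
print(choquet_2additive(x, singleton_weights, pair_weights))
```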


Author(s):  
Suzanne Tsacoumis

High fidelity measures have proven to be powerful tools for measuring a broad range of competencies, and their validity is well documented. However, their high-touch nature is often a deterrent to their use due to the cost and time required to develop and implement them. In addition, given the increased reliance on technology to screen and evaluate job candidates, organizations are continuing to search for more efficient ways to gather the information they need about candidates' capabilities. This chapter describes how innovative, interactive rich-media simulations that incorporate branching technology have been used in several real-world applications. The main focus is on describing the nature of these assessments and highlighting potential solutions to the unique measurement challenges they present.


2021 ◽  
Author(s):  
Jesús Giráldez-Cru ◽  
Pedro Almagro-Blanco

The remarkable advances in SAT solving achieved in recent years have allowed this technology to be used in many real-world applications of Artificial Intelligence, such as planning, formal verification, and scheduling. Interestingly, these industrial SAT problems are commonly believed to be easier than classical random SAT formulas, but estimating their actual hardness is still a very challenging question, which in some cases even requires solving them. In this context, realistic pseudo-industrial random SAT generators have emerged with the aim of reproducing the main features shared by the majority of these application problems. The study of these models may help to better understand the success of SAT solving techniques and possibly improve them. In this work, we present a model to estimate the temperature of real-world SAT instances. This temperature represents the degree of distortion of the expected structure of the formula, from highly structured benchmarks (more similar to real-world SAT instances) to the complete absence of structure (as in the classical random SAT model). Our solution is based on the Popularity-Similarity (PS) random model for SAT, recently proposed to reproduce two crucial features of application SAT benchmarks: scale-free and community structure. The PS model controls the hardness of the generated formula by introducing some randomization into the expected structure. Our solution is a first step towards a hardness oracle based on the temperature of SAT formulas, which may be able to estimate the cost of solving real-world SAT instances without solving them.
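To give some intuition for what a temperature-like parameter does, the toy generator below interpolates between structured clause construction (variables drawn with a popularity bias from a clause's "home" community) and fully uniform selection as in classical random k-SAT. This is only an intuition-building sketch with made-up parameters, not the actual Popularity-Similarity model used in the paper.

```python
# Toy illustration (NOT the Popularity-Similarity model): a temperature-like
# parameter t in [0, 1] interpolates between structured clause generation and
# the fully uniform selection of the classical random k-SAT model.
import random

def generate_clause(n_vars, k, t, communities):
    community = random.choice(communities)  # this clause's "home" community
    clause = []
    while len(clause) < k:
        if random.random() < t:
            var = random.randint(1, n_vars)  # distortion: uniform over all variables
        else:
            # structured draw: popularity bias inside the home community
            weights = [1.0 / (rank + 1) for rank in range(len(community))]
            var = random.choices(community, weights=weights)[0]
        if var not in {abs(lit) for lit in clause}:
            clause.append(var if random.random() < 0.5 else -var)
    return clause

communities = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
formula = [generate_clause(12, 3, t=0.2, communities=communities) for _ in range(40)]
print(formula[:3])
```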


2017 ◽  
Author(s):  
Santi J. Vives

Hash-based signatures use a one-time signature (OTS) as their main building block and transform it into a many-time scheme that can sign a larger number of messages. In known constructions, the cost and the size of each signature increase as the number of needed signatures grows. In real-world applications requiring a significant number of signatures, the signatures can get quite large. As a result, it is usually believed that post-quantum signatures based on hashes need more computation and much larger sizes than classical signatures. We introduce a construction that challenges that idea: we show that it is possible to construct a many-time signature scheme that is more efficient than the OTS it is built from, rather than less. We study the generation of signatures in conjunction with a blockchain, such as Bitcoin. The proposed scheme permits an unlimited number of signatures. The size of each signature is constant and the same as in the OTS. The verification cost starts the same as in the OTS and decreases with each new signature, becoming more efficient on average as the number of signatures grows.
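The one-time building block referred to above can be made concrete with a classic Lamport-style OTS, sketched below: the secret key holds two random values per message bit, the public key holds their hashes, and a signature reveals one secret per bit of the message digest. This is the textbook primitive that such schemes start from, not the paper's blockchain-assisted many-time construction.

```python
# Sketch of a Lamport-style one-time signature (OTS). Signing more than one
# message with the same key leaks secrets, which is why OTS schemes must be
# lifted into many-time constructions such as the one described above.
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(msg_bits=256):
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(msg_bits)]  # two secrets per bit
    pk = [[H(s0), H(s1)] for s0, s1 in sk]                            # their hashes
    return sk, pk

def sign(sk, message):
    digest = H(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]                     # reveal one secret per bit

def verify(pk, message, signature):
    digest = H(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig) == pk[i][b] for i, (sig, b) in enumerate(zip(signature, bits)))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))  # True; reusing sk for a second message is unsafe
```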


2020 ◽  
Author(s):  
Eduardo S. L. Gastal ◽  
Manuel M. Oliveira

High-dimensional filters are a fundamental building block for several applications, and have recently received considerable attention from the research community. Unfortunately, naive implementations of such an important class of filters are too slow for many practical uses. This dissertation describes three novel approaches to efficiently perform high-dimensional filtering with linear cost in both the number of pixels and the dimensionality of the space in which the filters operate. Our filters address the main limitations of previous techniques, in addition to providing the fastest performance (both on CPU and GPU) for a variety of real-world applications.
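As a minimal illustration of what "linear cost in the number of pixels" means, the 1-D snippet below applies a recursive exponential smoothing pass forward and backward, touching each sample a constant number of times regardless of the effective kernel size. It is only a complexity illustration, not the dissertation's high-dimensional (e.g., edge-aware) filtering techniques.

```python
# Minimal 1-D illustration of filtering with cost linear in the number of samples:
# a forward and a backward recursive pass, O(n) total work, independent of the
# effective smoothing radius. The dissertation's filters extend this kind of
# linear-cost behavior to high-dimensional spaces; this snippet does not.
def recursive_smooth(signal, a):
    """0 < a < 1 controls the amount of smoothing."""
    out = list(signal)
    for i in range(1, len(out)):              # forward pass
        out[i] = (1 - a) * out[i] + a * out[i - 1]
    for i in range(len(out) - 2, -1, -1):     # backward pass (symmetric response)
        out[i] = (1 - a) * out[i] + a * out[i + 1]
    return out

noisy = [0, 1, 0, 1, 10, 1, 0, 1, 0]
print([round(v, 2) for v in recursive_smooth(noisy, a=0.6)])
```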


2005 ◽  
Vol 12 (18) ◽  
Author(s):  
Peter Bogetoft ◽  
Ivan B. Damgård ◽  
Thomas Jakobsen ◽  
Kurt Nielsen ◽  
Jakob Pagter ◽  
...  

In this paper we consider the problem of constructing secure auctions based on techniques from modern cryptography. We combine knowledge from economics, cryptography, and security engineering to develop and implement secure auctions for practical real-world problems. In essence, this paper is an overview of the research project SCET (Secure Computing, Economy, and Trust), which attempts to build auctions for real applications using secure multiparty computation. The main contributions of this project are: a generic setup for secure evaluation of integer arithmetic including comparisons; general double auctions expressed by such operations; a real-world double auction tailored to the complexity and performance of the basic primitives '+' and '<='; and, finally, evidence that our approach is practically feasible, based on experiments with prototypes.
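To illustrate how a double auction can be cleared using only additions and comparisons, the plaintext sketch below walks through candidate prices, accumulates total demand and total supply at each, and returns the first price at which supply covers demand. The bids and prices are made up, and in the SCET setting these '+' and '<=' operations would be evaluated on secret inputs inside the multiparty computation rather than in the clear.

```python
# Plaintext illustration of clearing a double auction with only '+' and '<=':
# accumulate aggregate demand and supply at each candidate price and pick the
# first price where supply covers demand. Numbers are made up; the secure
# protocol runs these operations on private bids.
def clearing_price(prices, buy_bids, sell_bids):
    """buy_bids / sell_bids: {limit_price: quantity offered at that price}."""
    for p in prices:                                                   # ascending candidates
        demand = sum(q for price, q in buy_bids.items() if p <= price)   # buyers accepting p
        supply = sum(q for price, q in sell_bids.items() if price <= p)  # sellers accepting p
        if demand <= supply:                                           # market clears at p
            return p
    return None

prices = [1, 2, 3, 4, 5]
buy_bids = {5: 10, 4: 20, 2: 30}    # e.g., 10 units demanded at any price up to 5
sell_bids = {1: 15, 3: 25, 4: 30}
print(clearing_price(prices, buy_bids, sell_bids))
```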

