Refinement Orders for Quantitative Information Flow and Differential Privacy

2020, Vol 1 (1), pp. 40-77
Author(s): Konstantinos Chatzikokolakis, Natasha Fernandes, Catuscia Palamidessi

Quantitative Information Flow (QIF) and Differential Privacy (DP) are both concerned with the protection of sensitive information, but they are rather different approaches. In particular, QIF considers the expected probability of a successful attack, while DP (in both its standard and local versions) is a max-case measure, in the sense that it is compromised by the existence of a possible attack, regardless of its probability. Comparing systems is a fundamental task in these areas: one wishes to guarantee that replacing a system A by a system B is a safe operation, that is, that the privacy of B is no worse than that of A. In QIF, a refinement order provides strong such guarantees, while in DP, mechanisms are typically compared w.r.t. the privacy parameter ε in their definition. In this paper, we explore a variety of refinement orders, inspired by the one of QIF, providing precise guarantees for max-case leakage. We study simple structural ways of characterising them, the relation between them, efficient methods for verifying them, and their lattice properties. Moreover, we apply these orders to the task of comparing DP mechanisms, raising the question of whether the order based on ε provides strong privacy guarantees. We show that, while this is often the case for mechanisms of the same “family” (geometric, randomised response, etc.), it rarely holds across different families.
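
As a concrete point of reference (a minimal sketch, not the refinement orders developed in the paper), the following Python snippet computes two standard measures for two textbook mechanism families: the local-DP parameter ε of a channel matrix, and its multiplicative Bayes leakage under a uniform prior, the expected-case quantity used in QIF. The mechanism constructions (k-ary randomised response and a truncated geometric mechanism) follow the usual textbook definitions; comparing the two measures across the two families is the kind of cross-family comparison the paper examines.

```python
import numpy as np

def ldp_epsilon(C):
    """Local-DP parameter of a channel C (rows: secrets, columns: observations):
    the largest log-ratio between two entries in the same column."""
    return float(np.log((C.max(axis=0) / C.min(axis=0)).max()))

def mult_bayes_leakage(C):
    """Multiplicative Bayes leakage under a uniform prior: the ratio of the
    posterior to the prior probability of guessing the secret in one try."""
    return float(C.max(axis=0).sum())

def randomized_response(k, eps):
    """k-ary randomised response: keep the true value w.p. e^eps / (e^eps + k - 1)."""
    e = np.exp(eps)
    return (np.full((k, k), 1.0) + np.eye(k) * (e - 1.0)) / (e + k - 1)

def truncated_geometric(k, alpha):
    """Truncated geometric mechanism on {0, ..., k-1}: mass proportional to
    alpha^|x-y|, with the two infinite tails folded onto the endpoints."""
    x = np.arange(k)
    C = alpha ** np.abs(x[:, None] - x[None, :]).astype(float)
    C[:, 0] += alpha ** (x + 1) / (1 - alpha)
    C[:, -1] += alpha ** (k - x) / (1 - alpha)
    return C / C.sum(axis=1, keepdims=True)

for name, C in [("RR(eps=1)", randomized_response(10, 1.0)),
                ("Geom(alpha=0.5)", truncated_geometric(10, 0.5))]:
    print(name, "eps =", round(ldp_epsilon(C), 3),
          "Bayes leakage =", round(mult_bayes_leakage(C), 3))
```

Note that the ε reported here is the worst case over all pairs of secrets (the local-DP view); for the geometric mechanism this differs from the per-adjacent-step ε used in the standard DP definition.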

2014, Vol 25 (2), pp. 203-206
Author(s): Miguel E. Andrés, Catuscia Palamidessi, Geoffrey Smith

A long-standing and fundamental issue in computer security is to control the flow of information, whether to prevent confidential information from being leaked or to prevent trusted information from being tainted. While there have been many efforts aimed at preventing improper flows completely (see, for example, the survey by Sabelfeld and Myers (2003)), it has long been recognized that perfection is often impossible in practice. A basic example is a login program: whenever it rejects an incorrect password, it unavoidably reveals that the secret password differs from the one that was entered. More subtly, systems may be vulnerable to side-channel attacks, because observable characteristics like running time and power consumption may depend, at least partially, on sensitive information.
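
The login example can be made quantitative in a few lines of Python (a sketch using standard QIF notions, not tied to any particular tool): model one login attempt against a fixed guess as a channel from the secret password to the accept/reject observation, and compute its multiplicative Bayes leakage under a uniform prior.

```python
import numpy as np

def login_channel(n_passwords, guess):
    """Channel for a single login attempt: the only observable is whether
    the fixed guess was accepted, so row x maps to 'accept' iff x == guess."""
    C = np.zeros((n_passwords, 2))          # columns: [reject, accept]
    C[:, 0] = 1.0
    C[guess, 0], C[guess, 1] = 0.0, 1.0
    return C

def mult_bayes_leakage(C):
    """Multiplicative Bayes leakage under a uniform prior
    (= V_posterior / V_prior, with V_prior = 1/n)."""
    return float(C.max(axis=0).sum())

print(mult_bayes_leakage(login_channel(1000, guess=42)))   # prints 2.0
```

The result is 2.0 regardless of the number of passwords: in expectation, a single attempt doubles the attacker's probability of guessing the secret in one try, even though a rejection alone rules out only one candidate.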


2012, Vol 37 (6), pp. 1-5
Author(s): Quoc-Sang Phan, Pasquale Malacaria, Oksana Tkachuk, Corina S. Păsăreanu

2015, Vol 6 (2), pp. 23-46
Author(s): Tom Chothia, Chris Novakovic, Rajiv Ranjan Singh

This paper presents a framework for calculating measures of data integrity for programs in a small imperative language. The authors develop a Markov chain semantics for their language, with which they calculate Clarkson and Schneider's definitions of data contamination, data suppression, program suppression and program transmission. The authors then propose their own definition of program integrity for probabilistic specifications. These definitions are based on conditional mutual information and entropy; the authors present a result relating them to mutual information, which can be calculated by a number of existing tools. They extend a quantitative information flow tool (CH-IMP) to calculate these measures of integrity and demonstrate it on examples including error-correcting codes, the Dining Cryptographers protocol and the attempts by a number of banks to influence the Libor rate.
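
The information-theoretic quantities involved are standard; the sketch below (plain Python/NumPy, not the CH-IMP tool itself) shows how mutual information and conditional mutual information can be computed from a joint distribution, and uses the classic XOR example to illustrate why conditioning matters.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector, ignoring zero entries."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(pxy):
    """I(X;Y) from a joint distribution given as a 2-D array."""
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

def conditional_mutual_information(pxyz):
    """I(X;Y|Z) from a joint distribution given as a 3-D array (axes X, Y, Z)."""
    total = 0.0
    for z in range(pxyz.shape[2]):
        pz = pxyz[:, :, z].sum()
        if pz > 0:
            total += pz * mutual_information(pxyz[:, :, z] / pz)
    return total

# Example: X a fair bit, Z an independent fair noise bit, Y = X XOR Z.
# Then I(X;Y) = 0 but I(X;Y|Z) = 1 bit: conditioning can reveal a flow.
pxyz = np.zeros((2, 2, 2))
for x in (0, 1):
    for z in (0, 1):
        pxyz[x, x ^ z, z] = 0.25
print(mutual_information(pxyz.sum(axis=2)))      # ~ 0.0
print(conditional_mutual_information(pxyz))      # ~ 1.0
```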

