Inference attacks on genomic privacy with an improved HMM and an RCNN model for unrelated individuals

2020, Vol. 512, pp. 207-218
Author(s): Hongfa Ding, Youliang Tian, Changgen Peng, Youshan Zhang, Shuwen Xiang
2017, Vol. 15 (5), pp. 29-37
Author(s): Erman Ayday, Mathias Humbert

2021, Vol. 24 (2), pp. 1-35
Author(s): Isabel Wagner, Iryna Yevseyeva

The ability to measure privacy accurately and consistently is key in the development of new privacy protections. However, recent studies have uncovered weaknesses in existing privacy metrics, as well as weaknesses caused by the use of only a single privacy metric. Metrics suites, or combinations of privacy metrics, are a promising mechanism to alleviate these weaknesses, if we can solve two open problems: which metrics should be combined and how. In this article, we tackle the first problem, i.e., the selection of metrics for strong metrics suites, by formulating it as a knapsack optimization problem with both single and multiple objectives. Because solving this problem exactly is difficult due to the large number of combinations and many qualities/objectives that need to be evaluated for each metrics suite, we apply 16 existing evolutionary and metaheuristic optimization algorithms. We solve the optimization problem for three privacy application domains: genomic privacy, graph privacy, and vehicular communications privacy. We find that the resulting metrics suites have better properties, i.e., higher monotonicity, diversity, evenness, and shared value range, than previously proposed metrics suites.
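To make the selection step concrete, the sketch below frames suite selection as a 0/1 knapsack (each candidate metric is either in or out, subject to a budget) and solves it with a toy genetic algorithm, one member of the evolutionary family the article applies. The metric names, quality scores, and costs are illustrative placeholders, not the paper's objectives or data.

```python
import random

# Hypothetical candidate metrics with illustrative (quality, cost) pairs;
# these are placeholders, not values from the article.
METRICS = {
    "adversary_success_rate":    (0.90, 2),
    "entropy":                   (0.70, 1),
    "k_anonymity":               (0.50, 1),
    "mutual_information":        (0.80, 3),
    "expected_estimation_error": (0.85, 2),
    "normalized_variance":       (0.60, 1),
}
NAMES = list(METRICS)
BUDGET = 5  # maximum total cost of the selected suite (knapsack constraint)

def fitness(bits):
    """Total quality of the selected suite; 0 if the budget is exceeded."""
    quality = sum(METRICS[n][0] for n, b in zip(NAMES, bits) if b)
    cost = sum(METRICS[n][1] for n, b in zip(NAMES, bits) if b)
    return quality if cost <= BUDGET else 0.0

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    # Random initial population of 0/1 selection vectors.
    pop = [[random.randint(0, 1) for _ in NAMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(NAMES))  # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [n for n, b in zip(NAMES, best) if b], fitness(best)

if __name__ == "__main__":
    suite, score = evolve()
    print("selected suite:", suite, "quality:", round(score, 2))
```

A faithful reproduction would replace the scalar quality with the article's four objectives (monotonicity, diversity, evenness, shared value range), either scalarized for the single-objective case or kept separate for a multi-objective solver that returns a Pareto front rather than a single best suite.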


Author(s): Michael Veale, Reuben Binns, Lilian Edwards

Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around ‘model inversion’ and ‘membership inference’ attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.
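As a concrete illustration of why that process is not one-way, here is a minimal sketch of a confidence-thresholding membership inference attack in the spirit of the work the article surveys; the confidence values and the 0.9 threshold are illustrative assumptions, not the article's method.

```python
import numpy as np

def membership_inference(top_class_confidences, threshold=0.9):
    """Naive membership inference by confidence thresholding.

    Records on which the target model is unusually confident are
    guessed to have been part of its training set. The input is the
    model's top-class probability for each queried record; the 0.9
    threshold is an illustrative assumption.
    """
    return top_class_confidences >= threshold

# Illustrative query results (not real model outputs):
confidences = np.array([0.99, 0.55, 0.97, 0.62, 0.91])
print(membership_inference(confidences))
# -> [ True False  True False  True]
```

The intuition is that models, especially overfitted ones, tend to be more confident on records they were trained on, so even this naive threshold leaks membership; published attacks sharpen it with shadow models trained to mimic the target.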


2017, Vol. 10 (34), pp. 1-5
Author(s): D. Sai Eswari, Afreen Rafiq, R. Deepthi, et al.

2021
Author(s): Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi

2021
Author(s): Benjamin Zi Hao Zhao, Aviral Agrawal, Catisha Coburn, Hassan Jameel Asghar, Raghav Bhaskar, et al.
