inference attacks
Recently Published Documents

TOTAL DOCUMENTS: 175 (five years: 89)
H-INDEX: 16 (five years: 6)

2022 ◽  
Vol 70 (3) ◽  
pp. 4897-4919
Author(s):  
Sana Ben Hamida ◽  
Hichem Mrabet ◽  
Sana Belguith ◽  
Adeeb Alhomoud ◽  
Abderrazak Jemai

Author(s):  
Yuanyuan He ◽  
Jianbing Ni ◽  
Laurence T. Yang ◽  
Wei Wei ◽  
Xianjun Deng ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Dongdong Yang ◽  
Baopeng Ye ◽  
Wenyin Zhang ◽  
Huiyu Zhou ◽  
Xiaobin Qian

Protecting location privacy has become an irreversible trend, yet existing schemes face several problems: the system architectures they adopt suffer from single points of failure or mobile-device performance bottlenecks, and they can neither resist single-point and inference attacks nor achieve a good tradeoff between privacy level and service quality. To solve these problems, we propose a k-anonymous location privacy protection scheme based on dummies and the Stackelberg game. First, we analyze the merits and drawbacks of existing location privacy preservation architectures and propose an architecture based on a semitrusted third party. Next, taking into account location semantic diversity, physical dispersion, and query probability, we design a dummy location selection algorithm based on location semantics and physical distance, which protects users' privacy against single-point attacks. We then improve this algorithm with a location anonymization method based on the Stackelberg game: we formalize the mutual optimization of the user's and the adversary's objectives within the Stackelberg framework to find an optimal dummy location set, which resists both single-point and inference attacks while effectively balancing service quality and location privacy. Finally, exhaustive simulations comparing the proposed scheme with existing schemes across multiple aspects show that it effectively resists single-point and inference attacks while balancing service quality and location privacy.
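The dummy selection idea in this abstract (balancing semantic diversity, physical dispersion, and query-probability similarity) can be sketched as a simple greedy procedure. This is an illustrative sketch only, not the paper's actual algorithm; the scoring weights, the field names, and the `select_dummies` helper are all assumptions made for the example.

```python
import math

def select_dummies(real_loc, candidates, k, w_dist=0.5, w_prob=0.5):
    """Greedy dummy selection sketch: pick k-1 dummies that are
    semantically different from the real location, physically dispersed
    from locations already chosen, and whose query probabilities resemble
    the real location's (hypothetical scoring, not the paper's scheme)."""
    chosen = [real_loc]
    # semantic diversity: exclude candidates sharing the real semantic label
    pool = [c for c in candidates if c["semantic"] != real_loc["semantic"]]
    while len(chosen) < k and pool:
        def score(c):
            # physical dispersion: minimum distance to already-chosen locations
            d = min(math.dist((c["x"], c["y"]), (s["x"], s["y"])) for s in chosen)
            # query-probability similarity to the real location, so dummies
            # are plausible to an adversary who knows the probability map
            p = 1.0 - abs(c["prob"] - real_loc["prob"])
            return w_dist * d + w_prob * p
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    return chosen
```

In the paper's game-theoretic refinement, such a candidate set would then be optimized against the adversary's inference strategy; here the greedy score stands in for that step.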


2021 ◽  
pp. 103977
Author(s):  
Ziqi Zhang ◽  
Chao Yan ◽  
Bradley A. Malin

2021 ◽  
Vol 2022 (1) ◽  
pp. 460-480
Author(s):  
Bogdan Kulynych ◽  
Mohammad Yaghini ◽  
Giovanni Cherubin ◽  
Michael Veale ◽  
Carmela Troncoso

Abstract: A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rates of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional generalization. Second, we derive connections between disparate vulnerability and both algorithmic fairness and differential privacy. We show that fairness can only prevent disparate vulnerability against limited classes of adversaries. Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model. We show that estimating disparate vulnerability by naïvely applying existing attacks can lead to overestimation. We then establish which attacks are suitable for estimating disparate vulnerability and provide a statistical framework for doing so reliably. We conduct experiments on synthetic and real-world data, finding significant evidence of disparate vulnerability in realistic settings.
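The disparate-vulnerability notion above can be illustrated with a minimal confidence-threshold MIA evaluated per subgroup. This is a generic sketch, not the paper's estimation framework; the function names and the gap metric are assumptions made for the example.

```python
def mia_accuracy(member_conf, nonmember_conf, threshold=0.5):
    """Confidence-threshold membership inference sketch: predict 'member'
    when the model's confidence on a record exceeds the threshold.
    Returns attack accuracy over the combined evaluation records."""
    tp = sum(c > threshold for c in member_conf)       # members caught
    tn = sum(c <= threshold for c in nonmember_conf)   # non-members rejected
    return (tp + tn) / (len(member_conf) + len(nonmember_conf))

def disparate_vulnerability(groups, threshold=0.5):
    """Per-subgroup attack accuracy, plus the spread between the most and
    least vulnerable subgroups (an illustrative gap metric, not the
    paper's estimator). `groups` maps a subgroup label to a pair of
    (member confidences, non-member confidences)."""
    acc = {g: mia_accuracy(m, n, threshold) for g, (m, n) in groups.items()}
    return acc, max(acc.values()) - min(acc.values())
```

A subgroup on which the model memorizes more (confidences for members far above those for non-members) yields higher attack accuracy, and the gap between subgroups is the disparate vulnerability the paper studies.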


2021 ◽  
Author(s):  
Ülkü Meteriz-Yıldıran ◽  
Necip Fazil Yildiran ◽  
David Mohaisen

2021 ◽  
Author(s):  
Dario Pasquini ◽  
Giuseppe Ateniese ◽  
Massimo Bernaschi
Keyword(s):  

2021 ◽  
Author(s):  
Minxing Zhang ◽  
Zhaochun Ren ◽  
Zihan Wang ◽  
Pengjie Ren ◽  
Zhunmin Chen ◽  
...  
