inference attack
Recently Published Documents


TOTAL DOCUMENTS

61
(FIVE YEARS 37)

H-INDEX

8
(FIVE YEARS 2)

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xujie Ren ◽  
Tao Shang ◽  
Yatong Jiang ◽  
Jianwei Liu

In the era of big data, next-generation sequencing produces large amounts of genomic data. These genetic sequence data will further advance research in the biological sciences. However, the growth in data scale often raises privacy issues. Even if a dataset is not openly released, an attacker can still steal private information through a membership inference attack. In this paper, we propose a private profile hidden Markov model (PHMM) with differential identifiability for gene sequence clustering. By adding random noise to the model, the probability of identifying individuals in the database is bounded. Gene sequences can be clustered without labels in an unsupervised manner according to the output scores of the private PHMM. The variation of the divergence distance in the experimental results shows that the added noise distorts the profile hidden Markov model to a certain extent, with the divergence distance reaching a maximum of 15.47 when the amount of data is small. A cosine similarity comparison of the clustering model before and after adding noise also shows that, as the privacy parameter changes, the clustering model is distorted at a low or high level, which enables it to defend against membership inference attacks.
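The noise-addition idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's algorithm: it perturbs a toy profile-HMM emission matrix with Laplace noise and measures the resulting model distortion with a KL divergence. The privacy parameter `rho` and the clipping floor are assumptions for the sketch.

```python
import math
import random

def privatize_emissions(emissions, rho=1.0, seed=0):
    """Perturb a profile-HMM emission matrix with Laplace noise.

    `rho` is a hypothetical privacy parameter (smaller rho -> larger
    noise scale 1/rho).  Each row is clipped to a small positive floor
    and renormalized so it remains a probability distribution.
    """
    rng = random.Random(seed)
    noisy = []
    for row in emissions:
        # Laplace(0, 1/rho) sample = difference of two Exp(rho) draws
        perturbed = [max(p + rng.expovariate(rho) - rng.expovariate(rho), 1e-6)
                     for p in row]
        total = sum(perturbed)
        noisy.append([p / total for p in perturbed])
    return noisy

def kl_divergence(p, q):
    """Divergence between an original and a privatized row (distortion)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

emissions = [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]]
private = privatize_emissions(emissions, rho=0.5)
distortion = sum(kl_divergence(p, q) for p, q in zip(emissions, private))
```

Smaller `rho` injects more noise, which increases the divergence between the original and private models: the same trade-off the abstract reports between distortion and resistance to membership inference.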


2021 ◽  
Author(s):  
Wenqiang Jin ◽  
Srinivasan Murali ◽  
Huadi Zhu ◽  
Ming Li
Keyword(s):  

Author(s):  
Yijue Wang ◽  
Chenghong Wang ◽  
Zigeng Wang ◽  
Shanglin Zhou ◽  
Hang Liu ◽  
...  

Large model sizes, high computational cost, and vulnerability to membership inference attacks (MIA) have impeded the adoption of deep neural networks (DNNs), especially on mobile devices. To address these challenges, we envision that weight pruning can help defend DNNs against MIA while reducing model storage and computation. In this work, we propose a pruning algorithm and show that it can find a subnetwork that prevents privacy leakage through MIA while achieving accuracy competitive with the original DNN. We also verify our theoretical insights with experiments. Our experimental results show that the attack accuracy against the compressed model is up to 13.6% and 10% lower than that of the baseline and the min-max game, respectively.
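The abstract does not specify the pruning criterion, so as a generic illustration of weight pruning, the sketch below implements simple magnitude-based pruning: the fraction of weights with the smallest magnitudes is zeroed out, shrinking the subnetwork that an attacker can probe.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    magnitudes -- a generic magnitude-pruning sketch, not the specific
    algorithm proposed in the paper.
    """
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    # indices of the k smallest-magnitude weights
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

pruned = magnitude_prune([0.5, -0.1, 0.02, 0.9, -0.3], sparsity=0.4)
# zeroes the two smallest-magnitude weights (0.02 and -0.1)
```

In practice pruning is applied per layer with a retraining step in between; the intuition relevant here is that a smaller subnetwork memorizes less about individual training examples, which is what MIA exploits.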


2021 ◽  
Vol 2021 ◽  
pp. 1-22
Author(s):  
Mingzhen Li ◽  
Yunfeng Wang ◽  
Yang Xin ◽  
Hongliang Zhu ◽  
Qifeng Tang ◽  
...  

As a review system, the Crowd-Sourced Local Businesses Service System (CSLBSS) allows users to publicly publish reviews of businesses that include a display name, an avatar, and review content. While these reviews help maintain business reputations and provide valuable references for others, an adversary can also legitimately obtain a user's display name and a large number of historical reviews. We show that the adversary can exploit these acquired display names and historical reviews to launch a connecting user identities attack (CUIA) and a statistical inference attack (SIA) against user privacy. Existing methods based on anonymity and review suppression cannot resist these two attacks; moreover, suppressing reviews may prevent some highly useful reviews from being published. To solve these problems, we propose a cross-platform strong privacy protection mechanism (CSPPM) based on partial publication and complete anonymity. In CSPPM, based on the consistency between the user score and the business score, we propose a partial publication mechanism that publishes the most useful reviews and filters out false or untrue ones, ensuring that highly useful reviews are not suppressed and improving system utility. We also propose a complete anonymity mechanism that anonymizes the display names and avatars of publicly published reviews, ensuring that the adversary cannot obtain user privacy through CUIA or SIA. Finally, we evaluate CSPPM both theoretically and experimentally. The results show that it resists CUIA and SIA and improves system utility.
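The two mechanisms can be sketched together. In the toy function below, the field names, the consistency threshold `tol`, and the token format are all hypothetical: partial publication keeps only reviews whose score is consistent with the business's average (a proxy for usefulness and a filter for false reviews), and complete anonymity replaces the display name with a fresh random token per review so published reviews cannot be linked back to a user across platforms.

```python
import secrets

def publish_reviews(reviews, business_avg, tol=1.5):
    """Sketch of CSPPM's two ideas with hypothetical field names:
    partial publication (score-consistency filter) plus complete
    anonymity (fresh, unlinkable pseudonym per published review).
    """
    published = []
    for r in reviews:
        if abs(r["score"] - business_avg) <= tol:  # consistency check
            published.append({
                # a fresh token per review: no cross-review linking (CUIA)
                "display_name": "anon-" + secrets.token_hex(4),
                "content": r["content"],
                "score": r["score"],
            })
    return published

reviews = [
    {"user": "alice", "score": 4.5, "content": "Great service."},
    {"user": "bob",   "score": 1.0, "content": "Terrible!"},  # inconsistent
]
out = publish_reviews(reviews, business_avg=4.2)  # only alice's is published
```

Because each published review carries an independent random pseudonym and no stable identifier, an adversary cannot aggregate a user's review history, which is the raw material for both CUIA and SIA.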


Author(s):  
Hongkyu Lee ◽  
Jeehyeong Kim ◽  
Rasheed Hussain ◽  
Sunghyun Cho ◽  
Junggab Son

2021 ◽  
Vol 2021 (3) ◽  
pp. 122-141
Author(s):  
Sébastien Gambs ◽  
Frédéric Ladouceur ◽  
Antoine Laurent ◽  
Alexandre Roy-Gaumond

Abstract In this work, we propose a novel approach to data synthesis based on copulas, which are interpretable and robust models used extensively in the actuarial domain. More precisely, our method COPULA-SHIRLEY is based on the differentially-private training of vine copulas, a family of copulas that can model and generate data of arbitrary dimensions. The COPULA-SHIRLEY framework is simple yet flexible: it can be applied to many types of data while preserving utility, as demonstrated by experiments on real datasets. We also evaluate the protection level of our data synthesis method against a membership inference attack recently proposed in the literature.
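To make the copula pipeline concrete, here is a toy bivariate Gaussian-copula synthesizer. COPULA-SHIRLEY trains vine copulas with differential privacy; this sketch uses a single Gaussian copula and adds no DP noise, purely to illustrate the standard steps: marginals to uniforms, uniforms to normal scores, fit the dependence, sample, and map back through the empirical quantiles.

```python
import random
import statistics

def gaussian_copula_synth(xs, ys, n, seed=0):
    """Toy bivariate Gaussian-copula synthesizer (illustrative only;
    not the vine-copula, differentially-private method of the paper)."""
    nd = statistics.NormalDist()
    rng = random.Random(seed)
    m = len(xs)

    def normal_scores(v):
        order = sorted(range(m), key=lambda i: v[i])
        u = [0.0] * m
        for rank, i in enumerate(order):
            u[i] = (rank + 0.5) / m  # empirical CDF, strictly in (0, 1)
        return [nd.inv_cdf(ui) for ui in u]

    zx, zy = normal_scores(xs), normal_scores(ys)
    # Pearson correlation of the normal scores = the copula parameter
    mx, my = sum(zx) / m, sum(zy) / m
    cov = sum((a - mx) * (b - my) for a, b in zip(zx, zy))
    rho = cov / (sum((a - mx) ** 2 for a in zx) ** 0.5
                 * sum((b - my) ** 2 for b in zy) ** 0.5)

    sx, sy = sorted(xs), sorted(ys)
    def quantile(sorted_v, u):
        return sorted_v[min(int(u * m), m - 1)]  # empirical inverse CDF

    out = []
    for _ in range(n):
        a, b = rng.gauss(0, 1), rng.gauss(0, 1)
        # correlated standard normals, then back through CDF + quantiles
        z1, z2 = a, rho * a + (1 - rho * rho) ** 0.5 * b
        out.append((quantile(sx, nd.cdf(z1)), quantile(sy, nd.cdf(z2))))
    return out
```

Because synthetic values are drawn from the empirical quantiles, each marginal distribution is preserved while the copula carries the dependence; vine copulas generalize the same idea to arbitrary dimensions by composing bivariate copulas.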


2021 ◽  
Vol 218 ◽  
pp. 106674
Author(s):  
Xingping Xian ◽  
Tao Wu ◽  
Yanbing Liu ◽  
Wei Wang ◽  
Chao Wang ◽  
...  
