A Fuzzy based Data Perturbation Technique for Privacy Preserved Data Mining

Author(s):  
P. G. Shynu ◽  
H. Md. Shayan ◽  
Chiranji Lal Chowdhary
2017 ◽  
Vol 17 (3) ◽  
pp. 92-108 ◽  
Author(s):  
P. Gayathiri ◽  
B. Poorna

Abstract Association Rule Hiding is a privacy-preserving data mining technique that sanitizes the original database by hiding the sensitive association rules generated from the transactional database. The side effects of rule hiding techniques are hiding certain rules that are not sensitive, failing to hide certain sensitive rules, and generating false rules in the resulting database. These side effects harm both the privacy of the data and the utility of the data mining results. In this paper, a method called Gene Patterned Association Rule Hiding (GPARH) is proposed for preserving the privacy of the data while maintaining data utility, based on a data perturbation technique. Using a gene selection operation, privacy-linked hidden and exposed data items are mapped to vector data items, thereby obtaining gene-based data items. The performance of the proposed GPARH is evaluated in terms of metrics such as the number of sensitive rules generated, true positive privacy rate, and execution time for selecting the sensitive rules, using the Abalone and Taxi Service Trajectory datasets.
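The abstract above does not give GPARH's internals, but the general support-based rule hiding it builds on can be sketched as follows. The function names, the greedy choice of victim item, and the toy transactions are illustrative assumptions, not the GPARH algorithm itself: to hide a sensitive rule, items are removed from supporting transactions until the rule's itemset falls below the mining support threshold.

```python
# Hypothetical sketch of support-based association rule hiding
# (a simplification; NOT the GPARH method from the paper).

def support(transactions, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def hide_rule(transactions, rule_itemset, min_support):
    """Drop a victim item from supporting transactions until the
    sensitive itemset's support falls below `min_support`."""
    sanitized = [set(t) for t in transactions]
    victim = next(iter(rule_itemset))  # naive victim choice (assumption)
    for t in sanitized:
        if support(sanitized, rule_itemset) < min_support:
            break  # rule is now hidden from a threshold-based miner
        if rule_itemset <= t:
            t.discard(victim)
    return sanitized

# Toy transactional database; the sensitive rule involves {a, b}.
transactions = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
hidden = hide_rule(transactions, {"a", "b"}, min_support=0.5)
```

The side effects the abstract mentions are visible even here: discarding the victim item can also lower the support of non-sensitive rules that share it.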


2008 ◽  
pp. 1550-1561
Author(s):  
Rick L. Wilson ◽  
Peter A. Rosen

Data perturbation is a data security technique that adds 'noise' to databases to preserve the confidentiality of individual records. The technique allows users to ascertain key summary information about the data that is not distorted and does not lead to a security breach. Four bias types have been proposed to assess the effectiveness of such techniques. However, these biases only deal with simple aggregate concepts (averages, etc.) found in the database. To compete in today's business environment, it is critical that organizations utilize data mining approaches to discover additional knowledge about themselves 'hidden' in their databases. Thus, database administrators face competing objectives: protection of confidential data versus data disclosure for data mining applications. This paper empirically explores whether the data protection provided by perturbation techniques adds a so-called data mining bias to the database. The results find initial support for the existence of this bias.
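The noise-addition idea described above can be sketched in a few lines. This is a minimal illustration under assumed parameters (zero-mean Gaussian noise, an invented `sigma`, and made-up salary values), not the specific scheme evaluated in the paper: individual records are masked, while aggregates such as the mean remain approximately intact.

```python
import random

def perturb(values, sigma, seed=0):
    """Release each confidential value with added zero-mean Gaussian
    noise of standard deviation `sigma` (illustrative parameters)."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    return [v + rng.gauss(0.0, sigma) for v in values]

# Made-up confidential salary records (illustration only).
salaries = [52_000, 61_500, 48_200, 75_000, 58_300]
noisy = perturb(salaries, sigma=1_000)

true_mean = sum(salaries) / len(salaries)
noisy_mean = sum(noisy) / len(noisy)
# Individual values are distorted, but the mean stays close,
# which is exactly the summary-level utility the abstract describes.
```

The "data mining bias" question the paper raises is whether patterns beyond such simple aggregates (e.g. classification rules) survive this distortion.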


2017 ◽  
Vol 17 (2) ◽  
pp. 44-55 ◽  
Author(s):  
M. Antony Sheela ◽  
K. Vijayalakshmi

Abstract Data mining on vertically or horizontally partitioned datasets carries the overhead of protecting private data. Perturbation is a technique that prevents the disclosure of individual data values. This paper proposes a perturbation and anonymization technique performed on vertically partitioned data. A third-party coordinator is used to partition the data recursively among the various parties. The parties perturb the data by replacing values with their mean once the specified threshold level is reached. The perturbation maintains the statistical relationships among attributes.
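The mean-replacement step described above can be illustrated with a simplified sketch. The recursive halving and the `threshold` parameter below are assumptions standing in for the paper's coordinator/party protocol: values are split recursively until a partition reaches the threshold size, then every value in that partition is replaced by the partition mean.

```python
# Hypothetical sketch of recursive mean-based perturbation
# (a single-column simplification of the paper's protocol).

def mean_perturb(values, threshold):
    """Recursively halve `values`; once a partition is no larger than
    `threshold`, replace its members with the partition mean."""
    if len(values) <= threshold:
        m = sum(values) / len(values)
        return [m] * len(values)
    mid = len(values) // 2
    return mean_perturb(values[:mid], threshold) + mean_perturb(values[mid:], threshold)

# Made-up attribute values held by one party (illustration only).
ages = [23, 45, 31, 62, 27, 38, 54, 41]
released = mean_perturb(ages, threshold=2)
```

Because each partition keeps its own mean, the overall mean of the released column matches the original, which is one way the statistical relationship among attributes is maintained.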

