ON ASSESSING THE DISCLOSURE RISK OF CONTROLLED ADJUSTMENT METHODS FOR STATISTICAL TABULAR DATA

Author(s):  
JORDI CASTRO

Minimum-distance controlled tabular adjustment is a recent perturbative approach for statistical disclosure control in tabular data. Given a table to be protected, it looks for the closest safe table under some particular distance. Controlled adjustment is known to provide high data utility. However, its disclosure risk has only been partially analyzed, using theoretical results from optimization. This work extends those previous results, providing both a more detailed theoretical analysis and an extensive empirical assessment of the disclosure risk of the method. A set of 25 instances from the literature and four different attacker scenarios are considered, with several random replications for each scenario, for both the L1 and L2 distances. This amounts to the solution of more than 2000 optimization problems. The analysis of the results shows that the approach has low disclosure risk when the attacker has no good information on the bounds of the optimization problem. On the other hand, when the attacker has good estimates of the bounds and the only uncertainty is in the objective function (which is a very strong assumption), the disclosure risk of controlled adjustment is high, and the method should be avoided.
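As a rough illustration of the optimization problem behind the method, the sketch below formulates L1 controlled tabular adjustment as a linear program, assuming a single sensitive cell with a fixed upward protection direction (the full method selects directions with binary variables, yielding a mixed-integer program). The 2x2 table, bounds and protection level are illustrative assumptions, not instances from the paper.

```python
# A sketch of L1 controlled tabular adjustment (CTA) as a linear
# program, assuming the protection direction of the single sensitive
# cell is fixed upward.  All numbers below are toy values.
import numpy as np
from scipy.optimize import linprog

# Cell order: c11, c12, c21, c22, r1, r2, s1, s2, g (internal cells,
# row totals, column totals, grand total).
a = np.array([20.0, 30.0, 25.0, 25.0, 50.0, 50.0, 45.0, 55.0, 100.0])

# Additivity: M @ z = 0 must hold for the adjusted table z = a + x.
M = np.array([
    [1, 1, 0, 0, -1,  0,  0,  0,  0],  # c11 + c12 = r1
    [0, 0, 1, 1,  0, -1,  0,  0,  0],  # c21 + c22 = r2
    [1, 0, 1, 0,  0,  0, -1,  0,  0],  # c11 + c21 = s1
    [0, 1, 0, 1,  0,  0,  0, -1,  0],  # c12 + c22 = s2
    [0, 0, 0, 0,  1,  1,  0,  0, -1],  # r1 + r2 = g
])

n = a.size
# Split the deviation x into x = xp - xm with xp, xm >= 0 so that the
# L1 objective sum(|x_i|) becomes the linear objective sum(xp + xm).
c = np.ones(2 * n)
A_eq = np.hstack([M, -M])  # M @ (xp - xm) = 0 keeps the table additive
b_eq = np.zeros(M.shape[0])

upl = 5.0                        # hypothetical upward protection level
bounds = [(0.0, None)] * (2 * n)
bounds[0] = (upl, None)          # sensitive cell c11: xp >= upl
bounds[n] = (0.0, 0.0)           # and xm = 0 (adjust upward only)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
z = a + res.x[:n] - res.x[n:]
print(z)  # the closest additive, protected table in L1 distance
```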

2014, Vol 43 (4), pp. 247-254
Author(s):  
Matthias Templ

The demand for data from surveys, registers and other data sets containing sensitive information on people or enterprises has increased significantly over the last years. However, before providing data to the public or to researchers, confidentiality has to be respected for any data set containing sensitive individual information. Confidentiality can be achieved by applying statistical disclosure control (SDC) methods to the data. Research on SDC methods has become more and more important in recent years because of increased awareness of data privacy and because more and more data are provided to the public or to researchers. However, for legal reasons this is only feasible when the released data have (very) low disclosure risk. In this contribution, existing disclosure risk methods are reviewed and summarized. These methods are then applied to a popular real-world data set, the Structural Earnings Survey (SES) of Austria. It is shown that the application of a few selected anonymisation methods leads to well-protected anonymised data with high data utility and low information loss.
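For concreteness, a minimal sketch of one widely used disclosure risk measure covered by such reviews: counting sample uniques on a set of quasi-identifiers (the frequency counts behind k-anonymity). The toy data and column names are assumptions for illustration, not the SES variables used in the paper.

```python
# Count sample uniques: records whose combination of quasi-identifier
# values is unique in the sample carry the highest re-identification risk.
import pandas as pd

df = pd.DataFrame({
    "age_group":  ["30-39", "30-39", "40-49", "40-49", "40-49"],
    "sex":        ["f", "f", "m", "m", "f"],
    "occupation": ["clerk", "clerk", "manager", "manager", "clerk"],
})

quasi_identifiers = ["age_group", "sex", "occupation"]
# fk = size of each record's equivalence class on the quasi-identifiers.
fk = df.groupby(quasi_identifiers)["sex"].transform("size")
print("sample uniques:", int((fk == 1).sum()))  # 1 (the 40-49/f/clerk record)
print("smallest class size k:", int(fk.min()))  # 1 -> not even 2-anonymous
```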


2012, Vol 9 (1)
Author(s):  
Neeraj Tiwari

The most common way of releasing data to the public is through statistical tables. The problem of protecting confidentiality in statistical tables containing sensitive information has been of great concern in recent years. Rounding methods are perturbation techniques widely used by statistical agencies to protect confidential data; random rounding is one of them. In this paper, using the technique of random rounding together with quadratic programming, we introduce a new methodology for protecting the confidential information of tabular data with minimum loss of information. The tables obtained through the proposed method consist of unbiasedly rounded values, are additive, and have a specified level of confidentiality protection. Some numerical examples are also discussed to demonstrate the superiority of the proposed procedure over existing procedures.
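A minimal sketch of the unbiased random rounding step on which the method builds: a cell with residual r = value mod b is rounded up with probability r/b and down otherwise, so the expected rounded value equals the original. The base and counts below are illustrative; the paper's quadratic-programming stage, which restores additivity with minimum information loss, is not reproduced here.

```python
# Unbiased random rounding to base b.
import numpy as np

rng = np.random.default_rng(0)

def random_round(values, b=5):
    values = np.asarray(values, dtype=float)
    r = values % b
    round_up = rng.random(values.shape) < r / b   # P(round up) = r/b
    return values - r + b * round_up              # E[result] = values

cells = np.array([13, 7, 22, 4])
rounded = random_round(cells)
print(rounded)                      # every cell is a multiple of 5
print(cells.sum(), rounded.sum())   # additivity is generally lost
```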


Author(s):  
JOSEP DOMINGO-FERRER ◽  
VICENÇ TORRA

In statistical disclosure control of tabular data, sensitivity rules are commonly used to decide whether a table cell is sensitive and should therefore not be published. The most popular sensitivity rules are the dominance rule, the p%-rule and the pq-rule. The dominance rule has received critiques based on specific numerical examples and is being gradually abandoned by leading statistical agencies. In this paper, we construct general counterexamples showing that none of the above rules adequately reflects disclosure risk if cell contributors or coalitions of them behave as intruders: in that case, releasing a cell declared non-sensitive can imply higher disclosure risk than releasing a cell declared sensitive. As a possible solution, we propose an alternative sensitivity rule based on the concentration of relative contributions. More generally, we suggest complementing a priori risk assessment based on sensitivity rules with a posteriori risk assessment that takes into account tables after they have been protected.
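For concreteness, a minimal sketch of two of the rules under discussion, applied to a single magnitude cell. The parameter values (n = 2, k = 85%, p = 10%) are common illustrative choices, not prescriptions from the paper.

```python
# The (n, k)-dominance rule and the p%-rule for one magnitude cell.
def dominance_rule(contributions, n=2, k=0.85):
    """Sensitive if the n largest contributors exceed fraction k of the total."""
    c = sorted(contributions, reverse=True)
    return sum(c[:n]) > k * sum(c)

def p_percent_rule(contributions, p=0.10):
    """Sensitive if the total minus the two largest contributions estimates
    the largest contribution to within fraction p of its value."""
    c = sorted(contributions, reverse=True)
    return sum(c) - c[0] - c[1] < p * c[0]

cell = [90, 6, 2, 1, 1]                # one dominant contributor
print(dominance_rule(cell))            # True: top 2 hold 96% > 85%
print(p_percent_rule(cell))            # True: 100 - 90 - 6 = 4 < 9 = 0.10 * 90
```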


Author(s):  
James Robards ◽  
David Martin ◽  
Chris Gale

Objectives: To explore the application of automated zone design tools to protect record-level datasets with attribute detail and a large data volume in a way that might be implemented by a data provider (e.g. National Statistical Organisation/Health Service Provider), initially using a synthetic microdataset. Successful implementation could facilitate the release of rich linked record datasets to researchers so as to preserve small-area geographical associations (currently lost due to the high level of geographical coding required by data providers prior to release) while not revealing actual locations. Data perturbation is undesirable because of the need for detailed information on certain spatial attributes (e.g. distance to a medical practitioner, exposure to the local environment), which has driven demand for new linked administrative datasets, along with provision of suitable research environments. The outcome is a bespoke aggregation of the microdata that meets a set of design constraints but whose exact configuration is never revealed. Researchers are provided with detailed data and suitable geographies, yet with appropriately reduced disclosure risk.

Approach: Using a synthetic flat-file microdataset of individual records with locality-level (MSOA) geography codes for England and Wales (variables: age, gender, economic activity, marital status, occupation, number of hours worked and general health), we synthesize address-level locations within MSOAs using 2011 Census headcount data. These synthetic locations are then associated with a range of spatial measures and indicators, such as distance to a medical practitioner. Implementation of the AZTool zone design software enables a bespoke, non-disclosive zone design solution, providing area codes that can be added to the research data without revealing their true locations to the researcher.

Results: Two sets of results will be presented. Firstly, we will explain the spatial characteristics of the new synthetic dataset, which we propose may have broader utility. Secondly, we will present results showing the changing risk of disclosure and utility when coding to spatial units at different scales and aggregations. Using the synthetic dataset will therefore demonstrate the utility of the approach for a variety of linked and administrative data without any actual disclosure risk.

Conclusions: This approach is applicable to a variety of datasets. The ability to quantify the zone design solution and its security in relation to statistical disclosure control will be discussed. Provision of parameters from the zone design process to the data user, and the implications of this for security and data users, will be considered.
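A minimal sketch of the zone-design idea, assuming a greedy merge of each under-threshold area into its lightest neighbour until every output zone satisfies a minimum-population constraint. The area graph and threshold are toy assumptions, and AZTool itself uses iterative optimisation rather than this one-pass heuristic.

```python
# Greedy zone design: merge areas until every zone meets MIN_POP, so
# records can be coded to bespoke zones instead of true locations.
populations = {"A": 120, "B": 40, "C": 300, "D": 60, "E": 500}
neighbours = {
    "A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "E"},
    "D": {"B", "E"}, "E": {"C", "D"},
}
MIN_POP = 150  # hypothetical disclosure-control threshold per output zone

zones = {name: {name} for name in populations}  # zone id -> member areas
pop = dict(populations)

def merge(keep, gone):
    """Absorb zone `gone` into zone `keep` and rewire the adjacency graph."""
    zones[keep] |= zones.pop(gone)
    pop[keep] += pop.pop(gone)
    neighbours[keep] = (neighbours[keep] | neighbours.pop(gone)) - {keep, gone}
    for adj in neighbours.values():
        if gone in adj:
            adj.discard(gone)
            adj.add(keep)
    neighbours[keep].discard(keep)

while True:
    small = [z for z in zones if pop[z] < MIN_POP]
    if not small:
        break
    z = min(small, key=pop.get)
    if not neighbours[z]:           # isolated zone: nothing left to merge
        break
    merge(min(neighbours[z], key=pop.get), z)

print(zones)  # e.g. {'A': {'A', 'B', 'D'}, 'C': {'C'}, 'E': {'E'}}
print(pop)    # every remaining zone meets MIN_POP
```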


2010, Vol 37 (4), pp. 3256-3263
Author(s):  
Jun-Lin Lin ◽  
Tsung-Hsien Wen ◽  
Jui-Chien Hsieh ◽  
Pei-Chann Chang
