Information loss resulting from statistical disclosure control of output data

2020 ◽  
Vol 65 (9) ◽  
pp. 7-27
Author(s):  
Andrzej Młodak

The most important methods of assessing the information loss caused by statistical disclosure control (SDC) are presented in this paper. The aim of SDC is to protect individuals against identification, and against the obtaining of any sensitive information relating to them, by unauthorised parties. The application of methods based either on the concealment of specific data or on their perturbation results in information loss, which affects the quality of output data, including the distributions of variables, the forms of relationships between them, and any estimations. The aim of this paper is to perform a critical analysis of the strengths and weaknesses of the particular types of methods for assessing the information loss resulting from SDC. Moreover, some novel ideas on how to obtain effective and well-interpretable measures are proposed, including an innovative way of using a cyclometric function (the arcus tangent) to determine the deviation of values from the original ones caused by SDC. Additionally, the inverse correlation matrix was applied in order to assess the influence of SDC on the strength of relationships between variables. The first method allows one to obtain effective and well-interpretable measures, while the second makes it possible to fully exploit the mutual relationships between variables (including those difficult to detect by means of classical statistical methods) for a better analysis of the consequences of SDC. Among other findings, the empirical verification of the utility of the suggested methods confirmed the superiority of the cyclometric function in measuring the distance of the deviations from the original data, and also highlighted the need for a skilful correction of its flattening when large-value arguments occur.
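
A minimal sketch of how the two proposed tools might look in practice (the abstract gives no exact formulas, so the normalisation, the flattening correction and all names below are assumptions):

```python
import numpy as np

def arctan_deviation(original, masked, scale=1.0):
    """Arctan-based deviation of masked values from the originals.

    The arcus tangent maps unbounded absolute deviations into [0, pi/2);
    dividing by pi/2 normalises the measure to [0, 1).  `scale` is a
    hypothetical flattening correction: shrinking large arguments before
    applying arctan counteracts the saturation the paper warns about.
    """
    dev = np.abs(np.asarray(masked, float) - np.asarray(original, float))
    return np.arctan(dev / scale) / (np.pi / 2)

def partial_correlations(data):
    """Partial correlations derived from the inverse correlation matrix.

    With P the inverse of the correlation matrix, the partial correlation
    of variables i and j is -P_ij / sqrt(P_ii * P_jj).  Comparing this
    matrix before and after SDC shows how masking alters relationships
    that ordinary pairwise correlations may not reveal.
    """
    p = np.linalg.inv(np.corrcoef(np.asarray(data, float), rowvar=False))
    d = np.sqrt(np.outer(np.diag(p), np.diag(p)))
    pc = -p / d
    np.fill_diagonal(pc, 1.0)
    return pc
```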

2020 ◽  
Vol 3 (348) ◽  
pp. 7-24
Author(s):  
Michał Pietrzak

The aim of this article is to analyse the possibility of applying selected perturbative masking methods of Statistical Disclosure Control to microdata, i.e. unit-level data, from the Labour Force Survey. In the first step, the author assessed to what extent the confidentiality of information was protected in the original dataset. In the second step, after applying selected methods implemented in the sdcMicro package of the R environment, the impact of those methods on the disclosure risk, the information loss and the quality of the estimation of population quantities was assessed. The conclusion highlights some problematic aspects of the use of Statistical Disclosure Control methods observed during the analysis.
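
The abstract does not name the selected methods; the sketch below (in Python rather than R) illustrates one standard perturbative method available in sdcMicro, additive noise masking, together with an IL1-style information-loss summary. The function names and the noise calibration are assumptions.

```python
import numpy as np

rng = np.random.default_rng(348)

def add_noise(x, noise_ratio=0.1):
    """Additive noise masking: each numeric variable receives Gaussian
    noise whose standard deviation is a fraction of the variable's own
    standard deviation, trading disclosure risk against information loss."""
    x = np.asarray(x, float)
    return x + rng.normal(0.0, noise_ratio * x.std(ddof=1), size=x.shape)

def il1_loss(original, masked):
    """IL1-style information loss: mean absolute deviation between the
    original and masked values, standardised by the original spread."""
    original = np.asarray(original, float)
    masked = np.asarray(masked, float)
    return np.mean(np.abs(masked - original) / (np.sqrt(2) * original.std(ddof=1)))
```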


Author(s):  
Amanda M. Y. Chu ◽  
Benson S. Y. Lam ◽  
Agnes Tiwari ◽  
Mike K. P. So

Patient data or information collected from public health and health care surveys are of great research value. Usually, the data contain sensitive personal information. Doctors, nurses, and researchers in the public health and health care sector often do not analyze the available datasets or survey data themselves, and may outsource the tasks to third parties. Even though all identifiers such as names and ID card numbers are removed, there may still be occasions on which an individual can be re-identified via the demographic or particular information provided in the datasets. Such data privacy issues can become an obstacle in health-related research. Statistical disclosure control (SDC) is a useful technique for resolving this problem by masking and designing released data based on the original data. While the released data still satisfy the needs of researchers for data analysis, the original data remain strongly protected from disclosure. In this research, we discuss the statistical properties of two SDC methods: the General Additive Data Perturbation (GADP) method and the Gaussian Copula General Additive Data Perturbation (CGADP) method. An empirical study is provided to demonstrate how we can apply these two SDC methods in public health research.
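
Under a multivariate-normal working model, GADP draws each released record from the estimated conditional distribution of the confidential attributes given the non-confidential ones, which preserves means, covariances and cross-covariances in expectation; CGADP applies the same idea after a Gaussian-copula transformation of non-normal data. A sketch of the normal-model case (the variable split and the estimation details below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def gadp(X, S):
    """GADP-style perturbation under a multivariate-normal working model.

    X: (n x p) confidential attributes; S: (n x q) non-confidential
    attributes.  Each released record is drawn from the estimated
    conditional distribution of X given S, so that the means, the
    covariance of X, and the X-S cross-covariances of the released
    data match the originals in expectation.
    """
    X, S = np.asarray(X, float), np.asarray(S, float)
    n = X.shape[0]
    Xc, Sc = X - X.mean(axis=0), S - S.mean(axis=0)
    cov_ss = Sc.T @ Sc / (n - 1)
    cov_xs = Xc.T @ Sc / (n - 1)
    cov_xx = Xc.T @ Xc / (n - 1)
    beta = cov_xs @ np.linalg.inv(cov_ss)   # regression of X on S
    cond_cov = cov_xx - beta @ cov_xs.T     # residual (conditional) covariance
    noise = rng.multivariate_normal(np.zeros(X.shape[1]), cond_cov, size=n)
    return X.mean(axis=0) + Sc @ beta.T + noise
```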


Author(s):  
JOSEP DOMINGO-FERRER ◽  
ANNA OGANIAN ◽  
ÀNGEL TORRES ◽  
JOSEP M. MATEO-SANZ

Microaggregation is a statistical disclosure control technique: raw microdata (i.e. individual records) are grouped into small aggregates prior to publication. With fixed-size groups, each aggregate contains k records, so as to prevent disclosure of individual information. Individual ranking is a common criterion for reducing multivariate microaggregation to the univariate case: the idea is to perform microaggregation independently for each variable in the record. Using distributional assumptions, we show in this paper how to find interval estimates for the original data based on the microaggregated data. Such intervals can be considerably narrower than intervals resulting from the subtraction of means, and can be useful for detecting a lack of security in a microaggregated data set. The analytical arguments given in this paper confirm recent empirical results about the unsafety of individual-ranking microaggregation.
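
For concreteness, a sketch of individual-ranking microaggregation as described here: each variable is sorted independently, partitioned into consecutive groups of k records (the last group absorbing any remainder), and every value is replaced by its group mean. The interval-estimation attack itself is not reproduced, but note that even without distributional assumptions each original value in a group must lie between the means of the two adjacent groups, which is why the per-variable treatment leaks so much.

```python
import numpy as np

def individual_ranking_microaggregation(data, k=3):
    """Univariate microaggregation applied independently to each variable."""
    data = np.asarray(data, float)
    out = data.copy()
    n = data.shape[0]
    n_groups = max(n // k, 1)            # last group absorbs the remainder
    for j in range(data.shape[1]):
        order = np.argsort(data[:, j])   # individual ranking of variable j
        for g in range(n_groups):
            lo = g * k
            hi = (g + 1) * k if g < n_groups - 1 else n
            grp = order[lo:hi]
            out[grp, j] = data[grp, j].mean()
    return out
```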


2012 ◽  
Vol 9 (1) ◽  
Author(s):  
Neeraj Tiwari

The most common method of providing data to the public is through statistical tables. The problem of protecting confidentiality in statistical tables containing sensitive information has been of great concern in recent years. Rounding methods are perturbation techniques widely used by statistical agencies for protecting confidential data; random rounding is one of these methods. In this paper, using the techniques of random rounding and quadratic programming, we introduce a new methodology for protecting the confidential information of tabular data with minimum loss of information. The tables obtained through the proposed method consist of unbiasedly rounded values, are additive, and have a specified level of confidentiality protection. Some numerical examples are also discussed to demonstrate the superiority of the proposed procedure over the existing procedures.
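
A sketch of the unbiased random-rounding step (to rounding base 3 here); the quadratic-programming stage of the proposed method, which restores additivity of the rounded table, is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

def unbiased_random_rounding(counts, base=3):
    """Unbiased random rounding of table counts to a multiple of `base`.

    A count x with residue r = x mod base is rounded down with
    probability 1 - r/base and up with probability r/base, so the
    expected value of each rounded count equals the original count.
    """
    counts = np.asarray(counts)
    residue = counts % base
    down = counts - residue
    up = rng.random(counts.shape) < residue / base
    return down + base * up.astype(int)
```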


2019 ◽  
Vol 66 (1) ◽  
pp. 7-26 ◽  
Author(s):  
Andrzej Młodak

The paper proposes an original method of assessing the information loss resulting from the application of Statistical Disclosure Control (SDC) during the preparation of output data for publication and release to interested users. SDC tools protect sensitive data from disclosure, both direct and indirect. The article focuses on pseudonymised microdata, i.e. individual data stripped of fundamental identifiers, used for scientific purposes. Such control usually consists in suppressing, swapping or perturbing the original data; this intervention, however, entails the loss of some information. Optimising the choice of an SDC method therefore requires minimising this loss (together with the risk of disclosure of protected data). The traditionally used measures of such loss are often sensitive to dissimilarities in the scale and range of the variables' values and cannot be applied to ordinal data. Many of them also take the connections between variables into account only weakly, although these can be important in various analyses. Hence, this paper presents a proposal, rooted in the work of Zdzisław Hellwig, to use a normalised and easily interpretable complex measure (also called a synthetic indicator) for interrelated features, based on a benchmark and an anti-benchmark of development, to assess the information loss resulting from the application of selected SDC techniques, and studies its practical utility. The measure is constructed from the distances between the original data and the data after the application of SDC, taking measurement scales into account.
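
One way such a benchmark/anti-benchmark indicator can be constructed (a sketch in the spirit of Hellwig's taxonomic measure; the paper's exact construction, including its handling of ordinal scales, is not reproduced, and the relative-closeness form below is an assumption):

```python
import numpy as np

def synthetic_measure(Z):
    """Synthetic indicator based on a benchmark and an anti-benchmark.

    Z: (n_units x n_features) matrix of normalised per-variable
    information-loss indications, all oriented so that larger = worse.
    The benchmark of development is the best observed profile (column
    minima: least loss), the anti-benchmark the worst (column maxima).
    Relative closeness to the benchmark yields a measure in [0, 1]
    that equals 1 when a unit coincides with the benchmark, i.e. when
    the SDC method caused negligible loss on every feature.
    """
    Z = np.asarray(Z, float)
    best = Z.min(axis=0)    # benchmark: least information loss
    worst = Z.max(axis=0)   # anti-benchmark: greatest information loss
    d_best = np.sqrt(((Z - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((Z - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)
```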


2010 ◽  
Vol 37 (4) ◽  
pp. 3256-3263 ◽  
Author(s):  
Jun-Lin Lin ◽  
Tsung-Hsien Wen ◽  
Jui-Chien Hsieh ◽  
Pei-Chann Chang
