nonsampling errors
Recently Published Documents


TOTAL DOCUMENTS: 31 (five years: 2)
H-INDEX: 4 (five years: 0)

2021 ◽ Vol 37 (2) ◽ pp. 289-316
Author(s): Gian Luigi Mazzi ◽ James Mitchell ◽ Florabela Carausu

Abstract Official economic statistics are uncertain, even if not always interpreted or treated as such. From a historical perspective, this article reviews different categorisations of data uncertainty: the traditional typology that distinguishes sampling from nonsampling errors, and the newer typology of Manski (2015). Throughout, the importance of measuring and communicating these uncertainties is emphasised, however hard some sources of data uncertainty can prove to measure, especially those relevant to administrative and big data sets. Accordingly, this article seeks both to encourage further work on the measurement and communication of data uncertainty in general and to introduce the Comunikos (COMmunicating UNcertainty In Key Official Statistics) project at Eurostat. Comunikos is designed to evaluate alternative ways of measuring and communicating data uncertainty in contexts relevant to official economic statistics.


2021 ◽ pp. 089443932110095
Author(s): Tuğba Adalı ◽ Ahmet Sinan Türkyılmaz ◽ James M. Lepkowski

The Demographic and Health Surveys (DHS) have been carried out in over 90 countries since 1984 as interviewer-administered household surveys, conducted initially with paper-and-pencil interviewing (PAPI). Computer-assisted personal interviewing (CAPI) was introduced in the 2004 Peru DHS, and numerous countries have since switched. However, randomized mode comparisons within the DHS have been limited. The 2018 Sixth Turkey DHS was conducted using CAPI but allocated one of the 21 sample households in each of 754 clusters to PAPI. This analysis examines a wide range of potential differences between modes: interviewer attitudes toward the modes; response rates; underreporting and misreporting of persons or events; the number of selections in "check all that apply" questions; respondents' attitudes toward the modes as reflected in responses to sensitive questions; satisficing behavior such as age heaping, straight-line response patterns, and use of don't-know options; and operational aspects of the modes such as retrospective monthly contraceptive prevalence rates, the presence of others during the interview, and interview length. Findings show that, despite a strong interviewer preference for CAPI, CAPI and PAPI produced almost identical responses on average. CAPI interviews took 11 minutes less (33 minutes in total). Analysis of retrospective monthly contraceptive use indicated potential underreporting of past use under CAPI, an issue highlighted before in the DHS literature. Overall, the switch to computer technology in DHS surveys does not appear to change estimates or levels of nonsampling errors, although some differences relative to the PAPI mode may need DHS designers' attention.
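Age heaping, one of the satisficing indicators examined in the abstract above, is conventionally quantified with Whipple's index, which compares the count of reported ages ending in 0 or 5 against a uniform benchmark over the standard age range 23-62. A minimal sketch of the computation (not taken from the article; the toy data are made up):

```python
def whipple_index(ages):
    """Whipple's index of age heaping on terminal digits 0 and 5.

    Uses the conventional age range 23-62. A value near 100 indicates
    no digit preference; markedly higher values indicate heaping.
    """
    in_range = [a for a in ages if 23 <= a <= 62]
    heaped = sum(1 for a in in_range if a % 5 == 0)
    # One fifth of ages would end in 0 or 5 under uniform reporting.
    return 100.0 * heaped / (len(in_range) / 5.0)

# Toy comparison: one person at each exact age vs. every age report
# rounded to the nearest multiple of five (maximal heaping).
uniform = list(range(23, 63))
rounded = [5 * round(a / 5) for a in uniform]
print(whipple_index(uniform))   # → 100.0 (8 multiples of 5 among 40 ages)
print(whipple_index(rounded))   # → 500.0 (all 40 reports land on 0 or 5)
```

A mode comparison such as the one described above would compute the index separately for CAPI and PAPI respondents and compare the two values.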


2016 ◽ Vol 32 (3) ◽ pp. 619-642
Author(s): Arnout van Delden ◽ Sander Scholtus ◽ Joep Burger

Abstract Publications in official statistics are increasingly based on a combination of sources. Although combining data sources may result in nearly complete coverage of the target population, the outcomes are not error free. Estimating the effect of nonsampling errors on the accuracy of mixed-source statistics is crucial for decision making, but it is not straightforward. Here we simulate the effect of classification errors on the accuracy of turnover-level estimates in car-trade industries. We combine an audit sample, the dynamics in the business register, and expert knowledge to estimate a transition matrix of classification-error probabilities. The bias and variance of the turnover estimates caused by classification errors are estimated by a bootstrap resampling approach. In addition, we study the extent to which manual selective editing at the micro level can improve accuracy. Our analyses reveal which industries do not meet preset quality criteria. Surprisingly, more selective editing can result in less accurate estimates for specific industries, and a fixed allocation of editing effort over industries is more effective than an allocation in proportion to the accuracy and population size of each industry. We discuss how to develop a practical method that can be implemented in production to estimate the accuracy of register-based estimates.
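The bootstrap approach described above can be sketched numerically. The following is a simplified illustration with made-up data and an assumed transition matrix, not the article's estimates: observed industry codes are repeatedly redrawn from the transition matrix, and the bias and variance of the resulting register-based turnover totals are read off the bootstrap distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 industries, a turnover value per unit, and an
# assumed transition matrix P of classification-error probabilities,
# P[i, j] = Pr(observed code j | true code i).
P = np.array([[0.95, 0.03, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
cum = P.cumsum(axis=1)

n = 5000
true_code = rng.integers(0, 3, size=n)
turnover = rng.lognormal(mean=10.0, sigma=1.0, size=n)

def totals(codes):
    """Turnover total per industry for a given vector of codes."""
    return np.array([turnover[codes == k].sum() for k in range(3)])

target = totals(true_code)  # totals if every code were correct

# Bootstrap: redraw observed codes from the transition matrix by
# inverting the cumulative row probabilities, then recompute totals.
B = 500
boot = np.empty((B, 3))
for b in range(B):
    u = rng.random(n)
    observed = (u[:, None] > cum[true_code]).sum(axis=1)
    boot[b] = totals(observed)

bias = boot.mean(axis=0) - target    # systematic shift per industry
variance = boot.var(axis=0, ddof=1)  # spread induced by misclassification
print("relative bias:", bias / target)
```

In the article the transition matrix is itself estimated from an audit sample, register dynamics, and expert knowledge; here it is simply asserted, which is the part a production implementation would need to replace.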


2016 ◽ Vol 32 (1) ◽ pp. 129-145
Author(s): David Haziza ◽ Éric Lesage

Abstract Weighting procedures are commonly applied in surveys to compensate for nonsampling errors such as nonresponse errors and coverage errors. Two types of weight-adjustment procedures are commonly used in the context of unit nonresponse: (i) nonresponse propensity weighting followed by calibration, also known as the two-step approach and (ii) nonresponse calibration weighting, also known as the one-step approach. In this article, we discuss both approaches and warn against the potential pitfalls of the one-step procedure. Results from a simulation study, evaluating the properties of several point estimators, are presented.
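A minimal numerical sketch of the two-step approach described above, using hypothetical data, weighting-class response rates as the propensity step, and poststratification as the calibration step (the article's simulation design is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: unit nonresponse depends on `group`; population
# totals are known for `region`, the calibration variable.
n = 2000
group = rng.integers(0, 2, size=n)     # drives response behaviour
region = rng.integers(0, 3, size=n)    # auxiliary with known totals
design_w = np.full(n, 50.0)            # equal design weights
respond = rng.random(n) < np.where(group == 0, 0.9, 0.6)

# Step 1: nonresponse propensity weighting within weighting classes
# (the propensity estimate is the observed response rate per group).
phat = np.array([respond[group == g].mean() for g in (0, 1)])
w1 = (design_w / phat[group])[respond]
r_region = region[respond]

# Step 2: calibration to the known region totals (poststratification).
known = np.array([design_w[region == k].sum() for k in range(3)])
factor = known / np.array([w1[r_region == k].sum() for k in range(3)])
final_w = w1 * factor[r_region]

# By construction, the calibrated weights reproduce the known totals.
print([round(final_w[r_region == k].sum(), 6) for k in range(3)])
print([round(float(t), 6) for t in known])
```

Exactly reproducing known auxiliary totals is the defining property of calibration weighting; the pitfalls the article warns about concern collapsing the two steps into one, which this sketch deliberately keeps separate.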

