nonresponse error
Recently Published Documents

Total documents: 39 (9 in the last five years)
H-index: 11 (five-year H-index: 1)

2021, pp. 1-11
Author(s): Jonas Klingwort, Joep Burger, Bart Buelens, Rainer Schnell

Capture-recapture (CRC) is currently considered a promising method for integrating big data into official statistics. We previously applied CRC to estimate road freight transport with survey data (as the first capture) and road sensor data (as the second capture), using license plates and time stamps to identify recaptured vehicles. A considerable difference was found between the single-source, design-based survey estimate and the multiple-source, model-based CRC estimate. One possible explanation is underreporting in the survey, which is conceivable given the response burden of diary questionnaires. In this paper, we explore alternative explanations by quantifying their effect on the estimated amount of underreporting. In particular, we study the effects of (1) reporting errors, including a mismatch between the reported day of loading and the measured day of driving, (2) measurement errors, including false positives and OCR failure, (3) treating vehicles reported as not owned as nonresponse error rather than frame error, and (4) response mode. We conclude that these alternative hypotheses are unlikely to fully explain the difference between the survey estimate and the CRC estimate. Underreporting therefore remains a likely explanation, illustrating the power of combining survey and sensor data.
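As an illustration of the CRC idea applied here (a sketch, not code from the paper), the snippet below computes the classical two-source Lincoln-Petersen estimate and its Chapman bias-corrected variant; the vehicle counts are hypothetical placeholders.

```python
# Minimal sketch of a two-source capture-recapture estimate, in the spirit of
# linking survey reports (capture 1) to road-sensor detections (capture 2).
# The counts below are hypothetical, not figures from the paper.

def lincoln_petersen(n1: int, n2: int, m: int) -> float:
    """Estimated total: n1 = units in source 1, n2 = units in source 2,
    m = units found in both (here, matched on license plate and time stamp)."""
    return n1 * n2 / m

def chapman(n1: int, n2: int, m: int) -> float:
    """Bias-corrected variant, more stable when the overlap m is small."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

n_survey, n_sensor, n_both = 4_000, 9_000, 3_000      # hypothetical counts
print(lincoln_petersen(n_survey, n_sensor, n_both))   # 12000.0
print(chapman(n_survey, n_sensor, n_both))            # roughly the same here
```

Note that underreporting in the survey lowers the design-based survey total but, under the CRC assumptions, leaves the capture-recapture total roughly unchanged, which is one way the gap the authors investigate can arise.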


2020, Vol. 36(3), pp. 631-645
Author(s): Floyd J. Fowler, Philip Brenner, Anthony M. Roman, J. Lee Hargraves

Abstract With declining response rates and the challenges of using RDD sampling for telephone surveys, collecting data from address-based samples has become more attractive. Two approaches are (1) conducting telephone interviews at telephone numbers matched to sampled addresses and (2) asking those at sampled addresses to call into an Interactive Voice Response (IVR) system to answer questions. This study used in-person interviewing to evaluate the effects of nonresponse and of problems matching telephone numbers when telephone and IVR were used as the initial modes of data collection. The survey questions were selected from major US federal surveys covering a variety of topics. Both nonresponse and, for telephone, the inability to find matches result in important nonresponse error for nearly half the measures across all topics, even after adjustments to fit the known demographic characteristics of the residents. Producing credible estimates requires supplemental data collection strategies to reduce error from nonresponse.
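To make the notion of nonresponse error concrete (an illustrative sketch, not the authors' estimator), the following applies the standard deterministic decomposition of nonresponse bias, with an in-person benchmark standing in for the nonrespondent mean; all numbers are hypothetical.

```python
# Hedged sketch: nonresponse bias of a respondent mean, using the classic
# decomposition bias(ybar_r) = nonresponse_rate * (ybar_r - ybar_nr).
# Values are hypothetical; in the study the benchmark came from in-person interviews.

def nonresponse_bias(mean_respondents: float,
                     mean_nonrespondents: float,
                     nonresponse_rate: float) -> float:
    return nonresponse_rate * (mean_respondents - mean_nonrespondents)

# E.g., telephone respondents report 14% on some health measure, the benchmark
# suggests 22% among nonrespondents, and 60% of sampled addresses did not respond:
print(nonresponse_bias(0.14, 0.22, 0.60))   # about -0.05, i.e. ~5 points too low
```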


2019, Vol. 8(1), pp. 62-88
Author(s): Jennifer Unangst, Ashley E. Amaya, Herschel L. Sanders, Jennifer Howard, Abigail Ferrell, ...

Abstract As survey methods evolve, researchers require a comprehensive understanding of the error sources in their data. Comparative studies, which assess differences between the estimates from emerging survey methods and those from traditional surveys, are a popular tool for evaluating total error; however, they do not provide insight on the contributing error sources themselves. The Total Survey Error (TSE) framework is a natural fit for evaluations that examine survey error components across multiple data sources. In this article, we present a case study that demonstrates how the TSE framework can support both qualitative and quantitative evaluations comparing probability and nonprobability surveys. Our case study focuses on five internet panels that are intended to represent the US population and are used to measure health statistics. For these panels, we analyze the total survey error in two ways: (1) using a qualitative assessment that describes how panel construction and management methods may introduce error and (2) using a quantitative assessment that estimates and partitions the total error for two probability-based panels into coverage error and nonresponse error. This work can serve as a “proof of concept” for how the TSE framework may be applied to understand and compare the error structure of probability and nonprobability surveys. For those working specifically with internet panels, our findings will further provide an example of how researchers may choose the panel option best suited to their study aims and help vendors prioritize areas of improvement.
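The quantitative assessment described in (2) rests on an additive partition of total error; the sketch below shows that partition with hypothetical numbers (the benchmark, covered-population, and respondent means are placeholders, not values from the article).

```python
# Hedged sketch of splitting a panel estimate's total error into coverage
# and nonresponse components, relative to a benchmark. Placeholder values.

benchmark_mean  = 0.30   # target-population value (e.g., a gold-standard survey)
covered_mean    = 0.27   # mean among people covered by the panel's frame
respondent_mean = 0.24   # mean among covered people who actually respond

coverage_error    = covered_mean - benchmark_mean      # -0.03
nonresponse_error = respondent_mean - covered_mean     # -0.03
total_error       = respondent_mean - benchmark_mean   # -0.06

assert abs(total_error - (coverage_error + nonresponse_error)) < 1e-12
```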


Author(s): Justin J. Gengler, Kien T. Le, Jill Wittrock

Abstract More research than ever before uses public opinion data to investigate society and politics in the Middle East and North Africa (MENA). Ethnic identities are widely theorized to mediate many of the political attitudes and behaviors that MENA surveys commonly seek to measure, but, to date, no research has systematically investigated how the observable ethnic categories of the interviewer may influence participation and the answers given in Middle East surveys. Here we measure the impact of one highly salient and outwardly observable ascriptive attribute of interviewers, nationality, using data from an original survey experiment conducted in the Arab Gulf state of Qatar. Applying the total survey error (TSE) framework and utilizing an innovative nonparametric matching technique, we estimate treatment effects on both nonresponse error and measurement error. We find that Qatari nationals are more likely to begin and finish a survey, and to respond to questions, when interviewed by a fellow national. Qataris also edit their answers to sensitive questions relating to the unequal status of citizens and noncitizens, reporting views that are more exclusionary and less positive toward out-group members, when the interviewer is a conational. The findings have direct implications for consumers and producers of a growing number of surveys conducted inside and outside the Arab world, where migration and conflict have made respondent-interviewer mismatches along national and other ethnic dimensions more salient and more common.
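As a rough illustration of how matching can be used to estimate such interviewer effects (the paper's matching procedure is more sophisticated; the data, column names, and covariates here are hypothetical), one can compare outcomes within cells of respondent covariates:

```python
# Hedged sketch: exact matching on respondent covariates, then averaging
# within-cell differences in an outcome (here, completing the interview)
# between cases assigned a conational vs. a non-national interviewer.
import pandas as pd

df = pd.DataFrame({
    "conational_interviewer": [1, 0, 1, 0, 1, 0, 1, 0],
    "age_group": ["18-29", "18-29", "30-44", "30-44", "18-29", "18-29", "30-44", "30-44"],
    "gender":    ["m", "m", "f", "f", "f", "f", "m", "m"],
    "completed": [1, 0, 1, 1, 1, 0, 1, 1],   # 1 = finished the survey
})

cell_means = (df.groupby(["age_group", "gender", "conational_interviewer"])["completed"]
                .mean()
                .unstack("conational_interviewer"))
cell_effects = cell_means[1] - cell_means[0]   # conational minus non-national, per cell
print(cell_effects.mean())                     # simple average of within-cell effects
```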


2019, Vol. 8(5), pp. 877-902
Author(s): Mengyao Hu, John A. Kirlin, Brady T. West, Wenyi He, Ai Rene Ong, ...

Abstract Diary surveys are used to collect data on a variety of topics, including health, time use, nutrition, and expenditures. The US National Household Food Acquisition and Purchase Survey (FoodAPS) is a nationally representative diary survey, providing an important data source for decision-makers designing policies and programs to promote healthy lifestyles. Unfortunately, a multiday diary survey like FoodAPS can be subject to various survey errors, especially item nonresponse error occurring at the day level. The FoodAPS public-use data set provides survey weights that adjust only for unit nonresponse. Due to the lack of day-level weights (which could adjust for the item nonresponse that arises from refusals on particular days), the adjustments for unit nonresponse are unlikely to correct any bias in estimates arising from households that initially agree to participate in FoodAPS but then fail to report on particular days. This article develops a general methodology for estimating the extent of underreporting due to this type of item nonresponse error in diary surveys, using FoodAPS as a case study. We describe a methodology that combines bootstrap replicate sampling for complex samples with imputation based on a Heckman selection model to predict food expenditures for person-days with missing expenditures. We estimated the item nonresponse error by comparing weighted estimates based only on reported expenditures with estimates based on both reported expenditures and predictions for the missing values. Results indicate that ignoring the missing data would lead to consistent overestimation of the mean expenditures and events per person per day and underestimation of the total expenditures and events. Our study suggests that the household-level weights, which generally account for unit nonresponse, may not be sufficient for addressing the nonresponse occurring at the day level in diary surveys, and that proper imputation methods will be important for estimating the size of the underreporting.
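A minimal sketch of the two-step Heckman-style imputation idea (probit selection equation, inverse Mills ratio, outcome regression, then prediction for unreported days) is shown below. The simulated variables, covariates, and coefficients are hypothetical; the paper's actual specification and its bootstrap replicate weighting are considerably richer.

```python
# Hedged sketch: two-step Heckman selection used to impute expenditures for
# person-days with missing reports. All data are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2_000
hh_size = rng.integers(1, 6, n)       # covariate in both equations
weekday = rng.integers(0, 2, n)       # covariate affecting reporting only
u = rng.normal(size=n)                # shared error inducing selection bias

reported = (0.3 + 0.4 * weekday - 0.1 * hh_size + u) > 0          # day reported?
expend = 20 + 8 * hh_size + 4 * u + rng.normal(scale=3, size=n)   # daily spending

# Step 1: probit for the probability that a person-day is reported
Z = sm.add_constant(np.column_stack([hh_size, weekday]))
probit = sm.Probit(reported.astype(int), Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)     # inverse Mills ratio for reported days

# Step 2: outcome regression on reported days, augmented with the IMR
X_rep = sm.add_constant(np.column_stack([hh_size[reported], imr[reported]]))
ols = sm.OLS(expend[reported], X_rep).fit()

# Impute E[expenditure | covariates, day not reported]: same coefficients,
# but the Mills-ratio term for non-reported days is -phi(xb) / (1 - Phi(xb)).
b_const, b_hh, b_imr = ols.params
xb_mis = xb[~reported]
mills_mis = -norm.pdf(xb_mis) / (1 - norm.cdf(xb_mis))
imputed = b_const + b_hh * hh_size[~reported] + b_imr * mills_mis
print(round(imputed.mean(), 2))       # mean imputed daily expenditure
```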


2019, Vol. 8(3), pp. 413-432
Author(s): Roger Tourangeau

Abstract This article examines the relationships among different types of nonobservation errors (all of which affect estimates from nonprobability internet samples) and between nonresponse and measurement errors. Both are examples of how different error sources can interact. Estimates from nonprobability samples seem to have more total error than estimates from probability samples, even ones with very low response rates. This finding suggests that the combination of coverage, selection, and nonresponse errors has greater cumulative effects than nonresponse error alone. The probabilities of having internet access, joining an internet panel, and responding to a particular survey request are probably correlated and, as a result, may lead to greater covariances with survey variables than response propensities alone; the biases accentuate one another. With nonresponse and measurement error, the two sources seem more or less uncorrelated, with one exception: those most prone to social desirability bias (those in the undesirable categories) are also less likely to respond. In addition, the propensity for unit nonresponse seems to be related to item nonresponse.
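A small numeric illustration of the cumulative-selection argument (not from the article; the propensity model and its coefficients are invented) uses the familiar approximation bias(ybar_r) ~= cov(p, y) / mean(p):

```python
# Hedged illustration: when access, joining, and responding propensities are
# each mildly related to y, their product covaries with y more strongly,
# so the approximate bias cov(p, y) / mean(p) grows. Simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
y = rng.normal(size=n)                       # standardized survey variable

def propensity(strength: float) -> np.ndarray:
    """One selection stage, mildly related to y (hypothetical model)."""
    return 1 / (1 + np.exp(-(strength * y + rng.normal(size=n))))

p_access, p_join, p_respond = propensity(0.3), propensity(0.3), propensity(0.3)
p_overall = p_access * p_join * p_respond    # cumulative participation propensity

def approx_bias(p: np.ndarray) -> float:
    return np.cov(p, y)[0, 1] / p.mean()

print(approx_bias(p_respond))   # bias from the response stage alone
print(approx_bias(p_overall))   # noticeably larger cumulative bias
```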


2019, Vol. 134(1 Suppl), pp. 46S-56S
Author(s): Ting Yan, David Cantor

Criminal justice involvement is a multifaceted construct encompassing various forms of contact with the criminal justice system. It is a sensitive topic both for surveys to ask about and for respondents to answer. This article provides guidance for writing survey questions on criminal justice involvement, starting with a review of potential causes of reporting error and nonresponse error associated with such questions. Questions about criminal justice involvement are subject to errors that are common to any survey (e.g., misunderstanding questions, recall bias, telescoping). Responses to these questions are also subject to underreporting because of social desirability concerns. We also address strategies to reduce error for questions pertaining to criminal justice involvement (e.g., self-administered data collection, forgiving question wording, indirect methods). We then discuss common design decisions associated with writing survey questions on criminal justice involvement (e.g., type and frequency of criminal justice involvement, reference period) and provide examples of questions from current surveys.
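As one concrete (and purely illustrative) example of the "indirect methods" mentioned above, Warner's randomized-response design lets prevalence be estimated without any individual answer revealing the respondent's status; the probability p and the observed "yes" rate below are hypothetical.

```python
# Hedged sketch of Warner's randomized-response estimator. A private
# randomizer directs each respondent, with probability p, to answer the
# sensitive statement (e.g., "I have been arrested") and otherwise its
# negation, so observed yes_rate = p*pi + (1 - p)*(1 - pi).

def warner_estimate(yes_rate: float, p: float) -> float:
    return (yes_rate + p - 1) / (2 * p - 1)   # requires p != 0.5

print(warner_estimate(yes_rate=0.38, p=0.7))  # -> 0.2, an estimated 20% prevalence
```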


2017, Vol. 33(2), pp. 335-366
Author(s): Sunghee Lee, Tuba Suzer-Gurtekin, James Wagner, Richard Valliant

Abstract This study attempted to integrate key assumptions of Respondent-Driven Sampling (RDS) into the Total Survey Error (TSE) perspective and to examine TSE as a new framework for a systematic assessment of RDS errors. Using two publicly available data sets on HIV-at-risk persons, nonresponse error in the RDS recruitment process and measurement error in network size reports were examined. On nonresponse, the ascertained partial nonresponse rate was high, and a substantial proportion of recruitment chains died early. Moreover, nonresponse occurred systematically: recruiters with lower income and higher health risks generated more recruits, and peers with closer relationships were more likely to accept recruitment coupons. This suggests a lack of randomness in the recruitment process, also shown through sizable intra-chain correlation. Self-reported network sizes suggested measurement error, given their wide dispersion and unreasonable reports. This measurement error has further implications for the current RDS estimators, which use network sizes as an adjustment factor on the assumption of a positive relationship between network size and selection probability in recruitment. The adjustment resulted in nontrivial unequal weighting effects and changed estimates in directions that were difficult to explain and, at times, illogical. Moreover, recruiters' network size played no role in actual recruitment. TSE may serve as a tool for evaluating errors in RDS, which further informs study design decisions and inference approaches.
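For readers unfamiliar with how reported network sizes enter the estimator, the sketch below shows an RDS-II (Volz-Heckathorn) style estimate with inverse-network-size weights, plus Kish's approximate unequal weighting effect; the degrees and outcomes are hypothetical, not data from the study.

```python
# Hedged sketch: RDS-II style weighting by inverse reported network size,
# plus Kish's unequal weighting effect (1 + cv^2 of the weights), which
# grows when reported network sizes are widely dispersed. Placeholder data.
import numpy as np

degrees = np.array([2, 5, 10, 50, 300])   # self-reported network sizes
y = np.array([1, 0, 1, 0, 0])             # e.g., an HIV-risk indicator

w = 1.0 / degrees
rds_ii_estimate = np.sum(w * y) / np.sum(w)
uwe = 1 + w.var() / w.mean() ** 2          # unequal weighting effect

print(round(rds_ii_estimate, 3), round(uwe, 2))
```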

