Nonresponse Error

2014 ◽  
pp. 135-150
Author(s):  
Justin J. Gengler ◽  
Kien T. Le ◽  
Jill Wittrock

Abstract More research than ever before uses public opinion data to investigate society and politics in the Middle East and North Africa (MENA). Ethnic identities are widely theorized to mediate many of the political attitudes and behaviors that MENA surveys commonly seek to measure, but, to date, no research has systematically investigated how the observable ethnic categories of the interviewer may influence participation and the answers given in Middle East surveys. Here we measure the impact of one highly salient and outwardly observable ascriptive attribute of interviewers, nationality, using data from an original survey experiment conducted in the Arab Gulf state of Qatar. Applying the total survey error (TSE) framework and utilizing an innovative nonparametric matching technique, we estimate treatment effects on both nonresponse error and measurement error. We find that Qatari nationals are more likely to begin and finish a survey, and to respond to questions, when interviewed by a fellow national. Qataris also edit their answers to sensitive questions relating to the unequal status of citizens and noncitizens, reporting views that are more exclusionary and less positive toward out-group members, when the interviewer is a conational. The findings have direct implications for consumers and producers of a growing number of surveys conducted inside and outside the Arab world, where migration and conflict have made respondent-interviewer mismatches along national and other ethnic dimensions more salient and more common.
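The estimation strategy named in the abstract pairs the total survey error framework with nonparametric matching. As a rough illustration of the matching logic only, the following Python sketch compares item nonresponse between respondents interviewed by a conational and by a non-conational within exactly matched covariate strata. The data, the variable names, and the choice of exact matching are all hypothetical assumptions, not the authors' actual procedure.

# A minimal sketch of a matching-based treatment-effect estimate for an
# interviewer-nationality experiment. Everything here (data, variables,
# exact matching) is an illustrative assumption, not the study's design.
import pandas as pd

# Hypothetical respondent-level records: covariates, a treatment flag
# (conational interviewer = 1), and an outcome (item nonresponse = 1).
df = pd.DataFrame({
    "age_group":  ["18-29", "18-29", "30-44", "30-44", "45+", "45+"],
    "sex":        ["m", "m", "f", "f", "m", "m"],
    "conational": [1, 0, 1, 0, 1, 0],
    "item_nr":    [0, 1, 0, 0, 0, 1],
})

# Exact matching: compare treated and control units only inside strata
# that contain both, then average the within-stratum differences.
effects = []
for _, cell in df.groupby(["age_group", "sex"]):
    treated = cell.loc[cell["conational"] == 1, "item_nr"]
    control = cell.loc[cell["conational"] == 0, "item_nr"]
    if len(treated) and len(control):
        effects.append(treated.mean() - control.mean())

att = sum(effects) / len(effects)
print(f"Matched difference in item nonresponse: {att:+.3f}")

A negative difference here would mirror the abstract's finding that Qataris interviewed by a fellow national are more likely to answer questions.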


2019 ◽  
Vol 8 (3) ◽  
pp. 413-432
Author(s):  
Roger Tourangeau

Abstract This article examines the relationship among different types of nonobservation errors (all of which affect estimates from nonprobability internet samples) and between nonresponse and measurement errors. Both are examples of how different error sources can interact. Estimates from nonprobability samples seem to have more total error than estimates from probability samples, even ones with very low response rates. This finding suggests that the combination of coverage, selection, and nonresponse errors has greater cumulative effects than nonresponse error alone. The probabilities of having internet access, joining an internet panel, and responding to a particular survey request are probably correlated and, as a result, may lead to greater covariances with survey variables than response propensities alone; the biases accentuate one another. With nonresponse and measurement error, the two sources seem more or less uncorrelated, with one exception—those most prone to social desirability bias (those in the undesirable categories) are also less likely to respond. In addition, the propensity for unit nonresponse seems to be related to item nonresponse.
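The cumulation argument can be made concrete with the standard stochastic approximation for nonresponse bias, a textbook result offered here as context rather than quoted from the article. If participating in a nonprobability internet survey requires having internet access, joining a panel, and responding to the particular request, the relevant propensity is the product of three components:

\operatorname{Bias}(\bar{y}_r) \approx \frac{\operatorname{Cov}(\rho, y)}{\bar{\rho}},
\qquad
\rho_i = \rho_i^{\text{access}} \cdot \rho_i^{\text{join}} \cdot \rho_i^{\text{respond}}

When each component propensity is itself correlated with the survey variable y, the covariance in the numerator can grow well beyond what a single response propensity would produce in a probability sample, which is the compounding of coverage, selection, and nonresponse errors the abstract describes.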


2016 ◽  
Vol 4 (2) ◽  
pp. 246-262 ◽  
Author(s):  
Floyd J. Fowler ◽  
Anthony M. Roman ◽  
Rumel Mahmood ◽  
Carol A. Cosenza

2015 ◽  
Vol 31 (4) ◽  
pp. 611-625 ◽  
Author(s):  
Jessica Broome

Abstract Survey nonresponse may increase the chances of nonresponse error, and different interviewers contribute differentially to nonresponse. This article first addresses the relationship between initial impressions of interviewers in survey introductions and the outcome of those introductions, and then contrasts this relationship with current viewpoints and practices in telephone interviewing. The first study described here exposed judges to excerpts of interviewer speech from actual survey introductions and asked them to rate twelve characteristics of the interviewer. Impressions of positive traits such as friendliness and confidence had no association with the actual outcome of the call, while higher ratings of "scriptedness" predicted a lower likelihood of participation. A second study, among individuals responsible for training telephone interviewers, found that sounding natural or unscripted during a survey introduction is not emphasized in training. The article concludes with recommendations for practice and further research.
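As a hedged sketch of the shape of the first study's analysis, the Python example below regresses call outcome on judges' trait ratings, with simulated data built to mimic the reported pattern. The trait names come from the abstract; the data, the model, and every coefficient are illustrative assumptions, not Broome's actual analysis.

# Logistic regression of call outcome on rated interviewer traits.
# Simulated data only: scriptedness is built to lower participation,
# friendliness and confidence to have no real effect, as reported.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
friendliness = rng.uniform(1, 7, n)  # hypothetical 1-7 judge ratings
confidence   = rng.uniform(1, 7, n)
scriptedness = rng.uniform(1, 7, n)

latent = 1.0 - 0.6 * scriptedness + rng.normal(0, 1, n)
participated = (latent > np.median(latent)).astype(int)

X = np.column_stack([friendliness, confidence, scriptedness])
model = LogisticRegression().fit(X, participated)
for name, b in zip(["friendliness", "confidence", "scriptedness"],
                   model.coef_[0]):
    print(f"{name:>12}: {b:+.2f}")

On data like these, only the scriptedness coefficient should come out reliably negative, echoing the finding that scripted-sounding introductions predict lower participation.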


Author(s):  
Eleanor Singer ◽  
Cong Ye

This article is intended to supplement rather than replace earlier reviews of research on survey incentives, especially those by Singer (2002); Singer and Kulka (2002); and Cantor, O’Hare, and O’Connor (2008). It is based on a systematic review of articles appearing since 2002 in major journals, supplemented by searches of the Proceedings of the American Statistical Association’s Section on Survey Methodology for unpublished papers. The article begins by drawing on responses to open-ended questions about why people are willing to participate in a hypothetical survey. It then lays out the theoretical justification for using monetary incentives and the conditions under which they are hypothesized to be particularly effective. Finally, it summarizes research on how incentives affect response rates in cross-sectional and longitudinal studies and, to the extent information is available, how they affect response quality, nonresponse error, and cost-effectiveness. A special section on incentives in Web surveys is included.


2011 ◽  
Vol 25 (3) ◽  
pp. 229-239 ◽  
Author(s):  
Jeremy S. Jordan ◽  
Matthew Walker ◽  
Aubrey Kent ◽  
Yuhei Inoue

The failure to adequately address nonresponse issues in survey research may lead to nonresponse bias in overall survey estimates, which can severely restrict researchers' ability to make inferences to a target population. This study was designed to assess the frequency of nonresponse analyses in articles published in the Journal of Sport Management (JSM). All articles published in JSM from 1987 through 2008 (N = 371) were content analyzed based on a previously established coding scheme as well as additional indicators. The results revealed that only a small number of articles reported the use of nonresponse analyses as a means to control for nonresponse error.
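For context on what such an analysis can look like, the sketch below implements one common technique a coding scheme of this kind would be expected to capture: the early-versus-late-respondent (wave) comparison, in which late respondents serve as a proxy for nonrespondents. The data and variable names are hypothetical, and the article itself does not prescribe this particular method.

# A minimal wave-analysis sketch: if early and late respondents do not
# differ on a key estimate, nonresponse error is assumed to be limited.
# All data here are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
early = rng.normal(5.0, 1.0, 120)  # e.g., satisfaction, initial wave
late  = rng.normal(4.8, 1.0, 60)   # respondents after follow-up contact

t, p = stats.ttest_ind(early, late, equal_var=False)  # Welch's t-test
print(f"Welch t = {t:.2f}, p = {p:.3f}")

A nonsignificant difference is conventionally read as weak evidence against nonresponse bias for that estimate; reporting such a check is exactly what the content analysis found to be rare in JSM.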

