nonresponse bias
Recently Published Documents

TOTAL DOCUMENTS: 283 (five years: 44)
H-INDEX: 38 (five years: 3)

Author(s):  
Hadeel Mohammad Darwish, Muhammad Mazyad Drybati, Mounzer Ha

Statistical surveys are usually conducted to obtain data describing a problem in a studied society, and many surveys experience rising nonresponse rates, which may increase nonresponse bias in survey estimates. Recent empirical results show instances in which the nonresponse rate is correlated with nonresponse bias. We attempt to translate statistical experience with nonresponse bias in newly published studies and research into causal models that lead to assumptions about when a lack of response causes bias in estimates. Studies of nonresponse bias estimates show that this bias often exists. The logical question is: what is the value of surveys that suffer from high rates of nonresponse? Since post-survey adjustments for nonresponse require auxiliary variables, the answer depends on the nature of the design and the quality of those auxiliary variables.
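The link between the nonresponse rate and bias in an estimated mean can be made concrete with the standard deterministic decomposition: the bias of the respondent mean equals the nonresponse rate times the difference between respondent and nonrespondent means. A minimal sketch in Python, with invented toy numbers:

```python
# Nonresponse bias of a respondent mean:
#   bias(y_bar_r) = (m / n) * (y_bar_r - y_bar_nr)
# where n is the full sample size and m the number of nonrespondents.
def nonresponse_bias(respondent_vals, nonrespondent_vals):
    n = len(respondent_vals) + len(nonrespondent_vals)
    m = len(nonrespondent_vals)
    y_bar_r = sum(respondent_vals) / len(respondent_vals)
    y_bar_nr = sum(nonrespondent_vals) / m
    return (m / n) * (y_bar_r - y_bar_nr)

# Toy example: respondents report higher values than nonrespondents,
# so the unadjusted respondent mean overestimates the population mean.
resp = [10, 12, 14, 16]   # respondent mean 13
nonresp = [4, 6]          # nonrespondent mean 5
bias = nonresponse_bias(resp, nonresp)  # (2/6) * (13 - 5) = 2.666...
```

The decomposition also makes clear why a high nonresponse rate alone does not guarantee bias: if respondents and nonrespondents have the same mean, the bias is zero regardless of the rate.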


2021 ◽  
Vol 37 (4) ◽  
pp. 931-953
Author(s):  
Corinna König ◽  
Joseph W. Sakshaug ◽  
Jens Stegmaier ◽  
Susanne Kohaut

Abstract Evidence from the household survey literature shows a declining response rate trend in recent decades, but whether a similar trend exists for voluntary establishment surveys is an understudied issue. This article examines trends in nonresponse rates and nonresponse bias over a period of 17 years in the annual cross-sectional refreshment samples of the IAB Establishment Panel in Germany. In addition, rich administrative data about establishment and employee composition are used to examine changes in nonresponse bias and its two main components, refusal and noncontact, over time. Our findings show that response rates dropped by nearly a third: from 50.2% in 2001 to 34.5% in 2017. Simultaneously, nonresponse bias increased over this period, driven mainly by increasing refusal bias, whereas noncontact bias fluctuated within a relatively narrow range over the same period. Nonresponse biases for individual establishment and employee characteristics did not show a distinct pattern over time, with a few exceptions. Notably, larger establishments participated less frequently than smaller establishments over the entire period. This implies that survey organizations may need to put more effort into recruiting larger establishments to counteract nonresponse bias.


2021 ◽  
Vol 37 (4) ◽  
pp. 837-864
Author(s):  
Tobias J.M. Büttner ◽  
Joseph W. Sakshaug ◽  
Basha Vicari

Abstract Nearly all panel surveys suffer from unit nonresponse and the risk of nonresponse bias. Just as the analytic value of panel surveys increases with their length, so does cumulative attrition, which can adversely affect the representativeness of the resulting survey estimates. Auxiliary data can be useful for monitoring and adjusting for attrition bias, but traditional auxiliary sources have known limitations. We investigate the utility of linked administrative data for adjusting for attrition bias in a standard piggyback longitudinal design, in which respondents from a preceding general-population cross-sectional survey, which included a data linkage request, were recruited for a subsequent longitudinal survey. Using the linked administrative data from the preceding survey, we estimate attrition biases for the first eight waves of the longitudinal survey and investigate whether an augmented weighting scheme that incorporates the linked administrative data reduces attrition biases. We find that adding the administrative information to the weighting scheme generally leads to a modest reduction in attrition bias compared to a standard weighting procedure and, in some cases, reduces variation in the point estimates. We conclude with a discussion of these results and remark on the practical implications of incorporating linked administrative data in piggyback longitudinal designs.
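The abstract does not spell out the weighting scheme, so as an illustration only, here is a minimal weighting-class (cell) nonresponse adjustment using a single hypothetical auxiliary variable of the kind linked administrative data might supply; the cell labels `emp`/`unemp` and all counts are invented:

```python
from collections import Counter

def attrition_weights(sample_cells, respondent_cells):
    """Weighting-class nonresponse adjustment: within each cell of an
    auxiliary variable, respondents are weighted up by the inverse of
    the cell-level response rate (cell size / respondents in cell)."""
    n_sample = Counter(sample_cells)    # cell sizes in the full sample
    n_resp = Counter(respondent_cells)  # cell sizes among respondents
    return {c: n_sample[c] / n_resp[c] for c in n_resp}

# Toy example: employed panel members attrit less than unemployed ones,
# so unemployed respondents receive a larger adjustment weight.
sample = ["emp"] * 80 + ["unemp"] * 20
respondents = ["emp"] * 60 + ["unemp"] * 5
w = attrition_weights(sample, respondents)
# w["emp"] = 80/60 ≈ 1.33, w["unemp"] = 20/5 = 4.0
```

In practice such cells would be crossed from several administrative variables, and respondents in rare cells may need their weights trimmed to control variance.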


2021 ◽  
Author(s):  
Chris J. Kennedy ◽  
Jayson S. Marwaha ◽  
P. Nina Scalise ◽  
Kortney A. Robinson ◽  
Brandon Booth ◽  
...  

Background: Post-discharge opioid consumption is an important source of data for guiding appropriate opioid prescribing guidelines, but its collection is tedious and resource-intensive. Furthermore, the reliability of post-discharge opioid consumption surveys is unclear. Our group developed an automated short messaging service (SMS)-to-web survey for collecting these data from patients. In this study, we assessed its effectiveness in estimating opioid consumption by performing causal nonresponse adjustment and comparing the results to a phone-based survey as a reference.

Methods: Patients who underwent surgical procedures at our institution from 2019 to 2020 were sent an SMS message with a link to a secure web survey to quantify opioids consumed after discharge. Several patient factors extracted from the electronic health record (EHR) were tested for association with survey response. Following targeted learning (TL) nonresponse adjustment using these EHR-based factors, opioid consumption survey results were compared to a prior telephone-based survey at our institution as a reference.

Results: 6,553 patients were included. Opioid consumption was measured in 2,883 (44%), including 1,342 (20.5%) through survey response. Characteristics associated with inability to measure opioid consumption included age, length of stay, race, tobacco use, and a missing preoperative assessment. Among the top 10 procedures by volume, EHR-based TL nonresponse bias adjustment corrected the median reported opioid consumption by an average of 57% and the 75th percentile by an average of 11%. This brought median estimates for 6 of 10 procedures, and 75th percentile estimates for 3 of 10 procedures, closer to the telephone survey-based consumption estimates.

Conclusion: We found that applying EHR-based machine learning nonresponse bias adjustment is essential for debiased opioid consumption estimates from patient surveys. After adjustment, post-discharge surveys can generate reliable opioid consumption estimates. Clinical factors from the EHR combined with TL adjustment appropriately capture differences between responders and nonresponders and should be used prior to generalizing or applying opioid consumption estimates to patient care.
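The targeted-learning estimator itself is beyond a short sketch, but the mechanism by which nonresponse weights shift a consumption quantile, the quantity the study corrects, can be shown with a simple weighted-quantile helper; the pill counts and weights below are invented, not the study's data:

```python
def weighted_quantile(values, weights, q):
    """Quantile of `values` under nonresponse-adjustment weights:
    sort by value and return the first value whose cumulative weight
    share reaches q."""
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum / total >= q:
            return v
    return pairs[-1][0]

# Toy data: suppose high consumers respond less often, so they carry
# larger nonresponse weights; the weighted median then exceeds the
# unweighted (respondents-only) median.
pills = [0, 0, 5, 10, 20, 30]
weights = [1, 1, 1, 1, 3, 3]  # hypothetical adjustment weights
unadjusted = weighted_quantile(pills, [1] * len(pills), 0.5)  # 5
adjusted = weighted_quantile(pills, weights, 0.5)             # 20
```

The same helper with q = 0.75 gives the 75th-percentile analogue of the study's corrected estimates.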


2021 ◽  
pp. 107780122110190
Author(s):  
Brittany E. Hayes ◽  
Eryn Nicole O’Neal

Using a standardized campus climate survey disseminated across three modes of administration (N = 5,137), this study assesses nonresponse bias by comparing two web-based versions to a self-administered paper-and-pencil version at a Southeastern 4-year university. Significant differences emerged across the three modes of administration for all victimization measures (bullying, sexual assault, rape, emotional abuse, and intimate partner violence [IPV]). Respondents were more likely to report victimization in the web-based surveys administered to online-only classes and via mass email than in the paper survey. Policy implications, especially as they relate to survey administration, are discussed.


2021 ◽  
pp. 026975802110147
Author(s):  
Nathalie Leitgöb-Guzy

The study expands empirical knowledge of nonresponse bias in estimated victimization rates by using latent class analysis (LCA). Based on information about proxy nonrespondents (hard-to-reach respondents and soft refusals), the study identifies subgroups of persons who are systematically underrepresented through refusal and unreachability and determines whether an over- or underestimation of different offense-specific crime rates (prevalence and incidence rates) is to be expected. To this end, a broad review of the current state of research is carried out, followed by a nonresponse analysis of a large-scale victimization survey conducted in Germany (n = 35,503). The paper illustrates that a variety of factors must be considered when analyzing nonresponse in victimization surveys and that the current state of research does not allow definitive conclusions about the magnitude and direction of nonresponse bias. The subsequent analysis shows that LCA is an excellent approach for determining nonresponse bias in surveys. In each sample, one class of persons was identified that is systematically underrepresented, both through refusal and through unreachability. For this class, victimization rates for violent crime tend to be significantly higher, indicating an underestimation of crime rates.


Author(s):  
Bella Struminskaya ◽  
Tobias Gummer

Abstract Survey researchers are often confronted with the question of how long to set the length of the field period. Longer fielding time might lead to greater participation yet requires survey managers to devote more of their time to data collection efforts. With the aim of facilitating the decision about the length of the field period, we investigated whether a longer fielding time reduces the risk of nonresponse bias to judge whether field periods can be ended earlier without endangering the performance of the survey. By using data from six waves of a probability-based mixed-mode (online and mail) panel of the German population, we analyzed whether the risk of nonresponse bias decreases over the field period by investigating how day-by-day coefficients of variation develop during the field period. We then determined the optimal cut-off points for each mode after which data collection can be terminated without increasing the risk of nonresponse bias and found that the optimal cut-off points differ by mode. Our study complements prior research by shifting the perspective in the investigation of the risk of nonresponse bias to panel data as well as to mixed-mode surveys, in particular. Our proposed method of using coefficients of variation to assess whether the risk of nonresponse bias decreases significantly with each additional day of fieldwork can aid survey practitioners in finding the optimal field period for their mixed-mode surveys.
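One common operationalization of this indicator (assumed here, since the abstract does not give the exact formula) is the coefficient of variation of subgroup response propensities, tracked day by day: a low, stable CV suggests additional fieldwork days are unlikely to reduce the risk of nonresponse bias. A sketch with invented fieldwork data:

```python
import statistics

def cv_of_propensities(subgroup_sizes, subgroup_respondents):
    """Coefficient of variation of subgroup response propensities --
    one indicator of nonresponse bias risk. Propensity for each
    subgroup = cumulative respondents / subgroup sample size."""
    props = [r / n for n, r in zip(subgroup_sizes, subgroup_respondents)]
    return statistics.pstdev(props) / statistics.mean(props)

# Toy data: cumulative respondent counts in three sample subgroups
# after day 5 and day 20 of the field period.
sizes = [100, 100, 100]
day5 = [30, 10, 20]    # propensities 0.30, 0.10, 0.20 -> uneven
day20 = [55, 45, 50]   # propensities 0.55, 0.45, 0.50 -> more even
early = cv_of_propensities(sizes, day5)
late = cv_of_propensities(sizes, day20)
# CV falls as fieldwork progresses: early ≈ 0.41, late ≈ 0.08
```

A cut-off rule in this spirit would end fieldwork once the day-over-day change in CV falls below a chosen tolerance, evaluated separately for the online and mail modes.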


2021 ◽  
Author(s):  
Samantha Estrada

There is a wealth of literature on nonresponse bias, as well as on sampling weights and other methods of assessing survey nonresponse; however, there is little research in applied settings such as higher education. Surveys administered to non-enrolled admitted students suffer from nonresponse; specifically, students who are not planning to enroll at a given institution may be less likely to respond to the survey. To fill this gap in the literature, this study uses data from a higher education institution that administers the Confirmed and Regretted Admitted Students Questionnaire (CRASQ) to examine the effects of using sampling weights to correct for nonresponse bias.


Field Methods ◽  
2021 ◽  
pp. 1525822X2199916
Author(s):  
Ashley K. Griggs ◽  
Amanda C. Smith ◽  
Marcus E. Berzofsky ◽  
Christine Lindquist ◽  
Christopher Krebs ◽  
...  

The proportion of web survey responses submitted from mobile devices such as smartphones is increasing steadily. This trend presents new methodological challenges because mobile responses are often associated with increased breakoffs, which, in turn, can increase nonresponse bias. Using data from a survey of college students with more than 20,000 respondents, response patterns are examined to identify which days and times the survey invitation and reminder emails were most likely to produce nonmobile responses. The findings provide guidance on the optimal timing for recruiting college student sample members via email to reduce their likelihood of responding from a mobile device, and potentially, breaking off.

