Explaining Unit Nonresponse in Online Panel Surveys: An Application of the Extended Theory of Planned Behavior

2011 · Vol 41 (12) · pp. 2999-3025
Author(s): Sigrid Haunberger

Addiction
2009 · Vol 104 (10) · pp. 1641-1645
Author(s): Renske Spijkerman, Ronald Knibbe, Kim Knoops, Dike van de Mheen, Regina van den Eijnden

2016 · Vol 35 (4) · pp. 498-520
Author(s): Annelies G. Blom, Jessica M. E. Herzing, Carina Cornesse, Joseph W. Sakshaug, Ulrich Krieger, ...

The past decade has seen a rise in the use of online panels for conducting survey research. However, the popularity of online panels, largely driven by relatively low implementation costs and high rates of Internet penetration, has been met with criticism regarding their ability to accurately represent their intended target populations. This criticism largely stems from the facts that (1) non-Internet (or offline) households, despite their relatively small share of the population, constitute a highly selective group unaccounted for in Internet panels, and (2) the predominant use of nonprobability-based recruitment methods likely introduces a self-selection bias that further compromises the representativeness of online panels. In response to these criticisms, some online panel studies have taken steps to recruit probability-based samples of individuals and to provide them with the means to participate online. Using data from one such study, the German Internet Panel, this article investigates the impact of including offline households in the sample on the representativeness of the panel. Consistent with studies in other countries, we find that the exclusion of offline households produces significant coverage biases in online panel surveys, and that the inclusion of these households improves the representativeness of the survey despite their lower propensity to respond.
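A common way to quantify the kind of coverage bias and representativeness gain described here is the average absolute relative bias of panel estimates against external benchmarks. The sketch below is a minimal illustration of that comparison; the category names and all shares are invented for demonstration and are not figures from the German Internet Panel.

```python
# Hypothetical illustration of a coverage-bias comparison.
# Benchmark shares (e.g., from census data) and panel estimates
# are invented numbers.

def avg_abs_rel_bias(panel_shares, benchmark_shares):
    """Average absolute relative bias across benchmark categories."""
    return sum(
        abs(panel_shares[k] - benchmark_shares[k]) / benchmark_shares[k]
        for k in benchmark_shares
    ) / len(benchmark_shares)

benchmark   = {"age_65_plus": 0.21, "low_education": 0.35, "rural": 0.23}
online_only = {"age_65_plus": 0.12, "low_education": 0.24, "rural": 0.19}  # offline households excluded
full_sample = {"age_65_plus": 0.18, "low_education": 0.31, "rural": 0.22}  # offline households included

print(f"AARB, online-only sample: {avg_abs_rel_bias(online_only, benchmark):.3f}")
print(f"AARB, full sample:        {avg_abs_rel_bias(full_sample, benchmark):.3f}")
```

A lower average absolute relative bias for the full sample would reflect the representativeness gain the abstract reports from including offline households.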


2018 · Vol 37 (3) · pp. 404-424
Author(s): Jessica M. E. Herzing, Annelies G. Blom

Research has shown that the non-Internet population is hesitant to respond to online survey requests. However, subgroups within the Internet population that have low digital affinity may also hesitate to respond to online surveys. This latter issue has received little attention from scholars despite its potentially detrimental effects on the external validity of online survey data. In this article, we explore the extent to which a person's digital affinity contributes to nonresponse bias in the German Internet Panel, a probability-based online panel of the general population. Using a multidimensional classification of digital affinity, we predict response to the first online panel wave and participation across panel waves. We find that persons belonging to different classes of digital affinity have systematically different sociodemographic characteristics and show different voting behavior. In addition, we find that initial response propensities vary by class of digital affinity, as do attrition patterns over time. Our results demonstrate the importance of digital affinity for reducing nonresponse bias during fieldwork and for post-survey adjustments.
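The response-propensity modelling described here can be illustrated with a simple logistic regression of wave-1 response on a categorical digital-affinity class. The sketch below uses simulated data and invented variable names; the study itself derives its classes from a multidimensional classification of digital affinity, which this toy model does not reproduce.

```python
# Hypothetical sketch of a response-propensity model: wave-1 response
# predicted from a categorical digital-affinity class plus age.
# Data and variable names are invented, not the study's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "affinity_class": rng.choice(["low", "medium", "high"], size=n),
    "age": rng.integers(18, 80, size=n),
})
# Simulate lower response propensity for the low-affinity class.
base = df["affinity_class"].map({"low": -0.8, "medium": 0.2, "high": 0.9})
p = 1 / (1 + np.exp(-(base - 0.01 * (df["age"] - 45))))
df["responded"] = rng.binomial(1, p)

# Logistic regression with the low-affinity class as the reference category.
model = smf.logit("responded ~ C(affinity_class, Treatment('low')) + age", data=df).fit()
print(model.summary())
```

Positive coefficients for the medium- and high-affinity classes relative to the low-affinity reference would correspond to the pattern of initial response propensities the abstract describes.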


2020 · Vol 25 (4) · pp. 489-503
Author(s): Vitaly Brazhkin

Purpose
The purpose of this paper is to provide a comprehensive review of the respondents' fraud phenomenon in online panel surveys, delineate data quality issues in surveys of broad and narrow populations, alert fellow researchers to the higher incidence of respondents' fraud in online panel surveys of narrow populations, such as logistics professionals, and recommend ways to protect the quality of data received from such surveys.

Design/methodology/approach
This general review paper has two parts, namely, descriptive and instructional. The current state of online survey and panel data use in supply chain research is examined first through a survey method literature review. Then, a more focused understanding of the phenomenon of fraud in surveys is provided through an analysis of online panel industry literature and psychological academic literature. Common survey design and data cleaning recommendations are critically assessed for their applicability to narrow populations. A survey of warehouse professionals is used to illustrate fraud detection techniques and glean additional, supply chain specific data protection recommendations.

Findings
Surveys of narrow populations, such as those typically targeted by supply chain researchers, are much more prone to respondents' fraud. To protect and clean survey data, supply chain researchers need to use many measures that differ from those commonly recommended in the methodological survey literature.

Research limitations/implications
For the first time, the need to distinguish between narrow and broad population surveys has been stated with respect to data quality issues. The confusion and previously reported "mixed results" from literature reviews on the subject are explained, and a clear direction for future research is suggested: the two categories should be considered separately.

Practical implications
Specific fraud protection advice is provided to supply chain researchers on strategic choices and specific aspects of all phases of surveying narrow populations, namely, survey preparation, administration and data cleaning.

Originality/value
This paper can greatly benefit researchers in several ways. It provides a comprehensive review and analysis of respondents' fraud in online surveys, an issue poorly understood and rarely addressed in academic research. Drawing on literature from several fields, it offers, for the first time in the literature, a systematic set of recommendations for narrow population surveys by clearly contrasting them with general population surveys.
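Two of the data-cleaning checks commonly recommended in this literature, flagging speeders (implausibly fast completions) and straight-liners (identical answers across a grid of items), can be sketched as follows. The thresholds and column names are illustrative assumptions, not rules taken from the paper.

```python
# Minimal sketch of two common survey data-cleaning checks.
# Thresholds and column names are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "duration_sec": [610, 95, 840, 120, 700],
    "q1": [4, 3, 5, 2, 2],
    "q2": [4, 3, 1, 2, 5],
    "q3": [4, 3, 2, 2, 4],
    "q4": [4, 3, 4, 2, 1],
})

grid = ["q1", "q2", "q3", "q4"]
median_duration = df["duration_sec"].median()

# Speeders: completion time below some fraction of the median duration.
df["speeder"] = df["duration_sec"] < 0.3 * median_duration
# Straight-liners: zero variation across the item grid.
df["straightliner"] = df[grid].nunique(axis=1) == 1

print(df[["duration_sec", "speeder", "straightliner"]])
```

As the paper argues, such generic checks may need to be supplemented or replaced when surveying narrow populations, where fraudulent respondents can be both more frequent and harder to detect with speed and straight-lining rules alone.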


2019 · Vol 10 (4) · pp. 433-452
Author(s): Jessica M. E. Herzing, Caroline Vandenplas, Julian B. Axenfeld

Longitudinal and panel surveys suffer from panel attrition, which may result in biased estimates. Online panels are no exception, but the real-time availability of paradata offers great possibilities for monitoring and managing the data-collection phase and for adjusting response-enhancement features (such as reminders). This paper presents a data-driven approach to monitoring the data-collection phase and informing the adjustment of response-enhancement features during data collection across online panel waves, taking into account the characteristics of the ongoing panel wave. For this purpose, we study the evolution of the daily response proportion in each wave of a probability-based online panel. Using multilevel models, we predict the data-collection evolution per wave day. In our example, the functional form of the data-collection evolution is quintic. The characteristics affecting the shape of the data-collection evolution are those of the specific wave day rather than of the panel wave itself. In addition, we simulate the monitoring of the daily response proportion of one panel wave and find that the timing of reminders could be adjusted after 20 consecutive panel waves to keep the data-collection phase efficient. Our results demonstrate the importance of re-evaluating the characteristics of the data-collection phase, such as the timing of reminders, across the lifetime of an online panel to keep the fieldwork efficient.
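The quintic functional form reported for the data-collection evolution can be illustrated with an ordinary fifth-degree polynomial fit to one wave's daily response proportions. The sketch below uses simulated data; the paper's actual analysis rests on multilevel models estimated across panel waves, which this single-wave fit does not reproduce.

```python
# Illustrative quintic fit to one wave's daily response proportions.
# The "observed" proportions are simulated: high at launch, with a
# bump around day 10 standing in for a reminder effect.
import numpy as np

days = np.arange(1, 31)  # field days of one panel wave
observed = (0.30 * np.exp(-0.4 * days)
            + 0.08 * np.exp(-0.5 * (days - 10) ** 2)
            + 0.005)

coeffs = np.polyfit(days, observed, deg=5)  # quintic functional form
fitted = np.polyval(coeffs, days)

for d in (1, 5, 10, 20, 30):
    print(f"day {d:2d}: observed={observed[d-1]:.4f}  fitted={fitted[d-1]:.4f}")
```

Monitoring would then compare an ongoing wave's daily response proportions against such a fitted curve and, as the abstract suggests, adjust the timing of reminders once enough waves have accumulated to make the prediction stable.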

