Assessing general attentiveness to online panel surveys: the use of instructional manipulation checks

Author(s):  
Riccardo Ladini
Addiction ◽  
2009 ◽  
Vol 104 (10) ◽  
pp. 1641-1645 ◽  
Author(s):  
Renske Spijkerman ◽  
Ronald Knibbe ◽  
Kim Knoops ◽  
Dike van de Mheen ◽  
Regina van den Eijnden

2016 ◽  
Vol 35 (4) ◽  
pp. 498-520 ◽  
Author(s):  
Annelies G. Blom ◽  
Jessica M. E. Herzing ◽  
Carina Cornesse ◽  
Joseph W. Sakshaug ◽  
Ulrich Krieger ◽  
...  

The past decade has seen a rise in the use of online panels for conducting survey research. However, the popularity of online panels, largely driven by relatively low implementation costs and high rates of Internet penetration, has been met with criticism regarding their ability to accurately represent their intended target populations. This criticism largely stems from the fact that (1) non-Internet (or offline) households, despite their relatively small number, constitute a highly selective group unaccounted for in Internet panels, and (2) the predominant use of nonprobability-based recruitment methods likely introduces a self-selection bias that further compromises the representativeness of online panels. In response to these criticisms, some online panel studies have taken steps to recruit probability-based samples of individuals and to provide them with the means to participate online. Using data from one such study, the German Internet Panel, this article investigates the impact of including offline households in the sample on the representativeness of the panel. Consistent with studies in other countries, we find that the exclusion of offline households produces significant coverage biases in online panel surveys, and that including these households improves the representativeness of the survey despite their lower propensity to respond.
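The coverage-bias finding above can be illustrated with a toy calculation. All numbers below are invented for illustration, and average absolute relative bias against external benchmarks is just one common representativeness metric; it is not necessarily the measure used in the German Internet Panel study.

```python
# Hypothetical population benchmark shares (e.g. from a census) and
# survey estimates for three demographic categories. Values are
# illustrative only, not taken from the study.
population   = {"age_65_plus": 0.21, "low_education": 0.30, "rural": 0.25}
online_only  = {"age_65_plus": 0.14, "low_education": 0.22, "rural": 0.21}
with_offline = {"age_65_plus": 0.19, "low_education": 0.28, "rural": 0.24}

def avg_abs_rel_bias(estimates, benchmarks):
    """Average absolute relative bias of survey estimates vs. benchmarks."""
    biases = [abs(estimates[k] - benchmarks[k]) / benchmarks[k]
              for k in benchmarks]
    return sum(biases) / len(biases)

print(round(avg_abs_rel_bias(online_only, population), 3))   # → 0.253
print(round(avg_abs_rel_bias(with_offline, population), 3))  # → 0.067
```

In this fabricated example, adding the (under-represented) offline-household categories pulls every estimate closer to its benchmark, so the average bias drops, mirroring the direction of the study's finding.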


2021 ◽  
pp. 147078532110550
Author(s):  
Chad Saunders ◽  
Jack Kulchitsky

A key challenge for self-administered questionnaires (SaQ) is ensuring quality responses in the absence of a marketing professional providing direct guidance on issues as they arise for respondents. While numerous approaches to improving SaQ response quality have been investigated, including validity checks, interactive design, and instructional manipulation checks, these are primarily targeted at situations where the expected responses are factual in nature or stated preferences. These interventions have not been evaluated in scenarios that require higher levels of engagement and judgment from respondents. While professional marketers are guided by codes of conduct, there is no equivalent code of conduct for SaQ respondents. This is particularly salient for SaQ that require higher levels of reflection and judgment: in the absence of professional guidance, respondents rely more on their individual ethical ideologies and experience, leaving SaQ responses potentially devoid of the standards that normally set expectations around data quality for marketing professionals. Because marketing professionals cannot provide guidance directly in a SaQ context, the approach used in this study is to offer varying levels of professional marketing guidance indirectly, through specific code-of-conduct reminders that are easily consumable by SaQ participants. We demonstrate that reminders and ethical ideologies moderate the relationship between a participant's experience with SaQ and compliance with a code of conduct. Specifically, SaQ respondents produce fewer code-of-conduct infractions when receiving reminders than the control group, and compliance improves further when the reminders coincide with the SaQ task. The paper concludes with implications for theory and practice.


2020 ◽  
Vol 25 (4) ◽  
pp. 489-503
Author(s):  
Vitaly Brazhkin

Purpose: The purpose of this paper is to provide a comprehensive review of the respondents' fraud phenomenon in online panel surveys, delineate data quality issues in surveys of broad versus narrow populations, alert fellow researchers to the higher incidence of respondents' fraud in online panel surveys of narrow populations, such as logistics professionals, and recommend ways to protect the quality of data received from such surveys.
Design/methodology/approach: This general review paper has two parts, namely descriptive and instructional. The current state of online survey and panel data use in supply chain research is examined first through a survey-method literature review. Then, a more focused understanding of the phenomenon of fraud in surveys is provided through an analysis of online panel industry literature and psychological academic literature. Common survey design and data cleaning recommendations are critically assessed for their applicability to narrow populations. A survey of warehouse professionals is used to illustrate fraud detection techniques and glean additional, supply-chain-specific data protection recommendations.
Findings: Surveys of narrow populations, such as those typically targeted by supply chain researchers, are much more prone to respondents' fraud. To protect and clean survey data, supply chain researchers need to use many measures that differ from those commonly recommended in the methodological survey literature.
Research limitations/implications: For the first time, the need to distinguish between narrow and broad population surveys has been stated with respect to data quality issues. The confusion and previously reported "mixed results" from literature reviews on the subject have been explained, and a clear direction for future research is suggested: the two categories should be considered separately.
Practical implications: Specific fraud protection advice is provided to supply chain researchers on strategic choices and specific aspects of all phases of surveying narrow populations, namely survey preparation, administration and data cleaning.
Originality/value: This paper can greatly benefit researchers in several ways. It provides a comprehensive review and analysis of respondents' fraud in online surveys, an issue poorly understood and rarely addressed in academic research. Drawing on literature from several fields, it offers, for the first time in the literature, a systematic set of recommendations for narrow population surveys by clearly contrasting them with general population surveys.
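The data-cleaning side of the fraud-protection advice above can be sketched with two widely used screening checks: speeding (implausibly fast completion) and straightlining (identical answers across a grid of items). This is a minimal illustration; the respondent records and the 180-second threshold are hypothetical, and real screening combines many more signals (trap questions, IP and device checks, open-ended answer quality) than shown here.

```python
# Illustrative respondent records: total completion time in seconds and
# answers to a hypothetical 5-item rating grid (1-5 scale).
respondents = [
    {"id": 1, "seconds": 540, "grid": [4, 2, 5, 3, 4]},
    {"id": 2, "seconds": 95,  "grid": [3, 3, 3, 3, 3]},  # fast AND straightlined
    {"id": 3, "seconds": 610, "grid": [2, 2, 2, 2, 2]},  # straightlined only
]

MIN_SECONDS = 180  # hypothetical floor for a plausible completion time

def quality_flags(r):
    """Return the list of quality-check flags raised by one respondent."""
    flags = []
    if r["seconds"] < MIN_SECONDS:
        flags.append("speeding")
    if len(set(r["grid"])) == 1:  # no variation at all across the grid
        flags.append("straightlining")
    return flags

flagged = {r["id"]: quality_flags(r) for r in respondents}
# flagged → {1: [], 2: ['speeding', 'straightlining'], 3: ['straightlining']}
```

Respondents raising multiple flags are typically reviewed or dropped before analysis; a single flag is usually treated as a warning rather than proof of fraud, since honest respondents can also answer quickly or uniformly.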


2019 ◽  
Vol 10 (4) ◽  
pp. 433-452
Author(s):  
Jessica M.E. Herzing ◽  
Caroline Vandenplas ◽  
Julian B. Axenfeld

Longitudinal or panel surveys suffer from panel attrition, which may result in biased estimates. Online panels are no exception to this phenomenon but, thanks to the real-time availability of paradata, offer great possibilities for monitoring and managing the data-collection phase and its response-enhancement features (such as reminders). This paper presents a data-driven approach to monitoring the data-collection phase and informing the adjustment of response-enhancement features during data collection across online panel waves, taking into account the characteristics of an ongoing panel wave. For this purpose, we study the evolution of the daily response proportion in each wave of a probability-based online panel. Using multilevel models, we predict the data-collection evolution per wave day. In our example, the functional form of the data-collection evolution is quintic. The characteristics affecting the shape of the data-collection evolution are those of the specific wave day and not of the panel wave itself. In addition, we simulate the monitoring of the daily response proportion of one panel wave and find that the timing of sending reminders could be adjusted after 20 consecutive panel waves to keep the data-collection phase efficient. Our results demonstrate the importance of re-evaluating the characteristics of the data-collection phase, such as the timing of reminders, across the lifetime of an online panel to keep the fieldwork efficient.
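The quintic functional form mentioned above can be sketched with a toy curve fit. The 14-day field period and daily response proportions below are invented, and a plain `numpy.polyfit` stands in for the paper's multilevel-modelling approach, which this sketch does not replicate.

```python
import numpy as np

# Hypothetical daily response proportions over a 14-day field period:
# a burst after the invitation, a smaller bump after a day-7 reminder,
# then decay toward zero.
days = np.arange(1, 15)
daily_response = np.array([0.18, 0.10, 0.06, 0.04, 0.03, 0.02, 0.06,
                           0.04, 0.02, 0.015, 0.01, 0.008, 0.006, 0.005])

# Fit a fifth-degree (quintic) polynomial to the daily proportions, as in
# the paper's example, and evaluate the fitted curve over the field period.
coeffs = np.polyfit(days, daily_response, deg=5)   # 6 coefficients
fitted = np.polyval(coeffs, days)
```

Monitoring in this spirit would compare each new day's observed proportion against the fitted curve from earlier waves; a day falling well below the curve could trigger an earlier reminder.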

