Instructional manipulation
Recently Published Documents


TOTAL DOCUMENTS: 18 (five years: 4)

H-INDEX: 7 (five years: 0)

2021 · pp. 147078532110550 · Author(s): Chad Saunders, Jack Kulchitsky

A key challenge for self-administered questionnaires (SaQ) is ensuring quality responses in the absence of a marketing professional providing direct guidance on issues as they arise for respondents. While numerous approaches to improving SaQ response quality have been investigated, including validity checks, interactive design, and instructional manipulation checks, these primarily target situations where the expected responses are factual or stated preferences. These interventions have not been evaluated in scenarios that require higher levels of engagement and judgment from respondents. While professional marketers are guided by codes of conduct, there is no equivalent code of conduct for SaQ respondents. This is particularly salient for SaQ that require higher levels of reflection and judgment: in the absence of professional guidance, respondents rely more on their individual ethical ideologies and experience, leaving SaQ responses potentially devoid of the standards that normally set data quality expectations for marketing professionals. As marketing professionals cannot provide guidance directly in a SaQ context, the approach used in this study is to offer varying levels of professional marketing guidance indirectly, through specific code of conduct reminders that are easily consumable by SaQ participants. We demonstrate that reminders and ethical ideologies moderate the relationship between the participant's experience with SaQ and compliance with a code of conduct. Specifically, SaQ respondents produce fewer code of conduct infractions when receiving reminders than the control group, and compliance improves further when the reminders coincide with the SaQ task. The paper concludes with implications for theory and practice.
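The moderation claim above maps onto a regression with interaction terms. Below is a minimal sketch of such a test, assuming a data frame with hypothetical columns (`infractions`, `saq_experience`, a `reminder` condition, and an `idealism` score standing in for ethical ideology); none of these names or the file come from the paper itself.

```python
# Sketch of a moderation test: do reminders and ethical ideology moderate
# the effect of SaQ experience on code-of-conduct infractions?
# Variable names and the CSV file are illustrative assumptions only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("saq_responses.csv")  # hypothetical data file

# Reminder condition (e.g., control / generic / task-timed) interacted with
# prior SaQ experience; idealism enters as a second moderator, mirroring
# the ethical-ideology effect described in the abstract.
model = smf.ols(
    "infractions ~ saq_experience * C(reminder) + saq_experience * idealism",
    data=df,
).fit()
print(model.summary())  # the interaction terms carry the moderation evidence
```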


2021 · Vol. 63 (4) · pp. 408-415 · Author(s): Maria Rubio Juan, Melanie Revilla

The presence of satisficers among survey respondents threatens survey data quality. To identify such respondents, Oppenheimer et al. developed the Instructional Manipulation Check (IMC), which has been used as a tool to exclude observations from analyses. However, this practice has raised concerns about its effects on the external validity and the substantive conclusions of studies that exclude respondents who fail an IMC. Thus, more research is needed on how respondents who pass versus fail an IMC differ on sociodemographic and attitudinal variables. This study compares respondents who passed versus failed an IMC in both descriptive and causal analyses based on structural equation modeling (SEM), using data from an online survey implemented in Spain in 2019. These data were previously analyzed by Rubio Juan and Revilla without taking the results of the IMC into account. We find that those who passed the IMC differ significantly from those who failed on two sociodemographic and five attitudinal variables, out of 18 variables compared. Moreover, in terms of substantive conclusions, differences between those who passed and failed the IMC vary depending on the specific variables under study.
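The descriptive part of this comparison amounts to per-variable group tests between passers and failers. A minimal sketch under assumed column names (`passed_imc` as a boolean flag; the listed variables are stand-ins, not the paper's actual 18 items):

```python
# Compare IMC passers vs. failers on numeric and categorical variables.
# Column names and the CSV file are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_2019_spain.csv")  # hypothetical data file
passed = df[df["passed_imc"]]
failed = df[~df["passed_imc"]]

# Numeric variables (e.g., attitudinal scales): independent-samples t-test.
for col in ["trust_score", "env_concern"]:
    t, p = stats.ttest_ind(passed[col].dropna(), failed[col].dropna())
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")

# Categorical variables (e.g., sociodemographics): chi-square test.
for col in ["gender", "education"]:
    table = pd.crosstab(df["passed_imc"], df[col])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    print(f"{col}: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```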


2017 · Vol. 36 (3) · pp. 349-368 · Author(s): Melanie Revilla, Mick P. Couper

Much research has compared grid and item-by-item question formats. However, the results are mixed, and more research is needed, especially now that a significant proportion of respondents answer using smartphones. In this study, we implemented an experiment with seven groups (n = 1,476), varying the device used (PC or smartphone), the presentation of the questions (grid, item-by-item vertical, item-by-item horizontal), and, for smartphones only, the visibility of the "next" button (always visible, or visible only at the end of the page after scrolling down). The survey was conducted by the Netquest online fieldwork company in Spain in 2016. We examined several outcomes for three sets of questions, covering respondent behavior (completion time, lost focus, answer changes, and screen orientation) and data quality (item missing data, nonsubstantive responses, instructional manipulation check failure, and nondifferentiation). The most striking difference is for the placement of the next button in the smartphone item-by-item conditions: when the button is always visible, item missing data are substantially higher.
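An outcome like item missing data reduces to a per-condition rate. A minimal sketch, assuming a long-format response table with hypothetical columns (`condition` for the seven groups, `answer` with NaN marking skipped items); this is not Netquest's actual data format.

```python
# Item missing-data rate per experimental condition (seven groups).
# Data layout and column names are illustrative assumptions.
import pandas as pd

responses = pd.read_csv("grid_experiment.csv")  # hypothetical data file

# Share of items left unanswered, per condition.
missing_rate = (
    responses.assign(missing=responses["answer"].isna())
    .groupby("condition")["missing"]
    .mean()
    .sort_values(ascending=False)
)
# The abstract's finding would show up as a high rate for the smartphone
# item-by-item conditions with an always-visible "next" button.
print(missing_rate)
```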


2017 · Author(s): Franki Y. H. Kung, Navio Kwok, Douglas Brown

Attention checks have become increasingly popular in survey research as a means to filter out careless respondents. Despite their widespread use, little research has empirically tested the impact of attention checks on scale validity. In fact, because attention checks can induce a more deliberative mindset in survey respondents, they may change the way respondents answer survey questions, posing a threat to scale validity. In two studies (N = 816), we tested this hypothesis by examining whether common attention checks, namely instructed-response items (Study 1) and an instructional manipulation check (Study 2), affect responses to a well-validated management scale. Results showed no evidence that they affect scale validity, either in reported scale means or in tests of measurement invariance. These findings allow researchers to justify the use of attention checks without compromising scale validity, and encourage future research to examine other dynamics between survey characteristics and respondents to advance the use of survey methods.
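The scale-mean half of this test is simple to reproduce in outline (the measurement-invariance half requires multigroup SEM and is omitted here). A minimal sketch with hypothetical column names and an assumed 8-item scale; neither reflects the paper's actual materials.

```python
# Do attention checks shift scale means? Compare respondents randomized
# to an attention-check condition against a no-check control group.
# Column names, the item count, and the CSV are illustrative assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("attention_check_experiment.csv")  # hypothetical data file

# Per-respondent mean of the management scale items (assumed 8 items).
scale_items = [f"mgmt_{i}" for i in range(1, 9)]
df["scale_mean"] = df[scale_items].mean(axis=1)

check = df.loc[df["condition"] == "attention_check", "scale_mean"]
control = df.loc[df["condition"] == "control", "scale_mean"]
t, p = stats.ttest_ind(check, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # a null result would echo the paper's finding
```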

