Where Should I Start? On Default Values for Slider Questions in Web Surveys

2018 ◽ Vol 37 (2) ◽ pp. 248-269
Author(s): Mingnan Liu, Frederick G. Conrad

Web surveys have expanded the set of options available to questionnaire designers. One new option is to administer questions that respondents answer by moving an on-screen slider to the position on a visual scale that best reflects their position on an underlying dimension. One attribute of sliders that is not well understood is how the position of the slider when the question is presented can affect responses, for better or worse. Yet the slider's default position is under the control of the designer and can potentially be exploited to maximize the quality of the responses (e.g., positioning the slider by default at the midpoint on the assumption that this is unbiased). Several studies in the methodology literature compare data collected via sliders and other methods, but relatively little attention has been given to the issue of default slider values. The current article reports findings from four web survey experiments (n = 3,744, 490, 697, and 902) that examine whether and how the default values of the slider influence responses. For 101-point questions (e.g., feeling thermometers), when the slider default value is set to 25, 50, 75, or 100, significantly more respondents choose that value as their answer, which seems unlikely to accurately reflect their actual position on the underlying dimension. For 21- and 7-point scales, there is no significant or consistent impact of the default slider value on answers. Completion times are also similar across default values for questions with scales of this type. When sliders do not appear by default at any value, that is, the respondent must click or touch the scale to activate the slider, the missing data rate is low for 21- and 7-point scales but higher for 101-point scales. Respondents' evaluations of the survey's difficulty and their satisfaction with the survey do not differ by default value. The implications and limitations of the findings are discussed.
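To make the two designs concrete, below is a minimal TypeScript/DOM sketch of a slider with a preset default versus one that must be activated by the respondent. It assumes a plain HTML range input rendered by the survey page; the element names, the CSS class, and the recordAnswer callback are hypothetical illustrations rather than the instrument used in the experiments.

```typescript
// Sketch of a 101-point slider with a preset default versus one that must be
// activated by the respondent. Browser environment assumed; names are illustrative.

function recordAnswer(questionId: string, value: number): void {
  // Stand-in for whatever the survey platform does with an answer.
  console.log(`answer for ${questionId}:`, value);
}

// Variant A: slider rendered at a default position (here the midpoint, 50).
// The respondent can move on without touching the handle, so 50 may be recorded
// even when it does not reflect their actual position on the dimension.
const withDefault = document.createElement("input");
withDefault.type = "range";
withDefault.min = "0";
withDefault.max = "100";
withDefault.value = "50";
withDefault.addEventListener("input", () =>
  recordAnswer("thermometer_default50", Number(withDefault.value))
);

// Variant B: no default; the handle is hidden until the respondent clicks or
// touches the scale, and the item stays missing until that happens.
let activated = false;
const noDefault = document.createElement("input");
noDefault.type = "range";
noDefault.min = "0";
noDefault.max = "100";
noDefault.classList.add("handle-hidden"); // CSS (not shown) hides the thumb until activation
noDefault.addEventListener("pointerdown", () => {
  activated = true;
  noDefault.classList.remove("handle-hidden");
});
noDefault.addEventListener("input", () => {
  if (activated) recordAnswer("thermometer_noDefault", Number(noDefault.value));
});
```

The trade-off the experiments document maps directly onto these two variants: variant A risks anchoring answers at the preset value, while variant B avoids that bias at the cost of more missing data on 101-point scales.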

2020 ◽ Vol 30 (6) ◽ pp. 1763-1781
Author(s): Louisa Ha, Chenjie Zhang, Weiwei Jiang

Purpose: Low response rates in web surveys and the use of different devices to enter web survey responses are the two main challenges to the response quality of web surveys. The purpose of this study is to compare the effects of using interviewers to recruit participants in computer-assisted self-administered interviews (CASI) vs computer-assisted personal interviews (CAPI), and of smartphones vs computers, on participation rate and web survey response quality.
Design/methodology/approach: Two field experiments using two similar media use studies of US college students were conducted to compare response quality across survey modes and response devices.
Findings: Response quality of computer entry was better than smartphone entry in both studies for open-ended and closed-ended question formats. The device effect was only significant on overall completion rate when interviewers were present.
Practical implications: Survey researchers are given guidance on how to conduct online surveys using different devices and how to choose question formats to maximize survey response quality. The benefits and limitations of using an interviewer to recruit participants and of smartphones as web survey response devices are discussed.
Social implications: The study shows how computer-assisted self-interviews and smartphones can improve response quality and participation among underprivileged groups.
Originality/value: This is the first study to compare response quality across question formats between CASI, e-mail-delivered online surveys, and CAPI. It demonstrates the importance of the human factor in creating a sense of obligation to improve response quality.


2019 ◽ pp. 089443931987913
Author(s): Angelica M. Maineri, Ivano Bison, Ruud Luijkx

This study explores some features of slider bars in the context of a multi-device web survey. Using data collected among the students of the University of Trento in 2015 and 2016 by means of two web surveys (N = 6,343 and 4,124) that included two experiments, we investigated the effect of the initial position of the handle and the presence of numeric labels on answers provided using slider bars. It emerged that the initial position of the handle affected answers and that the number of rounded scores increased with numeric feedback. Smartphone respondents appeared more sensitive to the initial position of the handle but also less affected by the presence of numeric labels, resulting in a lower tendency toward rounding. Outcomes on anchoring, however, were inconclusive. Overall, no relevant differences were detected between tablet and PC respondents. Understanding to what extent interactive and engaging tools such as slider bars can be successfully employed in multi-device surveys without affecting data quality is a key challenge for those who want to exploit the potential of web-based, multi-device data collection without undermining the quality of measurement.
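The numeric feedback manipulation can be illustrated with a short sketch in the same vein. It assumes a standard HTML range input and output element; the initial value and element names are hypothetical, not taken from the Trento instrument.

```typescript
// Sketch of a slider with a live numeric label ("numeric feedback" condition).
// Browser environment assumed; names and values are illustrative.
const slider = document.createElement("input");
slider.type = "range";
slider.min = "0";
slider.max = "100";
slider.value = "30"; // initial handle position, which is under the designer's control

const label = document.createElement("output");
label.textContent = slider.value;

// Showing the exact score while the handle moves is the cue associated with
// more rounded answers in the experiments reported above.
slider.addEventListener("input", () => {
  label.textContent = slider.value;
});
```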


Author(s): Tanja Kunz, Franziska Quoß, Tobias Gummer

Narrative open-ended questions are suitable for gathering detailed information without limiting respondents to a predefined set of response categories. However, despite efforts to improve the quality of open-ended responses using different verbal and visual design features, respondents are often unwilling to expend effort on substantive and comprehensive responses. Based on a web survey experiment conducted with opt-in panelists in Germany, we test whether placeholder text (i.e., lorem ipsum) in the answer box of a narrative open-ended question can be used as a visual stimulus to promote high-quality responses without discouraging respondents from answering the question. We find that, although placeholder texts that suggest long and extensive responses elicit more extensive responses, they also result in longer response times and less substantive responses. As the disadvantages of such lengthy placeholder texts thus appear to outweigh their advantages, we advise against using them. We further find that shorter placeholder texts do not provide any additional benefits. These findings also suggest that any kind of visual design feature should always be tested thoroughly before use.
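As a concrete illustration, the following minimal sketch attaches placeholder text to the answer box of a narrative open-ended question. It assumes a standard HTML textarea, and the wording shown is only a stand-in for the lorem ipsum texts tested in the experiment.

```typescript
// Sketch of a narrative open-ended answer box with a (long) placeholder text.
// Browser environment assumed; the placeholder wording is illustrative only.
const answerBox = document.createElement("textarea");
answerBox.rows = 8;
answerBox.placeholder =
  "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor " +
  "incididunt ut labore et dolore magna aliqua."; // visible until the respondent types

// The placeholder is purely a visual cue; reading the answer returns only what was typed.
const answer: string = answerBox.value;
```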


2019 ◽ Vol 62 (1) ◽ pp. 18-26
Author(s): Tobias Gummer, Vera Vogel, Tanja Kunz, Joss Roßmann

Graphical symbols such as smileys and other emoticons are prevalent in everyday life. Paralleling their increasing use in private text messaging and even in business communication, smileys and other emoticons have also been used more frequently in surveys. So far, only a few studies have tested the effects of smiley faces as rating scale labels on the response process in web surveys. This study compared smiley face scales with verbally labeled rating scales in three web survey experiments. We found no convincing evidence that using smiley face scales altered response behavior, with the exception that these scales increased response times, which indicates a higher response burden. Based on our findings, we would advise against using smiley face scales unless they have been sufficiently tested and convincing reasons exist for using them.


2017 ◽ Vol 36 (5) ◽ pp. 542-556
Author(s): Roger Tourangeau, Hanyu Sun, Ting Yan, Aaron Maitland, Gonzalo Rivero, ...

Does completing a web survey on a smartphone or tablet computer reduce the quality of the data obtained compared to completing the survey on a laptop computer? This is an important question, since a growing proportion of web surveys are done on smartphones and tablets. Several earlier studies have attempted to gauge the effects of the switch from personal computers to mobile devices on data quality. We carried out a field experiment in eight counties around the United States that compared responses obtained by smartphones, tablets, and laptop computers. We examined a range of data quality measures including completion times, rates of missing data, straightlining, and the reliability and validity of scale responses. A unique feature of our study design is that it minimized selection effects; we provided the randomly determined device on which respondents completed the survey after they agreed to take part. As a result, respondents may have been using a device (e.g., a smartphone) for the first time. However, like many of the prior studies examining mobile devices, we find few effects of the type of device on data quality.


2017 ◽ Vol 36 (2) ◽ pp. 231-250
Author(s): Bart Meuleman, Arnim Langer, Annelies G. Blom

Because research on the impact of web survey incentives has focused exclusively on Western settings, it is unclear to what extent current insights translate and generalize to non-Western societies, which are usually characterized by very different economic conditions, cultural traditions, and survey climates. The current article presents the results of a web survey incentive experiment among almost 4,440 Ghanaian university students who were offered conditional and unconditional incentives of different values (in the form of telephone credit). Our analyses partly replicate Western findings: higher-value incentives produce higher participation rates, and unconditional incentives outperform conditional ones in the lower-value conditions. For relatively high incentives, however, conditional incentives outperform unconditional ones. No differential effects of incentives on response quality were found.


2016 ◽ Vol 35 (5) ◽ pp. 654-665
Author(s): Jonathan Mendelson, Jennifer Lee Gibson, Jennifer Romano-Bergstrom

Videos are often used in web surveys to assess attitudes. While including videos may allow researchers to test immediate reactions, there may be issues associated with displaying videos that are overlooked. In this article, we examine the effects of using video stimuli on responses in a probability-based web survey. Specifically, we evaluate the association between demographics, mobile device usage, and the ability to view videos; differences in ad recall based on whether respondents saw a video or still images of the video; whether respondents’ complete viewing of videos is related to presentation order; and the data quality of follow-up questions to the videos as a function of presentation order and complete viewing. Overall, we found that respondents using mobile browsers were less likely to be able to view videos in the survey. Those who could view videos were more likely to indicate recall compared to those who viewed images, and videos that were shown later in the survey were viewed in their entirety less frequently than those shown earlier. These results directly pertain to the legitimacy of using videos in web surveys to gather data about attitudes.


2018 ◽ Vol 37 (6) ◽ pp. 750-765
Author(s): Joseph W. Sakshaug, Basha Vicari, Mick P. Couper

Identifying strategies that maximize participation rates in population-based web surveys is of critical interest to survey researchers. While much of this interest has focused on surveys of persons and households, there is growing interest in surveys of establishments. However, there is a lack of experimental evidence on strategies for optimizing participation rates in web surveys of establishments. To address this research gap, we conducted a contact mode experiment in which establishments selected to participate in a web survey were randomized to receive the survey invitation with login details and a subsequent reminder using a fully crossed sequence of paper and e-mail contacts. We find that a paper invitation followed by a paper reminder achieves the highest response rate and the smallest aggregate nonresponse bias across all possible paper/e-mail contact sequences, but a close runner-up was the e-mail invitation and paper reminder sequence, which achieved a similarly high response rate and low aggregate nonresponse bias at about half the per-respondent cost. Following up undeliverable e-mail invitations with supplementary paper contacts yielded further reductions in nonresponse bias and costs. Finally, for establishments without an available e-mail address, we show that enclosing an e-mail address request form with a prenotification letter is not effective from a response rate, nonresponse bias, or cost perspective.


2021 ◽ Vol 11 (22) ◽ pp. 11034
Author(s): Evgeny Nikulchev, Alexander Gusev, Dmitry Ilin, Nurziya Gazanova, Sergey Malykh

Web surveys are very popular on the Internet. They are widely used to gather customer opinions about Internet services, for sociological and psychological research, and as part of knowledge testing systems in electronic learning. When conducting web surveys, one of the issues to consider is respondents' authenticity throughout the entire survey process. We took 20,000 responses to an online questionnaire as experimental data. The survey took about 45 min on average. We did not take the given answers into account; we considered only the response time to the first question on each page of the survey interface, that is, only the users' reaction time. Data analysis showed that respondents get used to the interface elements and want to finish a long survey as soon as possible, which leads to quicker reactions. Based on the data, we built two neural network models that identify records in which the respondent's authenticity was violated or the respondent acted as a random clicker. The amount of data allows us to conclude that the identified dependencies are widely applicable.
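The abstract does not give the model details, but the input it describes (the reaction time to the first question on each page) can be sketched as a feature-extraction step. The heuristic flag below is a deliberately simplified stand-in for the paper's neural network models, and all field names and the threshold are assumptions made for illustration.

```typescript
// Sketch of turning per-page first-question reaction times into features that a
// classifier (the paper used neural networks) could consume. Names and the
// threshold are illustrative assumptions, not values from the study.

interface PageTiming {
  pageIndex: number;       // position of the page within the survey
  firstQuestionMs: number; // reaction time to the first question on that page
}

interface RespondentFeatures {
  meanMs: number;  // average reaction time across pages
  slopeMs: number; // change in reaction time per page; habituation shows up as a negative slope
}

function extractFeatures(timings: PageTiming[]): RespondentFeatures {
  const n = timings.length;
  const meanX = timings.reduce((s, t) => s + t.pageIndex, 0) / n;
  const meanY = timings.reduce((s, t) => s + t.firstQuestionMs, 0) / n;
  const num = timings.reduce(
    (s, t) => s + (t.pageIndex - meanX) * (t.firstQuestionMs - meanY), 0);
  const den = timings.reduce((s, t) => s + (t.pageIndex - meanX) ** 2, 0);
  return { meanMs: meanY, slopeMs: den === 0 ? 0 : num / den };
}

// Crude stand-in for the trained models: flag respondents whose average reaction
// is implausibly fast as likely random clickers.
function looksLikeRandomClicker(f: RespondentFeatures, minPlausibleMs = 800): boolean {
  return f.meanMs < minPlausibleMs;
}
```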

