Are There Differences Depending on the Device Used to Complete a Web Survey (PC or Smartphone) for Order-by-click Questions?

Field Methods ◽  
2017 ◽  
Vol 29 (3) ◽  
pp. 266-280 ◽  
Author(s):  
Melanie Revilla

The development of web surveys has been accompanied by the emergence of new scales that take advantage of the visual and interactive features provided by the Internet, such as drop-down menus, sliders, drag-and-drop, or order-by-click scales. This article focuses on order-by-click scales, studying the comparability of the data obtained for this scale when answered on PCs versus smartphones. I used data from an experiment in which panelists from the Netquest opt-in panel in Spain were randomly assigned to a PC, smartphone-optimized, or smartphone non-optimized version of the same questionnaire in two waves. I found significant differences attributable to device and optimization for at least some indicators and questions.

2021 ◽  
Vol 11 (22) ◽  
pp. 11034
Author(s):  
Evgeny Nikulchev ◽  
Alexander Gusev ◽  
Dmitry Ilin ◽  
Nurziya Gazanova ◽  
Sergey Malykh

Web surveys are very popular on the Internet: they are widely used for gathering customer opinions about Internet services, for sociological and psychological research, and as part of knowledge-testing systems in electronic learning. When conducting web surveys, one issue to consider is the respondents' authenticity throughout the entire survey process. We took 20,000 responses to an online questionnaire as experimental data. The survey took about 45 min on average. We did not analyze the answers themselves; we considered only the response time to the first question on each page of the survey interface, that is, only the users' reaction time. Data analysis showed that respondents get used to the interface elements and want to finish a long survey as soon as possible, which leads to quicker reactions. Based on the data, we built two neural network models that identify records in which the respondent's authenticity was violated or the respondent acted as a random clicker. The amount of data allows us to conclude that the identified dependencies are widely applicable.
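The abstract does not specify the authors' network architecture, so the following is only a minimal sketch of the general idea: a small neural-network classifier trained on per-page first-question reaction times to flag random clickers. The data, the class balance, and the feature layout (one reaction time per survey page) are all invented for illustration.

```python
# Hedged sketch (not the authors' code): flag "random clickers" from
# per-page reaction times with a small neural network classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_pages = 30  # hypothetical: one first-question reaction time per page

# Synthetic stand-in data: genuine respondents have longer, gamma-shaped
# reaction times; random clickers click through almost immediately.
genuine = rng.gamma(shape=4.0, scale=1.5, size=(900, n_pages))   # seconds
clickers = rng.uniform(0.2, 1.0, size=(100, n_pages))
X = np.vstack([genuine, clickers])
y = np.concatenate([np.zeros(900), np.ones(100)])  # 1 = random clicker

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```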


2018 ◽  
Vol 37 (6) ◽  
pp. 750-765 ◽  
Author(s):  
Joseph W. Sakshaug ◽  
Basha Vicari ◽  
Mick P. Couper

Identifying strategies that maximize participation rates in population-based web surveys is of critical interest to survey researchers. While much of this interest has focused on surveys of persons and households, there is growing interest in surveys of establishments. However, there is a lack of experimental evidence on strategies for optimizing participation rates in web surveys of establishments. To address this research gap, we conducted a contact mode experiment in which establishments selected to participate in a web survey were randomized to receive the survey invitation with login details and a subsequent reminder using a fully crossed sequence of paper and e-mail contacts. We find that a paper invitation followed by a paper reminder achieves the highest response rate and smallest aggregate nonresponse bias across all possible paper/e-mail contact sequences, but a close runner-up was the e-mail invitation and paper reminder sequence, which achieved a similarly high response rate and low aggregate nonresponse bias at about half the per-respondent cost. Following up undeliverable e-mail invitations with supplementary paper contacts yielded further reductions in nonresponse bias and costs. Finally, for establishments without an available e-mail address, we show that enclosing an e-mail address request form with a prenotification letter is not effective from a response rate, nonresponse bias, and cost perspective.
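The study's core comparison is a 2x2 crossing of invitation and reminder modes, evaluated on response rate and per-respondent cost. A minimal sketch of that bookkeeping is below; the column names, the tiny synthetic frame, and the per-contact unit costs are assumptions, not the paper's figures.

```python
# Hedged sketch: summarizing a 2x2 contact-mode experiment
# (invitation mode x reminder mode, paper vs. e-mail).
import pandas as pd

# One row per sampled establishment; 'responded' is 0/1 (synthetic data).
df = pd.DataFrame({
    "invitation": ["paper"] * 4 + ["email"] * 4,
    "reminder":   ["paper", "paper", "email", "email"] * 2,
    "responded":  [1, 1, 1, 0, 1, 0, 1, 0],
})

UNIT_COST = {"paper": 1.50, "email": 0.05}  # assumed per-contact costs

summary = (
    df.groupby(["invitation", "reminder"])
      .agg(n=("responded", "size"), response_rate=("responded", "mean"))
      .reset_index()
)
# Cost per respondent: contact costs per sampled case / response rate.
summary["cost_per_respondent"] = [
    (UNIT_COST[i] + UNIT_COST[r]) / rr if rr > 0 else float("inf")
    for i, r, rr in zip(summary["invitation"], summary["reminder"],
                        summary["response_rate"])
]
print(summary)
```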


2020 ◽  
Author(s):  
Susanne Kelfve ◽  
Marie Kivi ◽  
Boo Johansson ◽  
Magnus Lindwall

Background: Web surveys are increasingly used in population studies. Yet web surveys targeting older individuals are still uncommon for various reasons. However, with younger cohorts approaching older age, the potential for web surveys among older people may improve. In this study, we investigated response patterns in a web survey targeting older adults and the potential importance of offering a paper questionnaire as an alternative to the web questionnaire. Methods: We analyzed data from three waves of a retirement study in which a web-push methodology was used and a paper questionnaire was offered as an alternative to the web questionnaire in the last reminder. We mapped the response patterns, compared web and paper respondents, and compared key outcomes estimated from the sample with and without the paper respondents, both at baseline and after two follow-ups. Results: Paper respondents, that is, those who did not answer until they received a paper questionnaire with the last reminder, were more likely to be female, retired, and single, and to report a lower level of education, higher levels of depression, and lower self-reported health, compared to web respondents. The association between retirement status and depression was present only among web respondents. The differences between web and paper respondents were stronger in the longitudinal sample (after two follow-ups) than at baseline. Conclusions: We conclude that a web survey may be a feasible and good alternative in surveys targeting people in the retirement age range. However, without offering a paper questionnaire, a small but important group will likely be missing, with potentially biased estimates as a result.
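The paper's sensitivity question, how much estimates shift when the late paper respondents are dropped, can be sketched as a simple with/without comparison. Everything below (variable names, values, the two outcomes) is a hypothetical placeholder, not the study's data.

```python
# Hedged sketch: how much would excluding late paper respondents
# shift key estimates? Synthetic stand-in data only.
import pandas as pd

df = pd.DataFrame({
    "mode": ["web"] * 6 + ["paper"] * 2,
    "depression_score": [3, 5, 2, 4, 3, 4, 8, 7],
    "self_rated_health": [4, 4, 5, 3, 4, 5, 2, 3],
})

outcomes = ["depression_score", "self_rated_health"]
full = df[outcomes].mean()                       # web + paper respondents
web_only = df.loc[df["mode"] == "web", outcomes].mean()
print(pd.DataFrame({"full_sample": full,
                    "web_only": web_only,
                    "shift": web_only - full}))
```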


2022 ◽  
pp. 112-130
Author(s):  
Gabriella Punziano ◽  
Felice Addeo ◽  
Lucia Velotti

The chapter focuses on a web survey administered through social networks, used as a gathering point to collect data on people's risk perception and their adoption of protective behaviors during the Italian COVID-19 crisis. This was an unprecedented moment in the digital age: physical contact was impossible because of the restrictions imposed on coexistence to stem the spread of the virus, digital connections were the only link among people, and digital tools were the only means of doing social research, seeking to satisfy the desire for knowledge without limiting the potential for knowledge production even in a time of profound uncertainty and constraint. Analyzing participants' feedback on web surveys during such times allows the authors to show what is currently happening to social research. The discussion is supported by an auto-ethnography conducted on comments left by the survey respondents.


Author(s):  
Katja Lozar Manfreda ◽  
Vasja Vehovar

The chapter describes a Web portal dedicated to survey research using modern information and communication technologies, especially the WWW. Supported by the EU since 2002, it provides visitors worldwide with information on events (e.g., scientific meetings, calls for papers, projects), software, and literature on the methodology and implementation of Web surveys. The most valuable databases are the bibliography (over 2,000 entries) and the software/services database (over 500 entries).


2020 ◽  
Vol 30 (6) ◽  
pp. 1763-1781
Author(s):  
Louisa Ha ◽  
Chenjie Zhang ◽  
Weiwei Jiang

Purpose: Low response rates in web surveys and the use of different devices to enter web survey responses are the two main challenges to the response quality of web surveys. The purpose of this study is to compare the effects of using interviewers to recruit participants in computer-assisted self-administered interviews (CASI) vs computer-assisted personal interviews (CAPI), and of smartphones vs computers, on participation rate and web survey response quality. Design/methodology/approach: Two field experiments using two similar media-use studies of US college students were conducted to compare response quality across survey modes and response devices. Findings: Response quality of computer entry was better than smartphone entry in both studies for open-ended and closed-ended question formats. The device effect was significant on overall completion rate only when interviewers were present. Practical implications: Survey researchers are given guidance on how to conduct online surveys using different devices and choices of question format to maximize survey response quality. The benefits and limitations of using an interviewer to recruit participants and of smartphones as web survey response devices are discussed. Social implications: The study shows how computer-assisted self-interviews and smartphones can improve response quality and participation for underprivileged groups. Originality/value: This is the first study to compare response quality in different question formats between CASI, e-mail-delivered online surveys, and CAPI. It demonstrates the importance of the human factor in creating a sense of obligation that improves response quality.


2019 ◽  
Vol 73 ◽  
pp. 235-244 ◽  
Author(s):  
João Matias ◽  
Eleni Kalamara ◽  
Federica Mathis ◽  
Katerina Skarupova ◽  
André Noor ◽  
...  

2009 ◽  
Vol 40 (1) ◽  
pp. 43-52 ◽  
Author(s):  
Uwe Matzat ◽  
Chris Snijders ◽  
Wouter van der Horst

The present study analyzes whether and how different types of progress indicators affect the tendency of respondents to continue filling out a web survey, focusing on whether the progress indicators' effects depend on the position of the respondent in the questionnaire. Using a sample of 2,460 respondents from a Dutch online access panel, we analyze three kinds of progress indicators (linear, fast-then-slow, and slow-then-fast) alongside a control condition, using survival analysis. The results show that the effect of the indicators on the completion rate is either negative or nonexistent, depending on the questionnaire length. Moreover, the effect of an indicator does not depend on the position of the respondent in the answering process. We interpret our findings in terms of the implicit narrative between survey designer and respondent.
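In a break-off analysis of this kind, the "survival time" is how far into the questionnaire a respondent gets, with completers treated as censored. A minimal sketch in the spirit of that approach is below, using the lifelines library; the data are synthetic and the page counts and condition assignments are assumptions.

```python
# Hedged sketch: Kaplan-Meier break-off curves by progress-indicator
# condition. 'duration' = last page reached; completers are censored.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
n, n_pages = 600, 40  # hypothetical sample size and questionnaire length
conditions = rng.choice(
    ["linear", "fast-then-slow", "slow-then-fast", "control"], size=n)
duration = rng.integers(1, n_pages + 1, size=n)
dropped = (duration < n_pages).astype(int)  # 0 = reached the last page

kmf = KaplanMeierFitter()
for cond in np.unique(conditions):
    mask = conditions == cond
    kmf.fit(duration[mask], event_observed=dropped[mask], label=cond)
    print(cond, "median break-off page:", kmf.median_survival_time_)
```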


2017 ◽  
Vol 11 ◽  
pp. 117822181771639 ◽  
Author(s):  
Monica J Barratt ◽  
Jason A Ferris ◽  
Renee Zahnow ◽  
Joseph J Palamar ◽  
Larissa J Maier ◽  
...  

A decline in response rates in traditional household surveys, combined with increased internet coverage and decreased research budgets, has increased the attractiveness of web survey research designs based on purposive and voluntary opt-in sampling strategies. In the study of hidden or stigmatised behaviours, such as cannabis use, web survey methods are increasingly common. However, opt-in web surveys are often heavily criticised for their lack of a sampling frame and unknown representativeness. In this article, we outline the current state of the debate about the relevance of pursuing representativeness, the state of probability sampling methods, and the utility of non-probability web survey methods, especially for accessing hidden or minority populations. Our article has two aims: (1) to present a comprehensive description of the methodology we use at Global Drug Survey (GDS), an annual cross-sectional web survey, and (2) to compare the age and sex distributions of cannabis users who voluntarily completed (a) a household survey or (b) a large web-based purposive survey (GDS), across three countries: Australia, the United States, and Switzerland. We find that within each set of country comparisons, the demographic distributions among recent cannabis users are broadly similar, demonstrating that the age and sex distributions of those who volunteer to be surveyed are not vastly different between these non-probability and probability methods. We conclude that opt-in web surveys of hard-to-reach populations are an efficient way of gaining in-depth understanding of stigmatised behaviours and are appropriate, as long as they are not used to estimate drug use prevalence in the general population.
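One simple way to formalize the paper's distribution comparison is a contingency-table test of a demographic margin (here sex) between the two sampling designs. The counts below are invented placeholders, not the paper's figures, and the paper itself reports descriptive comparisons rather than this exact test.

```python
# Hedged sketch: compare the sex distribution of recent cannabis users
# between a probability household survey and an opt-in web survey.
from scipy.stats import chi2_contingency

#              male  female   (placeholder counts)
counts = [[620,  380],    # household survey
          [5400, 3100]]   # Global Drug Survey (opt-in)
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```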


2018 ◽  
Vol 37 (2) ◽  
pp. 248-269 ◽  
Author(s):  
Mingnan Liu ◽  
Frederick G. Conrad

Web surveys have expanded the set of options available to questionnaire designers. One new option is to administer questions that respondents answer by moving an on-screen slider to the position on a visual scale that best reflects their position on an underlying dimension. One attribute of sliders that is not well understood is how the position of the slider when the question is presented can affect responses, for better or worse. Yet the slider's default position is under the control of the designer and can potentially be exploited to maximize the quality of the responses (e.g., positioning the slider by default at the midpoint on the assumption that this is unbiased). There are several studies in the methodology literature that compare data collected via sliders and other methods, but relatively little attention has been given to the issue of default slider values. The current article reports findings from four web survey experiments (n = 3,744, 490, 697, and 902) that examine whether and how the default values of the slider influence responses. For 101-point questions (e.g., feeling thermometers), when the slider default values are set to 25, 50, 75, or 100, significantly more respondents choose those values as their answers, which seems unlikely to accurately reflect respondents' actual positions on the underlying dimension. For 21- and 7-point scales, there is no significant or consistent impact of the default slider value on answers. Completion times are also similar across default values for questions with scales of this type. When sliders do not appear by default at any value, that is, the respondent must click or touch the scale to activate the slider, the missing data rate is low for 21- and 7-point scales but higher for 101-point scales. Respondents' evaluation of the survey difficulty and their satisfaction with the survey do not differ by default value. The implications and limitations of the findings are discussed.
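The heaping the experiments detect, an excess of answers sitting exactly at the default position, is easy to quantify. A minimal sketch follows; the simulated responses, the 15% "leave the default" rate, and the single default of 50 are assumptions for illustration only.

```python
# Hedged sketch: measure heaping at a slider's default position on a
# 101-point (0-100) scale. Responses are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
default = 50
# Simulate 1,000 answers where ~15% of respondents leave the default
# untouched and the rest answer roughly uniformly across the scale.
answers = np.where(rng.random(1000) < 0.15, default,
                   rng.integers(0, 101, size=1000))
share_at_default = np.mean(answers == default)
print(f"share answering exactly {default}: {share_at_default:.1%} "
      f"(uniform baseline would be ~1.0%)")
```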

