Agree or Disagree: Does It Matter Which Comes First? An Examination of Scale Direction Effects in a Multi-device Online Survey

Field Methods ◽  
2021 ◽  
pp. 1525822X2110122
Author(s):  
Carmen M. Leon ◽  
Eva Aizpurua ◽  
Sophie van der Valk

Previous research shows that the direction of rating scales can influence participants’ response behavior. Studies also suggest that the device used to complete online surveys might affect the susceptibility to these effects due to the different question layouts (e.g., horizontal grids vs. vertical individual questions). This article contributes to previous research by examining scale direction effects in an online multi-device survey conducted with panelists in Spain. In this experiment, respondents were randomly assigned to two groups where the scale direction was manipulated (incremental vs. decremental). Respondents completed the questionnaire using the device of their choosing (57.8% used PCs; 36.5% used smartphones; and 5.7% used tablets). The results show that scale direction influenced response distributions but did not significantly affect data quality. In addition, our findings indicate that scale direction effects were comparable across devices. Findings are discussed and implications are highlighted.
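
The abstract does not include analysis code; purely as an illustration, the sketch below shows one way such a scale-direction comparison could be checked, assuming a respondent-level data file and hypothetical column names (direction, device, rating):

```python
# Hedged sketch: compare response distributions between scale-direction
# conditions, overall and by device. File and column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("survey.csv")  # assumed: one row per respondent

# Overall scale-direction effect on the response distribution of one item
table = pd.crosstab(df["direction"], df["rating"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"Direction effect: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")

# Check whether the effect looks comparable across devices
for device, sub in df.groupby("device"):
    t = pd.crosstab(sub["direction"], sub["rating"])
    chi2, p, dof, _ = chi2_contingency(t)
    print(f"{device}: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```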

2017 ◽  
Vol 49 (1) ◽  
pp. 79-107 ◽  
Author(s):  
Natalja Menold

Unlike in other data collection modes, the effect of rating scale labeling on reliability and validity, two relevant aspects of measurement quality, has seldom been addressed in online surveys. In this study, verbal and numeric rating scales were compared in split-ballot online survey experiments. In the first experiment, respondents’ cognitive processes were observed by means of eye tracking, that is, by recording respondents’ fixations in different areas of the screen. In the remaining experiments, data for reliability and validity analyses were collected from a German adult sample. The results show that respondents needed more fixations and more time to endorse a category when a rating scale had numeric labels. Cross-sectional reliability was lower, and some hypotheses concerning criterion validity could not be supported, when numeric rating scales were used. In conclusion, both theoretical considerations and the empirical results contradict the current broad usage of numeric scales in online surveys.


2018 ◽  
Vol 37 (3) ◽  
pp. 435-445
Author(s):  
Rebecca Hofstein Grady ◽  
Rachel Leigh Greenspan ◽  
Mingnan Liu

Across two studies, we aimed to determine the row and column sizes in matrix-style questions that best optimize participant experience and data quality for computer and mobile users. In Study 1 (N = 2,492), respondents completed 20 questions (comprising four short scales) presented in a matrix grid (converted to item-by-item format on mobile phones). We varied the number of rows (5, 10, or 20) and columns (3, 5, or 7) of the matrix on each page. Outcomes included both data quality (straightlining, item skip rate, and internal reliability of scales) and survey experience measures (dropout rate, rating of survey experience, and completion time). Results for row size revealed that dropout rate and reported survey difficulty increased as row size increased. For column size, seven columns increased the completion time of the survey, while three columns produced lower scale reliability. There was no interaction between row and column size. The best overall size tested was a 5 × 5 matrix. In Study 2 (N = 2,570), we tested whether the effects of row size replicated when using a single 20-item scale that crossed page breaks and found that participant survey ratings were still best in the five-row condition. These results suggest that having around five rows or fewer per page, and around five columns for answer options, gives the optimal survey experience, with equal or better data quality, when using matrix-style questions in an online survey. These recommendations will help researchers gain the benefits of using matrices in their surveys with the fewest downsides of the format.
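
For readers unfamiliar with the data-quality indicators named above, here is a minimal, hypothetical sketch of how straightlining, item skip rate, and internal reliability (Cronbach’s alpha) might be computed; the file and item names are assumptions, not the study’s materials:

```python
# Hedged sketch of the quality indicators named in the abstract.
# Item columns (q1..q20) and the data file are illustrative assumptions.
import pandas as pd

df = pd.read_csv("matrix_experiment.csv")
items = [f"q{i}" for i in range(1, 21)]

# Straightlining: respondent gives the same answer to every item
straightline = (df[items].nunique(axis=1) == 1).mean()

# Item skip rate: share of item cells left blank
skip_rate = df[items].isna().mean().mean()

def cronbach_alpha(data: pd.DataFrame) -> float:
    """Internal consistency of a set of items (listwise-complete rows)."""
    d = data.dropna()
    k = d.shape[1]
    item_var = d.var(axis=0, ddof=1).sum()
    total_var = d.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(df[items[:5]])  # e.g., one five-item short scale
print(f"straightlining={straightline:.2%}, skip rate={skip_rate:.2%}, alpha={alpha:.2f}")
```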


2021 ◽  
pp. 147078532098182
Author(s):  
Catherine A Roster

This study explored the influence of Internet memes, specifically image macros of animals with motivational captions, on survey respondents’ engagement with the survey-taking experience and subsequent data quality. A web-based field experiment was conducted with online survey respondents from two sample sources, one crowdsourced, and one commercially managed online panel. Half of the respondents from each sample source were randomly selected to see the memes at various points throughout the survey; the other half did not. Direct and indirect measures of survey engagement and response quality were used to assess effectiveness of the memes. Quantitative results were inconclusive, with few significant differences found in measures of engagement and data quality between respondents in the meme or control condition in either sample source. However, qualitative open-ended comments from respondents who saw the memes in both sample groups revealed that memes provide respondents a fun break and relief from the cognitive burdens of answering online survey questions. In conclusion, memes represent a relatively inexpensive and easy way for survey researchers to connect with respondents and show appreciation for their time and effort.


2017 ◽  
Vol 36 (2) ◽  
pp. 212-230 ◽  
Author(s):  
Stephan Schlosser ◽  
Anja Mays

In this article, we present an experimental study comparing the data quality and response process of online surveys completed on mobile devices with those completed on a standard computer. We used the following indicators to measure data quality and response properties: reaction time to the survey invitation, break-off rate, item nonresponse, length of responses to open-ended questions, and survey transmission, processing, and completion time. With regard to completion time, we also explored the role of the place and situation in which the survey was completed, the kind of Internet connection respondents had, and the hardware properties of the devices used to answer the online survey. Our results suggest comparable data quality and response properties in most respects: there were no noticeable differences between computer and mobile users in break-off rate, item nonresponse, length of responses to open-ended questions, or the place where the survey was completed. However, respondents in the mobile group took longer to complete the survey than respondents answering the online survey on their computer. The difference in completion time decreased significantly when respondents used technically advanced mobile devices and had access to a fast Internet connection.
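
As a rough illustration only (not the authors’ analysis), the sketch below contrasts completion times for mobile and PC respondents with a nonparametric test; the data file and all column names are assumptions:

```python
# Hedged sketch: compare survey completion times between mobile and PC
# respondents. "device", "completion_sec", and "connection" are assumed names.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("mode_experiment.csv")
mobile = df.loc[df["device"] == "mobile", "completion_sec"].dropna()
pc = df.loc[df["device"] == "pc", "completion_sec"].dropna()

stat, p = mannwhitneyu(mobile, pc, alternative="two-sided")
print(f"median mobile={mobile.median():.0f}s, median pc={pc.median():.0f}s, p={p:.3f}")

# Does the gap shrink for advanced devices on a fast connection?
fast = df[(df["device"] == "mobile") & (df["connection"] == "fast")]
print("fast mobile median:", fast["completion_sec"].median())
```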


2021 ◽  
pp. 004912412199553
Author(s):  
Jan-Lucas Schanze

Increasing respondent age and cognitive impairment are the usual suspects for growing difficulties in survey interviews and declining data quality. This is why survey researchers tend to label residents of retirement and nursing homes as hard to interview and exclude them from most social surveys. In this article, I examine to what extent this label is justified and whether the quality of data collected among residents of institutions for the elderly really differs from data collected within private households. For this purpose, I analyze response behavior and quality indicators in three waves of the Survey of Health, Ageing and Retirement in Europe. To control for confounding variables, I use propensity score matching to identify respondents in private households who share similar characteristics with institutionalized residents. My results confirm that most indicators of response behavior and data quality are worse in institutions than in private households. However, when controlling for sociodemographic and health-related variables, the differences become very small. These results underscore the importance of health for data quality, irrespective of the housing situation.
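
The article’s own matching procedure is not reproduced here; the following is a minimal sketch of propensity-score matching along the lines described, with illustrative covariate names that are assumed to be numerically coded:

```python
# Hedged sketch: estimate P(institutionalized | covariates) and match each
# institutionalized respondent to the nearest private-household respondent.
# File, column names, and covariate coding are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("share_respondents.csv").dropna()
covariates = ["age", "gender", "education", "health_limitations", "cognition"]

# Propensity score from a logistic regression (covariates numeric-coded)
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["institution"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

treated = df[df["institution"] == 1]
control = df[df["institution"] == 0]

# 1:1 nearest-neighbor matching on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Compare a quality indicator (e.g., item nonresponse) in the matched groups
print(treated["item_nonresponse"].mean(), matched_control["item_nonresponse"].mean())
```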


2014 ◽  
Vol 14 (10) ◽  
pp. 2681-2698 ◽  
Author(s):  
V. J. Cortes Arevalo ◽  
M. Charrière ◽  
G. Bossi ◽  
S. Frigerio ◽  
L. Schenato ◽  
...  

Abstract. Volunteers have been trained to perform first-level inspections of hydraulic structures within campaigns promoted by the civil protection of Friuli Venezia Giulia (Italy). Two inspection forms and a learning session were prepared to standardize data collection on the functional status of bridges and check dams. In all, 11 technicians and 25 volunteers inspected up to six structures in Pontebba, a mountain community within the Fella Basin. Volunteers included civil-protection volunteers and geosciences and social sciences students. Some participants carried out the inspection without attending the learning session. We therefore used the modal answers of technicians in the learning group as the reference for distinguishing accuracy levels between volunteers and technicians. Data quality was assessed in terms of accuracy, precision, and completeness. We assigned ordinal scores to the rating scales in order to obtain an indication of structure status. We also considered participants' performance and feedback to identify corrective actions in the survey procedures. Results showed that volunteers could perform comparably to technicians, but only within a given range of precision. A completeness ratio (questions answered per parameter) was still needed whenever volunteers used unspecified options. Volunteers' ratings can therefore be treated as preliminary assessments rather than replacements for other procedures. Future research should consider the advantages of mobile applications for data collection.
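
As an illustration only, the sketch below scores volunteer ratings against the technicians' modal answers and computes a simple completeness measure; the long data layout and column names are assumptions, not the campaign's actual forms:

```python
# Hedged sketch: accuracy relative to the technicians' modal answer per item,
# plus a simple completeness ratio. "item", "rating", "group" are assumed names.
import pandas as pd

ratings = pd.read_csv("inspection_forms.csv")  # assumed: one row per participant x item

# Reference value per item: mode of the technicians in the learning group
reference = (ratings[ratings["group"] == "technician_learning"]
             .groupby("item")["rating"]
             .agg(lambda s: s.mode().iloc[0]))

volunteers = ratings[ratings["group"] == "volunteer"].copy()
volunteers["correct"] = volunteers["rating"] == volunteers["item"].map(reference)
accuracy = volunteers.groupby("item")["correct"].mean()

# Completeness: share of parameters actually rated (unspecified left as missing)
completeness = 1 - volunteers["rating"].isna().groupby(volunteers["item"]).mean()
print(accuracy.describe(), completeness.describe(), sep="\n")
```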


Field Methods ◽  
2016 ◽  
Vol 29 (2) ◽  
pp. 154-170 ◽  
Author(s):  
Arnim Langer ◽  
Bart Meuleman ◽  
Abdul-Gafar Tobi Oshodi ◽  
Maarten Schroyens

This article tackles the question of whether conducting online surveys among university students in developing countries is a viable strategy. By documenting the methodology of the National Service Scheme Survey conducted in Ghana, we set out to answer three questions: (1) How can a sample of university students be obtained? (2) How can students be motivated to cooperate in online surveys? (3) What kinds of devices do students use to complete an online survey? Our results indicate that online strategies can be very useful for reaching this particular target group, provided the necessary precautions are taken.


2022 ◽  
Vol 2 ◽  
Author(s):  
Andreas Kannenberg ◽  
Arri R. Morris ◽  
Karl D. Hibler

Introduction: Studies with a powered prosthetic ankle-foot (PwrAF) found a reduction in sound knee loading compared to passive feet. Therefore, the aim of the present study was to determine whether anecdotal reports of reduced musculoskeletal pain and improved patient-reported mobility were isolated occurrences or reflect a common experience among PwrAF users. Methods: Two hundred and fifty individuals with transtibial amputation (TTA) who had been fitted with a PwrAF in the past were invited to an online survey on average sound knee, amputated-side knee, and low-back pain, assessed with numerical pain rating scales (NPRS) and the PROMIS Pain Interference scale, and on patient-reported mobility in the free-living environment, assessed with the PLUS-M. Subjects rated their current foot and recalled the ratings for their previous foot. Recalled scores were adjusted for recall bias by clinically meaningful amounts following published recommendations. Statistical comparisons were performed using Wilcoxon's signed rank test. Results: Forty-six subjects, all male, with unilateral TTA provided data suitable for analysis. Eighteen individuals (39%) were current PwrAF users, whereas 28 subjects (61%) had reverted to a passive foot. After adjustment for recall bias, current PwrAF users reported significantly less sound knee pain than they recalled for use of a passive foot (−0.5 NPRS, p = 0.036). Current PwrAF users who recalled sound knee pain ≥4 NPRS with a passive foot reported significant and clinically meaningful improvements in sound knee pain (−2.5 NPRS, p = 0.038) and amputated-side knee pain (−3 NPRS, p = 0.042). Current PwrAF users also reported significant and clinically meaningful improvements in patient-reported mobility (+4.6 points PLUS-M, p = 0.016). Individuals who had abandoned the PwrAF did not recall any differences between the feet. Discussion: Current PwrAF users reported significant and clinically meaningful improvements in patient-reported prosthetic mobility as well as sound knee and amputated-side knee pain compared to recalled mobility and pain with the passive feet used previously. However, a substantial proportion of individuals who had been fitted with such a foot in the past did not recall improvements and had reverted to passive feet. Identifying individuals with unilateral TTA who are likely to benefit from a PwrAF remains a clinical challenge and requires further research.
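
The study's data and exact recall-bias adjustments are not reproduced here; the sketch below merely illustrates a within-subject Wilcoxon comparison of current versus recall-adjusted pain ratings, with hypothetical column names and an illustrative 0.5-point adjustment:

```python
# Hedged sketch: Wilcoxon signed-rank comparison of current vs. recall-adjusted
# sound knee pain. Column names and the adjustment value are illustrative.
import pandas as pd
from scipy.stats import wilcoxon

df = pd.read_csv("pwraf_survey.csv").dropna(subset=["knee_pain_current", "knee_pain_recalled"])
current = df["knee_pain_current"]
recalled_adj = df["knee_pain_recalled"] - 0.5  # illustrative recall-bias adjustment

stat, p = wilcoxon(current, recalled_adj)
print(f"median change = {(current - recalled_adj).median():+.1f} NPRS, p = {p:.3f}")
```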


Author(s):  
Grant Duncan ◽  
James H Liu ◽  
Sarah Y Choi

Leading up to the 2017 New Zealand general election, Stuff.co.nz and Massey University collaborated in two online surveys of public opinion to test the mood of the nation and seek opinions about a range of relevant political and social issues. Given their success, two more surveys were conducted in 2020. This article summarises results from the 2020 data, and reflects on the methodological advantages, disadvantages and challenges of conducting people-driven online surveys that need to meet the differing needs of academic researchers, journalists and the public. While the surveys produced very large samples, they were not representative. Moreover, the choices of items were influenced by what happened to be newsworthy at the time. Naturally, Covid-19 was a significant theme during the 2020 surveys. The results reveal predictable left–right polarization of opinions, a minority support for conspiracy theories, some areas of wide agreement across the political spectrum, and some unexpected nuances of opinions within and across ethnic groups.


2018 ◽  
Vol 28 (4) ◽  
pp. 854-887 ◽  
Author(s):  
Joel R. Evans ◽  
Anil Mathur

Purpose: The purpose of this paper is to present a detailed and critical look at the evolution of online survey research since Evans and Mathur’s (2005) article on the value of online surveys. At that time, online survey research was in its early stages. Also covered are the present and future states of online research. Many conclusions and recommendations are presented. Design/methodology/approach: The look back focuses on online surveys, the strengths and weaknesses of online surveys, the literature on several aspects of online surveys, and online survey best practices. The look ahead focuses on emerging survey technologies and methodologies, and on new non-survey technologies and methodologies. Conclusions and recommendations are provided. Findings: Online survey research is used more frequently and is better accepted by researchers than in 2005. Yet survey techniques are still regularly transformed by new technologies. Non-survey digital research is also more prominent than in 2005 and can track actual behavior better than surveys can. Hybrid surveys will be widespread in the future. Practical implications: The paper aims to provide insights for researchers with different levels of online survey experience; both academics and practitioners should gain insights. Social implications: Adhering to a strong ethics code is vital to gain respondents’ trust and to produce valid results. Originality/value: Conclusions and recommendations are offered in these specific areas: defining concepts, understanding the future role of surveys, developing and implementing surveys, and a survey code of ethics. The literature review cites more than 200 sources.

