Let’s put a smile on that scale: Findings from three web survey experiments

2019 ◽  
Vol 62 (1) ◽  
pp. 18-26 ◽  
Author(s):  
Tobias Gummer ◽  
Vera Vogel ◽  
Tanja Kunz ◽  
Joss Roßmann

Graphical symbols such as smileys and other emoticons are prevalent in everyday life. Paralleling their increasing use in private text messaging and even in business communication, smileys and other emoticons have also been used more frequently in surveys. So far, only a few studies have tested the effects of smiley faces as rating scale labels on the response process in web surveys. This study compared smiley face scales with verbally labeled rating scales in three web survey experiments. We found no convincing evidence that using smiley face scales altered response behavior, with the exception that these scales increased response times, which indicates a higher response burden. Based on our findings, we would advise against using smiley face scales as long as these scales have not been sufficiently tested and no convincing reasons exist for using them.
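As an illustration of the response time finding above, the following minimal sketch (not the authors' analysis code) compares response-time distributions between a smiley face condition and a verbally labeled condition; the file name, column names, and the choice of a Mann-Whitney U test are assumptions for this example.

```python
# Illustrative sketch only; file name, column names, and test choice are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("scale_experiment_paradata.csv")  # hypothetical paradata file

smiley = df.loc[df["condition"] == "smiley", "response_time_ms"]
verbal = df.loc[df["condition"] == "verbal", "response_time_ms"]

# Response times tend to be right-skewed, so a rank-based test is a common choice.
stat, p_value = mannwhitneyu(smiley, verbal, alternative="two-sided")
print(f"Median (smiley): {smiley.median():.0f} ms; median (verbal): {verbal.median():.0f} ms")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```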

2018 ◽  
Vol 37 (2) ◽  
pp. 248-269 ◽  
Author(s):  
Mingnan Liu ◽  
Frederick G. Conrad

Web surveys have expanded the set of options available to questionnaire designers. One new option is to administer questions that respondents answer by moving an on-screen slider to the position on a visual scale that best reflects their position on an underlying dimension. One attribute of sliders that is not well understood is how the position of the slider when the question is presented can affect responses—for better or worse. Yet the slider’s default position is under the control of the designer and can potentially be exploited to maximize the quality of the responses (e.g., positioning the slider by default at the midpoint on the assumption that this is unbiased). Several studies in the methodology literature compare data collected via sliders and other methods, but relatively little attention has been given to the issue of default slider values. The current article reports findings from four web survey experiments (n = 3,744, 490, 697, and 902) that examine whether and how the default values of the slider influence responses. For 101-point questions (e.g., feeling thermometers), when the slider default values are set to 25, 50, 75, or 100, significantly more respondents choose those values as their answers, which seems unlikely to accurately reflect respondents’ actual positions on the underlying dimension. For 21- and 7-point scales, there is no significant or consistent impact of the default slider value on answers. Completion times are also similar across default values for questions with scales of this type. When sliders do not appear by default at any value, that is, the respondent must click or touch the scale to activate the slider, the missing data rate is low for 21- and 7-point scales but higher for the 101-point scales. Respondents’ evaluations of survey difficulty and their satisfaction with the survey do not differ by default value. The implications and limitations of the findings are discussed.
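One way to probe the default-value pile-up described above is a simple two-proportion test of how often a given scale value is chosen when it is, versus is not, the default position. The sketch below is illustrative only; the counts are placeholders, not data from the article.

```python
# Illustrative sketch; counts below are placeholders, not data from the article.
from statsmodels.stats.proportion import proportions_ztest

# Respondents choosing the value 50 on a 101-point scale when 50 was / was not the default.
chose_value = [180, 95]    # hypothetical counts of respondents answering exactly 50
group_sizes = [900, 900]   # hypothetical number of respondents per condition

stat, p_value = proportions_ztest(chose_value, group_sizes, alternative="larger")
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```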


2020 ◽  
pp. 089443932095176
Author(s):  
Tobias Gummer ◽  
Tanja Kunz

With the increasing use of smartphones in web surveys, considerable efforts have been devoted to reducing the amount of screen space taken up by questions. An emerging stream of research in this area aims at optimizing the design elements of rating scales. One suggestion that has been made is to completely abandon verbal labels and use only numeric labels instead. This approach deliberately shifts the task of scale interpretation to the respondents and reduces the information given to them, with the intention of reducing their response burden while still preserving the meaning of the scale. Following prior research, and drawing on the established model of the cognitive response process, we critically tested these assumptions. Based on a web survey experiment, we found that omitting verbal labels and using only numeric labels instead pushed respondents to focus their responses on the endpoints of a rating scale. Moreover, drawing on response time paradata, we showed that their response burden was not reduced when they were presented with only numeric labels; quite the opposite was the case, especially when respondents answered a scale with only numeric labels for the first time, which seemed to entail additional cognitive effort. Based on our findings, we advise against using only numeric labels for rating scales in web surveys.
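The endpoint concentration reported above can be checked with a simple contingency-table comparison of endpoint versus non-endpoint responses across label conditions. The sketch below is a minimal illustration, not the authors' code; the file name, column names, and the assumed 7-point scale are hypothetical.

```python
# Illustrative sketch; file name, column names, and scale range are assumptions.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("label_experiment.csv")   # hypothetical file with columns: condition, response
scale_min, scale_max = 1, 7                # assumed 7-point rating scale

df["endpoint"] = df["response"].isin([scale_min, scale_max])
contingency = pd.crosstab(df["condition"], df["endpoint"])

chi2_stat, p_value, dof, _ = chi2_contingency(contingency)
print(contingency)
print(f"chi-square = {chi2_stat:.2f}, df = {dof}, p = {p_value:.4f}")
```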


Author(s):  
Tanja Kunz ◽  
Franziska Quoß ◽  
Tobias Gummer

Abstract Narrative open-ended questions are suitable for gathering detailed information without limiting respondents to a predefined set of response categories. However, despite efforts to improve the quality of open-ended responses using different verbal and visual design features, respondents are often unwilling to expend effort on substantive and comprehensive responses. Based on a Web survey experiment conducted with opt-in panelists in Germany, we test whether placeholder text (i.e., lorem ipsum) in the answer box of a narrative open-ended question can be used as a visual stimulus to promote high-quality responses without discouraging respondents from answering the question. We find that, although placeholder texts that suggest long and extensive responses elicit more extensive responses, they also result in longer response times and less substantive responses. As the disadvantages of such lengthy placeholder texts thus appear to outweigh their advantages, we advise against using them. We further find that shorter placeholder texts do not provide any additional benefits. These findings also suggest that any kind of visual design feature should always be tested thoroughly before use.


2014 ◽  
Vol 30 (1) ◽  
pp. 23-43 ◽  
Author(s):  
Kea Tijdens

Abstract Occupation is key in socioeconomic research. As in other survey modes, most web surveys use an open-ended question for occupation, though the absence of interviewers elicits unidentifiable or aggregated responses. Unlike other modes, web surveys can use a search tree with an occupation database. Such search trees are hardly ever used, but this may change due to technical advancements. This article evaluates a three-step search tree with 1,700 occupational titles, used in the 2010 multilingual WageIndicator web survey for the UK, Belgium, and the Netherlands (22,990 observations). Dropout rates are high: in Step 1 because unemployed respondents judge the question not to be adequate, and in Step 3 because of search tree item length. Median response times are substantial, owing to search tree item length, dropout in the next step, and invalid occupations being ticked. Overall, the validity of the occupation data is rather good, with 1.7-7.5% of the respondents who completed the search tree having ticked an invalid occupation.
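To make the three-step structure concrete, the sketch below models an occupation search tree as a nested mapping (Step 1 group, Step 2 subgroup, Step 3 occupational titles). The categories are invented for illustration; the WageIndicator database itself is not reproduced here.

```python
# Illustrative three-level occupation search tree; categories are invented examples.
OCCUPATION_TREE = {
    "Healthcare": {
        "Nursing": ["Registered nurse", "Midwife", "Nursing assistant"],
        "Medical practice": ["General practitioner", "Surgeon"],
    },
    "Education": {
        "Teaching": ["Primary school teacher", "Secondary school teacher"],
    },
}

def titles_for(step1_group: str, step2_subgroup: str) -> list[str]:
    """Return the occupational titles shown in Step 3 for a Step 1/Step 2 choice."""
    return OCCUPATION_TREE.get(step1_group, {}).get(step2_subgroup, [])

print(titles_for("Healthcare", "Nursing"))
```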


2012 ◽  
Vol 21 (4) ◽  
pp. 136-143
Author(s):  
Lynn E. Fox

Abstract The self-anchored rating scale (SARS) is a technique that augments collaboration between Augmentative and Alternative Communication (AAC) interventionists, their clients, and their clients' support networks. SARS is a technique used in Solution-Focused Brief Therapy, a branch of systemic family counseling. It has been applied to treating speech and language disorders across the life span, and recent case studies show it has promise for promoting adoption and long-term use of high- and low-tech AAC. I will describe 2 key principles of solution-focused therapy and present 7 steps in the SARS process that illustrate how clinicians can use the SARS to involve a person with aphasia and his or her family in all aspects of the therapeutic process. I will use a case study to illustrate the SARS process and present outcomes for one individual living with aphasia.


2006 ◽  
Vol 22 (4) ◽  
pp. 259-267 ◽  
Author(s):  
Eelco Olde ◽  
Rolf J. Kleber ◽  
Onno van der Hart ◽  
Victor J.M. Pop

Childbirth has been identified as a potentially traumatic experience, leading to traumatic stress responses and even to the development of posttraumatic stress disorder (PTSD). The current study investigated the psychometric properties of the Dutch version of the Impact of Event Scale-Revised (IES-R) in a group of women who had recently given birth (N = 435). In addition, a comparison was made between the original IES and the IES-R. The scale showed high internal consistency (α = 0.88). Using confirmatory factor analysis, no support was found for a three-factor structure comprising an intrusion, an avoidance, and a hyperarousal factor. Goodness of fit was only reasonable, even after allowing one intrusion item to load on the hyperarousal factor. The IES-R correlated significantly with scores on depression and anxiety self-rating scales, as well as with scores on a self-rating scale of posttraumatic stress disorder. Although the IES-R can be used for studying posttraumatic stress reactions in women who have recently given birth, the original IES proved to be a better instrument than the IES-R. It is concluded that adding the hyperarousal scale to the IES-R did not make the scale stronger.
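For reference, the internal consistency coefficient reported above (Cronbach's α = 0.88) can be computed directly from an item score matrix. The sketch below is a generic implementation of the standard formula, not the study's code; the demo data are random.

```python
# Generic Cronbach's alpha; the demo matrix below is random, not study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(0, 5, size=(100, 22))  # the IES-R has 22 items scored 0-4
print(f"alpha = {cronbach_alpha(demo):.2f}")
```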


Methodology ◽  
2011 ◽  
Vol 7 (3) ◽  
pp. 88-95 ◽  
Author(s):  
Jose A. Martínez ◽  
Manuel Ruiz Marín

The aim of this study is to improve measurement in marketing research by constructing a new, simple, nonparametric, consistent, and powerful test of scale invariance, called the D-test. The D-test is constructed using symbolic dynamics and symbolic entropy as a measure of the difference between the response patterns that come from two measurement scales. We also give a standard asymptotic distribution of our statistic. Because the test is based on entropy measures, it avoids smoothed nonparametric estimation. We applied the D-test to a real marketing research study to examine whether scale invariance holds when measuring service quality in a sports service. We took a free scale as the reference and compared it with three widely used rating scales: Likert-type scales from 1 to 5 and from 1 to 7, and a semantic-differential scale from −3 to +3. Scale invariance held for the latter two scales. The test overcomes the shortcomings of other procedures for analyzing scale invariance, and it provides researchers with a tool for deciding which rating scale is appropriate for studying specific marketing problems and for assessing whether the results of prior studies can be questioned.
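As a highly simplified illustration of the symbolic-entropy idea underlying the D-test, the sketch below maps responses from two scales to discrete symbols and computes the Shannon entropy of each symbol distribution. The actual D-test statistic and its asymptotic distribution are defined in the article and are not reproduced here; the response data are placeholders.

```python
# Simplified illustration of symbolic entropy; not the D-test statistic itself.
from collections import Counter
from math import log

def shannon_entropy(symbols) -> float:
    """Shannon entropy of the empirical symbol distribution."""
    counts = Counter(symbols)
    n = sum(counts.values())
    return -sum((c / n) * log(c / n) for c in counts.values())

likert_1_to_5 = [1, 2, 3, 3, 4, 5, 3, 2, 4, 4]            # hypothetical responses
semantic_minus3_to_3 = [-3, -1, 0, 0, 1, 3, 0, -1, 2, 1]  # hypothetical responses

print(f"Entropy, 1-5 Likert scale: {shannon_entropy(likert_1_to_5):.3f}")
print(f"Entropy, semantic-differential scale: {shannon_entropy(semantic_minus3_to_3):.3f}")
```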


2021 ◽  
pp. 001698622098594
Author(s):  
Nielsen Pereira

The purpose of this study was to investigate the validity of the HOPE Scale for identifying gifted English language learners (ELs) and how classroom and English as a second language (ESL) teacher HOPE Scale scores differ. Seventy teachers completed the HOPE Scale on 1,467 students in grades K-5, and four ESL teachers completed the scale on 131 ELs. Measurement invariance tests indicated that the HOPE Scale yields noninvariant latent means across EL and English proficient (EP) samples. However, confirmatory factor analysis results support the use of the scale with ELs or EP students separately. Results also indicate that the rating patterns of classroom and ESL teachers were different and that the HOPE Scale does not yield valid data when used by ESL teachers. Caution is recommended when using the HOPE Scale and other teacher rating scales to compare ELs to EP students. The importance of invariance testing before using an instrument with a population that is different from the one(s) for which the instrument was developed is discussed.
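Nested measurement invariance comparisons of the kind reported above typically rest on a chi-square difference (likelihood-ratio) test between a freer and a more constrained model. The sketch below shows only that generic test; the fit statistics are placeholders, not values from the study.

```python
# Generic chi-square difference test for nested models; values below are placeholders.
from scipy.stats import chi2

def chi_square_difference(chisq_constrained, df_constrained, chisq_free, df_free):
    """Compare a constrained model (e.g., metric invariance) to a freer nested model (e.g., configural)."""
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    return delta_chisq, delta_df, chi2.sf(delta_chisq, delta_df)

d_chi, d_df, p = chi_square_difference(310.4, 152, 280.1, 144)
print(f"delta chi-square = {d_chi:.1f}, delta df = {d_df}, p = {p:.4f}")
```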


Assessment ◽  
2021 ◽  
pp. 107319112199646
Author(s):  
Olivia Gratz ◽  
Duncan Vos ◽  
Megan Burke ◽  
Neelkamal Soares

To date, there is a paucity of research applying natural language processing (NLP) to the open-ended responses of behavior rating scales. Using three NLP lexicons for sentiment analysis of the open-ended responses on the Behavior Assessment System for Children-Third Edition, the researchers found a moderately positive correlation between the human composite rating and the sentiment score from each lexicon for strengths comments, and a slightly positive correlation for the concerns comments made by guardians and teachers. In addition, the researchers found that as the word count of open-ended responses regarding a child’s strengths increased, the sentiment rating became more positive. Conversely, as the word count of open-ended responses regarding child concerns increased, the human raters scored the comments more negatively. The authors offer a proof of concept for using NLP-based sentiment analysis of open-ended comments to complement other data in clinical decision making.
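The sketch below illustrates the general approach of lexicon-based sentiment scoring of open-ended comments and correlating the scores with human composite ratings. It uses VADER as a stand-in lexicon (the article used three lexicons, not named here), and the comments and ratings are invented placeholders.

```python
# Illustrative only: VADER is a stand-in lexicon; comments and ratings are invented.
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")
from scipy.stats import pearsonr

comments = [
    "She is kind, helpful, and works hard in class.",
    "He struggles to stay focused and is often disruptive.",
    "A creative student who enjoys group projects.",
    "Frequently refuses to follow classroom rules.",
]
human_composite_ratings = [4.5, 2.0, 4.0, 1.5]  # hypothetical composite ratings

sia = SentimentIntensityAnalyzer()
sentiment_scores = [sia.polarity_scores(text)["compound"] for text in comments]

r, p = pearsonr(sentiment_scores, human_composite_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```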


2021 ◽  
Vol 10 (14) ◽  
pp. 3056
Author(s):  
Ada Holak ◽  
Michał Czapla ◽  
Marzena Zielińska

Background: The all-too-frequent failure to rate pain intensity, resulting in a lack of or inadequate pain management, has long ceased to be a problem exclusive to young patients and has become a major public health concern. This study aimed to evaluate the methods used for reducing post-traumatic pain in children and the frequency with which such methods are used. Additionally, the methods of pain assessment and the frequency of their application in this age group were analysed. Methods: A retrospective analysis of 2452 medical records of emergency medical teams dispatched to injured children aged 0–18 years in the area around Warsaw (Poland). Results: Of all injured children, 1% (20 out of 2432) had their pain intensity rated, and the only tool used for this assessment was the numeric rating scale (NRS). Children with burns most frequently received a single analgesic drug or cooling (56.2%), whereas the least frequently used method was multimodal treatment combining pharmacotherapy and cooling (13.5%). Toddlers constituted the largest percentage of patients who were provided with cooling (12%). Immobilisation was most commonly used in adolescents (29%) and school-age children (n = 186; 24%). Conclusions: The low frequency of pain assessment emphasises the need for better training in the use of various pain rating scales and protocols. Moreover, non-pharmacological methods (cooling and immobilisation) for reducing pain in injured children remain underutilised.

