response option
Recently Published Documents


TOTAL DOCUMENTS

72
(FIVE YEARS 23)

H-INDEX

14
(FIVE YEARS 2)

2022 ◽  
pp. 001316442110694
Author(s):  
Chet Robie ◽  
Adam W. Meade ◽  
Stephen D. Risavy ◽  
Sabah Rasheed

The effects of different response option orders on survey responses have been studied extensively. The typical research design involves examining the differences in response characteristics between conditions with the same item stems and response option orders that differ in valence—either incrementally arranged (e.g., strongly disagree to strongly agree) or decrementally arranged (e.g., strongly agree to strongly disagree). The present study added two further experimental conditions—randomly incremental or decremental, and completely randomized. All items were presented in an item-by-item format. We also extended previous studies by examining response option order effects on careless responding, correlations between focal predictors and criteria, and participant reactions, all the while controlling for false discovery rate and focusing on the size of effects. In a sample of 1,198 university students, we found little to no response option order effects on a recognized personality assessment vis-à-vis measurement equivalence, scale mean differences, item-level distributions, or participant reactions. However, the completely randomized response option order condition differed on several careless responding indices, suggesting avenues for future research.


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Kate Sully ◽  
Nicola Bonner ◽  
Helena Bradley ◽  
Robyn von Maltzahn ◽  
Rob Arbuckle ◽  
...  

Abstract Background: Accurate symptom monitoring is vital when managing pediatric asthma, providing an opportunity to improve control and relieve the associated burden. The Childhood Asthma Control Test (C-ACT) has been validated for asthma control assessment in children; however, there are concerns that the response option images used in the C-ACT are not culturally universal and could be misinterpreted. This cross-sectional, qualitative study developed and evaluated alternative response option images using interviews with children with asthma aged 4–11 years (and their parents/caregivers) in the United States, Spain, Poland, and Argentina. Interviews were conducted in two stages (with expert input) to evaluate the appropriateness, understanding, and qualitative equivalence of the alternative images (both on paper and electronically). This included comparing the new images with the original C-ACT response scale to provide context for the equivalence results.
Results: Alternative response option images included scale A (simple faces), scale B (circles of decreasing size), and scale C (squares of decreasing quantity). In Stage 1, most children logically ranked the images in scales A, B, and C (66.7%, 79.0%, and 70.6%, respectively). However, some children ranked the images in scales B (26.7%) and C (58.3%) in reverse order. Slightly more children could interpret the images within the context of their asthma in scale B (68.4%) than in A (55.6%) and C (47.5%). Based on Stage 1 results, the experts recommended that scales A (with slight modifications) and B be investigated further. In Stage 2, similar proportions of children logically ranked the images used in modified scales A (69.7%) and B (75.7%). However, a majority of children ranked the images in scale B in reverse order (60.0%). Slightly more children were able to interpret the images in the context of their asthma using scale B (57.6%) than modified scale A (48.5%). Children and parents/caregivers preferred modified scale A over scale B (78.8% and 90.9%, respectively). Compared with the original C-ACT, most children selected the same response option on items using both scales, supporting equivalence.
Conclusions: This study developed alternative response option images for use in the C-ACT and provides qualitative evidence of the equivalence of these response options to the originals.


2021 ◽  
Vol 6 ◽  
Author(s):  
Robert Trevethan ◽  
Kang Ma

Certain combinations of number and labeling of response options on Likert scales might, because of their interaction, influence psychometric outcomes. In order to explore this possibility with an experimental design, two versions of a scale for assessing sense of efficacy for teaching (SET) were administered to preservice teachers. One version had seven response options with labels at odd-numbered points; the other had nine response options with labels only at the extremes. Before outliers in the data were adjusted, the first version produced a range of more desirable psychometric outcomes but poorer test–retest reliability. After outliers were addressed, the second version had more undesirable attributes than before, and its previously high test–retest reliability dropped to poor. These results are discussed in relation to the design of scales for assessing SET and other constructs as well as in relation to the need for researchers to examine their data carefully, consider the need to address outlying data, and conduct analyses appropriately and transparently.


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Michael Koller ◽  
Karolina Müller ◽  
Sandra Nolte ◽  
Heike Schmidt ◽  
Christina Harvey ◽  
...  

Abstract Background: The European Organisation for Research and Treatment of Cancer (EORTC) Core Quality of Life Questionnaire (QLQ-C30) scales are scored on a 4-point response scale, ranging from not at all to very much. Previous studies have shown that the German translation of the response option quite a bit as mäßig violates interval scale assumptions, and that ziemlich is a more appropriate translation. The present studies investigated differences between the two questionnaire versions.
Methods: The first study employed a balanced cross-over design and included 450 patients with different types of cancer from three German-speaking countries. The second study was a representative survey in Germany including 2033 respondents. The main analyses compared the ziemlich and mäßig versions of the questionnaire using analyses of covariance adjusted for sex, age, and health burden.
Results: In accordance with our hypothesis, the adjusted summary score was lower in the mäßig than in the ziemlich version; Study 1: −4.5 (95% CI −7.8 to −1.3), p = 0.006; Study 2: −3.1 (95% CI −4.6 to −1.5), p < 0.001. In both studies, this effect was more pronounced in respondents with a higher health burden; Study 1: −6.8 (95% CI −12.2 to −1.4), p = 0.013; Study 2: −4.5 (95% CI −7.3 to −1.7), p = 0.002.
Conclusions: We found subtle but consistent differences between the two questionnaire versions. We recommend using the optimized response option for the EORTC QLQ-C30 as well as for all other German modules.
Trial registration: The study was retrospectively registered in the German Registry for Clinical Studies (reference number DRKS00012759, 4 August 2017, https://www.drks.de/DRKS00012759).


2021 ◽  
Vol 14 (1) ◽  
pp. 1-5
Author(s):  
Dana Garbarski ◽  
Keyla Navarrete ◽  
David Doherty

2021 ◽  
Author(s):  
Michael Koller ◽  
Karolina Müller ◽  
Sandra Nolte ◽  
Heike Schmidt ◽  
Christina Harvey ◽  
...  

Abstract Background: The European Organisation for Research and Treatment of Cancer (EORTC) Core Quality of Life Questionnaire (QLQ-C30) scales are scored on a 4-point response scale, ranging from not at all to very much. Previous studies have shown that the German translation of the response option quite a bit as mäßig violates interval scale assumptions, and that ziemlich is a more appropriate translation. The present studies investigated differences between the two questionnaire versions and were based on the hypothesis that the conventional version yielded lower functioning and higher symptom ratings than the optimized version, particularly in respondents with a higher health burden.
Methods: The first study employed a balanced cross-over design and included 450 patients with different types of cancer from three German-speaking countries. The second study was a representative survey in Germany including 2033 respondents. Half of the participants filled in the mäßig version, the other half the ziemlich version of the questionnaire.
Results: In accordance with our hypothesis, the adjusted summary score was lower in the mäßig than in the ziemlich version; Study 1: −4.5 (95% CI −7.8 to −1.3), p = 0.006; Study 2: −3.1 (95% CI −4.6 to −1.5), p < 0.001. In both studies, this effect was more pronounced in respondents with a higher health burden; Study 1: −6.8 (95% CI −12.2 to −1.4), p = 0.013; Study 2: −4.5 (95% CI −7.3 to −1.7), p = 0.002.
Conclusions: We found subtle but consistent differences between the two questionnaire versions. The optimized response option should be used for the EORTC QLQ-C30 as well as for all other German modules.
Trial registration: The study was retrospectively registered in the German Registry for Clinical Studies (reference number DRKS00012759, 4 August 2017, https://www.drks.de/DRKS00012759).


Author(s):  
Nancy Kinner ◽  
Doug Helton ◽  
Gary Shigenaka

ABSTRACT Chemical dispersants were employed on an unprecedented scale during the Deepwater Horizon (DWH) oil spill in the Gulf of Mexico, and could be a response option should a large spill occur in Arctic waters. The use of dispersants in response to the DWH spill raised concerns regarding the need for chemical dispersants, the fate of the oil and dispersants, and their potential impacts on human health and the environment. These concerns would be even more evident in the Arctic, where remoteness and harsh environmental conditions would make a response to any oil spill very difficult. A 2013 Arctic oil spill exercise for senior federal agency leadership identified the need for an evaluation of the state of the science of dispersants and dispersed oil (DDO), and a clear delineation of the associated uncertainties that remain, particularly as they apply to Arctic waters. The National Oceanic and Atmospheric Administration (NOAA), in partnership with the Coastal Response Research Center (CRRC), embarked on a project to seek expert review and evaluation of the state of the science and the uncertainties involving DDO. The objectives of the project were to: identify the primary research/reference documents on DDO, determine what is known about the state of the science regarding DDO, and determine what uncertainties, knowledge gaps, or inconsistencies remain regarding DDO science. The project focused on five areas and how they might be affected by Arctic conditions: dispersant efficacy and effectiveness, physical transport and chemical behavior, degradation and fate, eco-toxicity and sub-lethal impacts, and public health and food safety. The Louisiana Universities Marine Consortium (LUMCON) dispersants database was used as a source of relevant literature generated prior to June 2008; the CRRC created a database compiling relevant research thereafter.
The six to ten experts on each panel were drawn from academia, industry, NGOs, government agencies, and consulting. Although their scientific perspectives were diverse, the panelists were able to generate hundreds of statements of knowns and uncertainties on which all members agreed, which required detailed discussion of thousands of scientific papers. While the cutoff date for the literature considered was December 31, 2015, the vast majority of the findings are still relevant and most of the uncertainties remain. As the ice in the Arctic diminishes and maritime development and activity increase, these five documents can inform discussions of the potential use of dispersants as a spill response option in both ice-free and ice-infested Arctic waters.


2020 ◽  
pp. 147078532097159
Author(s):  
Jerry Timbrook ◽  
Jolene D Smyth ◽  
Kristen Olson

Questions using agree/disagree (A/D) scales are ubiquitous in survey research because they save time and space on questionnaires through display in grids, but they have also been criticized for being prone to acquiescent reports. Alternatively, questions using self-description (SD) scales (asking respondents how well a statement describes them, from Completely to Not at All) can also be presented in grids or with a common question stem, and by omitting the word agree, SD scales may reduce acquiescence. However, no research has examined how response patterns may differ across A/D and SD scales. In this article, we compare survey estimates, item nonresponse, and nondifferentiation across these two types of scales in a mail survey. We find that SD scales outperform A/D scales for non-socially desirable questions that ask about positive topics. For questions that ask about negative topics, we find that estimates for SD items are significantly more negative than for A/D items. This may occur because the SD scale is unipolar and has only one negative response option (Not at All), whereas the bipolar A/D scale has two negative response options (Disagree and Strongly Disagree). We recommend that researchers use SD scales for non-socially desirable positive valence questions.


2020 ◽  
Vol 2 (4) ◽  
pp. p47
Author(s):  
Michael Joseph Wise

Over the past few decades, test-writing experts have converged on a set of best-practice guidelines for constructing multiple-choice (MC) items. Despite broad acceptance, some guidelines are supported by scant or inconsistent empirical evidence. This study focused on two of the most commonly violated of these guidelines: the use of negatively oriented stems (e.g., those using the qualifiers "not" or "except") and the use of "all of the above" (AOTA) as a response option. Specifically, I analysed the psychometric qualities of 545 MC items from science courses that I taught at a liberal arts college. In this dataset, items with negatively oriented stems did not differ in difficulty or discriminability from questions with positively oriented stems. Similarly, items with AOTA as a response option did not differ in difficulty or discriminability from those without AOTA as an option. Items that used AOTA as a distractor were significantly more difficult, and slightly more discriminating, than were items that used AOTA as the key. Although such items must be written with extra attention to detail, this study suggests that MC items with negative stems or AOTA as a response option can be effectively employed for assessment of content mastery in a classroom setting.

