response scales
Recently Published Documents


TOTAL DOCUMENTS: 106 (FIVE YEARS: 18)

H-INDEX: 18 (FIVE YEARS: 1)

2021 · pp. 147775092110618
Author(s): Kiya Shazadeh Safavi, Angelina Hong, Cory F Janney, Vinod K Panchbhavi, Daniel C Jupiter

Background: This study assessed patient perceptions of the Physician Payments Sunshine Act and opinions toward physicians who receive gifts and/or payments from pharmaceutical or medical device companies. Methods: During their office visits, patients attending different specialty clinics volunteered to complete our survey. The survey first asks whether the patient knows what the Sunshine Act is, then uses 5-point response scales to assess the patient's opinions toward physicians who receive compensation from companies, their self-rated knowledge of physician compensation, and how they believe this compensation affects the cost of care. Results: Over 13 months, 523 responses were collected: 8.6% of patients reported having knowledge of the Sunshine Act, 56.8% rated their knowledge of physician compensation as "poor," and 67.1% agreed that patients should be aware of the compensation physicians receive. When asked how their opinion of their physician would change if they learned the physician received free meals or gifts from companies, 58.9% replied "not at all," and 36.11% of patients did not believe their cost of care would increase if their physician received compensation from companies. Conclusions: Most patients were unfamiliar with the Sunshine Act and believed their knowledge of physician compensation to be poor. Over half of the respondents would not change their opinion of their physician based on knowledge of their physician receiving payments/gifts from companies, and over one-third did not believe such compensation increased the cost of care. The majority of respondents agreed that patients should be aware of payments/gifts to physicians.


2021
Author(s): Lina Koppel, David Andersson, Gustav Tinghög, Daniel Västfjäll, Gilad Feldman

The better-than-average effect refers to the tendency to rate oneself as better than the average person on desirable traits and skills. In a classic study, Svenson (1981) asked participants to rate their driving safety and skill compared to other participants in the experiment. Results showed that the majority of participants rated themselves as far above the median, despite the statistical impossibility of more than 50% of participants being above the median. We report a preregistered, well-powered (total N = 1,203), very close replication and extension of the Svenson (1981) study. Our results indicate that the majority of participants rated their driving skill and safety as above average. As an extension, we added different response scales, and findings were stable across all three measures. Thus, our findings are consistent with the original findings by Svenson (1981). Materials, data, and code are available at https://osf.io/fxpwb/
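The core statistical point of the replication — that far more than 50% of respondents place themselves above the sample median — can be checked with a one-sided binomial test. The sketch below is a minimal illustration, not the authors' analysis code (which is available at the OSF link above); the count of above-median responses is a hypothetical placeholder.

```python
from scipy.stats import binomtest  # requires SciPy >= 1.7

# Hypothetical counts: respondents rating their driving skill above
# the sample median vs. the total number of respondents.
n_above_median = 780   # placeholder, not the study's actual count
n_total = 1203         # total N reported in the abstract

# One-sided test of H0: the true share above the median is <= 0.5.
result = binomtest(n_above_median, n_total, p=0.5, alternative="greater")
print(f"share above median = {n_above_median / n_total:.2f}, p = {result.pvalue:.2g}")
```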


2021 · Vol 13 (16) · pp. 9207
Author(s): Wonyoung Yang, Jin Yong Jeon

Response scales in auditory perception assessment are critical for capturing the true responses of listeners. Despite their impact on data, response scales have received the least attention in auditory perception assessment. In this study, the usability of visual analogue scales for auditory perception assessment was investigated. Five response scales were compared: a unipolar visual analogue scale (negated to regular), a unipolar visual analogue scale (regular to negated), a bipolar visual analogue scale (positive to negative), a bipolar visual analogue scale (negative to positive), and a unipolar 11-point scale (ISO/TS 15666:2021). Music and traffic noise were presented to 60 university students at two levels, 45 and 65 dBA. A web-based experimental design was implemented, and tablet pads were provided to the respondents to record their responses. The unipolar 11-point scale required the longest response time, followed by the two unipolar visual analogue scales and then the two bipolar visual analogue scales; these differences were statistically significant. All response scales used in this study achieved statistical reliability and sensitivity for auditory perception assessment. Among the five response scales, the bipolar visual analogue scale (negative to positive) ranked first in reliability over repeated measures, exhibited sensitivity in differentiating sound sources, and was preferred by the respondents under the conditions of the present study. None of the respondents preferred the unipolar 11-point scale. The visual analogue scale was favoured over the traditional unipolar 11-point scale by young educated adults in a mobile-based testing environment, and the bipolar visual analogue scale demonstrated the highest reliability and sensitivity and was preferred the most by the respondents. The semantic labelling direction from negated to regular, or from negative to positive, was preferred over its opposite. Further research is needed on the use of these response scales with the general public, including children and the elderly, and on the choice of semantic adjectives and their counterparts for auditory perception assessment.
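As a rough illustration of how reliability over repeated measures and response-time differences between scales might be quantified, the sketch below computes a test-retest correlation for one scale and compares response times across three scales with a one-way ANOVA. This is an assumed analysis outline with synthetic data, not the study's actual code or results.

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway

rng = np.random.default_rng(0)
n_listeners = 60  # matches the sample size reported in the abstract

# Hypothetical data: ratings of the same stimuli on a 0-100 visual
# analogue scale, collected twice (test and retest).
test = rng.uniform(0, 100, n_listeners)
retest = test + rng.normal(0, 8, n_listeners)  # second pass with noise

# Simple test-retest reliability as a Pearson correlation.
r, p = pearsonr(test, retest)
print(f"test-retest r = {r:.2f} (p = {p:.3g})")

# Hypothetical response times (seconds) for three scales, compared as one
# way to check the reported response-time ordering.
rt_11point = rng.normal(6.0, 1.2, n_listeners)
rt_unipolar_vas = rng.normal(5.0, 1.1, n_listeners)
rt_bipolar_vas = rng.normal(4.5, 1.0, n_listeners)
F, p_rt = f_oneway(rt_11point, rt_unipolar_vas, rt_bipolar_vas)
print(f"response-time ANOVA: F = {F:.2f}, p = {p_rt:.3g}")
```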


2021
Author(s): Douglas Hutchinson

Performance improvement practitioners value evidence-based practices, which include data-driven decisions. Data can be obtained through survey questionnaires designed with closed-ended questions and response scales. The Likert scale (Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree) is one of the most commonly used response scales. Whether the 5-point Likert scale, as a verbal descriptor scale, should be treated as an ordinal or an interval scale is an ongoing debate, since different types of statistical analyses apply to ordinal and interval data. I conducted this study to examine whether survey participants would perceive a 5-point Likert scale as close to an interval-level measurement when an adverb such as Moderately, Somewhat, or Slightly is added in front of Agree and Disagree. This information could be used by researchers who wish to construct an interval-type Likert scale. I conducted this study using a convenience sample of performance improvement practitioners, including master's degree and graduate certificate students, recent alumni, and faculty in the Organizational Performance and Workplace Learning department at Boise State University.

For this study, I developed a web-based survey instrument using a horizontal slider format. The first screen of the instrument contained eight partially labeled Likert scale sliders, each of which presented three anchors in ascending order (Strongly Disagree on the far-left side, Neutral in the middle, and Strongly Agree on the far-right side) along with their numerical values (-2, 0, and +2, respectively). The slider bar was initially placed on Neutral (0). Participants were instructed to move the slider bar to locate each of the following eight anchors on the Likert scale slider: Disagree, Moderately Disagree, Somewhat Disagree, Slightly Disagree, Agree, Moderately Agree, Somewhat Agree, and Slightly Agree. To test the response order effect, the second screen of the instrument asked the participants to repeat the above procedure using another set of eight Likert scale sliders presented in descending order. The third screen asked for participants' gender, age group, and native English speaker status. The data were collected in October 2020. The web-based survey system (Qualtrics) recorded data rounded to two decimal places and provided summary data including the mean, standard deviation, variance, and minimum and maximum response scores for each item.

A survey invitation was sent to 327 practitioners, and 109 of them submitted the survey. Initial data screening excluded 37 datasets in which any response was incomplete or placed on the incorrect side of the slider continuum. Two additional responses from non-native English speakers were also excluded because of the linguistic aspect of the study. This left 70 responses available for analysis (51 females, 18 males, 1 "do not want to report"). An anchor was considered useful for constructing an interval measurement if its corresponding confidence interval included the value -1 or +1; to test this, 95% confidence intervals were constructed for each of the 16 items. Response order effects were investigated with paired-sample t-tests comparing the average scores of the 8 response options when presented in ascending versus descending order.

The results showed that Moderately Disagree and Moderately Agree were closely aligned with -1 and +1 on the continuum, respectively, regardless of the response order used. Agree was aligned with +1 when presented in ascending order but not in descending order. Adding the adverbs Somewhat and Slightly to Agree and Disagree made the 5-point Likert scales clearly ordinal in both response orders. The study therefore concluded that when interval data are needed from a 5-point Likert scale, Moderately Agree and Moderately Disagree can be used in either ascending or descending order. Although Somewhat is not a good adverb to add to Disagree and Agree when a 5-point Likert scale is expected to generate interval data, an unexpected finding was that Strongly Agree, Somewhat Agree, Somewhat Disagree, and Strongly Disagree, presented in descending order, can be used as an interval-level 4-point Likert scale. This study has several limitations, including the use of a convenience sample, and the generalizability of the findings may be limited.
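A minimal sketch of the analysis described above: a 95% confidence interval for one anchor's mean slider placement, checked against the target value +1, plus a paired t-test for the response order effect. This is an assumed reconstruction with made-up data, not the author's Qualtrics output; anchor values and sample sizes are placeholders chosen to mirror the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical slider placements (scale -2 .. +2) for one anchor,
# e.g. "Moderately Agree", from 70 respondents.
ascending = rng.normal(1.02, 0.35, 70)    # anchor presented in ascending order
descending = rng.normal(0.95, 0.40, 70)   # same anchor, descending order

def ci95(x):
    """95% confidence interval for the mean, using the t distribution."""
    half = stats.t.ppf(0.975, len(x) - 1) * stats.sem(x)
    return x.mean() - half, x.mean() + half

lo, hi = ci95(ascending)
print(f"Moderately Agree (ascending): 95% CI = [{lo:.2f}, {hi:.2f}]")
print("aligned with +1" if lo <= 1.0 <= hi else "not aligned with +1")

# Response order effect: paired t-test comparing the same anchor's
# placements in ascending vs. descending presentation.
t, p = stats.ttest_rel(ascending, descending)
print(f"order effect: t = {t:.2f}, p = {p:.3g}")
```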


2020 · pp. 147078532098181
Author(s): Lars Bergkvist

This study used a novel research approach to investigate the effects of unlabeled response scales on response distributions. Instead of responding to standard questionnaire items, respondents were asked to report given judgments on either semantic-differential (SD) or agree-disagree (AD) response scales, thereby showing the extent to which respondents agree on where to place given judgments. Results from a survey-based study (N = 418) show that respondents to a large extent disagree about where to place judgments on the response scale: the level of agreement for different judgment intensities ranged from 42% to 82%, and agreement was lower for AD than for SD response scales. The low levels of agreement contribute to non-substantive variance in the data, which increases the risk of attenuated or inflated correlations between constructs. Moreover, simulations of actual response distributions suggest that unlabeled response scales may lead to a strong bias in the form of underestimated shares of positive answers. Implications for research and marketing research practice of using unlabeled response scales are discussed, and it is recommended that response categories on SD and AD items should always be labeled, since this reduces non-substantive variance and bias in the data.
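One simple way to quantify "agreement on where to place a given judgment" is the share of respondents who choose the modal response category for that judgment. The sketch below illustrates that calculation on hypothetical data; it is an assumed operationalization for illustration, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 418 respondents place a given judgment
# (e.g. "quite good") on a 7-category unlabeled agree-disagree scale.
placements = rng.choice(np.arange(1, 8), size=418,
                        p=[0.02, 0.03, 0.10, 0.15, 0.35, 0.25, 0.10])

# Agreement = share of respondents choosing the most common category.
values, counts = np.unique(placements, return_counts=True)
modal_share = counts.max() / counts.sum()
print(f"modal category = {values[counts.argmax()]}, agreement = {modal_share:.0%}")
```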


Author(s): Sven Hilbert, Florian Pargent, Elisabeth Kraus, Felix Naumann, Kathryn Eichhorn, ...

2020
Author(s): Caspar Kaiser

Research on subjective wellbeing typically assumes that responses to survey questions are comparable across respondents and across time. However, if this assumption is violated, standard methods in empirical research may yield misleading results. I address this concern with three contributions. First, I give a theoretical analysis of the extent and direction of bias that may result from violations of this assumption. Second, I propose using respondents' stated memories of their past wellbeing to estimate, and thereby correct for, differentials in scale use. Third, using the proposed approach, I test whether wellbeing reports are intrapersonally comparable across time. Using British Household Panel Survey (BHPS) data, I find that the direction in which explanatory variables affect latent satisfaction is typically the same as the direction in which they affect scale use. Unemployment and bereavement appear to have particularly strong effects on scale use. Although discussed in the context of life satisfaction scales, the proposed approach for anchoring response scales is applicable to a wide range of other subjectively reported constructs.
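A rough sketch of the anchoring idea: if a respondent's later recollection of their past wellbeing and their original past report refer to the same underlying experience, the gap between the two can be read as a shift in scale use and subtracted out. The code below is a simplified illustration of that logic on synthetic data, under exactly that assumption; it is not the paper's estimator, and all variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Synthetic panel: latent satisfaction at waves t and t+1, plus a
# person-specific shift in scale use occurring between the two waves
# (e.g. after unemployment or bereavement).
latent_t = rng.normal(5, 1, n)
latent_t1 = latent_t + rng.normal(0, 0.5, n)
scale_shift = rng.normal(0, 0.7, n)

report_t = latent_t                    # wave-t report on the original scale
report_t1 = latent_t1 + scale_shift    # wave-t+1 report on the shifted scale
memory_of_t = latent_t + scale_shift   # wave-t+1 recall of wave-t wellbeing

# Estimated shift: recalled past wellbeing minus the original past report.
est_shift = memory_of_t - report_t

# Corrected wave-t+1 report, comparable with the wave-t report.
corrected_t1 = report_t1 - est_shift

true_change = latent_t1 - latent_t
print("naive change vs true change corr:    ",
      round(np.corrcoef(report_t1 - report_t, true_change)[0, 1], 2))
print("corrected change vs true change corr:",
      round(np.corrcoef(corrected_t1 - report_t, true_change)[0, 1], 2))
```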

