Measuring hand preference: A comparison among different response formats using a selected sample

2013 ◽  
Vol 18 (1) ◽  
pp. 68-107 ◽  
Author(s):  
Marietta Papadatou-Pastou ◽  
Maryanne Martin ◽  
Marcus R. Munafò

2018 ◽
Vol 122 (5) ◽  
pp. 1946-1966 ◽  
Author(s):  
Bojana M. Dinić ◽  
Aleksandar Vujić

The objective of this research was to validate the Narcissistic Personality Inventory across different response formats, given that several factor structures, ranging from two to seven factors, have been proposed. The original forced-choice format of the Narcissistic Personality Inventory was administered to 410 participants, and a modified, Likert-type format to 423 participants from the general population, along with personality and other narcissism measures. The results showed that the five-factor model proposed by Ackerman et al. had the best model fit in both response formats and that a distinction could be established between adaptive narcissism factors (Leadership, Vanity, and Superiority) and some aspects of maladaptive narcissism (Manipulativeness and Exhibitionism). However, item redundancy in certain factors could be problematic, and further refinement of the Narcissistic Personality Inventory should include more indicators of some of the proposed factors, especially Vanity.
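As a rough illustration of this kind of model-fit comparison (not the authors' actual analysis), the sketch below fits two candidate factor structures to item-level data and tabulates standard fit indices. It assumes the semopy SEM package and a pandas DataFrame of item responses; the item names (npi1..npi10) and the item-to-factor assignments are hypothetical placeholders, not the real Ackerman et al. mapping.

```python
# Hypothetical sketch: compare the fit of candidate NPI factor structures.
# Assumes the semopy package and item columns npi1..npi10; the real NPI
# has 40 items and a different item-factor mapping.
import pandas as pd
import semopy

CANDIDATE_MODELS = {
    "two_factor": """
        F1 =~ npi1 + npi2 + npi3 + npi4 + npi5
        F2 =~ npi6 + npi7 + npi8 + npi9 + npi10
    """,
    "five_factor": """
        Leadership       =~ npi1 + npi2
        Vanity           =~ npi3 + npi4
        Superiority      =~ npi5 + npi6
        Manipulativeness =~ npi7 + npi8
        Exhibitionism    =~ npi9 + npi10
    """,
}

def compare_fit(data: pd.DataFrame) -> pd.DataFrame:
    """Fit each candidate model and collect chi2, CFI, RMSEA, AIC, etc."""
    rows = {}
    for name, desc in CANDIDATE_MODELS.items():
        model = semopy.Model(desc)
        model.fit(data)
        rows[name] = semopy.calc_stats(model).loc["Value"]
    return pd.DataFrame(rows).T  # one row of fit indices per model

# Running compare_fit() separately on the forced-choice and the Likert
# samples shows whether the same structure fits best in both formats.
```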


2012 ◽  
Vol 40 (10) ◽  
pp. 1655-1665
Author(s):  
Sonthaya Sriramatr ◽  
Tanya R. Berry ◽  
Wendy Rodgers ◽  
Sean Stolp

In this study we examined the relationships of different response formats, respondent gender, and activity level to ratings of exercise stereotypes. Participants (N = 203) completed 8 question sets about 8 exerciser stereotypes. In each question set, 1 question was inadvertently measured twice with different response options: definitely would not like to do this/definitely would like to do this (NL anchor) or definitely false/definitely true (FT anchor). Results showed that ratings on the FT statements were significantly higher than those on the NL statements for 2 stereotypes: judgmental young women and overweight exercisers. There were also significant gender-by-activity-level effects on ratings of runners. Both gender and activity level were related to ratings of liking to exercise for some, but not all, of the stereotypes.
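As a minimal illustration of the anchor comparison (not the study's full gender-by-activity-level design), the sketch below runs a paired t-test on one stereotype's ratings under the two anchors. It assumes two equal-length arrays holding each participant's rating under each response option; all names are illustrative.

```python
# Hypothetical sketch: paired comparison of FT- vs. NL-anchored ratings
# of a single stereotype; the published analysis also modeled gender and
# activity level, which this sketch omits.
import numpy as np
from scipy import stats

def compare_anchors(ft: np.ndarray, nl: np.ndarray) -> None:
    """Paired t-test plus a within-subject effect size."""
    t, p = stats.ttest_rel(ft, nl)
    diff = ft - nl
    d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired data
    print(f"M_FT={ft.mean():.2f}  M_NL={nl.mean():.2f}  "
          f"t={t:.2f}  p={p:.4f}  d={d:.2f}")
```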


Author(s):  
Stefan K. Schauber ◽  
Stefanie C. Hautz ◽  
Juliane E. Kämmer ◽  
Fabian Stroben ◽  
Wolf E. Hautz

The use of response formats in assessments of medical knowledge and clinical reasoning continues to be the focus of both research and debate. In this article, we report on an experimental study addressing the question of the extent to which list-type selected-response formats and short-essay-type constructed-response formats are related to differences in how test takers approach clinical reasoning tasks. The design of this study was informed by a framework from cognitive psychology that stresses the interplay between two components of reasoning, self-monitoring and response inhibition, while solving a task or case. The results presented support the argument that different response formats are related to different processing behavior. Importantly, the pattern of how different factors relate to a correct response in both situations seems to be well in line with contemporary accounts of reasoning. Consequently, we argue that when designing assessments of clinical reasoning, it is crucial to tap into the different facets of this complex and important medical process.


2021 ◽  
Author(s):  
Swaha Pattanaik ◽  
Mike John ◽  
Seungwon Chung ◽  
San Keller

Purpose: We compared the measurement properties of 5-point and 11-point response formats for Orofacial Esthetic Scale (OES) items to determine whether collapsing the format would degrade OES score precision. Methods: Data were collected from a consecutive sample of adult dental patients from HealthPartners dental clinics in Minnesota (N = 2,078). We fitted an Item Response Theory (IRT) model to the 11-point scale and to six derived 5-point scales. We compared all response formats using test (or scale) information, correlations between the IRT scores, Cronbach's alpha estimates for each scaling format, correlations based on the observed scores for the seven OES items and the eighth global item, and the relationship of observed and IRT scores to an external criterion, orofacial appearance (OA) indicators from the Oral Health Impact Profile (OHIP). Results: The correlations among scores based on the different response formats were uniformly high for observed (0.97-0.99) and IRT scores (0.96-0.99), as were correlations of both observed and IRT scores with the OHIP measure of OA (0.65-0.69). Cronbach's alpha based on any of the 5-point formats (α = 0.95) was nearly the same as that based on the 11-point format (α = 0.96). The weighted total information area for five of the six derived 5-point formats was 98% of that for the 11-point scale. Conclusions: Our results support the use of scores based on a 5-point response format for OES items. The measurement properties of scores based on a 5-point response format are comparable to those of scores based on the 11-point format.
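The core manipulation, collapsing an 11-point format into 5 ordered categories and checking that score precision survives, is straightforward to sketch. The snippet below assumes a persons-by-items array of 0-10 OES responses; the cut points are one plausible collapsing, not necessarily any of the six the authors derived.

```python
# Hypothetical sketch: derive a 5-point scoring from 0-10 responses and
# compare internal consistency across the two formats.
import numpy as np

def collapse_to_5(responses: np.ndarray) -> np.ndarray:
    """Map 0-10 ratings onto 5 ordered categories with fixed cut points."""
    bins = [2, 4, 6, 8]  # 0-2 -> 0, 3-4 -> 1, 5-6 -> 2, 7-8 -> 3, 9-10 -> 4
    return np.digitize(responses, bins, right=True)

def cronbach_alpha(items: np.ndarray) -> float:
    """Classical alpha for a (n_persons, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# responses: (n_persons, 7) array of 0-10 OES item ratings
# r = np.corrcoef(responses.sum(1), collapse_to_5(responses).sum(1))[0, 1]
# print(cronbach_alpha(responses), cronbach_alpha(collapse_to_5(responses)))
```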


1985 ◽  
Vol 55 (9) ◽  
pp. 382-384 ◽  
Author(s):  
James H. Price ◽  
Janelle K. O'Connell ◽  
Gary Kukulka

2017 ◽  
Vol 51 (2) ◽  
pp. 108-123 ◽  
Author(s):  
Alyson A. Collins ◽  
Esther R. Lindström ◽  
Donald L. Compton

Researchers have increasingly investigated sources of variance in reading comprehension test scores, particularly for students with reading difficulties (RD). The purpose of this meta-analysis was to determine whether the achievement gap between students with RD and typically developing (TD) students varies as a function of different reading comprehension response formats (e.g., multiple choice, cloze). A systematic literature review identified 82 eligible studies. All studies administered reading comprehension assessments to students with RD and TD students in Grades K–12. Hedges' g standardized mean difference effect sizes were calculated, and random-effects robust variance estimation techniques were used to aggregate average weighted effect sizes for each response format. Results indicated that the achievement gap between students with RD and TD students was larger for some response formats (e.g., picture selection, g = −1.80) than for others (e.g., retell, g = −0.60). Moreover, for multiple-choice, cloze, and open-ended question response formats, single-predictor meta-regression models explored potential moderators of heterogeneity in effect sizes. No clear patterns, however, emerged with regard to moderators of heterogeneity in effect sizes across response formats. Findings suggest that the use of different response formats may lead to variability in the achievement gap between students with RD and TD students.
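For reference, Hedges' g is Cohen's d with a small-sample bias correction. The sketch below computes g for a single RD-vs-TD contrast from summary statistics; the example values in the trailing comment are made up for illustration only.

```python
# Sketch: bias-corrected standardized mean difference (Hedges' g) for one
# RD-vs-TD contrast, following the standard correction factor.
import math

def hedges_g(m_rd, m_td, sd_rd, sd_td, n_rd, n_td):
    """Hedges' g for the RD-minus-TD mean difference."""
    df = n_rd + n_td - 2
    s_pooled = math.sqrt(((n_rd - 1) * sd_rd**2 + (n_td - 1) * sd_td**2) / df)
    d = (m_rd - m_td) / s_pooled      # Cohen's d
    j = 1 - 3 / (4 * df - 1)          # small-sample correction
    return j * d

# Illustrative numbers only: a lower RD mean yields a negative g, matching
# the sign convention in the review above.
# hedges_g(m_rd=12.0, m_td=18.0, sd_rd=4.0, sd_td=4.5, n_rd=30, n_td=35)
```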

