The Rasch Model and Conjoint Measurement Theory from the Perspective of Psychometrics

2008 ◽  
Vol 18 (1) ◽  
pp. 111-117 ◽  
Author(s):  
Denny Borsboom ◽  
Annemarie Zand Scholten

2019 ◽
Vol 53 (5) ◽
pp. 871-891 ◽
Author(s):  
Thomas Salzberger ◽  
Monika Koller

Purpose: Psychometric analyses of self-administered questionnaire data tend to focus on items and instruments as a whole. The purpose of this paper is to investigate the functioning of the response scale and its impact on measurement precision. As far as the direction of the response scale is concerned, existing evidence is mixed and inconclusive.
Design/methodology/approach: Three experiments examine the functioning of response scales of opposite direction, running from agree to disagree versus from disagree to agree. The response scale direction effect is demonstrated for two different latent constructs by applying the Rasch model for measurement.
Findings: The agree-to-disagree format generally performs better than the disagree-to-agree variant, with the spatial proximity between the statement and the agree pole of the scale appearing to drive the effect. The difference is essentially related to the unit of measurement.
Research limitations/implications: A careful investigation of the functioning of the response scale should be part of every psychometric assessment. The framework of Rasch measurement theory offers unique opportunities in this regard.
Practical implications: Besides content, validity and reliability, academics and practitioners utilising published measurement instruments are advised to consider any available evidence on the functioning of the response scale.
Originality/value: The study exemplifies the application of the Rasch model to assess measurement precision as a function of the design of the response scale. The methodology raises awareness of the unit of measurement, which typically remains hidden.
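For reference, the dichotomous form of the Rasch model invoked here (and in several of the abstracts below) expresses the probability that person $v$ endorses item $i$ in terms of the person location $\theta_v$ and the item location $\beta_i$, both expressed in logits. This is the standard textbook formulation, not an equation reproduced from the paper:

$$
P(X_{vi} = 1 \mid \theta_v, \beta_i) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}
$$

The logit is the unit of measurement the abstract alludes to: if the design of the response scale stretches or compresses the latent scale, parameters calibrated under one format are not directly comparable with those calibrated under the other, even though both analyses nominally report logits.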


1979 ◽  
Vol 3 (2) ◽  
pp. 237-255 ◽  
Author(s):  
Richard Perline ◽  
Benjamin D. Wright ◽  
Howard Wainer

Pythagoras ◽  
2014 ◽  
Vol 35 (1) ◽  
Author(s):  
Caroline Long ◽  
Sarah Bansilal ◽  
Rajan Debba

Mathematical Literacy (ML) is a relatively new school subject that learners study in the final 3 years of high school and that is examined as a matric subject. An investigation of a 2009 provincial examination written by matric pupils covered both the curriculum elements of the test and learner performance. In this study we supplement the prior qualitative investigation with an application of Rasch measurement theory, reviewing and revising the scoring procedures so as to better reflect scoring intentions. In an application of the Rasch model, checks are made on the test as a whole, the items, and the learner responses to ensure coherence of the instrument for the particular reference group, in this case Mathematical Literacy learners in one high school. In this article we focus on the scoring of polytomous items, that is, items scored 0, 1, 2 … m. In some instances we found indiscriminate mark allocations that contravened assessment and measurement principles. Through investigating each item, its associated scoring logic, and the output of the Rasch analysis, we explored rescoring. We report here on the analysis of the test prior to rescoring, the analysis and rescoring of individual items, and the post-rescoring analysis. The purpose of the article is to address the question: how may detailed attention to the scoring of the items in a Mathematical Literacy test, through theoretical investigation and the application of the Rasch model, contribute to a more informative and coherent outcome?
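To make the rescoring step concrete, the sketch below collapses the categories of a hypothetical polytomous item (scored 0–3) whose middle category is rarely used, producing the kind of rescored variable that would then be refitted under the Rasch (partial credit) model. All data, the category mapping, and the helper name are invented for illustration; none of it is taken from the study.

```python
import numpy as np

# Hypothetical responses to one polytomous item scored 0..3
# (illustrative data, not from the study).
rng = np.random.default_rng(42)
responses = rng.choice([0, 1, 2, 3], size=200, p=[0.35, 0.08, 0.37, 0.20])

def category_counts(x, max_score):
    """Frequency of each score category 0..max_score."""
    return np.bincount(x, minlength=max_score + 1)

print("Observed category counts:", category_counts(responses, 3))

# Suppose category 1 is rarely used and its Rasch threshold is
# disordered; a common remedy is to collapse it into a neighbour,
# rescoring 0,1,2,3 -> 0,1,1,2, and then refit the model.
collapse = {0: 0, 1: 1, 2: 1, 3: 2}
rescored = np.vectorize(collapse.get)(responses)

print("Rescored category counts:", category_counts(rescored, 2))
```

Whether a collapse is justified is an empirical question answered by comparing category frequencies, threshold ordering, and fit before and after rescoring, which is the comparison the article reports.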


2011 ◽  
Author(s):  
Klaus Kubinger ◽  
D. Rasch ◽  
T. Yanagida

2021 ◽  
Author(s):  
Bryant A Seamon ◽  
Steven A Kautz ◽  
Craig A Velozo

Abstract
Objective: Administrative burden often prevents clinical assessment of balance confidence in people with stroke. A computerized adaptive test (CAT) version of the Activities-specific Balance Confidence Scale (ABC CAT) can dramatically reduce this burden. The objective of this study was to test the precision and efficiency with which an ABC CAT measures balance confidence in people with stroke.
Methods: We conducted a retrospective cross-sectional simulation study with data from 406 adults approximately 2 months post-stroke in the Locomotor Experience Applied Post-Stroke (LEAPS) trial. Item parameters for CAT calibration were estimated with the Rasch model using a random sample of participants (n = 203). Computer simulation was used with response data from the remaining 203 participants to evaluate the ABC CAT algorithm under varying stopping criteria. We compared estimated levels of balance confidence from each simulation to actual levels predicted from the Rasch model, using Pearson correlations and the mean standard error (SE).
Results: Results from simulations with the number of items as a stopping criterion correlated strongly with actual ABC scores (full item set, r = 1; 12-item, r = 0.994; 8-item, r = 0.98; 4-item, r = 0.929). Mean SE increased as fewer items were administered (full item set, SE = 0.31; 12-item, SE = 0.33; 8-item, SE = 0.38; 4-item, SE = 0.49). A precision-based stopping rule (mean SE = 0.5) also correlated strongly with actual ABC scores (r = 0.941) and optimized the trade-off between the number of items administered and precision (mean number of items 4.37; range 4–9).
Conclusions: An ABC CAT can determine accurate and precise measures of balance confidence in people with stroke with as few as 4 items. Individuals with lower balance confidence may require more items (up to 9), which we attribute to the LEAPS trial having excluded more functionally impaired persons.
Impact Statement: Computerized adaptive testing can drastically reduce the ABC's test administration time while maintaining accuracy and precision. This should greatly enhance clinical utility, facilitating adoption of clinical practice guidelines in stroke rehabilitation.
Lay Summary: If you have had a stroke, your physical therapist will likely test your balance confidence. A computerized adaptive test version of the ABC scale can accurately measure balance confidence with as few as 4 questions, which takes much less time.
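To make the adaptive algorithm concrete, here is a minimal sketch of a Rasch-based CAT loop combining the two stopping rules the study compares: a precision criterion (stop once SE reaches 0.5) together with minimum and maximum item counts. The item bank, the simulated respondent, and all parameter values are hypothetical; they are not the calibrated ABC item parameters.

```python
import numpy as np

def prob(theta, b):
    """Rasch probability of endorsing an item with difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def run_cat(bank, answer, se_stop=0.5, min_items=4, max_items=12):
    """Administer the most informative unused item, re-estimate the
    ability by Newton-Raphson on the Rasch likelihood, and stop when
    the standard error reaches se_stop (or max_items is hit)."""
    theta, used, scores = 0.0, [], []
    while len(used) < max_items:
        unused = [i for i in range(len(bank)) if i not in used]
        # Rasch item information p(1-p) peaks where b is closest to theta.
        used.append(min(unused, key=lambda i: abs(bank[i] - theta)))
        scores.append(answer(bank[used[-1]]))
        for _ in range(10):  # Newton steps for the ML ability estimate
            p = prob(theta, bank[used])
            theta += (np.sum(scores) - p.sum()) / np.sum(p * (1 - p))
            theta = float(np.clip(theta, -4, 4))  # guard perfect scores
        p = prob(theta, bank[used])
        se = 1.0 / np.sqrt(np.sum(p * (1 - p)))
        if len(used) >= min_items and se <= se_stop:
            break
    return theta, se, len(used)

# Hypothetical 16-item bank and a simulated respondent with theta = 0.8.
rng = np.random.default_rng(7)
bank = np.linspace(-3, 3, 16)

def respondent(b):
    """Simulated respondent who answers according to the Rasch model."""
    return int(rng.random() < prob(0.8, b))

print(run_cat(bank, respondent))  # (ability estimate, SE, items used)
```

The min_items floor mirrors the abstract's finding that 4 items often suffice, while the SE criterion lets the loop run longer (here up to 12 items) for respondents the bank measures less precisely.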


Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 727 ◽
Author(s):  
Moustafa M. Nasralla ◽  
Basiem Al-Shattarat ◽  
Dhafer J. Almakhles ◽  
Abdelhakim Abdelhadi ◽  
Eman S. Abowardah

The literature on engineering education research highlights the relevance of evaluating course learning outcomes (CLOs). However, generic and reliable mechanisms for evaluating CLOs remain a challenge. The purpose of this project was to accurately assess the efficacy of learning and teaching techniques by analysing CLO performance with an advanced analytical model, the Rasch model, in the context of engineering and business education. This model produced an association pattern between the students and the overall achieved CLO performance. The sample comprised students enrolled in nominated engineering and business courses over one academic year at Prince Sultan University, Saudi Arabia. The analysis considered several types of assessment, both direct (e.g., quizzes, assignments, projects, and examinations) and indirect (e.g., surveys). The current research illustrates that the Rasch model for measurement can categorise grades according to course expectations and standards more accurately, thus differentiating students by their extent of educational knowledge. The results from this project will guide educators in tracking and monitoring the performance of the CLOs identified in every course, estimating students' knowledge, skills, and competence levels from data collected from the predefined sample by the end of each semester. The proposed Rasch measurement approach can adequately assess the learning outcomes.
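As a hedged sketch of the kind of analysis described (not the authors' actual pipeline), the following estimates Rasch person measures and assessment difficulties from a dichotomized student-by-assessment score matrix using joint maximum likelihood. The data, sample sizes, and tolerances are invented; an operational analysis would typically use dedicated Rasch software and handle perfect scores more carefully than the simple clipping used here.

```python
import numpy as np

def fit_rasch_jml(X, n_iter=100):
    """Joint maximum likelihood for the dichotomous Rasch model.
    X is a persons-by-items 0/1 matrix; returns (theta, beta) in
    logits, with item difficulties centred to fix the scale origin."""
    theta = np.zeros(X.shape[0])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
        theta += (X - p).sum(axis=1) / (p * (1 - p)).sum(axis=1)
        theta = np.clip(theta, -5, 5)        # guard perfect scores
        p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
        beta -= (X - p).sum(axis=0) / (p * (1 - p)).sum(axis=0)
        beta -= beta.mean()                  # identify the scale
    return theta, beta

# Hypothetical data: 100 students x 6 assessments, coded pass/fail 0/1.
rng = np.random.default_rng(0)
true_theta = rng.normal(0, 1, 100)
true_beta = np.linspace(-1.5, 1.5, 6)
p_true = 1 / (1 + np.exp(-(true_theta[:, None] - true_beta[None, :])))
X = (rng.random(p_true.shape) < p_true).astype(float)

theta_hat, beta_hat = fit_rasch_jml(X)
print("Estimated assessment difficulties (logits):", np.round(beta_hat, 2))
```

Placing students and assessments on the same logit scale is what allows grades to be categorised against course expectations, as the abstract describes: a student's measure can be read directly against the difficulty of each CLO assessment.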

