Modeling Differences in Test-Taking Motivation: Exploring the Usefulness of the Mixture Rasch Model and Person-Fit Statistics

Author(s): Marie-Anne Mittelhaëuser, Anton A. Béguin, Klaas Sijtsma

2020
Author(s): Montana Buntragulpoontawee, Jeeranan Khunachiva, Patreeya Euawongyarti, Nahathai Wongpakaran, Tinakon Wongpakaran, ...

Abstract Background: This study investigated the measurement properties of the ArmA-TH based on item response theory, using the Rasch model. Methods: Patients with upper limb hemiplegia resulting from cerebrovascular and other brain disorders were asked to complete the ArmA-TH questionnaire. Rasch analysis was performed to test how well the ArmA-TH passive and active function subscales fit the Rasch model, examining unidimensionality, response category functioning, person and item reliability, and differential item functioning (DIF) for age, sex, and education. Results: Participants had stroke or other acquired brain injury (n = 185); the majority were men (126, 68.1%), with a mean age of 55 (SD 22). Most patients (91, 49.2%) had graduated from elementary/primary school. For the ArmA-TH passive function scale, all items had acceptable fit statistics, and the scale's unidimensionality and local independence were supported. Reliability was acceptable. Disordered thresholds were found in five items; no item showed DIF. For the ArmA-TH active function scale, one item was misfitting and three were locally dependent. Reliability was good, and no DIF was found. All items had disordered thresholds, and the data fit the Rasch model better after rescoring. Conclusions: Both subscales of the ArmA-TH fit the Rasch model and are valid and reliable. The disordered thresholds should be investigated further.
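The Rasch analysis summarized above models the probability of an item response from a single person ability and a single item difficulty. The ArmA-TH items are polytomous (hence the threshold analysis), but the simplest dichotomous form of the model can be sketched as follows; the function name is ours for illustration, not part of the study:

```python
import math

def rasch_probability(theta, b):
    """P(X = 1 | theta, b) under the dichotomous Rasch model:
    exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When person ability equals item difficulty, the probability is 0.5;
# it rises toward 1 as ability increasingly exceeds difficulty.
p_equal = rasch_probability(theta=0.0, b=0.0)    # 0.5
p_easier = rasch_probability(theta=2.0, b=0.0)   # > 0.5
```

Item fit, unidimensionality, and DIF checks all amount to asking whether the observed response patterns are consistent with probabilities of this form.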


1999, Vol 23 (4), pp. 327-345
Author(s): Edith M. L. A. Van Krimpen-Stoop, Rob R. Meijer

2020, Vol 10 (11), pp. 324
Author(s): Amin Mousavi, Ying Cui

Important decisions regarding accountability and the placement of students in performance categories are often made on the basis of test scores, so it is important to evaluate the validity of the inferences derived from test results. One threat to the validity of such inferences is aberrant responding. Several person-fit indices have been developed to detect aberrant responding on educational and psychological tests, and the majority of the person-fit literature has focused on creating and evaluating new indices. The aim of this study was to assess, by means of simulation, the effect of aberrant responding on the accuracy of estimated item parameters and to refine the estimates using person-fit statistics. Our results showed that the presence of aberrant response patterns biased both the a and b parameters at the item level and affected the classification of students, particularly high-performing students, into performance categories, regardless of whether the aberrant response patterns were present in the data or were removed. The results differed by test length and by the percentage of students with aberrant response patterns. Practical and theoretical implications are discussed.
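A widely used person-fit index of the kind discussed here is the standardized log-likelihood statistic l_z (Drasgow, Levine, & Williams, 1985). The abstract does not name the specific indices used, so the sketch below is an illustration of the general technique, not the authors' procedure; the probabilities would come from a fitted IRT model:

```python
import math

def lz_statistic(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z.
    `responses` are one examinee's 0/1 item scores; `probs` are the
    model-implied probabilities of a correct response on each item.
    Large negative values flag aberrant response patterns."""
    l0 = sum(x * math.log(p) + (1 - x) * math.log(1 - p)
             for x, p in zip(responses, probs))
    expected = sum(p * math.log(p) + (1 - p) * math.log(1 - p)
                   for p in probs)
    variance = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2
                   for p in probs)
    return (l0 - expected) / math.sqrt(variance)

probs = [0.9, 0.8, 0.6, 0.4, 0.2]   # items ordered easy -> hard
consistent = [1, 1, 1, 0, 0]        # pattern that fits the model
aberrant = [0, 0, 0, 1, 1]          # misses easy items, solves hard ones
```

For the hypothetical patterns above, the reversed (aberrant) pattern yields a strongly negative l_z, while the model-consistent pattern does not; screening on such a statistic before item calibration is one way to implement the refinement the study investigates.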


2016, Vol 40 (4), pp. 274-288
Author(s): Casper J. Albers, Rob R. Meijer, Jorge N. Tendeiro
