Psychometric Analysis of the Behavior Problems Inventory Using an Item-Response Theory Framework: A Sample of Individuals with Intellectual Disabilities

2013 ◽ Vol 35 (4) ◽ pp. 564-577
Author(s): Lucy Barnard-Brak, Johannes Rojahn, Tianlan Wei

2010 ◽ Vol 7 (2)
Author(s): Alenka Hauptman

In the Slovene General Matura, Mathematics is one of the compulsory subjects, and it can be taken at either the Basic or the Higher Level of Achievement. Achievement at the Basic Level is expressed on the classic five-grade scale from 1 to 5, while candidates at the Higher Level can receive grades on a scale from 1 to 8. The conversion of points into grades (i.e., summing the points earned on the written tests and at the internal examination, and then deriving the grade from that sum) is set independently for each Level, and we investigated whether the same grade at each Level of Achievement corresponds to the same knowledge. Once grades are assigned, they are used comparatively in selection procedures for admission to university. The Basic and Higher Level Mathematics exams share the same Part 1; the second part of the exam (Part 2) is administered only to Higher Level candidates. Part 1 accounts for 80% of the total points at the Basic Level and 53.3% of the total points at the Higher Level; Higher Level candidates earn the remaining 26.7% of their points in Part 2. The oral part of the exam contributes 20% of the grade at both Levels. In this paper, we demonstrate a discrepancy in knowledge between candidates who received the same grade at the Basic and the Higher Level of Achievement, using the Mathematics exam from the 2008 General Matura as an example. A Rasch model within the item response theory (IRT) framework was used to place item difficulties on a common scale, and the comparability of the grade conversions at the two Levels was explored. The results show notable differences in the knowledge of candidates holding the same grade at the Basic and the Higher Level of Achievement.
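
The linking idea in the abstract can be illustrated with a small simulation. The Python sketch below is a minimal, hypothetical example, not the paper's analysis: item counts, difficulties, sample sizes, and ability distributions are all invented, difficulties are estimated with a crude log-odds (PROX-style) approximation rather than a full Rasch fit, and the shared Part 1 items are linked with simple mean-mean equating, one standard common-item method that may differ from the paper's procedure.

```python
# Hypothetical sketch: put item difficulties from two exam forms that share
# Part 1 on one scale. All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate(thetas, difficulties):
    """Simulate dichotomous Rasch responses: P(correct) = logistic(theta - b)."""
    p = 1.0 / (1.0 + np.exp(-(thetas[:, None] - difficulties[None, :])))
    return (rng.random(p.shape) < p).astype(int)

def logodds_difficulty(responses):
    """Crude Rasch difficulty estimate: b_i ~ log((1 - p_i) / p_i), centered.
    (A PROX-style approximation; fine for a sketch, not for real scoring.)"""
    p = responses.mean(axis=0).clip(0.02, 0.98)  # guard against 0/1 proportions
    b = np.log((1.0 - p) / p)
    return b - b.mean()  # each form is identified only up to a constant

# Hypothetical design: 20 common Part 1 items, 10 Higher-only Part 2 items.
b_part1 = rng.normal(0.0, 1.0, 20)
b_part2 = rng.normal(1.0, 1.0, 10)           # Part 2 assumed harder here

basic = simulate(rng.normal(-0.5, 1.0, 1000), b_part1)       # Basic form
higher = simulate(rng.normal(0.8, 1.0, 1000),
                  np.concatenate([b_part1, b_part2]))         # Higher form

b_basic = logodds_difficulty(basic)          # Basic-scale estimates (Part 1)
b_higher = logodds_difficulty(higher)        # Higher-scale estimates (Parts 1+2)

# Mean-mean equating on the common Part 1 items: shift the Higher-form
# difficulties so the shared items agree, putting both forms on one scale.
shift = b_basic.mean() - b_higher[:20].mean()
b_higher_on_common_scale = b_higher + shift

print("shift applied to Higher-form difficulties:", round(shift, 3))
print("Part 2 difficulties on the common scale:",
      np.round(b_higher_on_common_scale[20:], 2))
```

Once the Part 2 difficulties sit on the same scale as the Basic form, grade boundaries on the two forms can be compared in terms of the ability they actually demand, which is the kind of comparison the paper carries out.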


2019 ◽ Vol 80 (3) ◽ pp. 461-475
Author(s): Lianne Ippel, David Magis

In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic for evaluating the precision of ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned, and new ASE formulas were derived from a general asymptotic theory framework. Furthermore, exact standard errors have been suggested as a better way to evaluate the precision of ability estimators, especially with short tests, for which the asymptotic framework is invalid. Unfortunately, the accuracy of exact standard errors has so far been assessed only in a very limited setting. The purpose of this article is to perform a global comparison of exact versus asymptotic standard errors (in both their classical and new formulations) for a wide range of common IRT ability estimators and IRT models, using short tests. The results indicate that exact standard errors globally outperform the ASE versions in terms of reduced bias and root mean square error, while the new ASE formulas are also globally less biased than their classical counterparts. Further discussion of the usefulness and practical computation of exact standard errors is provided.
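
To make the two quantities concrete, here is a minimal Python sketch under stated assumptions, not the article's code: a five-item Rasch test with invented difficulties, the maximum likelihood (ML) ability estimator, the ASE computed as 1/sqrt(I(theta)), and an exact standard error taken as the standard deviation of the estimator over the exact sum-score distribution. Clamping the infinite ML estimates at the extreme scores to +/-4 is an illustrative shortcut, not the article's treatment.

```python
# Hypothetical sketch: ASE vs. an exact standard error for the ML ability
# estimator under a Rasch model with a short (5-item) test.
import numpy as np
from scipy.optimize import brentq

b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])   # invented item difficulties
theta_true = 0.5

def prob(theta):
    """Rasch success probabilities P_i(theta) = logistic(theta - b_i)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# ASE of the ML estimator: 1 / sqrt(I(theta)), with test information
# I(theta) = sum_i P_i (1 - P_i) under the Rasch model.
p = prob(theta_true)
ase = 1.0 / np.sqrt(np.sum(p * (1.0 - p)))

# Under the Rasch model the sum score is sufficient for theta, so the ML
# estimate depends only on the score s: solve sum_i P_i(theta) = s.
J = len(b)
theta_hat = np.empty(J + 1)
for s in range(J + 1):
    if s == 0:
        theta_hat[s] = -4.0   # MLE is -inf at score 0; clamp for the sketch
    elif s == J:
        theta_hat[s] = 4.0    # MLE is +inf at a perfect score; clamp
    else:
        theta_hat[s] = brentq(lambda t, s=s: prob(t).sum() - s, -10, 10)

# Exact sum-score distribution given theta_true, via the standard
# item-by-item recursion: new[s] = old[s]*(1-p_i) + old[s-1]*p_i.
score_prob = np.array([1.0])
for pi in p:
    score_prob = np.concatenate([score_prob * (1 - pi), [0.0]]) \
               + np.concatenate([[0.0], score_prob * pi])

# Exact SE: standard deviation of theta_hat under the exact score
# distribution (one convention; bias is usually reported separately).
mean_hat = np.sum(score_prob * theta_hat)
exact_se = np.sqrt(np.sum(score_prob * (theta_hat - mean_hat) ** 2))

print(f"ASE at theta={theta_true}: {ase:.3f}")
print(f"Exact SE (clamped MLE, 5 items): {exact_se:.3f}")
```

With only five items the two numbers can differ noticeably, which is exactly the short-test regime where, as the abstract notes, the asymptotic framework is suspect.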

