Linking Pain Items from Two Studies Onto a Common Scale Using Item Response Theory

2009 ◽  
Vol 38 (4) ◽  
pp. 615-628 ◽  
Author(s):  
Wen-Hung Chen ◽  
Dennis A. Revicki ◽  
Jin-Shei Lai ◽  
Karon F. Cook ◽  
Dagmar Amtmann
2010 ◽  
Vol 7 (2) ◽  
Author(s):  
Alenka Hauptman

In the Slovene General Matura, Mathematics is a compulsory subject that can be taken at either the Basic or the Higher Level of Achievement. Achievement at the Basic Level is expressed on the classic five-grade scale from 1 to 5, while candidates at the Higher Level can receive grades on a scale from 1 to 8. The conversion of points into grades (i.e., the points earned on the written tests and in the internal examination are summed, and the grade is calculated from that sum) is set independently for each Level, and we investigated whether the same grade at each Level of Achievement corresponds to the same knowledge. Once assigned, grades are compared directly in selection procedures for admission to university. The Basic and Higher Level Mathematics exams share the same Part 1; the second part of the exam (Part 2) is administered only to Higher Level candidates. Part 1 accounts for 80% of the total points at the Basic Level and 53.3% of the total points at the Higher Level, with Higher Level candidates earning a further 26.7% of their points in Part 2. The oral part of the exam contributes 20% of the grade at both Levels. In this paper we demonstrate a discrepancy in the knowledge underlying identical grades at the Basic and Higher Levels of Achievement, using the Mathematics exam from the 2008 General Matura as an example. A Rasch model within the item response theory framework was used to place item difficulties on a common scale, and the comparability of the grade conversions at the Basic and Higher Levels was explored. The results show notable differences in the knowledge of candidates who received the same grade at the Basic and Higher Levels of Achievement.
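The linking step described above — using the shared Part 1 items to place item difficulties from two separate Rasch calibrations on one scale — can be sketched minimally. This is an illustrative assumption about the general technique (mean-mean linking on common items), not the paper's actual estimation procedure, and all data values are hypothetical:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a person with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mean_mean_shift(common_b_ref, common_b_new):
    """Mean-mean linking constant: the shift that aligns the shared
    (Part 1) items' difficulties across the two calibrations."""
    n = len(common_b_ref)
    return sum(r - s for r, s in zip(common_b_ref, common_b_new)) / n

# Hypothetical difficulties of shared Part 1 items under each calibration
b_basic = [-0.5, 0.0, 0.8]     # Basic Level calibration
b_higher = [-1.5, -1.0, -0.2]  # Higher Level calibration (offset scale)

shift = mean_mean_shift(b_basic, b_higher)
b_higher_linked = [b + shift for b in b_higher]  # now on the Basic scale
```

Once both calibrations share a scale, grade cut-points from the two Levels can be compared directly in terms of the ability they demand.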


2017 ◽  
Vol 16 (2) ◽  
Author(s):  
Lucas De Francisco Carvalho ◽  
Makilim Nunes Baptista ◽  
Ricardo Primi ◽  
Juliana Gomes Oliveira ◽  
Jon D. Elhai

1992 ◽  
Vol 17 (2) ◽  
pp. 155-173 ◽  
Author(s):  
Kentaro Yamamoto ◽  
John Mazzeo

In educational assessments, it is often necessary to compare the performance of groups of individuals who have been administered different forms of a test. If these groups are to be validly compared, all results need to be expressed on a common scale. When assessment results are to be reported using an item response theory (IRT) proficiency metric, as is done for the National Assessment of Educational Progress (NAEP), establishing a common metric becomes synonymous with expressing IRT item parameter estimates on a common scale. Procedures that accomplish this are referred to here as scale linking procedures. This chapter discusses the need for scale linking in NAEP and illustrates the specific procedures used to carry out the linking in the context of the major analyses conducted for the 1990 NAEP mathematics assessment.
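One simple member of the family of scale linking procedures the chapter refers to is the mean/sigma method for two-parameter logistic (2PL) item parameters. The sketch below is an illustrative assumption about that generic method; NAEP's operational procedures are more elaborate, and the data are hypothetical:

```python
from statistics import mean, stdev

def mean_sigma(b_ref, b_new):
    """Mean/sigma linking coefficients A, B such that A*b + B maps the
    new form's difficulty estimates onto the reference scale."""
    A = stdev(b_ref) / stdev(b_new)
    B = mean(b_ref) - A * mean(b_new)
    return A, B

def to_ref_scale(a, b, A, B):
    """Re-express a 2PL item's discrimination a and difficulty b
    on the reference scale."""
    return a / A, A * b + B

# Hypothetical common-item difficulty estimates from two calibrations
b_ref = [-1.0, 0.0, 1.0]
b_new = [-0.5, 0.5, 1.5]  # same items, calibrated on a shifted scale

A, B = mean_sigma(b_ref, b_new)
```

Applying `to_ref_scale` to every item on the new form expresses all item parameter estimates on the common metric, after which proficiency distributions from different test forms can be reported together.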


2003 ◽  
Vol 19 (1) ◽  
pp. 24-39 ◽  
Author(s):  
Els de Koning ◽  
Klaas Sijtsma ◽  
Jo H.M. Hamers

Summary: We present a test for inductive reasoning (TIR) that comes in two versions and can be used to assess the development of inductive reasoning in third-grade pupils in primary education. The test versions can also be used in combination with a training program for inductive reasoning. Two experiments, using samples of 954 and 145 pupils, were carried out to investigate the psychometric properties of the tests, including their validity. Item response theory (IRT) analyses revealed that scores on the two TIR versions provided meaningful summaries of inductive reasoning; this was supported by analyses of the convergent and divergent validity of the TIR tests. IRT analyses were also used to equate the two TIR versions so that their scores can be compared on a common scale. Possible explanations for the misfit of the items that were deleted from the TIR tests are discussed.
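Once two test versions are calibrated on a common scale, a raw score on one version can be mapped to its equivalent on the other via IRT true-score equating: find the ability at which the expected score on version A equals the observed score, then evaluate the expected score on version B at that ability. The sketch below is an assumed minimal illustration of that generic idea under the Rasch model, not the authors' actual equating procedure:

```python
import math

def rasch_prob(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected number-correct score on a form with these item difficulties."""
    return sum(rasch_prob(theta, b) for b in difficulties)

def equate_true_score(raw_a, bs_a, bs_b, lo=-6.0, hi=6.0):
    """Bisection on theta (expected_score is increasing in theta):
    locate the ability matching raw_a on form A, then return the
    equivalent expected score on form B."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if expected_score(mid, bs_a) < raw_a:
            lo = mid
        else:
            hi = mid
    return expected_score((lo + hi) / 2.0, bs_b)
```

With identical difficulty vectors for the two forms, the function returns the input score unchanged, which is a convenient sanity check before equating genuinely different versions.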


2001 ◽  
Vol 46 (6) ◽  
pp. 629-632
Author(s):  
Robert J. Mislevy
