Linking error and equating methods on the national tests: Estimating change over time (Om lenkefeil og ekvivaleringsmetoder på nasjonale prøver: Evaluering av endring over tid)

2018 ◽  
Vol 12 (4) ◽  
pp. 16 ◽  
Author(s):  
Julius Kristjan Björnsson

The Norwegian national tests in their current form, which use Item Response Theory (IRT) to determine item characteristics and to measure change over time, have been administered since 2014. The tests have proven stable over time, and linking and equating have been carried out each year since 2014 to make comparisons over time possible. To determine whether a change from one year to the next is significant, the uncertainty associated with the year-to-year linking procedure must be quantified; this uncertainty is referred to as the linking error. This article reviews several commonly used methods for estimating the linking error and, on that basis, estimates its size for Numeracy and English in the 5th and 8th grades. The article also presents an examination of possible bias in the linking. The main conclusion is that the linking error is acceptable, but nevertheless large enough that any evaluation of change over time must take it into account. It is therefore important to use a test design and methods that yield unbiased estimates and that keep the linking error small.

Keywords: IRT, national tests, equating, linking error
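One commonly used estimator of this kind of linking error under a common-item (anchor) design treats the year-to-year shifts in the anchor items' estimated difficulties as a sample and takes the standard error of their mean. A minimal Python sketch with hypothetical difficulty values (the article compares several such estimators; this is only one of them):

```python
import numpy as np

def linking_error(b_year1, b_year2):
    """Standard error of the mean shift in anchor-item difficulties
    between two adjacent years (a common linking-error estimator)."""
    d = np.asarray(b_year2) - np.asarray(b_year1)  # item-by-item shifts
    m = d.size
    return d.std(ddof=1) / np.sqrt(m)

# Hypothetical anchor-item difficulty estimates (in logits) from two years
b_2014 = [-1.20, -0.35, 0.10, 0.55, 1.40]
b_2015 = [-1.05, -0.40, 0.20, 0.50, 1.55]
print(linking_error(b_2014, b_2015))  # about 0.05 logits in this toy example
```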

Assessment ◽  
2021 ◽  
pp. 107319112110612
Author(s):  
Stefany Coxe ◽  
Margaret H. Sibley

The transition from Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR) to Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) attention deficit/hyperactivity disorder (ADHD) checklists included item wording changes that require psychometric validation. A large sample of 854 adolescents across four randomized trials of psychosocial ADHD treatments was used to evaluate the comparability of the DSM-IV-TR and DSM-5 versions of the ADHD symptom checklist. Item response theory (IRT) was used to evaluate item characteristics and determine differences across versions and studies. Item characteristics varied across items. No consistent differences in item characteristics were found across versions. Some differences emerged between studies. IRT models were used to create continuous, harmonized scores that take item, study, and version differences into account and are therefore comparable. DSM-IV-TR ADHD checklists will generalize to the DSM-5 era. Researchers should consider using modern measurement methods (such as IRT) to better understand items and create continuous variables that better reflect the variability in their samples.
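The harmonization rests on a standard IRT idea: once item parameters are calibrated on a common scale, a trait score can be estimated from whichever checklist version a respondent completed, and the resulting scores remain comparable. A minimal Python sketch of expected a posteriori (EAP) scoring under a two-parameter logistic model with hypothetical item parameters (the study itself fit more elaborate models that also account for study and version effects):

```python
import numpy as np

def p_endorse(theta, a, b):
    """2PL probability of endorsing an item at trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_score(responses, a, b, grid=np.linspace(-4, 4, 121)):
    """EAP trait estimate for a 0/1 response vector, standard-normal prior."""
    prior = np.exp(-0.5 * grid**2)
    likelihood = np.ones_like(grid)
    for x, ai, bi in zip(responses, a, b):
        p = p_endorse(grid, ai, bi)
        likelihood *= p if x == 1 else (1.0 - p)
    posterior = prior * likelihood
    return np.sum(grid * posterior) / np.sum(posterior)

# Hypothetical discriminations and severities for a short symptom checklist
a = np.array([1.2, 0.9, 1.5, 1.1])
b = np.array([-0.5, 0.0, 0.7, 1.2])
print(eap_score([1, 1, 0, 0], a, b))
```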


2020 ◽  
Vol 27 (03) ◽  
pp. 448-454
Author(s):  
Aamir Furqan ◽  
Rahat Akhtar ◽  
Masood Alam ◽  
Rana Altaf Ahmed

Objectives: This article compares and contrasts item response theory measurement with classical measurement theory and describes the advantages item response theory offers in the setting of medical education. Summary: Classical measurement theory, being intuitive and straightforward, is used more often than other models in medical education. However, one limitation of classical measurement theory is that it is sample dependent: item statistics are confounded with the particular sample the researcher has assessed. In item response theory, by contrast, scores are separated from the sample and from the assessment stimuli. Item response theory parameters are invariant, which allows examination scores to be placed on a constant measurement scale and changes in students' ability to be compared over time. There are various item response theory models, three of which are discussed along with their statistical assumptions. Conclusions: Item response theory is a capable tool that resolves a major issue of classical measurement theory, namely the confounding of examinee ability with item characteristics. Item response theory measurement also addresses problems in medical education such as removing rater errors from evaluation.
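The abstract does not name the three models it covers, but the ones most commonly presented for dichotomous items are the one-, two-, and three-parameter logistic models. As a point of reference, the 3PL, which nests the other two, can be written as

\[ P(X_{ij} = 1 \mid \theta_j) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta_j - b_i)}}, \]

where \(\theta_j\) is the examinee's ability, \(b_i\) the item difficulty, \(a_i\) the item discrimination, and \(c_i\) the pseudo-guessing parameter; setting \(c_i = 0\) yields the 2PL, and additionally constraining all \(a_i\) to be equal yields the 1PL (Rasch) model.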


2021 ◽  
pp. 014662162110405
Author(s):  
Huseyin Yildiz

In the last decade, many R packages have been published for performing item response theory (IRT) analysis. Some researchers and practitioners have difficulty using these tools because of insufficient coding skills. The IRTGUI package provides these researchers with a user-friendly GUI in which they can perform unidimensional IRT analysis without coding. Using the IRTGUI package, person and item parameters as well as model and item fit indices can be obtained, and the dimensionality and local independence assumptions can be tested. Users can also generate dichotomous data sets under customizable conditions, and Wright maps, item characteristic curves, and item information curves can be displayed graphically. All outputs can be easily downloaded.
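The data-generation feature can be illustrated conceptually: simulating dichotomous responses under a 2PL model amounts to drawing person abilities, computing response probabilities, and thresholding uniform draws. A minimal Python sketch (not the package's own R code; the item parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_2pl(n_persons, a, b, rng=rng):
    """Simulate a 0/1 response matrix under a 2PL model.

    a, b : arrays of item discriminations and difficulties.
    Returns (n_persons, n_items) responses and the true abilities."""
    theta = rng.standard_normal(n_persons)                 # person abilities
    p = 1.0 / (1.0 + np.exp(-(np.outer(theta, a) - a * b)))  # P(correct) per cell
    return (rng.uniform(size=p.shape) < p).astype(int), theta

a = np.array([0.8, 1.0, 1.3, 1.6])
b = np.array([-1.0, -0.2, 0.4, 1.1])
data, theta = simulate_2pl(500, a, b)
print(data.shape, data.mean(axis=0))  # item proportions endorsed
```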


2017 ◽  
Vol 6 (9) ◽  
pp. 635-641 ◽  
Author(s):  
Marc Vandemeulebroecke ◽  
Björn Bornkamp ◽  
Tillmann Krahnke ◽  
Johanna Mielke ◽  
Andreas Monsch ◽  
...  

2001 ◽  
Vol 46 (6) ◽  
pp. 629-632
Author(s):  
Robert J. Mislevy
