Comparing Item Response Theory assessment with Classical Measurement Theory in the setting of medical education for the evaluation of clinical competency and goals achievement.

2020
Vol. 27 (03)
pp. 448-454
Author(s):
Aamir Furqan
Rahat Akhtar
Masood Alam
Rana Altaf Ahmed

Objectives: This article compares and contrasts item response theory (IRT) with classical measurement theory (CMT) and examines the advantages IRT offers in the setting of medical education. Summary: Classical measurement theory, being relatively simple and intuitive, is used more often than other models in medical education. However, it has an important limitation: it is sample dependent, so item statistics are confounded with the particular sample the researcher has assessed. In item response theory, by contrast, scores are independent of the sample and of the particular assessment stimuli. IRT scores are invariant, which allows examination scores to be placed on a constant measurement scale and changes in students' ability to be compared over time. Of the various IRT models, three are discussed along with their statistical assumptions. Conclusions: IRT is a capable tool that resolves a major issue of classical measurement theory, namely the confounding of examinee ability with item characteristics. IRT measurement also addresses problems in medical education such as removing rater error from evaluation.
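The abstract does not name the three IRT models it discusses, but the standard candidates are the one-, two-, and three-parameter logistic (1PL/2PL/3PL) models. As an illustrative sketch (not code from the article), the 3PL response function, which nests the other two as special cases, can be written as:

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3PL model.

    theta: examinee ability
    a: item discrimination
    b: item difficulty
    c: pseudo-guessing lower asymptote
    Setting a=1 and c=0 gives the 1PL (Rasch) model; c=0 alone gives the 2PL.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item's difficulty answers a
# 2PL item correctly with probability 0.5, regardless of who else
# happens to be in the sample -- the sample-invariance noted above.
print(irt_prob(theta=0.0, b=0.0))  # 0.5
```

Because ability and item parameters sit on the same latent scale, the item statistics do not change when the model is fitted to a different examinee sample, which is the separation of examinee skill from item characteristics the abstract describes.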

2012
Vol. 19 (2)
pp. 287-302
Author(s):
Silvana Ligia Vincenzi Bortolotti
Fernando de Jesus Moreira Junior
Antonio Cezar Bornia
Afonso Farias de Sousa Júnior
Dalton Francisco de Andrade

Today, people increasingly demand more from the state and from enterprises. Consumer satisfaction is not an organizational option, but rather a matter of survival for any institution. The quest to measure consumer satisfaction is ongoing in many areas of research, and researchers have concentrated efforts on demonstrating the psychometric quality of their measurements. However, the techniques employed in these efforts have not kept pace with advances in psychometric theory and methods. Item Response Theory (IRT) is an approach for assessing latent traits. It is commonly used in educational and psychological testing and provides information beyond that obtained from classical psychometric techniques. This article presents an application of a cumulative item response theory model to measure students' satisfaction with their courses by creating a measurement scale. The Graded Response Model was used. The results demonstrate the effectiveness of this theory in measuring satisfaction, since it places both items and individuals on the same scale. The theory may be valuable in the evaluation of customer satisfaction and many other organizational phenomena. The findings may help an enterprise's decision makers correct flows, processes, and procedures, and consequently generate greater efficiency and effectiveness in daily tasks and in managing the business. Finally, the information obtained from the analysis can play a role in the development and/or evaluation of institutional planning.
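The Graded Response Model referenced above derives a probability for each ordered response category (e.g. the points of a satisfaction Likert item) from cumulative boundary curves. A minimal sketch in Python, using hypothetical item parameters (the article's actual estimates are not reproduced here):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category probabilities under Samejima's Graded Response Model.

    theta: respondent's latent satisfaction
    a: item discrimination
    thresholds: ordered category boundaries b_1 < b_2 < ... < b_{k-1}
    Returns probabilities for the k ordered response categories.
    """
    def p_star(b):
        # Cumulative probability of responding at or above the boundary b.
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    # Probability of each category is the difference of adjacent boundaries.
    return [cum[i] - cum[i + 1] for i in range(len(cum) - 1)]

# A hypothetical 5-point satisfaction item; the probabilities always sum to 1.
probs = grm_category_probs(theta=0.5, a=1.2, thresholds=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])
```

Because the thresholds and the respondents' theta values live on the same latent scale, items and individuals can be located on one measurement scale, which is the property the abstract highlights.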


Assessment
2021
pp. 107319112110612
Author(s):
Stefany Coxe
Margaret H. Sibley

The transition from Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR) to Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) attention deficit/hyperactivity disorder (ADHD) checklists included item wording changes that require psychometric validation. A large sample of 854 adolescents across four randomized trials of psychosocial ADHD treatments was used to evaluate the comparability of the DSM-IV-TR and DSM-5 versions of the ADHD symptom checklist. Item response theory (IRT) was used to evaluate item characteristics and determine differences across versions and studies. Item characteristics varied across items. No consistent differences in item characteristics were found across versions. Some differences emerged between studies. IRT models were used to create continuous, harmonized scores that take item, study, and version differences into account and are therefore comparable. DSM-IV-TR ADHD checklists will generalize to the DSM-5 era. Researchers should consider using modern measurement methods (such as IRT) to better understand items and create continuous variables that better reflect the variability in their samples.
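Harmonized IRT scores of the kind described are typically obtained by scoring each respondent on the latent trait using calibrated item parameters, so responses to different checklist versions map onto one continuous scale. A minimal sketch of one common scoring method, expected a posteriori (EAP) estimation under a 2PL model, with illustrative parameters rather than the study's own:

```python
import math

def eap_score(responses, items, grid_n=81):
    """Expected a posteriori (EAP) ability estimate under a 2PL model,
    using a standard normal prior on theta and a fixed quadrature grid.

    responses: list of 0/1 item responses
    items: list of (a, b) discrimination/difficulty pairs
    """
    grid = [-4.0 + 8.0 * i / (grid_n - 1) for i in range(grid_n)]
    num = den = 0.0
    for theta in grid:
        prior = math.exp(-0.5 * theta * theta)  # N(0, 1) prior (unnormalized)
        like = 1.0
        for r, (a, b) in zip(responses, items):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            like *= p if r else (1.0 - p)
        w = prior * like
        num += theta * w
        den += w
    return num / den

# Hypothetical symptom items; two checklist versions calibrated together
# would place all respondents on this same theta scale.
items = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.0)]
print(round(eap_score([1, 1, 0], items), 2))
```

Because the score is a function of the calibrated item parameters rather than a raw sum, respondents who answered different (but linked) item sets remain directly comparable, which is how version and study differences can be taken into account.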


2018
Vol. 12 (4)
pp. 16
Author(s):
Julius Kristjan Björnsson

Linking error and equating methods on the national tests: Estimating change over time

Abstract
The Norwegian national tests, utilizing Item Response Theory (IRT) to determine item characteristics and measure changes over time, have been administered since 2014. The tests have turned out to be stable over time, and linking and equating have been carried out each year to make comparisons over time possible. Central to these methods is quantifying the uncertainty in the linking from year to year, as this must be known to determine whether a change from year to year is significant or not. This uncertainty is referred to as the linking error. This article presents some often-used methods to estimate the linking error and, on this basis, estimates the size of the error due to linking for English and Numeracy for the 5th and 8th grades. The article also presents an examination of possible bias in the linking. The main conclusion is that the linking error is acceptable, but nevertheless large enough that any evaluation of change over time must take it into account. It therefore remains important to use a test design and methods that yield unbiased estimates and that help minimize the linking error.

Keywords: IRT, national tests, equating, linking error
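One common way to quantify a linking error of the kind discussed above is mean-mean linking on anchor items: the linking constant is the mean difference in the difficulty estimates of items shared between two yearly calibrations, and the standard error of that mean across anchor items serves as the linking error. This is a generic sketch with made-up difficulty values, not the article's own estimation procedure or data:

```python
import math

def mean_mean_link(b_old, b_new):
    """Mean-mean linking constant between two IRT calibrations that
    share anchor items, plus its standard error (the linking error).

    b_old, b_new: difficulty estimates of the same anchor items from
    last year's and this year's calibration, in matching order.
    """
    diffs = [o - n for o, n in zip(b_old, b_new)]
    k = len(diffs)
    const = sum(diffs) / k
    var = sum((d - const) ** 2 for d in diffs) / (k - 1)
    link_error = math.sqrt(var / k)  # standard error of the mean shift
    return const, link_error

# Hypothetical anchor-item difficulties from two yearly calibrations.
const, err = mean_mean_link([-0.8, 0.1, 0.9, 1.4], [-0.7, 0.0, 1.1, 1.3])
print(round(const, 3), round(err, 3))  # -0.025 0.075
```

A year-on-year change in mean score would then be judged significant only if it exceeds what the combined sampling and linking error can explain, which is the point the conclusion makes.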


Author(s):
Thanh V. Tran
Tam Nguyen
Keith Chan

Item response theory (IRT) is a modern measurement theory that, as its name implies, focuses mainly on the item level as opposed to the test level. The underlying principle of IRT is that a relationship exists between an individual's ability and how that individual responds to items on a test. IRT offers item-level details not provided by classical approaches. The aims of this chapter are to (1) provide a brief overview of IRT, (2) demonstrate the basic features of IRT using existing data, and (3) walk the reader through the key steps in conducting an IRT analysis using IRTPRO®. IRT has also increasingly been used to develop, shorten, and refine psychosocial instruments.
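As a taste of the item-level detail the chapter refers to, the 2PL item information function tells you at which ability level an item measures most precisely. This sketch is in plain Python rather than IRTPRO, whose syntax is not shown here:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta.

    Information peaks where theta equals the difficulty b, and a larger
    discrimination a gives a taller, sharper peak. This per-item,
    per-ability precision is detail a single test-level reliability
    coefficient cannot provide.
    """
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Information is maximal where ability matches item difficulty, so this
# item measures mid-ability examinees far better than high-ability ones.
peak = item_information(0.0, a=1.5, b=0.0)
off_target = item_information(2.0, a=1.5, b=0.0)
print(peak > off_target)  # True
```

Inspecting these curves item by item is what makes IRT useful for shortening instruments: items whose information overlaps heavily, or peaks where few respondents sit, are candidates for removal.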

