Parallel tests viewed from the arrangement of item numbers and alternative answers

2019, Vol. 5(2), pp. 169-182
Author(s):  
Badrun Kartowagiran ◽  
Djemari Mardapi ◽  
Dian Normalitasari Purnama ◽  
Kriswantoro Kriswantoro

This research aims to prove that a parallel test can be constructed by randomizing the order of the test item numbers and/or the alternative answers. The study used the experimental method with a post-test-only non-equivalent control group design, involving junior high school students in Yogyakarta City, with a sample of 320 students of State Junior High School (SMPN) 5 Yogyakarta and 320 students of SMPN 8 Yogyakarta established using the stratified proportional random sampling technique. The instrument is a mathematics test in the form of an objective test consisting of five question packages, each containing 40 items with four alternative answers. The packages are randomized in item-number order, from the smallest to the largest and vice versa; the options in each item are likewise randomized from A to D and vice versa. Each item is analyzed using the Classical Test Theory and Item Response Theory approaches, while the differences among the five question packages are examined by applying the Kruskal-Wallis test to the item indices. The study reveals that item analysis under both the Classical Test Theory and Item Response Theory approaches shows no significant difference in the difficulty index from Package 1 through Package 5. Nevertheless, according to Classical Test Theory, the difficulty-index category of Packages 2 through 5 shifts when compared to Package 1, the original package, which is in general not a good package because it contains items that are too easy.
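A minimal sketch of this construction and comparison in Python follows, assuming dichotomously (0/1) scored responses. The function `reverse_package`, the toy item dictionaries, and the simulated score matrices are hypothetical stand-ins for illustration, not the authors' instruments or data.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

def reverse_package(items):
    """Build a parallel package by reversing the item order and,
    within each item, reversing the order of the options A-D."""
    return [
        {"stem": it["stem"],
         "options": it["options"][::-1],  # D..A instead of A..D
         "key": 3 - it["key"]}            # the answer key follows its option
        for it in reversed(items)
    ]

# Toy original package: 40 four-option items with random answer keys.
package1 = [{"stem": f"item {i + 1}",
             "options": ["A", "B", "C", "D"],
             "key": int(rng.integers(4))}
            for i in range(40)]
package2 = reverse_package(package1)

# Simulated 0/1 response matrices (students x items), one per package.
responses = {name: rng.binomial(1, 0.7, size=(320, 40))
             for name in ["pkg1", "pkg2", "pkg3", "pkg4", "pkg5"]}

# CTT difficulty index per item = proportion answering correctly.
difficulty = {name: mat.mean(axis=0) for name, mat in responses.items()}

# Kruskal-Wallis test across the five packages' difficulty indices.
stat, p = kruskal(*difficulty.values())
print(f"H = {stat:.3f}, p = {p:.3f}")  # p > .05 -> no significant difference
```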

Author(s):  
Eun Young Lim ◽  
Jang Hee Park ◽  
Il Kwon ◽  
Gue Lim Song ◽  
Sun Huh

The results of the 64th and 65th Korean Medical Licensing Examination were analyzed according to classical test theory and item response theory in order to assess the feasibility of applying item response theory to item analysis and to suggest its applicability to computerized adaptive testing. The correlation coefficients of the difficulty index, the discrimination index, and the ability parameter between the two kinds of analysis were obtained using the computer programs Analyst 4.0, Bilog, and Xcalibre. The correlation coefficients of the difficulty index were equal to or greater than 0.75; those of the discrimination index ranged from -0.023 to 0.753; those of the ability parameter were equal to or greater than 0.90. These results suggest that item analysis according to item response theory yields results comparable to those of classical test theory, except for the discrimination index. Since the ability parameter is most widely used in criterion-referenced tests, the high correlation between the ability parameter and the total score supports the validity of computerized adaptive testing based on item response theory.
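The CTT indices and their correlation with IRT parameters could be computed as sketched below. The actual study used Analyst 4.0, Bilog, and Xcalibre for estimation, so the simulated response matrix and the noise-perturbed b-parameters here are purely illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Toy 0/1 response matrix: 500 examinees x 40 items of varying easiness.
X = rng.binomial(1, rng.uniform(0.3, 0.9, size=40), size=(500, 40))
total = X.sum(axis=1)

# CTT difficulty index: proportion correct per item.
p_values = np.clip(X.mean(axis=0), 0.01, 0.99)  # clip to avoid log(0)

# CTT discrimination: point-biserial correlation of item with total score.
disc = np.array([pearsonr(X[:, j], total)[0] for j in range(X.shape[1])])

# Hypothetical IRT b-parameters (in practice estimated by Bilog/Xcalibre);
# here derived from the p-values plus noise, just to exercise the analysis.
b = -np.log(p_values / (1 - p_values)) + rng.normal(0, 0.2, size=40)

# Correlate the two frameworks' difficulty estimates (the sign is negative
# because a high p-value means an easy item, while a high b means a hard one).
r, _ = pearsonr(p_values, b)
print(f"CTT vs IRT difficulty correlation: r = {r:.2f}")
```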


Author(s):  
David L. Streiner ◽  
Geoffrey R. Norman ◽  
John Cairney

Over the past few decades, there has been a revolution in the approach to scale development. Called item response theory (IRT), this approach challenges the notions that scales must be long in order to be reliable and that the psychometric properties of a scale derived from one group of people cannot be applied to different groups. This chapter provides an introduction to IRT and discusses how it can be used to develop scales and to shorten existing scales developed using the more traditional approach of classical test theory. IRT can also yield scales with interval-level properties, unlike those derived from classical test theory. Further, it allows people to be compared to one another even though they may have completed different items, enabling computerized adaptive testing. The chapter concludes by discussing the advantages and disadvantages of IRT.
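For readers unfamiliar with the model behind these claims, a minimal sketch of the two-parameter logistic (2PL) item response function follows. The chapter itself is conceptual; the item parameters below are invented for illustration and show how examinees who answered different items can still be placed on one ability scale.

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that an
    examinee with ability theta answers an item with discrimination a
    and difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Two different items still place examinees on the same theta scale.
for theta in (-1.0, 0.0, 1.0):
    easy = p_correct(theta, a=1.2, b=-0.5)
    hard = p_correct(theta, a=1.2, b=1.0)
    print(f"theta={theta:+.1f}: P(easy)={easy:.2f}, P(hard)={hard:.2f}")
```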

