Error of measurement and the estimation of true score: Selected methods of Classical Test Theory

TESTFÓRUM ◽ 2015 ◽ Vol 4 (6) ◽ pp. 67-84
Author(s): Hynek Cígler, Martin Šmíra

One of the elementary skills involved in the interpretation of psychological test results is handling the error of measurement. Unfortunately, many Czech psychological tests do not include all the necessary information about the error of measurement (e.g., confidence intervals and standard errors of measurement for different purposes). Even when such information is available, other circumstances of the assessment often need to be considered and the method of estimation adjusted accordingly; it is not always possible to rely on the test developer in such cases. Since few applications for such computations are readily available to test users, they should be able to carry out many of the elementary computations by hand. This paper briefly summarizes common techniques for interpreting the error of measurement using confidence intervals in the framework of Classical Test Theory. The theory is supported by detailed examples intended to serve as a guide for practitioners.
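As a minimal illustration of the kind of computation the paper walks through, the sketch below applies two standard CTT formulas: the standard error of measurement around the observed score, and Kelley's regressed true-score estimate with its standard error of estimation. The function name and the numerical example (an IQ-type scale) are assumptions for illustration only and are not taken from the article.

```python
import math

def ctt_intervals(x, mean_x, sd_x, reliability, z=1.96):
    """Classical test theory score intervals (illustrative sketch).

    x           -- observed score of the examinee
    mean_x      -- mean of the norm group
    sd_x        -- standard deviation of the norm group
    reliability -- reliability coefficient (e.g. Cronbach's alpha)
    z           -- critical value for the desired confidence level
    """
    # Standard error of measurement: spread of observed scores around the true score.
    sem = sd_x * math.sqrt(1.0 - reliability)

    # Simple interval centred on the observed score.
    ci_observed = (x - z * sem, x + z * sem)

    # Kelley's regressed estimate of the true score.
    true_hat = mean_x + reliability * (x - mean_x)

    # Standard error of estimation and interval centred on the estimated true score.
    se_est = sd_x * math.sqrt(reliability * (1.0 - reliability))
    ci_true = (true_hat - z * se_est, true_hat + z * se_est)

    return sem, ci_observed, true_hat, ci_true

# Hypothetical example: IQ-type scale (mean 100, SD 15), reliability .90, observed score 125.
print(ctt_intervals(x=125, mean_x=100, sd_x=15, reliability=0.90))
```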

2017 ◽ Vol 79 (6) ◽ pp. 1198-1209
Author(s): Tenko Raykov, Dimiter M. Dimitrov, George A. Marcoulides, Michael Harrison

This note highlights and illustrates the links between item response theory and classical test theory in the context of polytomous items. An item response modeling procedure is discussed that can be used for point and interval estimation of the individual true score on any item in a measuring instrument or item set following the popular and widely applicable graded response model. The method contributes to the body of research on the relationships between classical test theory and item response theory and is illustrated on empirical data.
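The note describes its procedure at a general level. The sketch below only illustrates the underlying quantity: the expected item score (item true score) under the graded response model, and how an interval for the latent trait maps into an interval for that score, since the expected score is monotone in the trait. The item parameters, trait estimate, and its standard error are invented for illustration, and the interval-mapping step is a simple stand-in rather than the authors' exact method.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities under Samejima's graded response model.

    theta -- latent trait value
    a     -- item discrimination
    b     -- ordered category boundary parameters (length K for K+1 categories)
    """
    b = np.asarray(b, dtype=float)
    # Cumulative probabilities of responding in category k or higher.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    cum = np.concatenate(([1.0], p_star, [0.0]))
    # Probability of each specific category 0..K.
    return cum[:-1] - cum[1:]

def grm_item_true_score(theta, a, b):
    """Expected item score (item true score) at a given trait level."""
    probs = grm_category_probs(theta, a, b)
    return np.dot(np.arange(len(probs)), probs)

# Hypothetical 5-category item: a = 1.4, boundaries b = (-1.5, -0.5, 0.6, 1.8).
a, b = 1.4, [-1.5, -0.5, 0.6, 1.8]
theta_hat, se_theta = 0.35, 0.28          # assumed trait estimate and its standard error
point = grm_item_true_score(theta_hat, a, b)
# Because the expected score is monotone in theta, an interval for theta maps
# directly into an interval for the item true score.
lo = grm_item_true_score(theta_hat - 1.96 * se_theta, a, b)
hi = grm_item_true_score(theta_hat + 1.96 * se_theta, a, b)
print(point, (lo, hi))
```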


2017 ◽ Vol 79 (4) ◽ pp. 796-807
Author(s): Tenko Raykov, Dimiter M. Dimitrov, George A. Marcoulides, Michael Harrison

Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note contributes to highlighting their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person true score on any item in a unidimensional multicomponent measuring instrument or item set under consideration. The method adds to the body of research on the connections between classical test theory and item response theory. The outlined estimation approach is illustrated on empirical data.
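As a rough sketch of what a latent-variable-based true-score estimate for a single item can look like, the code below assumes a one-factor model with known parameters: a Bartlett factor score for the person, the item's intercept plus loading times that score as the point estimate, and the factor score's conditional standard error propagated through the loading for the interval. All parameter values are hypothetical, and this generic construction is not necessarily the specific procedure developed in the note.

```python
import numpy as np

def item_true_score_onefactor(x, mu, lam, psi, item):
    """Point and interval estimate of one item's true score under a single-factor model.

    x    -- observed item scores of one person (1-D array)
    mu   -- item intercepts (means)
    lam  -- factor loadings
    psi  -- unique (error) variances
    item -- index of the item whose true score is wanted
    """
    x, mu, lam, psi = map(np.asarray, (x, mu, lam, psi))
    # Bartlett (maximum likelihood) factor score for this person.
    w = lam / psi
    info = float(w @ lam)                 # Lambda' Psi^-1 Lambda
    eta_hat = float(w @ (x - mu)) / info
    se_eta = info ** -0.5                 # conditional SE of the factor score
    # The item true score is a linear function of the factor: mu_j + lambda_j * eta.
    tau_hat = mu[item] + lam[item] * eta_hat
    se_tau = abs(lam[item]) * se_eta
    return tau_hat, (tau_hat - 1.96 * se_tau, tau_hat + 1.96 * se_tau)

# Hypothetical four-item scale with assumed (previously estimated) parameters.
mu  = [3.0, 2.8, 3.2, 3.1]
lam = [0.9, 0.7, 0.8, 0.6]
psi = [0.4, 0.5, 0.45, 0.6]
print(item_true_score_onefactor([4, 4, 3, 5], mu, lam, psi, item=0))
```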


2014 ◽ Vol 35 (4) ◽ pp. 201-211
Author(s): André Beauducel, Anja Leue

It is shown that a minimal assumption should be added to the assumptions of Classical Test Theory (CTT) in order to have positive inter-item correlations, which are regarded as a basis for the aggregation of items. Moreover, it is shown that the assumption of zero correlations between the error score estimates is substantially violated in the population of individuals when the number of items is small. Instead, a negative correlation between error score estimates occurs. The reason for the negative correlation is that the error score estimates for different items of a scale are based on insufficient true score estimates when the number of items is small. A test of the assumption of uncorrelated error score estimates by means of structural equation modeling (SEM) is proposed that takes this effect into account. The SEM-based procedure is demonstrated by means of empirical examples based on the Edinburgh Handedness Inventory and the Eysenck Personality Questionnaire-Revised.
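The negative correlation between error score estimates for short scales is easy to reproduce by simulation. The sketch below assumes essentially tau-equivalent items with uncorrelated errors and uses the person's item mean as a crude true-score estimate; the estimated item errors then correlate at about -1/(k-1) for k items, even though the generating errors are uncorrelated. This is only a demonstration of the effect, not the SEM-based test proposed by the authors.

```python
import numpy as np

rng = np.random.default_rng(2014)

def error_score_correlation(n_items, n_persons=100_000):
    """Correlation between error score estimates of two items of a tau-equivalent scale.

    Items are simulated as x_i = t + e_i with uncorrelated, equally variable errors;
    the true score is estimated by the person's item mean, and the error score
    estimate for item i is the deviation of x_i from that mean.
    """
    t = rng.normal(0.0, 1.0, n_persons)                    # common true score
    e = rng.normal(0.0, 1.0, (n_persons, n_items))         # uncorrelated item errors
    x = t[:, None] + e
    t_hat = x.mean(axis=1)                                 # crude true score estimate
    e_hat = x - t_hat[:, None]                             # error score estimates
    return np.corrcoef(e_hat[:, 0], e_hat[:, 1])[0, 1]

# With few items the estimated errors are clearly negatively correlated (about -1/(k-1)),
# and the effect vanishes only as the number of items grows.
for k in (2, 4, 10, 40):
    print(k, round(error_score_correlation(k), 3))
```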

