Missing item responses in latent growth analysis: Item response theory versus classical test theory

2020 · Vol 29 (4) · pp. 996-1014
Author(s): R Gorter, J-P Fox, I Eekhout, MW Heymans, JWR Twisk

In medical research, repeated questionnaire data are often used to measure and model latent variables across time. Through a novel imputation method, latent growth analysis under classical test theory and under item response theory is compared directly, while also accounting for the effects of missing item responses. A simulation study examines, for both classical test theory and item response theory, the effects of item missingness on latent growth parameter estimates given longitudinal item response data; several missing data mechanisms and conditions are evaluated. The additional effects of missingness on differences between classical test theory- and item response theory-based latent growth analysis are assessed directly by rescaling the multiple imputations. The multiple imputation method generates latent variable and item scores from the posterior predictive distributions to account for missing item responses in observed multilevel binary response data. It is shown that a multivariate probit model, used as a novel imputation model, improves the latent growth analysis when data are missing at random (MAR) under classical test theory. The study also shows that the latent growth model parameter estimates obtained under item response theory show less bias and have smaller MSEs than the estimates obtained under classical test theory.
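The following is a minimal Python sketch of the kind of setup this abstract describes: longitudinal binary item responses generated from a probit IRT measurement model over a linear latent growth trajectory, with responses made missing at random (MAR) based on the fully observed baseline sum score. All dimensions, parameter values, and variable names are illustrative assumptions rather than values from the paper, and the paper's actual imputation step (drawing item scores from a multivariate probit posterior predictive distribution) is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Illustrative dimensions (assumed, not from the paper)
n_persons, n_items, n_times = 500, 10, 4

# Linear latent growth: theta_it = b0_i + b1_i * t, with random intercepts/slopes
b0 = rng.normal(0.0, 1.0, n_persons)
b1 = rng.normal(0.3, 0.2, n_persons)
theta = b0[:, None] + b1[:, None] * np.arange(n_times)        # (persons, times)

# Probit IRT measurement model with assumed item parameters
a_disc = rng.uniform(0.8, 1.5, n_items)                       # discriminations
b_diff = rng.normal(0.0, 0.7, n_items)                        # difficulties
p = norm.cdf(a_disc * (theta[..., None] - b_diff))            # (persons, times, items)
y = (rng.uniform(size=p.shape) < p).astype(float)

# MAR missingness: items at t >= 1 are more likely to be skipped by persons
# with a low observed baseline sum score; baseline (t = 0) stays complete,
# so missingness depends only on observed data
base_sum = y[:, 0, :].sum(axis=1)
miss_prob = norm.cdf(-1.2 - 0.2 * (base_sum - n_items / 2))
mask = rng.uniform(size=y.shape) < miss_prob[:, None, None]
mask[:, 0, :] = False
y[mask] = np.nan

# CTT-style outcome per time point: sum score over the observed items
# (the paper instead multiply-imputes the missing item scores before analysis)
ctt_score = np.nansum(y, axis=2)
print(ctt_score.shape, np.isnan(y).mean())                    # (500, 4), share missing
```

Fitting a latent growth model to `ctt_score` versus to IRT-based estimates of `theta` on such simulated data is the kind of comparison the simulation study makes across its missing data conditions.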

2019 · Vol 29 (4) · pp. 962-986
Author(s): R Gorter, J-P Fox, G Ter Riet, MW Heymans, JWR Twisk

Latent growth models are often used to measure individual trajectories representing change over time. The characteristics of the individual trajectories depend on the variability in the longitudinal outcomes. In many medical and epidemiological studies, the individual health outcomes cannot be observed directly and are observed indirectly through indicators (i.e. items of a questionnaire). An item response theory or a classical test theory measurement model is then required, and the choice can influence the latent growth estimates. In this study, this influence is assessed directly under various conditions by estimating latent growth parameters on a common scale for item response theory and classical test theory, using a novel plausible value method in combination with Markov chain Monte Carlo. The latent outcomes are treated as missing data, and plausible values are generated from the corresponding posterior distribution, separately for item response theory and classical test theory. These plausible values are linearly transformed to a common scale. A Markov chain Monte Carlo method was developed to estimate the latent growth and measurement model parameters simultaneously using this plausible value technique. It is shown that individual trajectories estimated with item response theory provide a more detailed description of individual change over time than those estimated with classical test theory, since item response patterns (item response theory) are more informative about the health measurements than sum scores (classical test theory).
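As a rough illustration of the plausible value idea (not the authors' MCMC implementation, which estimates the growth and measurement parameters jointly), the sketch below draws plausible values of a latent trait for a single response pattern under a probit IRT model with known item parameters, and then applies a linear transformation to put them on an assumed sum-score (CTT) metric. The function name, item parameters, and target moments are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def theta_plausible_values(responses, a, b, n_draws=5):
    """Draw plausible values of theta for one binary response pattern.

    Posterior = N(0, 1) prior x probit IRT likelihood, evaluated on a grid
    and sampled with the normalized posterior as weights. Missing responses
    (np.nan) simply drop out of the likelihood.
    """
    grid = np.linspace(-4, 4, 401)
    p = norm.cdf(a[:, None] * (grid[None, :] - b[:, None]))      # (items, grid)
    obs = ~np.isnan(responses)
    like = np.prod(np.where(responses[obs, None] == 1, p[obs], 1 - p[obs]), axis=0)
    post = like * norm.pdf(grid)
    post /= post.sum()
    return rng.choice(grid, size=n_draws, p=post)

# Hypothetical item parameters and one response pattern with a missing item
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])      # discriminations
b = np.array([-1.0, -0.3, 0.0, 0.4, 1.2])    # difficulties
resp = np.array([1.0, 1.0, np.nan, 0.0, 0.0])

pv = theta_plausible_values(resp, a, b)

# Linear transformation of the IRT-scale plausible values to an assumed
# sum-score (CTT) metric, so growth parameters can be compared on one scale
theta_mean, theta_sd = 0.0, 1.0              # moments of the IRT scale (prior)
sum_mean, sum_sd = 2.5, 1.3                  # hypothetical sum-score moments
pv_common = sum_mean + sum_sd * (pv - theta_mean) / theta_sd
print(pv.round(2), pv_common.round(2))
```

In the study itself, such draws are generated within the MCMC sampler and the growth model is fitted to each set of rescaled plausible values, so that the IRT- and CTT-based trajectories can be compared on the common scale.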


Author(s): David L. Streiner, Geoffrey R. Norman, John Cairney

Over the past few decades, there has been a revolution in the approach to scale development. Called item response theory (IRT), this approach challenges the notions that scales must be long in order to be reliable and that psychometric properties of a scale derived from one group of people cannot be applied to different groups. This chapter provides an introduction to IRT and discusses how it can be used to develop scales and to shorten existing scales that were developed using the more traditional approach of classical test theory. IRT can also result in scales that have interval-level properties, unlike those derived from classical test theory. Further, it allows people to be compared to one another even though they may have completed different items, which makes computer-adaptive testing possible. The chapter concludes by discussing the advantages and disadvantages of IRT.
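To make the scale-shortening and adaptive-testing points concrete, here is a small Python sketch, under assumed 2PL item parameters, of the item information function and a greedy maximum-information item selection step, which is the core idea behind computer-adaptive testing. The item bank and number of items administered are illustrative, and a real CAT would re-estimate the trait after each response.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of endorsing an item under a 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information an item provides about theta under the 2PL model."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

# Hypothetical item bank: discriminations a and difficulties b
a = np.array([1.6, 0.7, 1.2, 2.0, 0.9, 1.4])
b = np.array([-1.5, -0.5, 0.0, 0.3, 1.0, 1.8])

# Greedy adaptive selection: at the provisional trait estimate, administer the
# unused item with the largest information; a short, well-targeted set of items
# can then measure about as precisely as a much longer fixed-form scale
theta_hat, administered = 0.0, []
for _ in range(3):
    info = item_information(theta_hat, a, b)
    info[administered] = -np.inf                 # do not reuse items
    administered.append(int(np.argmax(info)))
    # (a real CAT would update theta_hat from the observed responses here)

print("items administered:", administered)
```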

