The Linear Logistic Test Model

Rasch Models ◽  
1995 ◽  
pp. 131-155 ◽  
Author(s):  
Gerhard H. Fischer

Assessment ◽  
2017 ◽  
Vol 26 (8) ◽  
pp. 1524-1539 ◽  
Author(s):  
Bao Sheng Loe ◽  
John Rust

The Elithorn perceptual maze test is widely used in clinical research and practice. However, there is little evidence of its psychometric properties, and its application is limited by the technical difficulty of developing more mazes. The current research aims to adopt a rigorous approach to evaluate 18 mazes that were automatically generated by a novel R software package. Various item response theory models were employed to examine the difficulty parameters. The findings suggested that the data best fitted the Rasch model. The linear logistic test model revealed meaningful contributions of the hypothesized sources of maze difficulty. Additionally, the linear logistic test model plus error was considered the most parsimonious model. The Automatic Perceptual Maze Test was moderately correlated with a nonverbal intelligence test. By introducing more mazes to provide adequate information on participants’ ability at all levels, the Automatic Perceptual Maze Test promises future clinical and research utility for the study of cognitive performance.
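The linear logistic test model referenced above decomposes each item's Rasch difficulty into a weighted sum of basic-parameter contributions, β_i = Σ_k q_ik · η_k. A minimal sketch of that decomposition, with an entirely hypothetical weight matrix and basic parameters:

```python
import numpy as np

# Hypothetical weight matrix Q: rows = 4 maze items, columns = 2 cognitive
# operations (e.g., number of decision points, number of path crossings).
Q = np.array([
    [1, 0],
    [2, 1],
    [1, 2],
    [3, 1],
], dtype=float)

# Hypothetical basic parameters eta: difficulty contributed per operation.
eta = np.array([0.5, 0.8])

# LLTM item difficulties: beta_i = sum_k Q[i, k] * eta[k]
beta = Q @ eta
print(beta)  # [0.5 1.8 2.1 2.3]
```

Fitting the model means estimating η from response data given a fixed Q; the sketch only shows the structural part that distinguishes the LLTM from an unrestricted Rasch model.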


2017 ◽  
Vol 8 ◽  
Author(s):  
Purya Baghaei ◽  
Christine Hohensinn

2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Thomas Castelain ◽  
María Paula Villarreal Galera ◽  
Mauricio Molina Delgado ◽  
Odir Antonio Rodríguez-Villagra

This article aims to test, through a linear logistic test model (LLTM), a set of cognitive operations (rules) that influence item difficulty on a fluid intelligence test across different student samples. In Study 1, school students (n = 1751) were randomly assigned to a "study" sample or a "validation" sample. The first served to test the proposed set of rules as variables that could affect item difficulty, and the second provided validity evidence for those rules. In Study 2, university students (n = 162) were recruited to determine whether the influence of the rules on item difficulty generalized to this new group. Study 1 provides evidence for the validity of the set of cognitive operations underlying the item-solving process, while Study 2 suggests individual differences in examinees' solution strategies. The same analytic strategy could be applied to the construction of other tests, and could help educators, researchers, and decision makers in their pursuit of increasingly refined instruments.
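The rule-based approach described above can be illustrated with a small sketch: each item's difficulty is the sum of contributions of the cognitive rules it requires, and the probability of a correct response follows the Rasch logistic form. The rule names, counts, and parameter values below are all hypothetical:

```python
import math

# Hypothetical rule counts per item: how many times each cognitive
# operation must be applied to solve the item.
rules_per_item = [
    {"abstraction": 1, "rotation": 0},
    {"abstraction": 1, "rotation": 2},
]

# Hypothetical LLTM basic parameters: difficulty contributed by one
# application of each rule.
rule_difficulty = {"abstraction": 0.4, "rotation": 0.9}

def item_difficulty(rules):
    # LLTM: item difficulty is the weighted sum of rule contributions.
    return sum(count * rule_difficulty[r] for r, count in rules.items())

def p_correct(theta, beta):
    # Rasch model: probability that a person with ability theta solves
    # an item with difficulty beta.
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

betas = [item_difficulty(r) for r in rules_per_item]
print([round(b, 2) for b in betas])  # [0.4, 2.2]
```

If the rule set is correctly specified, the difficulties reconstructed this way should closely match item difficulties estimated from an unrestricted Rasch model, which is the comparison both studies rely on.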


2002 ◽  
Vol 26 (3) ◽  
pp. 271-285 ◽  
Author(s):  
Frank Rijmen ◽  
Paul De Boeck

1987 ◽  
Vol 12 (4) ◽  
pp. 369-381 ◽  
Author(s):  
Kathy E. Green ◽  
Richard M. Smith

This paper compares two methods of estimating component difficulties for dichotomous test data. Simulated data are used to study the effects of sample size, collinearity, a measurement disturbance, and multidimensionality on the estimation of component difficulties. The two methods of estimation used in this study were conditional maximum likelihood estimation of parameters specified by the linear logistic test model (LLTM) and estimated Rasch item difficulties regressed on component frequencies. The results of the analysis indicate that both methods produce similar results in all comparisons. Neither of the methods worked well in the presence of an incorrectly specified structure or collinearity in the component frequencies. However, both methods appear to be fairly robust in the presence of measurement disturbances as long as there is a large number of cases (n = 1,000). For the case of fitting data with uncorrelated component frequencies, 30 cases were sufficient to recover the generating parameters accurately.
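The second estimation method compared above, regressing estimated Rasch item difficulties on component frequencies, amounts to an ordinary least-squares fit. A minimal sketch with hypothetical numbers, shown in the noise-free case where the regression recovers the generating component difficulties exactly:

```python
import numpy as np

# Hypothetical component-frequency matrix: rows = 5 items, columns = 2
# components; entries count how often each component occurs in an item.
F = np.array([
    [1, 0],
    [0, 1],
    [1, 1],
    [2, 1],
    [1, 2],
], dtype=float)

# Hypothetical Rasch item difficulties generated from true component
# difficulties eta = (0.3, 0.7), here without estimation noise.
beta_hat = F @ np.array([0.3, 0.7])

# OLS regression of item difficulties on component frequencies recovers
# the component difficulties exactly in this noise-free, full-rank case.
eta_hat, *_ = np.linalg.lstsq(F, beta_hat, rcond=None)
print(np.round(eta_hat, 6))  # [0.3 0.7]
```

The paper's collinearity finding corresponds to F becoming (near-)rank-deficient: when component frequencies are strongly correlated, the least-squares solution is unstable, and the conditional maximum likelihood estimates of the LLTM suffer analogously.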


2016 ◽  
Vol 38 (4) ◽  
Author(s):  
Rainer W. Alexandrowicz

One important tool for assessing whether a data set can be described equally well with a Rasch Model (RM) or a Linear Logistic Test Model (LLTM) is the Likelihood Ratio Test (LRT). In practical applications this test seems to reject the null hypothesis too often, even when the null hypothesis is true. Aside from obvious reasons, such as inadequately restrictive linear restrictions formulated in the LLTM or the RM not being true, doubts have arisen whether the test holds the nominal type-I error risk, that is, whether its theoretically derived sampling distribution applies. Therefore, the present contribution explores the sampling distribution of the likelihood ratio test comparing a Rasch model with a Linear Logistic Test Model. Particular attention is paid to the issue of similar columns in the weight matrix W of the LLTM: although full column rank of this matrix is a technical requirement, columns may differ in only a few entries, which in turn might affect the sampling distribution of the test statistic. Therefore, a system for generating weight matrices with similar columns was established and tested in a simulation study. The results were twofold: in general, the matrices considered in the study yielded LRT results where the empirical alpha showed only spurious deviations from the nominal alpha, so the theoretically chosen alpha seems maintained up to random variation. Yet one specific matrix clearly indicated a highly increased type-I error risk: the empirical alpha was at least twice the nominal alpha when using this weight matrix. This shows that the internal structure of the weight matrix must indeed be considered when applying the LRT for testing the LLTM. Best practice would be to perform a simulation or bootstrap/re-sampling study for the weight matrix under consideration in order to rule out a misleadingly significant result due to reasons other than true model misfit.
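The likelihood ratio test discussed above compares the conditional log-likelihoods of the two nested models; the statistic 2(ln L_RM − ln L_LLTM) is asymptotically chi-square distributed with degrees of freedom equal to the difference in the numbers of free parameters. A sketch with hypothetical log-likelihood values:

```python
# Hypothetical conditional log-likelihoods for the two nested models;
# the RM (the more general model) can never fit worse than the LLTM
# it nests.
loglik_rm = -1032.4    # Rasch model, k = 10 items -> 9 free difficulties
loglik_lltm = -1038.1  # LLTM with p = 4 basic parameters

# Likelihood ratio statistic and degrees of freedom.
lr = 2.0 * (loglik_rm - loglik_lltm)
df = 9 - 4

# Chi-square critical value for df = 5 at alpha = 0.05.
CRIT_DF5_05 = 11.070
reject_lltm = lr > CRIT_DF5_05
print(round(lr, 2), df, reject_lltm)  # 11.4 5 True
```

The article's caution applies precisely here: with an unfavorable weight matrix W, the chi-square reference distribution can understate the true type-I error risk, so a parametric bootstrap of the statistic under the fitted LLTM is the safer way to calibrate the decision.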

