Random Item Effects Models

2010 ◽  
pp. 193-225
Author(s):  
Jean-Paul Fox

2019 ◽  
Author(s):  
Steven Langsford ◽  
Andrew T Hendrickson ◽  
Amy Perfors ◽  
Lauren Kennedy ◽  
Danielle Navarro

Understanding and measuring sentence acceptability is of fundamental importance for linguists, but although many measures for doing so have been developed, relatively little is known about some of their psychometric properties. In this paper we evaluate within- and between-participant test-retest reliability on a wide range of measures of sentence acceptability. Doing so allows us to estimate how much of the variability within each measure is due to factors including participant-level individual differences, sample size, response styles, and item effects. The measures examined include Likert scales, two versions of forced-choice judgments, magnitude estimation, and a novel measure based on Thurstonian approaches in psychophysics. We reproduce previous findings of high between-participant reliability within and across measures, and extend these results by showing generally high reliability within individual items and individual people. Our results indicate that Likert scales and the Thurstonian approach produce the most stable and reliable acceptability measures, and do so with smaller sample sizes than the other measures. Moreover, their agreement with each other suggests that the discreteness of the Likert scale does not impose substantial structure on the resulting acceptability judgments.
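
A minimal sketch of the Thurstonian idea behind the novel measure, assuming a simple Case V setup (the latent values in `mu`, the noise level, and the sentence pairs are illustrative assumptions, not the paper's stimuli): each sentence has a latent acceptability, and a forced-choice judgment prefers whichever of two noisy samples comes out larger.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent acceptability values for four sentences (illustrative only).
mu = np.array([-1.0, -0.3, 0.2, 1.1])

def forced_choice(i, j, noise=1.0, n=500):
    """Simulate n Thurstonian forced-choice trials between sentences i and j:
    each trial draws one noisy sample per sentence and prefers the larger."""
    xi = mu[i] + noise * rng.standard_normal(n)
    xj = mu[j] + noise * rng.standard_normal(n)
    return (xi > xj).mean()  # proportion of trials preferring sentence i

print(forced_choice(3, 0))  # well-separated pair: preference near 1.0
print(forced_choice(2, 1))  # close pair: preference near chance
```

Inverting this generative model from observed choice proportions recovers the latent acceptability scale, which is what makes the approach comparable to direct rating measures such as Likert scales.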


1985 ◽  
Vol 21 (6) ◽  
pp. 1120-1131 ◽  
Author(s):  
David F. Bjorklund ◽  
Barbara R. Bjorklund

2003 ◽  
Vol 28 (4) ◽  
pp. 369-386 ◽  
Author(s):  
Wim Van den Noortgate ◽  
Paul De Boeck ◽  
Michel Meulders

In IRT models, responses are explained on the basis of person and item effects. Person effects are usually defined as a random sample from a population distribution, so regular IRT models can be formulated as multilevel models with a within-person part and a between-person part. In a similar way, item effects can be treated as random parameters, yielding multilevel models with a within-item part and a between-item part. Combining a multilevel model with random person effects and one with random item effects leads to a cross-classification multilevel model, which can be of interest for IRT applications. The use of cross-classification multilevel logistic models is illustrated with an educational measurement application.
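
As a sketch of the model class described here (the notation is assumed for illustration, not taken from the paper), a Rasch-type cross-classified multilevel logistic model for the response of person p to item i can be written as:

```latex
% Both person ability and item difficulty are treated as random effects.
\operatorname{logit} P(Y_{pi} = 1) = \beta_0 + \theta_p - b_i,
\qquad \theta_p \sim N(0, \sigma_\theta^2), \qquad b_i \sim N(0, \sigma_b^2)
```

Persons and items are crossed rather than nested: each response belongs simultaneously to one person cluster and one item cluster, which is what makes the multilevel model cross-classified.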


2019 ◽  
Author(s):  
Nathan J. Evans ◽  
Gabriel Tillman ◽  
Eric-Jan Wagenmakers

A key assumption of models of human cognition is that there is variability in information processing. Evidence accumulation models (EAMs) commonly assume two broad sources of variability in information processing: within-trial variability, which is thought to reflect moment-to-moment fluctuations in perceptual processes, and between-trial variability, which is thought to reflect variability in slower-changing processes like attention, or systematic variability between the stimuli on different trials. Recently, Ratcliff, Voskuilen, and McKoon (2018) claimed to "provide direct evidence that external noise is, in fact, required to explain the data from five simple two-choice decision tasks" (p. 33), suggesting that at least some portion of the between-trial variability in information processing is due to "noise". However, we argue that Ratcliff et al. (2018) failed to distinguish between two different potential sources of between-trial variability: random (i.e., "external noise") and systematic (e.g., item effects). Contrary to the claims of Ratcliff et al. (2018), we show that "external noise" is not required to explain their findings, as the same trends in the data can be produced when only item effects are present. Furthermore, we contend that the concept of "noise" within cognitive models merely serves as a convenience parameter for sources of variability that we know exist but are unable to account for. Therefore, we question the usefulness of experiments aimed at testing the general existence of "random" variability, and instead suggest that future research should attempt to replace the random-variability terms within cognitive models with actual explanations of the underlying process.
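
To make the random-versus-systematic distinction concrete, here is a minimal simulation sketch (the boundary, noise, item count, and drift parameters are illustrative assumptions, not values from either paper): a basic diffusion model is driven either by a drift rate drawn fresh on every trial ("external noise") or by drift rates fixed per item and reused across trials (item effects). Both variants yield the usual signature of between-trial drift variability, such as errors slower than correct responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drifts, a=1.0, z=0.5, s=1.0, dt=1e-3):
    """Euler-Maruyama simulation of one diffusion trial per drift rate.
    Returns response times and choices (1 = upper boundary, 0 = lower)."""
    rts, choices = [], []
    for v in drifts:
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        choices.append(int(x >= a))
    return np.array(rts), np.array(choices)

n_trials = 2000
# Random between-trial variability: a fresh drift rate on every trial.
drifts_random = rng.normal(1.0, 0.8, size=n_trials)
# Systematic item effects: one fixed drift per item, items reused across trials.
item_drifts = rng.normal(1.0, 0.8, size=40)
drifts_items = rng.choice(item_drifts, size=n_trials)

for label, drifts in [("external noise", drifts_random),
                      ("item effects", drifts_items)]:
    rts, choices = simulate_ddm(drifts)
    print(f"{label}: accuracy = {choices.mean():.3f}, "
          f"mean correct RT = {rts[choices == 1].mean():.3f} s, "
          f"mean error RT = {rts[choices == 0].mean():.3f} s")
```

At the level of these marginal statistics the two sources of variability are indistinguishable; telling them apart requires repeated presentations of the same items, which is the crux of the argument above.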


2020 ◽  
Author(s):  
Hugh Rabagliati

Sound symbolism refers to the intuition that a word's sound should match the characteristics of its referents – e.g., kiki should label something spiky – and its prevalence and systematicity provide compelling evidence for an intuitive mapping between linguistic form and meaning. Striking recent work (Hung, Styles, & Hsieh, 2017) suggests that these mappings may have an unconscious basis, such that participants can compute the fit between a word's sound and an object's shape when both are masked from awareness. This surprising finding was replicated in a pre-registered report by Heyman, Maerten, Vankrunkelsven, Voorspoels, and Moors (2019), with potentially far-reaching implications for the role of awareness in language processing (Hassin, 2013; Rabagliati, Robertson, & Carmel, 2018). However, as I demonstrate, the finding is an artifact of the stimuli used. Once item effects are accounted for, these data provide no evidence that sound symbolism, and language more generally, can be processed without awareness.
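
For intuition about the item-effects artifact, here is a minimal sketch with fabricated illustrative numbers (the participant count, item count, and per-item effect sizes are assumptions, not the actual data from either study): when an apparent congruency effect is carried entirely by a couple of idiosyncratic items, averaging over items suggests a general effect, while a by-item breakdown shows there is nothing general to explain.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 30 participants x 8 word-shape items, one accuracy
# score per cell. The "effect" lives entirely in two idiosyncratic items.
item_effect = np.array([0.15, 0.12, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
congruent = 0.5 + item_effect + rng.normal(0.0, 0.05, (30, 8))
incongruent = 0.5 + rng.normal(0.0, 0.05, (30, 8))
diff = congruent - incongruent

# Aggregating over items (a by-participant analysis) suggests a reliable effect...
print(f"mean effect per participant: {diff.mean(axis=1).mean():.3f}")

# ...but the by-item means reveal it is carried by two items alone.
print("per-item effects:", diff.mean(axis=0).round(2))
```

Treating items as random effects, for instance in a mixed-effects model, formalizes this check by asking whether the effect generalizes beyond the particular stimuli sampled.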


2010 ◽  
Vol 62 (1) ◽  
pp. 1-18 ◽  
Author(s):  
Emily Freeman ◽  
Andrew Heathcote ◽  
Kerry Chalmers ◽  
William Hockley
