Diagnosing Student Node Mastery: Impact of Varying Item Response Modeling Approaches

2021 ◽  
Vol 6 ◽  
Author(s):  
Susan Embretson

An important feature of learning maps, such as Dynamic Learning Maps and Enhanced Learning Maps, is their ability to accommodate nationwide specifications of standards, such as the Common Core State Standards, within the map nodes along with relevant instruction. These features are especially useful for remedial instruction, provided that accurate diagnosis is available. Year-end achievement tests are potentially useful in this regard. Unfortunately, total scores or area sub-scores are neither sufficiently precise nor sufficiently reliable to diagnose mastery at the node level, especially when students vary in their patterns of mastery. The current study examines varying approaches to using the year-end test for diagnosis. Prediction at the item level was obtained using parameters from varying item response theory (IRT) models. The results support predicting mastery with mixture-class IRT models, in which either item or node-score difficulties vary for students in different latent classes. Not only did the mixture models fit better, but trait score reliability was also maintained for the predictions of node mastery.
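The mixture-class idea can be sketched numerically: under a mixture Rasch model, each latent class carries its own item-difficulty vector, and a student's response pattern yields a posterior over classes that can then inform node-mastery prediction. The function names, the two-class setup, and all parameter values below are illustrative assumptions, not the study's actual models:

```python
import math

def rasch_p(theta, beta):
    """Rasch probability of a correct response for ability theta
    and item difficulty beta (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(beta - theta))

def class_posterior(responses, theta, class_betas, priors):
    """Posterior probability of each latent class given a binary
    response pattern, where each class has its own item-difficulty
    vector (the mixture-Rasch assumption)."""
    likelihoods = []
    for betas in class_betas:
        like = 1.0
        for x, b in zip(responses, betas):
            p = rasch_p(theta, b)
            like *= p if x == 1 else (1.0 - p)
        likelihoods.append(like)
    joint = [like * pi for like, pi in zip(likelihoods, priors)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical example: three items, all answered correctly.
# Class 0 finds these items easy (beta = -1); class 1 finds them
# hard (beta = +1), so the all-correct pattern favors class 0.
posterior = class_posterior(
    responses=[1, 1, 1],
    theta=0.0,
    class_betas=[[-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]],
    priors=[0.5, 0.5],
)
```

In a full analysis the class-specific difficulties and priors would be estimated from data (e.g., by EM), and node-mastery predictions would be formed within the most probable class; this sketch only shows the classification step.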

2021 ◽  
Vol 6 ◽  
Author(s):  
John Fitzgerald Ehrich ◽  
Steven J. Howard ◽  
Sahar Bokosmaty ◽  
Stuart Woodcock

The accurate measurement of the cognitive load a learner encounters in a given task is critical to the understanding and application of Cognitive Load Theory (CLT). However, as a covert psychological construct, cognitive load represents a challenging measurement issue. To date, this challenge has been met mostly by subjective self-reports of the cognitive load experienced in a learning situation. In this paper, we find that a valid and reliable index of cognitive load can be obtained through item response modeling of student performance. Specifically, estimates derived from item response modeling of relative difficulty (i.e., the difference between item difficulty and person ability locations) can function as a linear measure that combines the key components of cognitive load (i.e., mental load, mental effort, and performance). This index of cognitive load (relative difficulty) was tested for criterion (concurrent) validity against Year 2 learners' (N = 91) performance on standardized educational numeracy and literacy assessments. Learners' working memory (WM) capacity significantly predicted our proposed cognitive load (relative difficulty) index across both numeracy and literacy domains. That is, higher levels of WM were related to lower levels of cognitive load (relative difficulty), in line with fundamental predictions of CLT. These results illustrate the validity, utility, and potential of this objective item response modeling approach to capturing individual differences in cognitive load across discrete learning tasks.
