How to Think Straight About Psychometrics: Improving Measurement by Identifying its Assumptions

2021
Author(s):  
Víthor Rosa Franco ◽  
Jacob Arie Laros ◽  
Marie Wiberg

The aim of the current study is to present three assumptions common to psychometric theory and practice, and to show how alternatives to traditional psychometric approaches can be used to improve psychological measurement. These alternatives are developed by adapting each of the three assumptions. The structural validity assumption relates to the implementation of mathematical models. The process assumption states that an underlying process generates the observed data. The construct assumption implies that the observed data on their own do not constitute a measurement; rather, measurement targets the latent variable that gives rise to the observed data. Nonparametric item response modeling and cognitive psychometric modeling are presented as alternatives for relaxing the first two assumptions, respectively. Network psychometrics is the alternative for relaxing the third assumption. Final remarks sum up the most important conclusions of the study.
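The contrast behind the third assumption can be sketched numerically: instead of treating observed items as indicators of a latent variable, a network model treats them as mutually interacting nodes. A minimal illustration (with a hypothetical covariance matrix, not data from the study) estimates the partial-correlation network by inverting the covariance matrix:

```python
import numpy as np

# Hypothetical covariance matrix for three observed items
cov = np.array([[1.0, 0.5, 0.4],
                [0.5, 1.0, 0.3],
                [0.4, 0.3, 1.0]])

# Network psychometrics commonly works with partial correlations:
# invert the covariance to get the precision matrix K, then
# standardize off-diagonal entries as -K_ij / sqrt(K_ii * K_jj).
K = np.linalg.inv(cov)
d = np.sqrt(np.diag(K))
partial = -K / np.outer(d, d)
np.fill_diagonal(partial, 0.0)   # edges only, no self-loops

print(np.round(partial, 3))      # edge weights of the item network
```

Each nonzero off-diagonal entry is an edge: the association between two items after conditioning on all remaining items, with no latent variable assumed to generate the data.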

2004
Author(s):  
Kate E. Walton ◽  
Brent W. Roberts ◽  
Avshalom Caspi ◽  
Terrie E. Moffitt

2021
Vol 6
Author(s):  
John Fitzgerald Ehrich ◽  
Steven J. Howard ◽  
Sahar Bokosmaty ◽  
Stuart Woodcock

The accurate measurement of the cognitive load a learner encounters in a given task is critical to the understanding and application of Cognitive Load Theory (CLT). However, as a covert psychological construct, cognitive load represents a challenging measurement issue. To date, this challenge has been met mostly by subjective self-reports of cognitive load experienced in a learning situation. In this paper, we find that a valid and reliable index of cognitive load can be obtained through item response modeling of student performance. Specifically, estimates derived from item response modeling of relative difficulty (i.e., the difference between item difficulty and person ability locations) can function as a linear measure that combines the key components of cognitive load (i.e., mental load, mental effort, and performance). This index of cognitive load (relative difficulty) was tested for criterion (concurrent) validity in Year 2 learners' (N = 91) performance on standardized educational numeracy and literacy assessments. Learners' working memory (WM) capacity significantly predicted the proposed cognitive load (relative difficulty) index across both numeracy and literacy domains. That is, higher levels of WM were related to lower levels of cognitive load (relative difficulty), in line with fundamental predictions of CLT. These results illustrate the validity, utility, and potential of this objective item response modeling approach to capturing individual differences in cognitive load across discrete learning tasks.
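The relative-difficulty index described above can be sketched under the Rasch model, in which the probability of a correct response depends only on the gap between person ability θ and item difficulty b on a common logit scale. The ability and difficulty values below are hypothetical illustrations, not estimates from the study:

```python
import math

def rasch_p_correct(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response given
    person ability theta and item difficulty b (logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def relative_difficulty(theta: float, b: float) -> float:
    """Proposed cognitive-load index: item difficulty minus
    person ability. Positive values mean the item sits above
    the person's ability location (higher load)."""
    return b - theta

# Hypothetical learner and items (logit units)
theta = 0.5                       # person ability
items = [-1.0, 0.5, 2.0]          # easy, matched, hard items

for b in items:
    rd = relative_difficulty(theta, b)
    p = rasch_p_correct(theta, b)
    print(f"b={b:+.1f}  relative difficulty={rd:+.1f}  P(correct)={p:.2f}")
```

Because both parameters are on the same interval (logit) scale, their difference is a linear measure, which is what lets relative difficulty serve as an index comparable across learners and tasks.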
