Psychometric Models of Individual Differences in Reading Comprehension: A Reanalysis of Freed, Hamilton, and Long (2017)

2019
Author(s): Sara Anne Goring, Christopher J. Schmank, Michael J. Kane, Andrew R. A. Conway

Individual differences in reading comprehension have often been explored using latent variable modeling (LVM) to assess the relative contribution of domain-general and domain-specific cognitive abilities. However, LVM is based on the assumption that the observed covariance among indicators of a construct is due to a common cause (i.e., a latent variable; Pearl, 2000). This is a questionable assumption when the indicator variables are measures of performance on complex cognitive tasks. According to Process Overlap Theory (POT; Kovacs & Conway, 2016), multiple processes are involved in cognitive task performance and the covariance among tasks is due to the overlap of processes across tasks. Instead of a single latent common cause, there are thought to be multiple dynamic manifest causes, consistent with an emerging view in psychometrics called network theory (Barabási, 2012; Borsboom & Cramer, 2013). In the current study, we reanalyzed data from Freed et al. (2017) and compared two modeling approaches: LVM (Study 1) and psychometric network modeling (Study 2). In Study 1, two exploratory LVMs demonstrated problems with the original measurement model proposed by Freed et al. Specifically, the model failed to achieve discriminant and convergent validity with respect to reading comprehension, language experience, and reasoning. In Study 2, two network models confirmed the problems found in Study 1, and also served as an example of how network modeling techniques can be used to study individual differences. In conclusion, more research and a more informed approach to psychometric modeling are needed to better understand individual differences in reading comprehension.

2021 · Vol 9 (1) · pp. 8
Author(s): Christopher J. Schmank, Sara Anne Goring, Kristof Kovacs, Andrew R. A. Conway

In a recent publication in the Journal of Intelligence, Dennis McFarland mischaracterized previous research using latent variable and psychometric network modeling to investigate the structure of intelligence. Misconceptions presented by McFarland are identified and discussed. We reiterate and clarify the goal of our previous research on network models, which is to improve compatibility between psychological theories and statistical models of intelligence. WAIS-IV data provided by McFarland were reanalyzed using latent variable and psychometric network modeling. The results are consistent with our previous study and show that a latent variable model and a network model both provide an adequate fit to the WAIS-IV. We therefore argue that model preference should be determined by theory compatibility. Theories of intelligence that posit a general mental ability (general intelligence) are compatible with latent variable models. More recent approaches, such as mutualism and process overlap theory, reject the notion of general mental ability and are therefore more compatible with network models, which depict the structure of intelligence as an interconnected network of cognitive processes sampled by a battery of tests. We emphasize the importance of compatibility between theories and models in scientific research on intelligence.


2018 · Vol 373 (1756) · pp. 20170281
Author(s): M. Cauchoix, P. K. Y. Chow, J. O. van Horik, C. M. Atance, E. J. Barbeau, et al.

Behavioural and cognitive processes play important roles in mediating an individual's interactions with its environment. Yet, while there is a vast literature on repeatable individual differences in behaviour, relatively little is known about the repeatability of cognitive performance. To further our understanding of the evolution of cognition, we gathered 44 studies on individual performance of 25 species across six animal classes and used meta-analysis to assess whether cognitive performance is repeatable. We compared repeatability (R) in performance (1) on the same task presented at different times (temporal repeatability), and (2) on different tasks that measured the same putative cognitive ability (contextual repeatability). We also addressed whether R estimates were influenced by seven extrinsic factors (moderators): type of cognitive performance measurement, type of cognitive task, delay between tests, origin of the subjects, experimental context, taxonomic class and publication status. We found support for both temporal and contextual repeatability of cognitive performance, with mean R estimates ranging between 0.15 and 0.28. Repeatability estimates were mostly influenced by the type of cognitive performance measures and publication status. Our findings highlight the widespread occurrence of consistent inter-individual variation in cognition across a range of taxa which, like behaviour, may be associated with fitness outcomes. This article is part of the theme issue ‘Causes and consequences of individual differences in cognitive abilities’.
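The repeatability (R) analyzed in such meta-analyses is typically an intraclass correlation: the share of total variance in performance attributable to consistent between-individual differences. The following is a minimal illustrative sketch of the ANOVA-based estimator on simulated data; it is not the authors' meta-analytic pipeline, and the simulation parameters are invented for illustration.

```python
import numpy as np

def repeatability(scores):
    """ANOVA-based repeatability (intraclass correlation).

    scores: 2D array, rows = individuals, columns = repeated trials.
    Returns R = V_between / (V_between + V_within).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape                       # n individuals, k trials each
    grand_mean = scores.mean()
    ind_means = scores.mean(axis=1)

    # Mean squares from a one-way ANOVA with individual as the grouping factor
    ms_between = k * ((ind_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores - ind_means[:, None]) ** 2).sum() / (n * (k - 1))

    # Between-individual variance component, then the repeatability ratio
    v_between = (ms_between - ms_within) / k
    return v_between / (v_between + ms_within)

# Simulated data: a stable individual ability plus trial-to-trial noise.
# True repeatability here is 1 / (1 + 0.5**2) = 0.8.
rng = np.random.default_rng(0)
ability = rng.normal(0.0, 1.0, size=50)
trials = ability[:, None] + rng.normal(0.0, 0.5, size=(50, 4))
print(round(repeatability(trials), 2))
```

On this reading, the mean R estimates of 0.15 to 0.28 reported above correspond to consistent between-individual differences accounting for roughly 15 to 28 percent of the total variance in cognitive performance.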


2015 · Vol 27 (6) · pp. 1249-1258
Author(s): Christian Habeck, Jason Steffener, Daniel Barulli, Yunglin Gazes, Qolamreza Razlighi, et al.

Cognitive psychologists posit several specific cognitive abilities that are measured with sets of cognitive tasks. Tasks that purportedly tap a specific underlying cognitive ability are strongly correlated with one another, whereas performances on tasks that tap different cognitive abilities are less strongly correlated. For these reasons, latent variables are often considered optimal for describing individual differences in cognitive abilities. Although latent variables cannot be directly observed, all cognitive tasks representing a specific latent ability should have a common neural underpinning. Here, we show that cognitive tasks representing one ability (i.e., either perceptual speed or fluid reasoning) had a neural activation pattern distinct from that of tasks in the other ability. One hundred six participants between the ages of 20 and 77 years were imaged in an fMRI scanner while performing six cognitive tasks, three representing each cognitive ability. Consistent with prior research, behavioral performance on these six tasks clustered into the two abilities based on their patterns of individual differences, and tasks postulated to represent one ability showed higher similarity across individuals than tasks postulated to represent a different ability. This finding was extended in the current report to the spatial resemblance of the task-related activation patterns: the topographic similarity of the mean activation maps for tasks postulated to reflect the same reference ability was higher than for tasks postulated to reflect a different reference ability. Furthermore, for any task pairing, behavioral and topographic similarities of underlying activation patterns are strongly linked. These findings suggest that differences in the strengths of correlations between various cognitive tasks may be due to the degree of overlap in the neural structures that are active when the tasks are being performed. Thus, the latent variable postulated to account for correlations at a behavioral level may reflect topographic similarities in the neural activation across different brain regions.


2010 · Vol 31 (1) · pp. 1-14
Author(s): Oliver Wilhelm, Michael Witthöft, Stefan Schipolowski

The Cognitive Failure Questionnaire (CFQ) is a well-known and frequently used self-report measure of cognitive lapses and slips, for example, throwing away the candy bar and keeping the wrapping. Measurement models of individual differences in cognitive failures have failed to produce consistent results so far. In this article we establish a measurement model distinguishing three factors of self-reported cognitive failures labeled Clumsiness, Retrieval, and Intention forgotten. The relationships of the CFQ factors with a variety of self-report instruments are investigated. Measures of minor lapses, neuroticism, functional and dysfunctional self-consciousness, cognitive interference, and memory complaints provide evidence across several studies for the interpretation of self-reported cognitive failures as an aspect of neuroticism that primarily reflects general subjective complaints about cognition. We conclude that self-report measures about cognition ought to be interpreted as expressing worries about one’s cognition rather than measuring cognitive abilities themselves.


2019 · Vol 7 (3) · pp. 21
Author(s): Christopher J. Schmank, Sara Anne Goring, Kristof Kovacs, Andrew R. A. Conway

The positive manifold—the finding that cognitive ability measures demonstrate positive correlations with one another—has led to models of intelligence that include a general cognitive ability or general intelligence (g). This view has been reinforced using factor analysis and reflective, higher-order latent variable models. However, a new theory of intelligence, Process Overlap Theory (POT), posits that g is not a psychological attribute but an index of cognitive abilities that results from an interconnected network of cognitive processes. These competing theories of intelligence are compared using two different statistical modeling techniques: (a) latent variable modeling and (b) psychometric network analysis. Network models display partial correlations between pairs of observed variables, representing direct relationships among them. Secondary data analysis was conducted using the Hungarian Wechsler Adult Intelligence Scale Fourth Edition (H-WAIS-IV). The underlying structure of the H-WAIS-IV was first assessed using confirmatory factor analysis, assuming a reflective, higher-order model, and then reanalyzed using psychometric network analysis. The compatibility (or lack thereof) of these theoretical accounts of intelligence with the data is discussed.


2021 · Vol 12
Author(s): Selena Wang

The combination of network modeling and psychometric models has opened up exciting directions of research. However, there has been confusion surrounding differences among network models, graphical models, latent variable models and their applications in psychology. In this paper, I attempt to remedy this confusion by briefly introducing latent variable network models and their recent integrations with psychometric models to psychometricians and applied psychologists. Following this introduction, I summarize developments under network psychometrics and show how graphical models under this framework can be distinguished from other network models. Every model is introduced using unified notation, and all methods are accompanied by available R packages conducive to further independent learning.


2015 · Vol 27 (5) · pp. 853-865
Author(s): Nash Unsworth, Keisuke Fukuda, Edward Awh, Edward K. Vogel

A great deal of prior research has examined the relation between estimates of working memory and cognitive abilities. Yet, the neural mechanisms that account for these relations are still not very well understood. The current study explored whether individual differences in working memory delay activity would be a significant predictor of cognitive abilities. A large number of participants performed multiple measures of capacity, attention control, long-term memory, working memory span, and fluid intelligence, and latent variable analyses were used to examine the data. During two working memory change detection tasks, we acquired EEG data and examined the contralateral delay activity. The results demonstrated that the contralateral delay activity was significantly related to cognitive abilities, and importantly these relations were due to individual differences in both capacity and attention control. These results suggest that individual differences in working memory delay activity predict individual differences in a broad range of cognitive abilities, and this is due to differences in both the number of items that can be maintained and the ability to control access to working memory.


2019
Author(s): Alexander P. Christensen, Hudson Golino, Paul Silvia

This article reviews the causal implications of latent variable and psychometric network models for the validation of personality trait questionnaires. These models imply different data-generating mechanisms that have important consequences for the validity and validation of questionnaires. From this review, we formalize a framework for assessing the evidence for the validity of questionnaires from the psychometric network perspective. We focus specifically on the structural phase of validation, where items are assessed for redundancy, dimensionality, and internal structure. In this discussion, we underline the importance of identifying unique personality components (i.e., an item or set of items that share a unique common cause) and representing the breadth of each trait's domain in personality networks. We then argue that psychometric network models have measures that are statistically equivalent to those of factor models, but suggest that their substantive interpretations differ. Finally, we provide a novel measure of structural consistency, which offers complementary information to internal consistency measures. We close with future directions for how external validation can be executed using psychometric network models.


2019
Author(s): Christopher J. Schmank, Sara Anne Goring, Kristof Kovacs, Andrew R. A. Conway

The positive manifold—the finding that cognitive ability measures demonstrate positive correlations with one another—has led to models of intelligence that include a general cognitive ability or general intelligence (g). This view has been reinforced using factor analysis and latent variable models. However, a new theory of intelligence, Process Overlap Theory (POT; Kovacs & Conway, 2016), posits that g is not a psychological attribute but an index of cognitive abilities that results from an interconnected network of cognitive processes. From this perspective, psychometric network analysis is an attractive alternative to latent variable modeling. Network analyses display partial correlations among observed variables, representing direct relationships between them. To demonstrate the benefits of this approach, the Hungarian Wechsler Adult Intelligence Scale Fourth Edition (H-WAIS-IV; Wechsler, 2008) was analyzed using both psychometric network analysis and latent variable modeling. Network models were directly compared to latent variable models. Results indicate that the H-WAIS-IV data were better fit by network models than by latent variable models. We argue that POT, and network models, provide a more accurate view of the structure of intelligence than traditional approaches.
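The edge weights in such a network are partial correlations, which can be computed from the inverse of the correlation matrix (the precision matrix). Below is a minimal sketch on a made-up three-variable correlation matrix, not the H-WAIS-IV data; regularized estimators such as the graphical lasso, commonly used in psychometric network analysis, additionally shrink small edges to zero.

```python
import numpy as np

def partial_correlations(corr):
    """Partial correlation matrix from a (full-rank) correlation matrix.

    Each off-diagonal entry is the correlation between two variables
    after conditioning on all remaining variables, i.e., the edge weight
    drawn in a psychometric network (before any regularization).
    """
    precision = np.linalg.inv(np.asarray(corr, dtype=float))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)   # standardize and flip sign
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Illustrative three-variable case: A-B and B-C correlate at 0.6, and the
# A-C correlation (0.36 = 0.6 * 0.6) is entirely carried by B.
corr = np.array([[1.00, 0.60, 0.36],
                 [0.60, 1.00, 0.60],
                 [0.36, 0.60, 1.00]])
pc = partial_correlations(corr)
print(np.round(pc, 3))
```

In the printed matrix, the A-C entry is numerically zero: once B is conditioned on, A and C share no direct relationship. This is exactly the kind of structure a network model makes visible and a single common factor conceals.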

