Evaluating Restrictive Models in Educational and Behavioral Research: Local Misfit Overrides Model Tenability

2020 ◽  
pp. 001316442094456
Author(s):  
Tenko Raykov ◽  
Christine DiStefano

The frequent practice of overall fit evaluation for latent variable models in educational and behavioral research is reconsidered. It is argued that since overall plausibility does not imply local plausibility and is only necessary for the latter, local misfit should be considered a sufficient condition for model rejection, even in the case of omnibus model tenability. The argument is exemplified with a comparison of the widely used one-parameter and two-parameter logistic models. A theoretically and practically relevant setting illustrates how discounting local fit and concentrating instead on overall model fit may lead to incorrect model selection, even if a popular information criterion is also employed. The article concludes with the recommendation for routine examination of particular parameter constraints within latent variable models as part of their fit evaluation.
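The 1PL/2PL contrast discussed above can be made concrete: the one-parameter logistic model is the two-parameter model with all item discriminations constrained to be equal, so checking that constraint is precisely a question of local fit. A minimal sketch of the two item response functions, with hypothetical parameter values (not taken from the article):

```python
import numpy as np

def p_correct(theta, a, b):
    """Logistic item response function: P(X=1 | theta) = 1 / (1 + exp(-a*(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = 0.5                      # latent trait level (hypothetical)
b = np.array([-1.0, 0.0, 1.0])   # item difficulties (hypothetical)

# 1PL: one common discrimination for every item
p_1pl = p_correct(theta, a=1.0, b=b)

# 2PL: item-specific discriminations -- the equality constraint a_1 = a_2 = ...
# is exactly the kind of local restriction whose misfit the article argues
# should be examined even when omnibus fit looks acceptable
a = np.array([0.5, 1.0, 2.0])    # hypothetical discriminations
p_2pl = p_correct(theta, a=a, b=b)

print(p_1pl)  # response probabilities under the equal-discrimination restriction
print(p_2pl)  # response probabilities when discriminations are freed
```

When the freed discriminations differ materially from a common value, the 1PL restriction is locally misfitting even if overall fit indices remain tenable.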

2021 ◽  
Author(s):  
Dustin Fife ◽  
Steven Brunwasser ◽  
Edgar C. Merkle

Latent variable models (LVMs) are incredibly flexible tools that allow users to address research questions they might otherwise never be able to answer (McDonald, 2013). However, one major limitation of LVMs is evaluating model fit. There is no universal consensus about how to evaluate model fit, either globally or locally. Part of the reason evaluating these models is difficult is that fit is typically reduced to a handful of statistics that may or may not reflect the model’s adequacy and/or assumptions. In this paper we argue that proper evaluation of model fit must include visualizing both the raw data and the model-implied fit. Visuals reveal, at a glance, the fit of the model and whether the model’s assumptions have been met. Unfortunately, tools for visualizing LVMs have historically been limited. In this paper, we introduce new plots and reframe existing plots that provide necessary resources for evaluating LVMs. These plots are available in a new open-source R package called flexplavaan, which combines the model plotting capabilities of flexplot with the latent variable modeling capabilities of lavaan.
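The core idea of putting the raw data next to the model-implied fit can be sketched outside R as well. For a one-factor model with loadings Λ and unique variances Θ, the model-implied covariance matrix is ΛΛᵀ + Θ; inspecting the residual matrix (sample minus implied) shows local misfit at a glance. A minimal NumPy sketch with entirely hypothetical values (this is not flexplavaan's API):

```python
import numpy as np

# Hypothetical standardized one-factor model for four indicators
lam = np.array([0.8, 0.7, 0.6, 0.5])   # factor loadings
theta = 1.0 - lam**2                   # unique variances (standardized scale)

# Model-implied covariance: Sigma = lambda lambda' + diag(theta)
sigma_implied = np.outer(lam, lam) + np.diag(theta)

# A made-up sample correlation matrix for the same indicators
sigma_sample = np.array([
    [1.00, 0.58, 0.46, 0.41],
    [0.58, 1.00, 0.44, 0.33],
    [0.46, 0.44, 1.00, 0.30],
    [0.41, 0.33, 0.30, 1.00],
])

# Residuals: large entries flag locally misfitting indicator pairs,
# which a single omnibus fit statistic can easily hide
residuals = sigma_sample - sigma_implied
print(np.round(residuals, 2))
```

In practice one would render `residuals` as a heatmap or trail plot rather than print it; the point is that the comparison is between the data and the model-implied moments, not between two summary statistics.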


2019 ◽  
Author(s):  
Leon Patrick Wendt ◽  
Aidan G.C. Wright ◽  
Paul A. Pilkonis ◽  
Tobias Nolte ◽  
Peter Fonagy ◽  
...  

Interpersonal problems are key transdiagnostic constructs in psychopathology. In the past, investigators have neglected the importance of operationalizing interpersonal problems according to their latent structure by using divergent representations of the construct: (a) computing scores for severity, agency, and communion (“dimensional approach”), (b) classifying persons into subgroups with respect to their interpersonal profile (“categorical approach”). This hinders cumulative research on interpersonal problems, because findings cannot be integrated from both a conceptual and a statistical point of view. We provide a comprehensive evaluation of interpersonal problems by enlisting several large samples (Ns = 5400, 491, 656, 712) to estimate a set of latent variable candidate models, covering the spectrum of purely dimensional (i.e., confirmatory factor analysis using Gaussian and non-normal latent t-distributions), hybrid (i.e., semi-parametric factor analysis), and purely categorical approaches (i.e., latent class analysis). Statistical models were compared with regard to their structural validity, as evaluated by model fit (corrected Akaike’s information criterion and the Bayesian information criterion), and their concurrent validity, as defined by the models’ ability to predict relevant external variables. Across samples, the fully dimensional model performed best in terms of model fit, prediction, robustness, and parsimony. We found scant evidence that categorical and hybrid models provide incremental value for understanding interpersonal problems. Our results indicate that the latent structure of interpersonal problems is best represented by continuous dimensions, especially when one allows for non-normal latent distributions.
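The model comparison described above rests on standard information criteria, which penalize log-likelihood by model complexity. As a reminder of the formulas involved, a minimal sketch with illustrative log-likelihoods and parameter counts (not the article's actual values; only the sample size 491 is taken from the abstract):

```python
import numpy as np

def aic(loglik, k):
    """Akaike's information criterion: AIC = 2k - 2*logL."""
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    """Small-sample corrected AIC: AICc = AIC + 2k(k+1)/(n-k-1)."""
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    """Bayesian information criterion: BIC = k*ln(n) - 2*logL."""
    return k * np.log(n) - 2 * loglik

# Hypothetical fits: a dimensional model with fewer parameters versus a
# latent class model with more parameters and a slightly better log-likelihood
n = 491
models = {"dimensional": (-3502.0, 24), "latent class": (-3498.0, 40)}
for name, (ll, k) in models.items():
    print(f"{name}: AICc={aicc(ll, k, n):.1f}  BIC={bic(ll, k, n):.1f}")
```

Lower values are better; in this illustrative setup the heavier latent class model loses on both criteria despite its higher log-likelihood, which is the same parsimony logic the article applies when favoring the fully dimensional model.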


Methodology ◽  
2011 ◽  
Vol 7 (4) ◽  
pp. 157-164
Author(s):  
Karl Schweizer

Probability-based and measurement-related hypotheses for confirmatory factor analysis of repeated-measures data are investigated. Such hypotheses comprise precise assumptions concerning the relationships among the true components associated with the levels of the design or the items of the measure. Measurement-related hypotheses concentrate on the assumed processes, such as transformation and memory processes, and represent treatment-dependent differences in processing. In contrast, probability-based hypotheses provide the opportunity to consider probabilities as outcome predictions that summarize the effects of various influences. The prediction of performance guided by inexact cues serves as an example. In the empirical part of this paper, probability-based and measurement-related hypotheses are applied to working-memory data. Latent variables specified according to both types of hypotheses contributed to a good model fit. The best model fit was achieved for the model including latent variables representing serial cognitive processing and performance according to inexact cues, in combination with a latent variable for subsidiary processes.


2020 ◽  
Author(s):  
Paul Silvia ◽  
Alexander P. Christensen ◽  
Katherine N. Cotter

Right-wing authoritarianism (RWA) has well-known links with humor appreciation, such as enjoying jokes that target deviant groups, but less is known about RWA and creative humor production—coming up with funny ideas oneself. A sample of 186 young adults completed a measure of RWA, the HEXACO-100, and 3 humor production tasks that involved writing funny cartoon captions, creating humorous definitions for quirky concepts, and completing joke stems with punchlines. The humor responses were scored by 8 raters and analyzed with many-facet Rasch models. Latent variable models found that RWA had a large, significant effect on humor production (β = -.47 [-.65, -.30], p < .001): responses created by people high in RWA were rated as much less funny. RWA’s negative effect on humor was smaller but still significant (β = -.25 [-.49, -.01], p = .044) after controlling for Openness to Experience (β = .39 [.20, .59], p < .001) and Conscientiousness (β = -.21 [-.41, -.02], p = .029). Taken together, the findings suggest that people high in RWA just aren’t very funny.
