Estimating a Three-Level Latent Variable Regression Model With Cross-Classified Multiple Membership Data

Methodology ◽  
2018 ◽  
Vol 14 (1) ◽  
pp. 30-44 ◽  
Author(s):  
Audrey J. Leroux ◽  
S. Natasha Beretvas

Abstract. The current study proposed a new model, termed the cross-classified multiple membership latent variable regression (CCMM-LVR) model, which extends the three-level latent variable regression (HM3-LVR) model to cross-classified multiple membership data, for example, in the presence of student mobility across schools. The HM3-LVR model is beneficial for testing more flexible hypotheses about growth trajectory parameters and handles pure clustering of participants within higher-level (level-3) units. However, the HM3-LVR model assumes that students remain in the same cluster (school) throughout the time period of interest. The CCMM-LVR model appropriately models the participants’ changing clusters over time. The impact of ignoring mobility in the real data was investigated by comparing parameter estimates, standard error estimates, and model fit indices from the model that appropriately modeled the cross-classified multiple membership structure (CCMM-LVR) with those obtained when this structure was ignored (HM3-LVR).

2021 ◽  
pp. 001316442199121
Author(s):  
Guher Gorgun ◽  
Okan Bulut

In low-stakes assessments, some students may not reach the end of the test and leave some items unanswered for various reasons (e.g., lack of test-taking motivation, poor time management, and test speededness). Not-reached items are often treated as incorrect or not-administered in the scoring process. However, when the proportion of not-reached items is high, these traditional approaches may yield biased scores and thereby threaten the validity of test results. In this study, we propose a polytomous scoring approach for handling not-reached items and compare its performance with that of the traditional scoring approaches. Real data from a low-stakes math assessment administered to second and third graders were used. The assessment consisted of 40 short-answer items focusing on addition and subtraction. The students were instructed to answer as many items as possible within 5 minutes. Using the traditional scoring approaches, students’ responses to not-reached items were treated as either not-administered or incorrect in the scoring process. With the proposed scoring approach, students’ nonmissing responses were scored polytomously based on how accurately and rapidly they responded to the items, to reduce the impact of not-reached items on ability estimation. The traditional and polytomous scoring approaches were compared based on several evaluation criteria, such as model fit indices, test information function, and bias. The results indicated that the polytomous scoring approaches outperformed the traditional approaches. The complete case simulation corroborated our empirical findings that the scoring approach in which nonmissing items were scored polytomously and not-reached items were considered not-administered performed best. Implications of the polytomous scoring approach for low-stakes assessments are discussed.
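The accuracy-and-speed scoring idea described in this abstract can be sketched in a few lines. Note that the three-category scheme, the fast/slow response-time cutoff, and the function name below are hypothetical illustrations, not the authors' actual scoring rule:

```python
# Hypothetical sketch of polytomous scoring: nonmissing responses are scored
# by both accuracy and speed, while not-reached items are left as
# not-administered (None) so they do not affect ability estimation.
# The thresholds and categories here are invented for illustration.

def polytomous_score(correct, rt_seconds, reached, fast_cutoff=10.0):
    """Return 2 for a fast correct response, 1 for a slow correct response,
    0 for an incorrect response, and None for a not-reached item."""
    if not reached:
        return None          # treated as not-administered
    if not correct:
        return 0
    return 2 if rt_seconds <= fast_cutoff else 1

responses = [
    {"correct": True,  "rt": 4.2,  "reached": True},
    {"correct": True,  "rt": 15.0, "reached": True},
    {"correct": False, "rt": 9.1,  "reached": True},
    {"correct": False, "rt": 0.0,  "reached": False},  # not reached
]
scores = [polytomous_score(r["correct"], r["rt"], r["reached"]) for r in responses]
print(scores)  # [2, 1, 0, None]
```

The key design point the abstract argues for is the last case: leaving not-reached items out of scoring entirely, rather than coding them as wrong, avoids penalizing slow but able examinees.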


2021 ◽  
Vol 45 (3) ◽  
pp. 159-177
Author(s):  
Chen-Wei Liu

Missing not at random (MNAR) modeling for non-ignorable missing responses usually assumes that the latent variable distribution is a bivariate normal distribution. Such an assumption is rarely verified and is often employed as a standard in practice. Recent studies of “complete” item responses (i.e., no missing data) have shown that ignoring the nonnormal distribution of a unidimensional latent variable, especially a skewed or bimodal one, can yield biased estimates and misleading conclusions. However, dealing with a bivariate nonnormal latent variable distribution in the presence of MNAR data has not been investigated. This article proposes extending the unidimensional empirical histogram and Davidian curve methods to simultaneously handle a nonnormal latent variable distribution and MNAR data. A simulation study is carried out to demonstrate the consequences of ignoring a bivariate nonnormal distribution on parameter estimates, followed by an empirical analysis of “don’t know” item responses. The results presented in this article show that examining the assumption of a bivariate normal latent variable distribution should be routine for MNAR data to minimize the impact of nonnormality on parameter estimates.


2021 ◽  
pp. 001316442199240
Author(s):  
Chunhua Cao ◽  
Eun Sook Kim ◽  
Yi-Hsin Chen ◽  
John Ferron

This study examined the impact of omitting a covariate interaction effect on parameter estimates in multilevel multiple-indicator multiple-cause (MIMIC) models, as well as the sensitivity of fit indices to model misspecification when the between-level, within-level, or cross-level interaction effect was left out of the models. The parameter estimates produced by the correct and the misspecified models were compared under varying conditions of cluster number, cluster size, intraclass correlation, and magnitude of the interaction effect in the population model. Results showed that the two main effects were overestimated by approximately half the size of the interaction effect, and the between-level factor mean was underestimated. None of the comparative fit index, Tucker–Lewis index, root mean square error of approximation, or standardized root mean square residual was sensitive to the omission of the interaction effect. The sensitivity of information criteria varied depending mainly on the magnitude of the omitted interaction, as well as its location (i.e., at the between level, within level, or cross level). Implications and recommendations based on the findings are discussed.


2020 ◽  
pp. 073428292093092 ◽  
Author(s):  
Patrícia Silva Lúcio ◽  
Joachim Vandekerckhove ◽  
Guilherme V. Polanczyk ◽  
Hugo Cogo-Moreira

The present study compares the fit of two- and three-parameter logistic (2PL and 3PL) models of item response theory to the performance of preschool children on the Raven’s Colored Progressive Matrices. The Raven test is widely used for evaluating nonverbal intelligence (factor g). Studies comparing models with real data are scarce in the literature, and this is the first to compare two- and three-parameter models for the Raven test, evaluating the informational gain from modeling guessing probability. Participants were 582 Brazilian preschool children (Mage = 57 months; SD = 7 months; 46% female) who responded individually to the instrument. The model fit indices suggested that the 2PL model fit the data better. The difficulty and ability parameters were similar between the models, with almost perfect correlations. Differences were observed in terms of discrimination and test information. The principle of parsimony should be invoked when comparing models.


2020 ◽  
pp. 001316442094289
Author(s):  
Amanda K. Montoya ◽  
Michael C. Edwards

Model fit indices are increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the ubiquity of correlated residuals and imperfect model specification. Our research focuses on a scale evaluation context and the performance of four standard model fit indices: root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), comparative fit index (CFI), and Tucker–Lewis index (TLI), and two equivalence test-based model fit indices: RMSEAt and CFIt. We use Monte Carlo simulation to generate and analyze data based on a substantive example using the Positive and Negative Affect Schedule (N = 1,000). We systematically vary the number and magnitude of correlated residuals as well as nonspecific misspecification to evaluate the impact on model fit indices when fitting a two-factor exploratory factor analysis. Our results show that all fit indices, except the SRMR, are overly sensitive to correlated residuals and nonspecific error, resulting in solutions that are overfactored. The SRMR performed well, consistently selecting the correct number of factors; however, previous research suggests it does not perform well with categorical data. In general, we do not recommend using model fit indices to select the number of factors in a scale evaluation framework.
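As a concrete illustration of the one residual-based index this abstract found reliable, the SRMR is simply the root mean square of the discrepancies between the sample and model-implied correlation matrices. The toy matrices below are invented for illustration, not taken from the study:

```python
import math

# Illustrative SRMR computation: root mean square of the residuals between a
# sample correlation matrix S and a model-implied matrix Sigma, taken over
# the lower triangle including the diagonal (which is zero for correlation
# matrices). Both matrices here are made-up toy values.

def srmr(S, Sigma):
    """S and Sigma are square correlation matrices given as lists of lists."""
    p = len(S)
    sq_resid = [(S[i][j] - Sigma[i][j]) ** 2
                for i in range(p) for j in range(i + 1)]
    return math.sqrt(sum(sq_resid) / len(sq_resid))

S = [[1.00, 0.42, 0.35],
     [0.42, 1.00, 0.48],
     [0.35, 0.48, 1.00]]
Sigma = [[1.00, 0.40, 0.40],
         [0.40, 1.00, 0.40],
         [0.40, 0.40, 1.00]]
print(round(srmr(S, Sigma), 4))  # small residuals give an SRMR well below the common .08 cutoff
```

Because the SRMR aggregates raw residual correlations rather than comparing likelihoods, it is less reactive to the small, diffuse misspecifications (correlated residuals, nonspecific error) that push likelihood-based indices toward overfactoring.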



2016 ◽  
Vol 33 (S1) ◽  
pp. S157-S157
Author(s):  
C. Ferreira ◽  
A.L. Mendes ◽  
J. Marta-Simões ◽  
I.A. Trindade

It is widely accepted that shame plays a significant role in the development and maintenance of psychopathology, namely depressive symptoms. In fact, the experience of shame is highly associated with the adoption of maladaptive strategies to cope with negative feelings, such as experiential avoidance (i.e., the unwillingness to accept one’s private experiences) and the inability to decenter oneself from unwanted internal events. The present study aims to explore a mediation model that examines whether external shame’s effect on depressive symptomatology is mediated through the mechanisms of decentering and experiential avoidance, while controlling for age. Participants were 358 adults of both genders from the general population who completed a battery of self-report scales measuring external shame, decentering, experiential avoidance, and depression. The final model explained 33% of the variance in depression and revealed excellent model fit indices. Results showed that external shame has a direct effect on depressive symptomatology and, simultaneously, an indirect effect mediated by the mechanisms of decentering and experiential avoidance. These data seem to support the association between shame and depressive symptomatology. Nevertheless, these findings add to the literature by suggesting that when an individual presents higher levels of shame, he or she may show lower decentering abilities and tend to engage in experiential avoidance, which amplifies the impact of external shame on depression. Furthermore, our findings seem to have important clinical implications, stressing the importance of developing intervention programs in the community that target shame and experiential avoidance and promote adaptive emotion regulation strategies (e.g., decentering) to deal with adverse experiences.

Disclosure of interest: The authors have not supplied their declaration of competing interest.


2019 ◽  
Vol 45 (4) ◽  
pp. 383-402
Author(s):  
Paul A. Jewsbury ◽  
Peter W. van Rijn

In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for data from a linear test, unacceptable item parameter estimates are obtained when data arise from a multistage test (MST). We explore this situation from a missing data perspective and show mathematically that MST data will be problematic for calibrating multiple UIRT models but not MIRT models. This occurs because some items that were used in the routing decision are excluded from the separate UIRT models, due to measuring a different latent variable. Both simulated and real data from the National Assessment of Educational Progress are used to further confirm and explore the unacceptable item parameter estimates. The theoretical and empirical results confirm that only MIRT models are valid for item calibration of multidimensional MST data.
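The missing-data argument in this abstract can be illustrated with a toy simulation: routing in an MST depends on stage-1 items measuring one latent variable, but because latent variables are correlated, the routing decision also carries information about the other. All numbers below (the correlation, cutoff, and sample size) are invented for illustration:

```python
import math
import random

# Toy illustration of why separate UIRT calibration is problematic for MST
# data: examinees are routed on stage-1 items measuring theta1, yet the two
# routed groups also differ systematically on a correlated theta2. Dropping
# the routing items from a theta2-only calibration therefore leaves the
# form assignment (the missingness pattern) non-ignorable.

random.seed(1)
N = 20000
rho = 0.6  # hypothetical correlation between the two latent variables
hard_theta2, easy_theta2 = [], []
for _ in range(N):
    theta1 = random.gauss(0, 1)
    theta2 = rho * theta1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    # route to the harder stage-2 form when the stage-1 (theta1) score is high
    (hard_theta2 if theta1 > 0 else easy_theta2).append(theta2)

mean_hard = sum(hard_theta2) / len(hard_theta2)
mean_easy = sum(easy_theta2) / len(easy_theta2)
# The two forms' examinees differ markedly in mean theta2 even though
# routing never used a theta2 item directly.
print(round(mean_hard - mean_easy, 2))
```

A MIRT model retains the routing items and thus conditions on the information that drove the form assignment, which is why only the joint calibration remains valid here.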


2016 ◽  
Vol 4 (4) ◽  
pp. 586
Author(s):  
Pin-Shan Hsiung

In recent years, the number of translation and interpretation courses offered in Taiwan has increased rapidly, but few studies have examined the employability of their graduates. This paper aims to investigate the direct effects of curriculum on the professional careers of alumni as reflected in their current employment status and level of academic advancement. A questionnaire survey was carried out to evaluate multiple aspects of teaching, including learning effectiveness, core competency, curriculum design, and giving back to society. Through an analysis of 150 named and 300 anonymous questionnaires, this study analyzed learning effectiveness as the mediator for the careers of alumni, using the Amos statistical package for structural equation modeling (SEM), along with related techniques such as confirmatory factor analysis (CFA). The analyses produced parameter estimates and goodness-of-fit indices, which could be useful for many purposes, such as examining longitudinal data and comparing groups. It is hoped that this brief study may provide a better understanding and a basis for future studies.

