Latent classes from complex assessments
Latent variable mixture models are commonly used to examine patterns of students' knowledge. These models, including Latent Class Analysis (LCA), have proven valuable for uncovering qualitative variation in students' knowledge that is hidden by traditional variable-centered approaches, particularly when testing a particular cognitive or developmental theory. However, it is far less clear that these models, when applied to broader measures of student knowledge, have practical applications, such as identifying meaningful and actionable knowledge patterns on standardized achievement tests. In the present study, we probe the practical effectiveness of LCA for identifying valid patterns of students' knowledge on broadly defined achievement tests that provide added predictive value beyond overall scores and other known indicators of success. We examined the performance of 3,481 fifth-grade students from a mid-sized school district in the western United States on two benchmark assessments of their mathematics achievement during the school year. Latent classes extracted from pass-fail scores on the specific standards measured by these assessments were then used to predict students' end-of-year performance on a statewide standardized mathematics assessment. The latent classes generally showed face validity and identified qualitatively different knowledge patterns. However, the predictive value of class membership for end-of-year test scores was greatly reduced when adjusting for overall benchmark scores, and very small after also adjusting for additional pre-existing differences among students. These results suggest that, although latent classes may improve the interpretability of achievement test scores, their predictive value is largely redundant with overall scores. These findings are tentative; we encourage replication with different kinds of data, especially with finer-grained measures.
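To make the analysis concrete, the core modeling step described above (fitting an LCA to binary pass-fail indicators and extracting hard class assignments as predictors) can be sketched as a Bernoulli mixture model estimated by EM. This is a minimal illustrative sketch on synthetic data, not the study's actual analysis pipeline; the class count, item counts, and generating probabilities below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pass/fail data: 300 students x 8 standards, two true classes
# (all sizes and probabilities here are illustrative, not from the study).
true_p = np.array([[0.9] * 4 + [0.2] * 4,   # class 0: strong on first half
                   [0.2] * 4 + [0.9] * 4])  # class 1: strong on second half
z_true = rng.integers(0, 2, size=300)
X = (rng.random((300, 8)) < true_p[z_true]).astype(float)

def lca_em(X, k=2, n_iter=200, seed=0):
    """EM for a latent class (Bernoulli mixture) model on binary items."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                 # class mixing weights
    p = rng.uniform(0.25, 0.75, (k, d))      # item-response probabilities
    for _ in range(n_iter):
        # E-step: posterior class responsibilities (log-space for stability)
        log_post = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and item probabilities
        nk = resp.sum(axis=0)
        pi = nk / n
        p = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, p, resp

pi, p, resp = lca_em(X, k=2)
classes = resp.argmax(axis=1)  # hard assignments, usable as predictors
```

In a downstream step like the one the abstract describes, `classes` would enter a regression for end-of-year scores alongside overall benchmark scores and other covariates, so that the incremental predictive value of class membership can be assessed.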