Latent Trait Model
Recently Published Documents


TOTAL DOCUMENTS: 51 (FIVE YEARS: 1)
H-INDEX: 17 (FIVE YEARS: 0)

2017 · Vol 51 (4) · pp. 351-362
Author(s): Lara-Jeane Costa, Melissa Green, John Sideris, Stephen R. Hooper

The primary aim of this study was to determine Grade 1 cognitive predictors of students at risk for writing disabilities in Grades 2 through 4. Cognitive measures selected to align with theoretical and empirical models of writing were administered to Grade 1 students to assess fine-motor, linguistic, and executive functions: 84 at risk (bottom quartile for age-based expectations) and 54 typically developing. A model with individual predictors was compared with a previously developed latent trait model to determine the relative predictive worth of each approach. Data analysis relied primarily on stepwise logistic regression. Results revealed that the individual measures of orthographic choice, working memory, inhibitory control, visual memory recognition, and planning were all significant predictors of at-risk status in Grades 2 through 4. The latent trait model also fared well but did not account for as much variance as the individual measurement models at any grade. The findings lay the foundation for an empirically based approach to cognitive assessment in Grade 1 for identifying potentially at-risk students in later elementary grades and suggest underlying neurocognitive abilities that could be targeted by educational interventions for students with later-emerging writing disabilities.
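The abstract does not give the model specification, but a minimal sketch of the kind of stepwise (forward-selection) logistic regression it describes is shown below. The column names mirror the measures listed above, while the data, the binary `at_risk` outcome, and the AIC-based selection rule are illustrative assumptions, not the authors' procedure.

```python
# Illustrative forward-selection logistic regression (not the authors' exact procedure).
# The DataFrame, outcome column, and selection criterion are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 138  # 84 at risk + 54 typically developing, matching the abstract's sample sizes
df = pd.DataFrame({
    "orthographic_choice": rng.normal(size=n),
    "working_memory": rng.normal(size=n),
    "inhibitory_control": rng.normal(size=n),
    "visual_memory_recognition": rng.normal(size=n),
    "planning": rng.normal(size=n),
})
df["at_risk"] = (rng.random(n) < 0.6).astype(int)  # placeholder outcome, not real data

predictors = [c for c in df.columns if c != "at_risk"]
selected = []

def fit_model(cols):
    """Fit a logistic regression of at-risk status on the given predictor columns."""
    X = pd.DataFrame({"const": np.ones(len(df))}, index=df.index)
    for c in cols:
        X[c] = df[c]
    return sm.Logit(df["at_risk"], X).fit(disp=0)

# Forward selection: repeatedly add the predictor that most improves AIC,
# and stop when no remaining predictor improves it.
current_aic = fit_model(selected).aic
while True:
    candidates = [(fit_model(selected + [p]).aic, p) for p in predictors if p not in selected]
    if not candidates:
        break
    best_aic, best_p = min(candidates)
    if best_aic >= current_aic:
        break
    selected.append(best_p)
    current_aic = best_aic

print("Selected predictors:", selected)
print(fit_model(selected).summary())
```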


2016 · Vol 9 (1) · pp. 168-175
Author(s): Purya Baghaei, Mona Tabatabaee Yazdi

Background: Validity is the most important characteristic of a test, and there is general consensus among social science researchers that the trustworthiness of any substantive research depends on the validity of the instruments used to gather the data. Objective: It is common practice among psychologists and educationalists to provide validity evidence for their instruments by fitting a latent trait model, such as exploratory or confirmatory factor analysis or the Rasch model. However, there has been little discussion of the rationale behind model fitting and its use as validity evidence. The purpose of this paper is to answer the question: why does the fit of data to a latent trait model count as validity evidence for a test? Method: To answer this question, latent trait theory and the concept of validity as delineated by Borsboom and his colleagues in a number of publications between 2003 and 2013 are reviewed. Results: Validating psychological tests by means of latent trait models rests on the assumption of conditional independence. If this assumption holds, there is a 'common cause' underlying the covariation among the test items, which, one hopes, is the intended construct. Conclusion: Providing validity evidence by fitting latent trait models is logistically easy and straightforward. However, it is of paramount importance that researchers appreciate what they do, and what they imply about their measures, when they demonstrate that their data fit a model. This helps them avoid unforeseen pitfalls and draw logical conclusions.
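The conditional independence and common-cause ideas invoked here can be written compactly. The following is a sketch in standard item response theory notation (generic notation, not reproduced from the paper), with the Rasch model as one concrete latent trait model:

```latex
% Local (conditional) independence: given the latent trait \theta, the k item
% responses X_1, ..., X_k are statistically independent of one another.
P(X_1 = x_1, \ldots, X_k = x_k \mid \theta) \;=\; \prod_{i=1}^{k} P(X_i = x_i \mid \theta)

% Rasch model: a single common cause \theta, together with an item difficulty b_i,
% accounts for all covariation among the items.
P(X_i = 1 \mid \theta) \;=\; \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}
```

If responses satisfy these equations, the covariation among items is consistent with a single underlying attribute; that is the sense in which demonstrating model fit can support a validity claim, as the abstract argues.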


2016 · Vol 32 (3) · pp. 187-194
Author(s): Robert E. McGrath

Abstract. The VIA Classification of Character Strengths and Virtues (Peterson & Seligman, 2004) has been an influential contribution to the study of prosocial traits and provided the basis for the VIA Inventory of Strengths (VIA-IS). Inherent in the Classification is the assumption that the character strengths included in the model are cross-culturally relevant. The emergence of a latent trait model for the VIA Classification from exploratory factor-analytic research and the availability of data from translated versions of the VIA-IS provide a basis for evaluating this assumption. A sample of 15,540 individuals from 16 nations who completed the VIA-IS online was used to evaluate measurement equivalence. Multigroup confirmatory factor analysis and a relatively new statistical procedure, alignment analysis, were used to evaluate configural, metric, and scalar invariance across translations of the instrument. Consistent support was found for configural and metric invariance, and scalar invariance was also demonstrated under a number of circumstances. The findings lend support to the cross-cultural relevance of the VIA Classification of Character Strengths and Virtues as well as to the existing translations of the VIA-IS.
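The abstract does not specify software, but the configural/metric logic can be roughly illustrated in Python with the semopy package: fit the same confirmatory factor model separately within each translation group (configural step), then compare the factor loadings across groups (an informal metric check). This is only an assumption-laden stand-in for a proper multigroup CFA or alignment analysis; the factor structure, item names, and DataFrame layout below are hypothetical, not the VIA model or the authors' setup.

```python
# Rough, hypothetical sketch of a configural/metric invariance check with semopy.
# Assumes a pandas DataFrame `via` with scale scores for a few strengths plus a
# 'nation' column; the three-factor structure is illustrative only.
import pandas as pd
import semopy

model_desc = """
caring       =~ kindness + love + gratitude
inquisitive  =~ curiosity + creativity + love_of_learning
self_control =~ prudence + perseverance + self_regulation
"""

def loadings_by_group(via: pd.DataFrame) -> dict[str, pd.DataFrame]:
    """Fit the same CFA in every nation (configural step) and return its loadings."""
    out = {}
    for nation, grp in via.groupby("nation"):
        m = semopy.Model(model_desc)
        m.fit(grp.drop(columns=["nation"]))
        est = m.inspect()                      # parameter estimates as a DataFrame
        out[nation] = est[est["op"] == "=~"]   # keep only the factor loadings
    return out

# Metric invariance, informally: the loadings should be similar across nations.
# loadings = loadings_by_group(via)
# for nation, tbl in loadings.items():
#     print(nation, tbl[["lval", "rval", "Estimate"]], sep="\n")
```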


2015 · Vol 4 (3) · pp. 170-180
Author(s): Belinda C. Goodwin, Matthew Browne, Matthew Rockloff, Phillip Donaldson
