Identifying a preinstruction to postinstruction factor model for the Force Concept Inventory within a multitrait item response theory framework

Author(s):  
Philip Eaton ◽  
Shannon Willoughby


Author(s):  
Levent Kirisci ◽  
Ralph Tarter ◽  
Maureen Reynolds ◽  
Michael Vanyukov

Background. Item response theory (IRT) based studies conducted on diverse samples have shown a single dominant factor for DSM-III-R and DSM-IV substance use disorder (SUD) abuse and dependence symptoms across alcohol, cannabis, sedative, cocaine, stimulant, and opiate use disorders. IRT provides the opportunity, within a person-centered framework, to accurately gauge each person's severity of disorder, which, in turn, informs the required intensity of treatment. Objectives. The aim of this study was to determine whether SUD symptoms indicate a unidimensional trait or instead need to be conceptualized and quantified as a multidimensional scale. Methods. The sample was composed of families of adult SUD+ men (n=349) and SUD+ women (n=173) who qualified for a DSM-III-R diagnosis of substance use disorder (abuse or dependence), and families of adult men and women who did not qualify for a SUD diagnosis (SUD- men: n=190; SUD- women: n=133). An expanded version of the Structured Clinical Interview for DSM-III-R (SCID) was administered to characterize lifetime and current substance use disorders. Item response theory methodology was used to assess the dimensionality of DSM-III-R SUD abuse and dependence symptoms. Results. A bi-factor model provided the optimal representation of the factor structure of SUD symptoms in males and females. SUD symptoms are scalable as indicators of a single common factor, corresponding to a general (non-drug-specific) liability to addiction, combined with drug-specific liabilities. Conclusions. IRT methodology used to quantify the continuous general liability to addiction (GLA) latent trait in individuals having SUD symptoms was found effective for accurately measuring SUD severity in men and women. This may be helpful for person-centered medicine approaches seeking to tailor the intensity of treatment.
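
For readers unfamiliar with the bi-factor parameterization, the following minimal Python sketch shows how the probability of endorsing a symptom can combine a general liability factor with a drug-specific factor under a 2PL-style model. All parameter values and the cluster label are illustrative assumptions, not estimates from this study.

```python
import numpy as np

def bifactor_p(theta_g, theta_s, a_g, a_s, d):
    """Probability of endorsing a symptom under a 2PL-style bifactor
    model: one general liability factor (theta_g) plus one
    drug-specific factor (theta_s) for the symptom's cluster."""
    return 1.0 / (1.0 + np.exp(-(a_g * theta_g + a_s * theta_s + d)))

# Hypothetical parameters for one cannabis-cluster symptom: it loads
# on the general liability (a_g) and the cannabis-specific factor (a_s).
a_g, a_s, d = 1.4, 0.8, -0.5

# Two hypothetical respondents with the same general liability but
# different drug-specific liability.
print(bifactor_p(theta_g=1.0, theta_s=0.0, a_g=a_g, a_s=a_s, d=d))
print(bifactor_p(theta_g=1.0, theta_s=1.5, a_g=a_g, a_s=a_s, d=d))
```

In this structure the general factor plays the role of the GLA trait described above, while each specific factor absorbs the residual covariance among symptoms of a single drug class.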


2016 ◽  
Vol 15 (4) ◽  
pp. ar64 ◽  
Author(s):  
Steven T. Kalinowski ◽  
Mary J. Leonard ◽  
Mark L. Taper

We developed and validated the Conceptual Assessment of Natural Selection (CANS), a multiple-choice test designed to assess how well college students understand the central principles of natural selection. The expert panel that reviewed the CANS concluded that its questions were relevant to natural selection and generally did a good job of sampling the specific concepts they were intended to assess. Student interviews confirmed that questions on the CANS accurately reflected how students think about natural selection. Finally, statistical analysis of student responses using item response theory showed that the CANS estimated how well students understood natural selection with high precision. The empirical reliability of the CANS was substantially higher than that of the Force Concept Inventory, a highly regarded physics test with a similar purpose.
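
As a point of reference, empirical reliability in IRT is commonly computed as the ratio of the variance of the ability estimates to that variance plus the mean squared standard error. A minimal Python sketch, using illustrative numbers rather than CANS data:

```python
import numpy as np

def empirical_reliability(theta_hat, se):
    """Empirical (marginal) reliability of IRT ability estimates:
    rho = var(theta_hat) / (var(theta_hat) + mean(se^2))."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    se = np.asarray(se, dtype=float)
    var_hat = theta_hat.var(ddof=1)
    return var_hat / (var_hat + np.mean(se**2))

# Hypothetical ability estimates and standard errors for five
# examinees (not data from the CANS study).
theta_hat = [-1.2, -0.3, 0.1, 0.8, 1.5]
se = [0.45, 0.38, 0.36, 0.40, 0.50]
print(round(empirical_reliability(theta_hat, se), 3))
```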


2010 ◽  
Vol 7 (2) ◽  
Author(s):  
Alenka Hauptman

In the Slovene General Matura, Mathematics is one of the compulsory subjects, and it can be taken at either the Basic or the Higher Level of Achievement. Achievement at the Basic Level is expressed on the classic five-grade scale from 1 to 5; candidates at the Higher Level can receive grades on a scale from 1 to 8. The conversion of points into grades (i.e., summing points from the written tests and the internal examination and deriving the grade from that sum) is set independently at each Level, and we investigated whether the same grade at each Level of Achievement corresponds to the same knowledge. Once assigned, grades are used comparatively in selection procedures for admission to University. Both the Basic and Higher Level Mathematics exams include the same Part 1; the second part of the exam (Part 2) is taken only by Higher Level candidates. Part 1 accounts for 80% of the total points at the Basic Level and 53.3% at the Higher Level; Higher Level candidates earn the remaining 26.7% of their points in Part 2. The oral part of the exam contributes 20% of the grade at both Levels. In this paper we show a discrepancy in knowledge between candidates awarded the same grade at the Basic and Higher Levels of Achievement, using the Mathematics exam from the 2008 General Matura as an example. A Rasch model within the item response theory framework was used to place item difficulties on a common scale, and the comparability of the grade conversions at the two Levels was explored. The results show interesting differences in the knowledge of candidates with the same grade at the Basic and Higher Levels of Achievement.
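
A brief Python sketch illustrates the Rasch model and a simple mean-shift linking of shared items, showing how difficulties from two separately calibrated exams can be placed on a common scale. The difficulty values are hypothetical, and the study's actual calibration details may differ.

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch model: probability of a correct response given
    ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical difficulties of anchor items (Part 1 appears in both
# the Basic and Higher Level exams), each calibrated separately.
b_basic  = np.array([-0.8, -0.2, 0.5, 1.1])
b_higher = np.array([-1.1, -0.5, 0.2, 0.8])

# Mean-shift linking (the Rasch slope is fixed at 1, so a constant
# shift suffices): move the Higher Level calibration so the shared
# items line up on the Basic Level scale.
shift = b_basic.mean() - b_higher.mean()
b_higher_linked = b_higher + shift
print(b_higher_linked)

# Under the Rasch model, a candidate with theta = 0.5 answers the
# first (now linked) item correctly with this probability:
print(round(rasch_p(0.5, b_higher_linked[0]), 3))
```

With all item difficulties on one scale, the points-to-grades conversions at the two Levels can be compared directly in terms of the ability each grade boundary implies.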


2019 ◽  
Vol 80 (3) ◽  
pp. 461-475
Author(s):  
Lianne Ippel ◽  
David Magis

In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic for evaluating the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned, and new ASE formulas were derived from a general asymptotic theory framework. Furthermore, exact standard errors have been suggested to better evaluate the precision of ability estimators, especially with short tests, for which the asymptotic framework is invalid. Unfortunately, the accuracy of exact standard errors has so far been assessed only in a very limited setting. The purpose of this article is to perform a global comparison of exact versus (classical and new formulations of) asymptotic standard errors for a wide range of usual IRT ability estimators and IRT models, with short tests. Results indicate that exact standard errors globally outperform the ASE versions in terms of reduced bias and root mean square error, while the new ASE formulas are also globally less biased than their classical counterparts. Further discussion of the usefulness and practical computation of exact standard errors is also provided.
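
To make the contrast concrete, the sketch below computes, for a short hypothetical 2PL test, both the classical ASE of the maximum likelihood (ML) ability estimator, 1/sqrt(I(theta)), and an exact standard error obtained by enumerating all response patterns and weighting bounded ML estimates by their pattern probabilities at the true ability. The item parameters and the bounding interval are illustrative assumptions, not the article's design.

```python
import itertools
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.0, 1.2, 0.8, 1.5])   # discriminations (illustrative)
b = np.array([-0.5, 0.0, 0.5, 1.0])  # difficulties (illustrative)

def p(theta):
    """2PL response probabilities for all items at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ase_ml(theta):
    """Classical asymptotic SE of the ML estimator: 1/sqrt(I(theta)),
    with test information I(theta) = sum a^2 * P * (1 - P)."""
    pr = p(theta)
    info = np.sum(a**2 * pr * (1.0 - pr))
    return 1.0 / np.sqrt(info)

def ml_estimate(x, lo=-4.0, hi=4.0):
    """Bounded ML ability estimate for response pattern x (the bounds
    keep perfect and zero patterns finite)."""
    def negll(theta):
        pr = p(theta)
        return -np.sum(x * np.log(pr) + (1 - x) * np.log(1.0 - pr))
    return minimize_scalar(negll, bounds=(lo, hi), method="bounded").x

def exact_se(theta):
    """Exact SE: enumerate all 2^J response patterns and take the
    standard deviation of the estimator, weighting each pattern by
    its probability at the true theta."""
    pr = p(theta)
    ests, wts = [], []
    for pattern in itertools.product([0, 1], repeat=len(a)):
        x = np.array(pattern)
        wts.append(np.prod(np.where(x == 1, pr, 1.0 - pr)))
        ests.append(ml_estimate(x))
    ests, wts = np.array(ests), np.array(wts)
    mean = np.sum(wts * ests)
    return np.sqrt(np.sum(wts * (ests - mean) ** 2))

print("ASE at theta=0:     ", round(ase_ml(0.0), 3))
print("Exact SE at theta=0:", round(exact_se(0.0), 3))
```

With only four items the asymptotic approximation has little justification, which is exactly the regime in which the article argues exact standard errors are preferable.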

