Is the Item-Position Effect in Achievement Measures Induced by Increasing Item Difficulty?

2017 ◽  
Vol 24 (5) ◽  
pp. 745-754 ◽  
Author(s):  
Florian Zeller ◽  
Siegbert Reiß ◽  
Karl Schweizer

2016 ◽  
Vol 41 (2) ◽  
pp. 115-129 ◽  
Author(s):  
Sebastian Weirich ◽  
Martin Hecht ◽  
Christiane Penk ◽  
Alexander Roppelt ◽  
Katrin Böhme

This article examines the interdependency of two context effects that are known to occur regularly in large-scale assessments: item-position effects and effects of test-taking effort on the probability of correctly answering an item. A microlongitudinal design was used to measure test-taking effort over the course of a 60-min large-scale assessment. Two components of test-taking effort were investigated: initial effort and change in effort. Both components significantly affected the probability of solving an item. In addition, participants’ current test-taking effort diminished considerably over the course of the test. Furthermore, a substantial linear position effect was found, indicating that item difficulty increased during the test. This position effect varied considerably across persons. Concerning the interplay of position effects and test-taking effort, only the change in effort moderated the position effect, and persons differed with respect to this moderation effect. The consequences of these results for the reliability and validity of large-scale assessments are discussed.
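
A minimal sketch of such a model, assuming a Rasch-type response function with a linear position term; the symbols (θ_p, β_i, pos_i, E_p, ΔE_pi, and the δ and γ coefficients) are illustrative notation, not the authors' own specification:

```latex
\operatorname{logit} P(X_{pi} = 1)
  = \theta_p - \beta_i                          % person ability, item difficulty
    - (\delta_0 + \delta_p)\,\mathrm{pos}_i     % fixed + person-specific position effect
    + \gamma_1 E_p + \gamma_2 \Delta E_{pi}     % initial effort, change in effort
    + \gamma_3 \Delta E_{pi}\,\mathrm{pos}_i    % moderation of the position effect
```

With this parameterization, a positive δ_0 corresponds to items becoming effectively harder at later positions, the variance of δ_p captures person differences in the position effect, and γ_3 expresses the reported moderation of the position effect by change in effort.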


2018 ◽  
Vol 26 (2) ◽  
pp. 368 ◽
Author(s):  
NIE Xugang ◽  
CHEN Ping ◽  
ZHANG Yingbin ◽  
HE Yinhong

2020 ◽  
Vol 36 (1) ◽  
pp. 96-104 ◽  
Author(s):  
Florian Zeller ◽  
Siegbert Reiß ◽  
Karl Schweizer

Abstract. The consequences of speeded testing for the structure and validity of a numerical reasoning scale (NRS) were investigated. Confirmatory factor models with and without an additional factor representing working speed were employed to investigate reasoning data collected under speeded paper-and-pencil testing and under only slightly speeded testing. To achieve a complete account of the data, the models also accounted for the item-position effect. The results revealed the factor representing working speed to be essential for achieving a good fit to data originating from speeded testing. The reasoning factors based on data from speeded and slightly speeded testing correlated highly with each other. The factor representing working speed was independent of the other factors derived from the reasoning data but related to an external score representing processing speed.
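
As a sketch of how such a model can be written down, assuming fixed-links conventions (a constant loading course for reasoning, a linear course for the item-position factor); the speed loadings λ_{s,i} and the exact courses are assumptions, not the article's specification:

```latex
x_i = \lambda_r\,\xi_{\text{reasoning}}
      + \frac{i-1}{n-1}\,\xi_{\text{position}}
      + \lambda_{s,i}\,\xi_{\text{speed}}
      + \varepsilon_i,
\qquad i = 1,\dots,n
```

Here the position loadings are fixed to increase linearly with the item number, while the speed loadings would be chosen to concentrate on late items, where time pressure bites in speeded testing.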


2008 ◽  
Vol 22 (1) ◽  
pp. 38-60 ◽  
Author(s):  
Jason L. Meyers ◽  
G. Edward Miller ◽  
Walter D. Way

2019 ◽  
Vol 35 (2) ◽  
pp. 182-189 ◽  
Author(s):  
Stefan J. Troche ◽  
Felicitas L. Wagner ◽  
Karl Schweizer ◽  
Thomas H. Rammsayer

Abstract. Although all four subtests of Cattell’s Culture Fair Test (CFT) claim to measure inductive reasoning as a facet of fluid intelligence, previous studies indicated surprisingly weak correlations among them. In the present study, we applied a fixed-links modeling approach to CFT-20R data of 206 participants to control for the confounding influence of the item-position effect on test performance and to reevaluate the structural validity of the CFT-20R. Controlling for the item-position effect resulted in two latent variables representing inductive reasoning: one for the subtests Series and Matrices and one for the subtests Classifications and Topologies. Given the correlation of r = .61 between these two latent variables, the structural validity of the CFT-20R proved to be better than suggested by traditional correlations between test scores.
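
A minimal numerical sketch of the fixed-links idea, with every dimension and parameter value hypothetical: the loading matrix Λ is fixed a priori, and in a real analysis only the factor variances in Φ and the residual variances in Θ would be estimated.

```python
import numpy as np

# Hypothetical fixed-links measurement model: one reasoning factor with
# constant loadings plus one item-position factor whose loadings increase
# linearly across item positions. All values are illustrative.
n_items = 12

lam_reasoning = np.ones(n_items)                   # constant course
lam_position = np.arange(n_items) / (n_items - 1)  # linear course

Lambda = np.column_stack([lam_reasoning, lam_position])  # n_items x 2, fixed
Phi = np.array([[0.5, 0.0],    # factor variances (estimated in practice);
                [0.0, 0.3]])   # position factor assumed uncorrelated here
Theta = np.diag(np.full(n_items, 0.4))             # residual variances

# Model-implied covariance matrix: Sigma = Lambda Phi Lambda' + Theta
Sigma = Lambda @ Phi @ Lambda.T + Theta
print(np.round(Sigma[:4, :4], 3))
```

Because the loadings are fixed rather than estimated, the factors keep a clear interpretation: variance attributable to the position factor can be separated from variance attributable to reasoning.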


2016 ◽  
Vol 78 (1) ◽  
pp. 46-69 ◽  
Author(s):  
Karl Schweizer ◽  
Stefan Troche

In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented items of a test, whereas the difficulty factor is ascribed to a wide range of item difficulties. The similarity of the measurement models hampers the dissociation of these two factors. Since the item-position effect should theoretically be independent of the item difficulties, a statistical ex post manipulation of the difficulties should enable discrimination between the two types of factors. This method was investigated in two studies. In the first study, Advanced Progressive Matrices (APM) data of 300 participants were examined. As expected, the factor thought to be due to the item-position effect was observed. In the second study, using data simulated to show the major characteristics of the APM data, the range of item difficulties was set to zero to reduce the likelihood of detecting the difficulty factor. Despite this reduction, the factor, now identified as the item-position factor, was observed in virtually all simulated datasets.
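
A toy simulation in the spirit of the second study may help; every quantity here (sample size, number of items, loading course, thresholds) is illustrative. Responses to later items share an increasing person-specific position component, and dichotomization thresholds are either spread out (a wide range of difficulties) or all equal (the range of difficulties "set to zero"); in both cases the heightened dependency among late items, the signature of the item-position factor, remains.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 300, 12

ability = rng.normal(0, 1, n_persons)          # general reasoning ability
pos_effect = rng.normal(0, 1, n_persons)       # person-specific position effect
weights = np.arange(n_items) / (n_items - 1)   # impact grows linearly with position

# Latent continuous responses: ability + growing position component + noise
latent = (ability[:, None]
          + weights[None, :] * pos_effect[:, None]
          + rng.normal(0, 1, (n_persons, n_items)))

# Scenario A: wide range of item difficulties (spread-out thresholds)
# Scenario B: range of difficulties set to zero (equal thresholds)
scenarios = {"wide": np.linspace(-1.5, 1.5, n_items),
             "flat": np.zeros(n_items)}

for name, thresholds in scenarios.items():
    resp = (latent > thresholds).astype(float)
    r = np.corrcoef(resp, rowvar=False)
    early = r[:4, :4][np.triu_indices(4, 1)].mean()
    late = r[-4:, -4:][np.triu_indices(4, 1)].mean()
    print(f"{name}: mean r among first 4 items = {early:.3f}, "
          f"mean r among last 4 items = {late:.3f}")
```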


2019 ◽  
Vol 40 (2) ◽  
pp. 71-81 ◽
Author(s):  
Ismael S. Al-Bursan ◽  
Emil O. W. Kirkegaard ◽  
John Fuerst ◽  
Salaheldin Farah Attallah Bakhiet ◽  
Mohammad F. Al Qudah ◽  
...  

Abstract. Sex differences in mathematical ability were examined in a nationwide sample of 32,346 Jordanian 4th graders (ages 9–10 years) on a 40-item mathematics test. Overall, boys were found to perform slightly worse (d = −0.12) but showed slightly more variation in scores (SD = 1.02 and SD = 0.98 for boys and girls, respectively). However, when results were disaggregated by school type, single-sex versus coed (i.e., coeducational), boys performed better than girls in coed schools (d = 0.27) but worse across single-sex schools (d = −0.37). Two-parameter item response theory analysis showed that item difficulty was similar across the sexes in the full sample. Item loadings, however, exhibited substantial departure from measurement invariance between boys and girls at single-sex schools. For boys and girls at coed schools, both the item-difficulty and item-loading correlations were highly similar, indicating that measurement invariance largely held in this case. Partially consistent with findings from other countries, a correlation between item difficulty and male advantage was observed, r = .57, such that the relative male advantage increased with item difficulty. Complicating interpretation, this association did not replicate within coed schools. Item content, Bloom’s cognitive taxonomy category, and item position showed no relation to sex differences.
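
To illustrate the kind of quantity behind the reported r = .57, the following sketch computes a per-item "male advantage" under a two-parameter logistic (2PL) model and correlates it with item difficulty; all parameter values are made up for illustration, so the sketch will not reproduce the study's result.

```python
import numpy as np

# Illustrative 2PL item response function (not the study's fitted model):
# P(correct) = 1 / (1 + exp(-a_i * (theta - b_i)))
def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(7)
n_items = 40
a = rng.uniform(0.8, 2.0, n_items)    # discriminations (hypothetical)
b = rng.normal(0.0, 1.0, n_items)     # difficulties (hypothetical)

# Hypothetical latent ability distributions mimicking d = -0.12 and
# the slightly larger male variance reported in the abstract
theta_boys = rng.normal(-0.12, 1.02, 5000)
theta_girls = rng.normal(0.0, 0.98, 5000)

# Per-item male advantage: difference in expected proportion correct
adv = np.array([p_correct(theta_boys, a[i], b[i]).mean()
                - p_correct(theta_girls, a[i], b[i]).mean()
                for i in range(n_items)])

# Correlation between item difficulty and male advantage, analogous in
# form to the reported r = .57
print(np.corrcoef(b, adv)[0, 1].round(3))
```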


2016 ◽  
Vol 77 (5) ◽  
pp. 743-765 ◽  
Author(s):  
Florian Zeller ◽  
Dorothea Krampen ◽  
Siegbert Reiß ◽  
Karl Schweizer

The item-position effect describes how an item’s position within a test, that is, the number of previously completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Because of the inflexibility of these representations, our aim was to examine whether adapted representations are more appropriate than the existing ones. Confirmatory factor analysis models were used to test the different representations. Analyses were conducted on simulated data that followed the covariance pattern of Raven’s Advanced Progressive Matrices (APM) items. Since the item-position effect has been demonstrated repeatedly for the APM, it is a very suitable measure for this investigation. The results revealed no remarkable improvement from using an adapted representation. Possible reasons for these results are discussed.
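
For concreteness, typical fixed "courses" for the position loadings λ_i of items i = 1, …, n might look as follows; the linear course is the one named in the abstract, while the quadratic and logarithmic variants are listed here as illustrative alternatives:

```latex
\lambda_i^{\text{linear}} = \frac{i-1}{n-1}, \qquad
\lambda_i^{\text{quadratic}} = \left(\frac{i-1}{n-1}\right)^{2}, \qquad
\lambda_i^{\text{logarithmic}} = \frac{\ln i}{\ln n}
```

An "adapted" representation would replace such an a priori course with one tailored to the data, for example by freeing or rescaling individual loadings.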

