Cognitive Validity: Can Multiple-Choice Items Tap Historical Thinking Processes?

2017 ◽  
Vol 54 (6) ◽  
pp. 1256-1287 ◽  
Author(s):  
Mark D. Smith

Psychometrika ◽  
2021 ◽  
Author(s):  
Qian Wu ◽  
Monique Vanerum ◽  
Anouk Agten ◽  
Andrés Christiansen ◽  
Frank Vandenabeele ◽  
...  

1987 ◽  
Vol 47 (2) ◽  
pp. 513-522 ◽  
Author(s):  
Steven V. Owen ◽  
Robin D. Froman

1990 ◽  
Vol 1990 (1) ◽  
pp. i-29 ◽  
Author(s):  
Randy Elliot Bennett ◽  
Donald A. Rock ◽  
Minhwei Wang

2021 ◽  
pp. 001316442098810 ◽  
Author(s):  
Stefanie A. Wind ◽  
Yuan Ge

Practical constraints in rater-mediated assessments limit the availability of complete data. Instead, most scoring procedures include one or two ratings for each performance, with overlapping performances across raters or linking sets of multiple-choice items to facilitate model estimation. These incomplete scoring designs present challenges for detecting rater biases, or differential rater functioning (DRF). The purpose of this study is to illustrate and explore the sensitivity of DRF indices in realistic sparse rating designs documented in the literature, which include different types and levels of connectivity among raters and students. The results indicated that it is possible to detect DRF in sparse rating designs, but that the sensitivity of DRF indices varies across designs. We consider the implications of our findings for practice related to monitoring raters in performance assessments.
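The general setup described in the abstract can be sketched in code. The following is a hypothetical illustration only, not the authors' indices or estimation model: it simulates a sparse design in which each performance receives two of ten possible ratings (overlap supplies the connectivity), injects a known bias for one rater toward one student subgroup, and flags DRF with a crude severity-gap index. All names, the bias magnitude, and the index itself are assumptions made for illustration.

```python
import random

random.seed(0)

# Hypothetical simulation of a sparse rating design (assumed values throughout).
n_students, n_raters = 200, 10
students = [{"id": i, "group": i % 2, "ability": random.gauss(0, 1)}
            for i in range(n_students)]

# Each rater has a baseline severity; rater 0 is additionally harsher
# toward group 1 -- this is the simulated differential rater functioning.
severity = [random.gauss(0, 0.3) for _ in range(n_raters)]
drf_bias = 0.8

# Sparse design: every performance is scored by only 2 of the 10 raters.
ratings = []  # tuples of (rater, student group, observed score)
for s in students:
    for r in random.sample(range(n_raters), 2):
        score = s["ability"] - severity[r]
        if r == 0 and s["group"] == 1:
            score -= drf_bias
        ratings.append((r, s["group"], score))

def severity_gap(r):
    """Mean score gap between subgroups for rater r (a crude DRF index)."""
    g0 = [sc for rr, g, sc in ratings if rr == r and g == 0]
    g1 = [sc for rr, g, sc in ratings if rr == r and g == 1]
    if not g0 or not g1:
        return None  # a sparse design may leave a rater unlinked to a group
    return sum(g0) / len(g0) - sum(g1) / len(g1)

gaps = {r: severity_gap(r) for r in range(n_raters)}
flagged = max(gaps, key=lambda r: abs(gaps[r] or 0))
print(f"rater flagged for largest severity gap: {flagged}")
```

With denser overlap the index stabilizes; with sparser or disconnected designs some raters may never be compared against both subgroups, which is one concrete way sparsity limits DRF detection.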

