Using Kane’s framework to build a validity argument supporting (or not) virtual OSCEs
2021 ◽ pp. 1-6
Author(s): Brian J. Hess ◽ Brent Kvern

2017 ◽ Vol 35 (4) ◽ pp. 477-499
Author(s): Ute Knoch ◽ Carol A. Chapelle

Argument-based validation requires test developers and researchers to specify what is entailed in test interpretation and use. Doing so has been shown to yield advantages (Chapelle, Enright, & Jamieson, 2010), but it also requires an analysis of how the concerns of language testers can be conceptualized in the terms used to construct a validity argument. This article presents one such analysis by examining how issues associated with the rating of test takers’ linguistic performance can be included in a validity argument. Through a manual search of published language testing research, we gathered examples of research studies investigating the quality of rating processes and products. We then analyzed them in terms of how the research could be framed within a validity argument. Drawing on Kane’s (2001, 2006, 2013) conceptualization of inferences, warrants, and assumptions, we show that the relevance of research about the rating of test performances extends beyond one or two inferences about rater reliability. Such research results, for example, provide backing for assumptions about the correspondence of the rating scale to the test construct (explanation inference) and the context of extrapolation, as well as about the decisions made on the basis of the ratings and their consequences. Our analysis reveals the extensive reach of the rating process into many aspects of test score meaning and yields concrete suggestions for integrating rating issues into future argument-based validation studies.
