Book Review: Equating Test Scores (Without IRT), by Samuel A. Livingston. Princeton, NJ: Educational Testing Service, 2004, 68 pp. (paperback)

2009 ◽ Vol 33 (8) ◽ pp. 640-642 ◽ Author(s): Gautam Puhan
2010 ◽ Vol 27 (3) ◽ pp. 335-353 ◽ Author(s): Sara Cushing Weigle

Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study approaches validity by comparing human and automated scores on responses to TOEFL® iBT Independent writing tasks with several non-test indicators of writing ability: student self-assessment, instructor assessment, and independent ratings of non-test writing samples. Automated scores were produced using e-rater®, developed by Educational Testing Service (ETS). Correlations between both human and e-rater scores and non-test indicators were moderate but consistent, providing criterion-related validity evidence for the use of e-rater along with human scores. The implications of the findings for the validity of automated scores are discussed.
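
For readers unfamiliar with how criterion-related evidence of this kind is typically quantified, the sketch below correlates human and automated essay scores with non-test indicators. It is purely illustrative: the sample size, score scales, and simulated data are assumptions and do not reproduce the study's analyses.

```python
# Illustrative sketch only: correlating essay scores with non-test criteria.
# All data, variable names, and scales here are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 100  # hypothetical number of examinees

# Hypothetical essay scores and non-test indicators of writing ability.
human_score = rng.normal(3.5, 0.8, n)                       # human rater score
erater_score = human_score + rng.normal(0, 0.5, n)          # automated score
self_assessment = 0.4 * human_score + rng.normal(0, 1, n)   # student self-rating
instructor_rating = 0.5 * human_score + rng.normal(0, 1, n) # instructor rating

for label, criterion in [("self-assessment", self_assessment),
                         ("instructor rating", instructor_rating)]:
    r_human, _ = pearsonr(human_score, criterion)
    r_erater, _ = pearsonr(erater_score, criterion)
    print(f"{label}: human r = {r_human:.2f}, e-rater r = {r_erater:.2f}")
```

Comparable correlations for human and automated scores against the same external criteria are the kind of pattern the abstract describes as criterion-related validity evidence.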


1985 ◽ Vol 55 (2) ◽ pp. 195-220 ◽ Author(s): James Crouse

The College Entrance Examination Board and the Educational Testing Service claim that the Scholastic Aptitude Test (SAT) improves colleges' predictions of their applicants' success. James Crouse uses data from the National Longitudinal Study of high school students to calculate the actual improvement in freshman grade point averages, college completion, and total years of schooling resulting from colleges' use of the SAT. He then compares those predictions with predictions based on applicants' high school rank. Crouse argues that the College Board and the Educational Testing Service have yet to demonstrate that the high costs of the SAT are justified by its limited ability to predict student performance.
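
The comparison described here, how much the SAT adds to predictions already available from high school rank, is commonly framed as an incremental R² question. The sketch below illustrates that framing with simulated data; the sample, coefficients, and outcome are assumptions and do not reproduce Crouse's National Longitudinal Study results.

```python
# Illustrative sketch of incremental prediction: how much does adding SAT
# scores improve prediction of freshman GPA over high school rank alone?
# All data are simulated and do not reproduce Crouse's analyses.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500  # hypothetical sample of applicants

hs_rank = rng.uniform(0, 100, n)                 # high school percentile rank
sat = 400 + 5 * hs_rank + rng.normal(0, 80, n)   # SAT correlated with rank
gpa = 1.0 + 0.02 * hs_rank + 0.0005 * sat + rng.normal(0, 0.4, n)

# Model 1: high school rank only.
X_rank = hs_rank.reshape(-1, 1)
r2_rank = LinearRegression().fit(X_rank, gpa).score(X_rank, gpa)

# Model 2: high school rank plus SAT.
X_both = np.column_stack([hs_rank, sat])
r2_both = LinearRegression().fit(X_both, gpa).score(X_both, gpa)

print(f"R² (rank only):  {r2_rank:.3f}")
print(f"R² (rank + SAT): {r2_both:.3f}")
print(f"Incremental R²:  {r2_both - r2_rank:.3f}")
```

A small incremental R² relative to the test's cost is the type of result Crouse's argument turns on.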

