How do Test Scores at the Ceiling Affect Value-Added Estimates?

2018 ◽  
Vol 5 (1) ◽  
pp. 1-6 ◽  
Author(s):  
Alexandra Resch ◽  
Eric Isenberg


2010 ◽  
Vol 24 (3) ◽  
pp. 167-182 ◽  
Author(s):  
Kevin Lang

One of the potential strengths of the No Child Left Behind (NCLB) Act enacted in 2002 is that the law requires the production of an enormous amount of data, particularly from tests, which, if used properly, might help us improve education. As an economist and as someone who served 13 years on the School Committee in Brookline, Massachusetts, until May 2009, I have been appalled by the limited ability of districts to analyze these data; I have been equally appalled by the cavalier manner in which economists use test scores and related measures in their analyses. The summary data currently provided are very hard to interpret, and policymakers, who typically lack statistical sophistication, cannot easily use them to assess progress. In some domains, most notably the use of average test scores to evaluate teachers or schools, the education community is aware of the biases and has sought better measures. The economics and statistics communities have both responded to and created this demand by developing value-added measures that carry a scientific aura. However, economists have largely failed to recognize many of the problems with such measures. These problems are sufficiently important that they should preclude any automatic link between these measures and rewards or sanctions. They do, however, contain information and can be used as a catalyst for more careful evaluation of teachers and schools, and as a lever to induce principals and other administrators to act on their knowledge.


2014 ◽  
Vol 104 (9) ◽  
pp. 2593-2632 ◽  
Author(s):  
Raj Chetty ◽  
John N. Friedman ◽  
Jonah E. Rockoff

Are teachers' impacts on students' test scores (value-added) a good measure of their quality? One reason this question has sparked debate is disagreement about whether value-added (VA) measures provide unbiased estimates of teachers' causal impacts on student achievement. We test for bias in VA using previously unobserved parent characteristics and a quasi-experimental design based on changes in teaching staff. Using school district and tax records for more than one million children, we find that VA models which control for a student's prior test scores provide unbiased forecasts of teachers' impacts on student achievement. (JEL H75, I21, J24, J45)


2020 ◽  
Vol 49 (5) ◽  
pp. 335-349
Author(s):  
Allison Atteberry ◽  
Daniel Mangan

Papay (2011) observed that teacher value-added measures (VAMs) from a statistical model using the most common pre/post testing timeframe, current-year spring relative to previous spring (SS), are essentially unrelated to those same teachers' VAMs when instead using next-fall relative to current-fall (FF). This is concerning, since this choice, made solely as an artifact of the timing of statewide testing, produces an entirely different ranking of teachers' effectiveness. Since subsequent studies (grades K/1) have not replicated these findings, we revisit and extend Papay's analyses in another Grade 3–8 setting. We find similarly low correlations (.13–.15) that persist across value-added specifications. We delineate and apply a literature-based framework for considering the role of summer learning loss in producing these low correlations.
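The comparison described above can be sketched as follows. The data and variable names here are invented for illustration (they are not from the study): each teacher's SS and FF value-added estimates share a common teacher effect but are contaminated by different summer-learning windows, which drives the correlation between the two rankings down.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 50

# Hypothetical "true" school-year teacher effect, common to both measures.
true_effect = rng.normal(size=n_teachers)

# Spring-to-spring (SS) gains mix the teacher effect with one summer window;
# fall-to-fall (FF) gains mix it with a *different* summer window.
# The summer components are independent, so they attenuate the correlation.
ss_vam = true_effect + 2.0 * rng.normal(size=n_teachers) \
    + rng.normal(scale=0.3, size=n_teachers)
ff_vam = true_effect + 2.0 * rng.normal(size=n_teachers) \
    + rng.normal(scale=0.3, size=n_teachers)

# Correlation between the two sets of teacher VAM estimates.
corr = np.corrcoef(ss_vam, ff_vam)[0, 1]
print(f"SS vs FF VAM correlation: {corr:.2f}")
```

With most of the variance in each estimate coming from timing-specific noise rather than the shared teacher effect, the simulated correlation is low, loosely mirroring the .13–.15 range reported above.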


2020 ◽  
Vol 10 (12) ◽  
pp. 390
Author(s):  
Ismail Aslantas

It is widely believed that the teacher is one of the most important factors influencing a student's success at school. In many countries, teachers' salaries and promotion prospects are determined by their students' performance. Value-added models (VAMs) are increasingly used to measure teacher effectiveness to reward or penalize teachers. The aim of this paper is to examine the relationship between teacher effectiveness and student academic performance, controlling for other contextual factors, such as student and school characteristics. The data are based on 7543 Grade 8 students matched with 230 teachers from one province in Turkey. To test how much progress in student academic achievement can be attributed to a teacher, a series of regression analyses were run including contextual predictors at the student, school and teacher/classroom level. The results show that approximately half of the differences in students' math test scores can be explained by their prior attainment alone (47%). Other factors, such as teacher and school characteristics, explain very little of the variance in students' test scores once prior attainment is taken into account. This suggests that teachers add little to students' later performance. The implication, therefore, is that any intervention to improve students' achievement should be introduced much earlier in their school life. However, this does not mean that teachers are not important. Teachers are key to schools and student learning, even if they are not differentially effective from each other in the local (or any) school system. Therefore, systems that attempt to differentiate "effective" from "ineffective" teachers may not be fair to some teachers.
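The regression logic described above, entering prior attainment first and then asking how much additional variance other predictors explain, can be sketched as follows. The data are synthetic and the effect sizes are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic data: prior attainment drives most of the current score,
# mirroring the finding that prior scores alone explain roughly half
# of the variance; the teacher contribution is deliberately small.
prior = rng.normal(size=n)
teacher_effect = rng.normal(scale=0.1, size=n)
current = 0.7 * prior + teacher_effect + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (with an intercept)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: variance explained by prior attainment alone.
r2_prior = r_squared(prior.reshape(-1, 1), current)

# Step 2: add further predictors (here, the simulated teacher effect)
# and see how much *additional* variance they account for.
r2_full = r_squared(np.column_stack([prior, teacher_effect]), current)

print(f"prior only: {r2_prior:.2f}, with teacher: {r2_full:.2f}")
```

The gap between the two R² values is the incremental contribution of the added predictors, which is the quantity the study reports as "very little" once prior attainment is controlled for.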


2014 ◽  
Vol 104 (9) ◽  
pp. 2633-2679 ◽  
Author(s):  
Raj Chetty ◽  
John N. Friedman ◽  
Jonah E. Rockoff

Are teachers' impacts on students' test scores (value-added) a good measure of their quality? This question has sparked debate partly because of a lack of evidence on whether high value-added (VA) teachers improve students' long-term outcomes. Using school district and tax records for more than one million children, we find that students assigned to high-VA teachers are more likely to attend college, earn higher salaries, and are less likely to have children as teenagers. Replacing a teacher whose VA is in the bottom 5 percent with an average teacher would increase the present value of students' lifetime income by approximately $250,000 per classroom. (JEL H75, I21, J24, J45)


2016 ◽  
Vol 32 (1) ◽  
pp. 55-85 ◽  
Author(s):  
Seth Gershenson ◽  
Michael S. Hayes

School districts across the United States increasingly use value-added models (VAMs) to evaluate teachers. In practice, VAMs typically rely on lagged test scores from the previous academic year, which necessarily conflate summer with school-year learning and potentially bias estimates of teacher effectiveness. We investigate the practical implications of this problem by comparing estimates from “cross-year” VAMs with those from arguably more valid “within-year” VAMs using fall and spring test scores from the nationally representative Early Childhood Longitudinal Study–Kindergarten Cohort (ECLS-K). “Cross-year” and “within-year” VAMs frequently yield significant differences that remain even after conditioning on participation in summer activities.

