The effect of school spending on student achievement: addressing biases in value‐added models

Author(s):  
Cheti Nicoletti ◽  
Birgitta Rabe


2014 ◽  
Vol 116 (1) ◽  
pp. 1-21 ◽  
Author(s):  
Spyros Konstantopoulos

Background
In the last decade, the effects of teachers on student performance (typically measured by state-wide standardized tests) have been re-examined using statistical models known as value-added models. These models aim to isolate the unique contribution of teachers to student achievement gains from grade to grade, net of student background and prior ability. Value-added models are now widely used, and some states use them to rank teachers. They measure teacher performance or effectiveness (via student achievement gains) with the ultimate objective of rewarding or penalizing teachers. Such practices have generated considerable controversy in the education community about the role of value-added models in high-stakes decisions about teachers, such as salary increases, promotion, or termination of employment.

Purpose
The purpose of this paper is to review the effects teachers have on student achievement, with an emphasis on value-added models. The paper also discusses whether value-added models are appropriately used as a sole indicator for evaluating teacher performance and making critical decisions about teachers' futures in the profession.

Research Design
This is a narrative review of the literature on teacher effects that includes evidence about the stability of teacher effects estimated by value-added models.

Conclusions
More comprehensive systems for teacher evaluation are needed, as is further research on the construction and evaluation of value-added models. The strengths and weaknesses of these models should be clearly described, and much more empirical evidence is needed on the reliability and stability of value-added measures across states. The findings thus far do not seem robust and conclusive enough to warrant decisions about raises, tenure, or termination of employment.
In other words, it is unclear that the value-added measures that inform the accountability system are adequate. It is not obvious that we are better equipped now to make such important decisions about teachers than we were 35 years ago. Good et al. have argued that we need well-thought-out and well-developed criteria that guide accountability decisions. Perhaps such criteria should be standardized across school districts and states. That would ensure that empirical evidence across different states is comparable and would help determine whether findings converge or diverge.


2006 ◽  
Vol 31 (1) ◽  
pp. 35-62 ◽  
Author(s):  
Joseph A. Martineau

Longitudinal, student performance-based, value-added accountability models have become popular and continue to grow in popularity. Such models require student data to be vertically scaled across wide grade and developmental ranges so that the value added to student growth/achievement by teachers, schools, and districts may be modeled accurately. Many assessment companies provide such vertical scales and claim that those scales are adequate for longitudinal value-added modeling. However, psychometricians tend to agree that scales spanning wide grade/developmental ranges also span wide content ranges, and that scores cannot be considered exchangeable along the various portions of the scale. This shift in the constructs being measured from grade to grade jeopardizes the validity of inferences made from longitudinal value-added models. This study demonstrates mathematically that the use of such “construct-shifting” vertical scales in longitudinal value-added models introduces remarkable distortions in the value-added estimates of the majority of educators. These distortions include (a) the identification of effective teachers/schools as ineffective (and vice versa) simply because their students’ achievement lies outside the developmental range measured well by “appropriate” grade-level tests, and (b) the attribution of prior teacher/school effects to later teachers/schools. Therefore, theories, models, policies, rewards, and sanctions based upon such value-added estimates are likely to be invalid because of distorted conclusions about educator effectiveness in eliciting student growth. This study identifies the highly restrictive scenarios in which current value-added models can be validly applied in high-stakes and low-stakes research uses. 
This article further identifies one use of student achievement data for growth-based, value-added modeling that is not plagued by the problems of construct shift: the assessment of an upper grade content (e.g., fourth grade) in both the grade below and the appropriate grade to obtain a measure of student gain on a grade-specific mix of constructs. Directions for future research on methods to alleviate the problems of construct shift are identified as well.


2006 ◽  
Vol 31 (2) ◽  
pp. 205-230 ◽  
Author(s):  
Harold C. Doran ◽  
J. R. Lockwood

Value-added models of student achievement have received widespread attention in light of the current test-based accountability movement. These models use longitudinal growth modeling techniques to identify effective schools or teachers based upon changes in student achievement test scores. Given their increasing popularity, this article demonstrates how to perform the data analysis necessary to fit a general value-added model using the nlme package available for the R statistics environment. We demonstrate techniques for inspecting the data prior to fitting the model, walk a practitioner through a sample analysis, and discuss general extensions commonly found across the literature that may be incorporated to enhance the basic model presented, including the estimation of multiple outcomes and teacher effects.
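The article's worked example uses R's nlme. As a rough analogue only (not the authors' code), the sketch below fits a comparable random-intercept value-added model in Python with statsmodels, on simulated data; every variable name and effect size here is invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 20 teachers x 15 students with known teacher effects
rng = np.random.default_rng(0)
n_teachers, n_students = 20, 15
teacher = np.repeat(np.arange(n_teachers), n_students)
true_effect = rng.normal(0, 2, n_teachers)
prior = rng.normal(50, 10, n_teachers * n_students)  # prior-year score
score = 5 + 0.8 * prior + true_effect[teacher] + rng.normal(0, 3, teacher.size)
df = pd.DataFrame({"teacher": teacher, "prior": prior, "score": score})

# Random-intercept growth model: current score regressed on prior score,
# with a random intercept per teacher (the "value-added" term)
result = smf.mixedlm("score ~ prior", df, groups=df["teacher"]).fit()

# Empirical Bayes estimates of each teacher's value added
vam = {g: re.iloc[0] for g, re in result.random_effects.items()}
```

Ranking teachers by `vam` recovers the ordering of the simulated `true_effect` values up to shrinkage; the nlme formulation discussed in the article supports richer structures than this minimal sketch, such as multiple outcomes and cumulative teacher effects.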


2016 ◽  
Author(s):  
Raj Chetty ◽  
John Friedman ◽  
Jonah Rockoff

2013 ◽  
Vol 83 (2) ◽  
pp. 349-370 ◽  
Author(s):  
Kimberlee Callister Everson ◽  
Erika Feinauer ◽  
Richard Sudweeks

In this article, the authors provide a methodological critique of the current standard of value-added modeling forwarded in educational policy contexts as a means of measuring teacher effectiveness. Conventional value-added estimates of teacher quality are attempts to determine to what degree a teacher would theoretically contribute, on average, to the test score gains of any student in the accountability population (i.e., district or state). Everson, Feinauer, and Sudweeks suggest an alternative statistical methodology, propensity score matching, which allows estimation of how well a teacher performs relative to teachers assigned comparable classes of students. This approach more closely fits the appropriate role of an accountability system: to estimate how well employees perform in the job to which they are actually assigned. It also has the benefit of requiring fewer statistical assumptions—assumptions that are frequently violated in value-added modeling. The authors conclude that this alternative method allows for more appropriate and policy-relevant inferences about the performance of teachers.
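As a hedged illustration of the general propensity-score-matching idea the authors propose (a generic sketch, not their specific implementation), one can estimate a treatment effect by matching treated units to controls with similar estimated assignment propensities; all names and numbers below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 500
prior = rng.normal(0, 1, n)                                    # observed covariate
treated = (rng.random(n) < 1 / (1 + np.exp(-prior))).astype(int)  # sorted assignment
gain = 0.3 * treated + 0.5 * prior + rng.normal(0, 1, n)       # true effect = 0.3

# Naive comparison is biased upward because assignment depends on `prior`
naive = gain[treated == 1].mean() - gain[treated == 0].mean()

# 1. Estimate the propensity of assignment from observed covariates
X = prior.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated unit to the nearest control on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(ps[treated == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))

# 3. Average gain difference across matched pairs
att = np.mean(gain[treated == 1] - gain[treated == 0][idx.ravel()])
```

Matching on the propensity score balances the covariate across the two groups, so `att` lands near the true effect of 0.3 while `naive` does not; the authors' point is that this comparison among similarly assigned classes requires weaker assumptions than conventional value-added estimation.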


2019 ◽  
Vol 20 (1) ◽  
pp. 26-44
Author(s):  
Godstime Osekhebhen Eigbiremolen

This article presents the first value-added model of the private school effect in Ethiopia, using the unique Young Lives longitudinal data. I found a substantial and statistically significant private school premium (about 0.5 standard deviations) in Maths, but not in the Peabody Picture Vocabulary Test (PPVT). The private school premium holds for both low- and high-ability children. The results are robust to sorting on unobserved ability, grouping on lag structures, and transfers between private and public schools. Combined with available contextual data, the empirical evidence suggests that the effectiveness of private primary schools may be due to the additional learning time and teacher attention enjoyed by their students. I also attempted to contribute methodologically to the literature by directly testing the structural assumption underpinning value-added models.
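One common way to probe the structural assumption behind such lagged-score value-added models (not necessarily the article's exact test) is to check whether a twice-lagged score adds information once the lagged score is controlled; under the assumption it should not. The sketch below illustrates this on simulated data where the assumption holds by construction, with all variable names and coefficients invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel where lagged achievement is a sufficient statistic,
# so the value-added structural assumption holds by construction
rng = np.random.default_rng(1)
n = 2000
private = rng.integers(0, 2, n)                 # invented school-type indicator
s0 = rng.normal(0, 1, n)                        # score two periods back
s1 = 0.6 * s0 + 0.5 * private + rng.normal(0, 1, n)
s2 = 0.6 * s1 + 0.5 * private + rng.normal(0, 1, n)
df = pd.DataFrame({"s0": s0, "s1": s1, "s2": s2, "private": private})

# Value-added regression: current score on lagged score and school type
vam = smf.ols("s2 ~ s1 + private", df).fit()

# Probe: the twice-lagged score should be irrelevant given the lag;
# a large coefficient on s0 here would signal a violated assumption
probe = smf.ols("s2 ~ s1 + s0 + private", df).fit()
```

In this simulation `vam` recovers the private-school premium of 0.5 and `probe` attaches a coefficient near zero to `s0`; on real data, a significant twice-lagged term would indicate that earlier inputs persist beyond what the lagged score captures.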


2011 ◽  
Vol 6 (1) ◽  
pp. 18-42 ◽  
Author(s):  
Cory Koedel ◽  
Julian R. Betts

Value-added modeling continues to gain traction as a tool for measuring teacher performance. However, recent research questions the validity of the value-added approach by showing that it does not mitigate student-teacher sorting bias (its presumed primary benefit). Our study explores this critique in more detail. Although we find that estimated teacher effects from some value-added models are severely biased, we also show that a sufficiently complex value-added model that evaluates teachers over multiple years reduces the sorting bias problem to statistical insignificance. One implication of our findings is that data from the first year or two of classroom teaching for novice teachers may be insufficient to make reliable judgments about quality. Overall, our results suggest that in some cases value-added modeling will continue to provide useful information about the effectiveness of educational inputs.

