Friday, November 6, 2009

Using Value Added to Assess Teacher Effectiveness

The Association for Public Policy Analysis and Management -- an organization not widely known outside of academia and technical policy circles -- puts on truly meaty conferences. I've attended three APPAM conferences to date, including the Annual Fall Research Conference going on in Washington, DC this week.

Education is merely one strand at APPAM, but the sessions feature some of the biggest names in educational research addressing very policy-relevant issues. The current conference features sessions on value-added modeling, school choice, teacher certification and induction, teacher performance pay, financial aid, college persistence, and more.

The session I attended yesterday on "Using Value Added To Assess Teacher Effectiveness" was excellent. It featured four papers, each of which I will undoubtedly oversimplify in this brief blog post. (I encourage you to seek out the papers and read them closely -- below I've linked to those that are available.)

One, by Dan Goldhaber and Michael Hansen (University of Washington), suggests that year-to-year correlations in value-added teacher effects are modest, but that pre-tenure estimates of teacher job performance do predict estimated post-tenure performance in both math and reading.

A second, by Julian Betts (UCSD) and Cory Koedel (University of Missouri-Columbia), suggests that bias does exist in value-added models due to student sorting, but that it can be overcome through the use of multiple years of value-added data; further, the study suggests that data from the first year or two of classroom teaching may be insufficient to make reliable judgments about teacher quality.

A third, by Michael Weiss of MDRC, suggests that teacher variability carries implications for measuring program effects within randomized controlled trials when those teachers are not randomly assigned.

And a fourth, by John Tyler (Brown University) and Tom Kane (Harvard University), found that teacher assessments made using classroom observation rubrics (such as Charlotte Danielson's) are closely aligned with value-added ratings of teachers.
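For readers unfamiliar with the basic mechanics, the core idea behind value-added estimation can be sketched in a few lines. This is a deliberately simplified illustration of the general approach -- not the actual model from any of the papers above -- using simulated data and made-up parameters: control for students' prior achievement, then attribute each teacher's average residual to that teacher.

```python
import numpy as np

# Illustrative value-added sketch on simulated data (all numbers are
# hypothetical, not drawn from any of the papers discussed above).
rng = np.random.default_rng(0)

n_teachers, students_per = 20, 25
true_effects = rng.normal(0, 0.2, n_teachers)          # simulated teacher effects
teacher = np.repeat(np.arange(n_teachers), students_per)

prior = rng.normal(0, 1, n_teachers * students_per)    # prior-year test score
score = 0.8 * prior + true_effects[teacher] + rng.normal(0, 0.5, prior.size)

# Step 1: control for prior achievement with a simple OLS regression.
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta

# Step 2: a teacher's value-added estimate is the mean residual
# across that teacher's students.
va = np.array([resid[teacher == t].mean() for t in range(n_teachers)])

# The estimates should track the simulated true effects, but noisily --
# which is why single-year estimates can be unreliable.
print(float(np.corrcoef(va, true_effects)[0, 1]))
```

The noise term is the crux of the debates in these papers: with one year of data and a couple dozen students per teacher, sampling error is large relative to true differences between teachers, which is why pooling multiple years of data improves reliability.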