This week, in light of a recent column criticizing the state’s new teacher evaluation program, we thought we’d take a moment to decipher some of the jargon surrounding this important reform.

Contrasting the Two Models: “Value-Added” v. “Student Growth”

Just what is a student-growth model, and how is it distinguished from the value-added model?

Essentially, a value-added model (VAM) follows three steps. Step 1: use a statistical formula to establish the amount of growth expected for a subgroup of students like the ones the teacher teaches (for example, low-income or ELL students); Step 2: calculate the actual amount of growth those students have made; Step 3: define the difference between the expected growth and the actual growth as the “value” the teacher has added.
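To make the three steps concrete, here is a minimal sketch of the arithmetic. The growth figures and test scores are purely hypothetical, invented for illustration; real VAMs use far more elaborate statistical models to produce the Step 1 expectation.

```python
def value_added(expected_growth, prior_scores, current_scores):
    """Step 2: average the actual score growth across the class.
    Step 3: value added = actual growth minus expected growth."""
    actual_growth = sum(c - p for p, c in zip(prior_scores, current_scores)) / len(prior_scores)
    return actual_growth - expected_growth

# Step 1 (assumed, not computed here): a statistical formula predicts,
# say, 10 points of growth for students with similar backgrounds.
expected = 10.0
prior = [60, 70, 50]     # hypothetical prior-year scores
current = [72, 78, 66]   # hypothetical current-year scores

print(value_added(expected, prior, current))  # 2.0 -> the teacher "added value"
```

A positive result means the class grew more than students like them were predicted to grow; a negative result means they grew less.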

It’s a good model. Why? Because it measures a student against himself as well as against peers from similar circumstances. VAM asks us to set a goal for an individual student based on what we know about students with similar backgrounds and performance records. It asks first, “How much do we predict a kid like this one will improve in a year?” and then, “Did we meet, miss, or exceed that expectation?”

A student-growth model (SAM), on the other hand, uses a vertical scale of test performances to chart each student’s growth from one year to the next. The question SAM asks is, essentially: “Is this fourth grader on an upward trajectory since third grade?” Under SAM, instead of using fancy statistics to establish an expectation for each student’s growth, we ask teachers to meet with their supervisors and set individual growth goals based on what they know about their students behaviorally, academically, and socially. Each teacher sets a goal for each student on a case-by-case basis. After establishing this expectation, you look at the student’s growth throughout the year and determine how effective the teaching has been in helping the student meet achievement goals.

If these two models sound extremely similar, it’s because they are. Both determine, based on some form of test results, how close a teacher has come to predictions about how his students will perform. The essential difference is that, in the SAM model, each teacher establishes his goals based on detailed subgroup analysis as well as individual students’ performance and behavioral backgrounds. At the end of the day, although no statistical formula establishes each teacher’s goals (as in VAM), the SAM model relies on the same considerations.

How Much Will the Test Scores Count?

Whether VAM or SAM, it’s important to remember three things about the manner in which districts will tally test scores for the purposes of evaluating their teachers.

First, despite protestations that the test-score-based rating will “inevitably be the main focus of evaluations,” the teacher evaluation framework explicitly limits standardized test scores to 22.5% of an overall evaluation (half of the 45% allotted, in total, to indicators of student growth and development).
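The arithmetic behind that cap is simple enough to show directly; this is just the split described above, not an official formula from the framework:

```python
# Student growth and development counts for 45% of the overall evaluation;
# standardized test scores may make up at most half of that component.
student_growth_weight = 0.45
test_score_cap = student_growth_weight / 2

print(test_score_cap)  # 0.225, i.e. 22.5% of the overall evaluation
```

In other words, even in the most test-heavy configuration a district could choose, scores drive less than a quarter of a teacher’s rating.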

Test scores are an important consideration because they are objective, quantifiable, and one way to measure our students’ academic gains. However, test scores can only tell us about part of a teacher’s responsibilities and challenges, and so they count for only part of a teacher’s evaluation (less than a quarter). The PEAC framework makes so much sense precisely because no single factor alone is going to be a tipping point! If a brilliant teacher’s test results are somehow an inaccurate measure of his performance, there’s plenty of room for that teacher’s evaluation to be rescued by any of the other contributing factors, such as administrator observations of performance.

Second, both teachers and administrators generally agree that this evaluation framework makes sense. An ongoing study conducted by the Center for Education Policy Analysis at UConn’s Neag School of Education finds that “[m]ost educators feel positive about the possibility of more frequent observation and the focus on student performance growth.” As with the implementation of any new system, there are going to be kinks to work out, but teachers themselves know that combining observations and test scores is a sound framework for evaluating their overall performance.

Third, as Connecticut’s educators begin to implement this new system, their valuable input will provide more useful data and feedback that will help improve the way the system works over time. At the end of the day, whether the state uses VAM or SAM, we at CCER are proud that our state is embarking on this important venture to establish fair evaluations for all educators. As Connecticut’s capable teachers and leaders rise to the challenge of implementation, we should support these trendsetters, whose feedback is going to make this evaluation system stronger every year.

3 thoughts on “A Closer Examination of the State’s New Teacher Evaluation Program”

  1. Terry says:

Love how you cherry-pick from the Neag report, as most teachers in the pilot also stated they did not receive adequate training, and neither did the evaluators.

    You and charters have cherry picking in common. Who knew?

    • CCER says:

Hi Terry, we aren’t cherry-picking. The point we are making, using the Neag study, is that teachers generally agree that the structure of the system makes sense. We also think we’ve made it quite clear, towards the end of the post, that there are going to be lingering issues that need to be worked out, and that the teachers who have piloted the program play an important role in improving the system!

      Thanks for reading!

  2. WakeUpCall says:

This plan is RUBBISH and only serves private interests. Many great teachers will be lost in this farce of so-called ‘reform’.
