Seattle has rolled out "Student Growth Ratings." Some teachers are devastated, some confused, and the vast majority are unaffected. Next year all grades 4-8 reading and math teachers and all 9th grade algebra teachers will receive SGRs. These teachers are called "Teachers of Tested Subjects." Despite the HSPE being THE big test students need to pass before graduation, 10th grade LA/Math teachers are not considered "Teachers of Tested Subjects." Last week some teachers were told they had "low," "typical," or "high" student growth. Watch this overview video if you are curious.
It's now state law that student test results be used in teacher and principal evaluations. How that information is acquired and used, though, is up to local bargaining. My district has packaged that data by taking a teacher's roster and measuring each student's growth against two different comparison groups in order to get two scores, a Student Growth Percentage and a Value Added Measure, using spring MSP and MAP scores.
First, for the Student Growth Percentage, the student's growth between one spring's assessment (MAP or MSP) and the next year's is held up against that child's "comparison group," a group made up of the other children who shared the first spring's score. If a child improved more than 92% of her comparison group, her SGP is 92. That's pretty good.
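As I understand it, the computation amounts to a percentile rank within the comparison group. Here's a minimal sketch of that idea, assuming the comparison group is simply every student who earned the same score on the first spring's test (the district's actual method may be more elaborate); the peer data below is invented:

```python
# Sketch of a Student Growth Percentage (SGP): what percent of the
# comparison group did this student out-grow? Assumes the comparison
# group is all students with the same prior-spring score.

def student_growth_percentile(my_growth, comparison_growths):
    """Percent of the comparison group this student grew more than."""
    beaten = sum(1 for g in comparison_growths if g < my_growth)
    return round(100 * beaten / len(comparison_growths))

# Invented growth values (next spring's score minus this spring's) for
# 25 students who all started from the same score.
peers = [-3, 0, 1, 2, 2, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8,
         8, 9, 10, 10, 11, 12, 13, 13, 15, 18]

# A student who grew 14 points grew more than 23 of the 25 peers.
print(student_growth_percentile(14, peers))  # -> 92
```

A growth of 14 points beats 23 of 25 peers, giving the SGP of 92 from the example above.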
The second measure that's used is a Value Added Measure. You can watch the video for it here, but it's a little like watching someone switch cups around so you don't know where the pea is. Value Added takes into consideration a student's comparison group regarding Special Ed services, being an English Language Learner, and Free and Reduced Lunch eligibility. The district has made a prediction of a child's growth based on these factors, and then compared the child's growth to others in his/her comparison group. It is possible for a child's predicted growth to be a DROP IN SCORE. While I appreciate that someone out there understands that a child who travels to France in July with a backpack full of Barnes and Noble's latest is likely to improve more than a child who spoke Somali all summer and babysat her younger siblings, I'm not comfortable being told my district has high expectations for every child except poor, learning disabled English Language Learners.
These two numbers are averaged, magically turned into a number between 1 and 100 (that's explained on a mythical 5th video, referenced but, so far, impossible to find), and teachers are placed in a "low," "typical" or "high" category.
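Since the 5th video is missing, here is only a guess at the combination step: average the two scores and bucket the result. The cutoffs (below 35 is "low," 65 and above is "high") are invented for illustration; the district's actual cutoffs are not published anywhere I can find:

```python
# A hedged sketch of the combine-and-bucket step. Cutoffs are assumed,
# not the district's real ones.

def rating(sgp, vam, low_cut=35, high_cut=65):
    combined = (sgp + vam) / 2
    if combined < low_cut:
        return combined, "low"
    if combined >= high_cut:
        return combined, "high"
    return combined, "typical"

print(rating(40, 28))  # -> (34.0, 'low')
print(rating(70, 62))  # -> (66.0, 'high')
```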
In my opinion this is not being done well. First, assigning teachers a rating that has any margin of error at all - and this one has a margin of error of 10-15% - diminishes the worth of using student data in the first place. Is it credible to call a teacher "low" if his student growth rating is 34 ("Low") but there might be a 15% margin of error, and the score is actually 39.1 ("Typical")?
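That arithmetic is worth spelling out. Assuming a cutoff of 35 between "low" and "typical" (the real cutoff isn't published), a 15% margin of error means a reported 34 and a reported 39.1 can describe the same teacher:

```python
# The worry above, in code: does the error band around a "low" score
# reach across the (assumed) cutoff into "typical" territory?

def could_be_typical(score, margin=0.15, low_cut=35):
    """True if score plus its margin of error crosses the cutoff."""
    return score * (1 + margin) >= low_cut

print(could_be_typical(34))  # 34 * 1.15 = 39.1, so True
print(could_be_typical(20))  # 20 * 1.15 = 23.0, safely "low": False
```

Any "low" score from about 31 up sits in this ambiguous zone, which is exactly the credibility problem.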
Second, I'm not comfortable with a teacher's ability to move kids being measured with a test like the MAP, which is an assessment teachers are absolutely unable to prepare children for. There are no released items for the MAP. There is no way to prepare kids, except to hope that teaching to Common Core standards will do the trick. I stood behind a student as he grappled with the question, "How does the use of a malapropism affect the meaning of John F. Kennedy's speech in the text above?" I could only hope that my student could wing it with his understanding of "mal" and his ability to use the process of elimination to pick the right answer.
Third, my daughters' test scores are pretty far down my list of priorities for what they get out of school. My younger daughter's MAP score was in the toilet last spring. Do I want her teacher to sit my daughter in a chair and drill her until her math scores climb to an acceptable level? Hardly. I'd prefer my daughter to be learning about what it is to be a human in this world, about science, and social skills, and being an effective problem solver and independent learner. I measure her teacher's effectiveness by how much my daughter looks forward to Monday mornings, and I don't think I'm alone in that.
Finally, I'm uncomfortable with where all of this is leading. We can't go back, and I don't want to go back, to teachers being secure in their jobs even when their children learned nothing. Looking at how much students grow under a teacher's care is appropriate, but we must do it right. A margin of error is not acceptable. Holding a very small percentage of teachers accountable and giving the others a career off is not acceptable. Measuring a teacher's effectiveness with assessments that mean everything to the teacher but nothing to the student is not acceptable. The End of Course exams, used to determine whether or not a child moves on, are much more credible measures of a child's ability.
Already I'm hearing colleagues consider private school, plot ways out of teaching "tested" subjects or the same subject two years in a row, and voice frustration and bitterness. If the point of this exercise is to reward excellent teachers and encourage them to teach challenging kids, the mission will fail if those very teachers are demoralized, targeted, and hopeless. The bottom line is that Student Growth Measures don't help students as much as a more thoughtful, meaningful measurement of growth would.
As the voice says on Seattle's videos, "Because of the challenges associated with quantifying the contribution of a teacher to student learning, the evaluator is strongly cautioned against making assumptions about the teacher's instructional effectiveness." If the evaluator can't make assumptions based on this data, then what's the point?