This summer I saw something very similar to the chart you see here. A principal was explaining how she gathered and analyzed student data, and how that data drove her administrative decisions.
You can see, if you click on the chart, that Teacher B didn't do so hot. Because a teacher's reputation precedes him, what's going to happen if parents find out about Teacher B's scores? Will they request a class change? Will they complain? Test scores are scary for teachers because they don't tell the whole story, but they tell an important part of the story.
You see that Teacher B's students lost ground in reading and math between the spring of their third grade year and that of their fourth, while Teachers A and C had students who improved.
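The growth the chart shows is simple arithmetic: each teacher's students' spring fourth-grade scores minus their spring third-grade scores. A minimal sketch with made-up numbers (not the actual chart data, and not the district's real formula) looks like this:

```python
# Hypothetical spring scale scores for each teacher's class averages.
# These numbers are invented for illustration only.
third_grade_spring = {"A": 210, "B": 215, "C": 208}
fourth_grade_spring = {"A": 224, "B": 209, "C": 223}

# Growth = spring of fourth grade minus spring of third grade.
growth = {t: fourth_grade_spring[t] - third_grade_spring[t]
          for t in third_grade_spring}

for teacher in sorted(growth):
    print(f"Teacher {teacher}: {growth[teacher]:+d} points")
```

In this invented example, Teachers A and C come out positive while Teacher B comes out negative, which is the pattern the chart showed.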
When I sit down for my PG&E post-conference and my administrator discusses student growth with me, she's expecting (and hoping) that I will be like Teachers A and C.
We can make excuses for Teacher B: more kids in poverty, disruptive kids, kids who spent the summer in front of the TV. But no. The demographics of the three classrooms were the same. The difference was that Teacher B, while he had "the most loving classroom in the school," was teaching at more of a third grade level than a fourth grade level. The principal examined the situation, made a diagnosis, moved the teacher to third grade, and plans to support him in the upcoming year.
I think this is impressive. As teachers, we worry about tests and how the data can be disconnected from the human value of what we're doing with our students, but if the data is never examined, maybe we'll never reach our potential as educators. If our administrators aren't trained to problem-solve once the data reveals weak spots, then someone who might be an outstanding third grade teacher is lost just because he wasn't very good at fourth grade.
Life skills matter, but so do academics. This study was done by a group of economists who were at first skeptical about "value-added data," but once they started looking they found that "the value-added scores consistently identified some teachers as better than others, even if individual teachers' value-added scores varied from year to year." Teachers who consistently moved their students forward from September to June had a lasting positive impact on pregnancy rates, college completion, and earnings.
How do we continue to make sure that when student data is used, it's used in a way that improves student outcomes and strengthens a teacher's practice? How do we ensure that the data accurately measures a student's growth, and not a parent's tutoring, or a summer's enrichment?