Apparently, Klein had rigged the whole process so that any teacher whom one of his Leadership Academy principals didn't like simply got a U rating for no real reason. The backdrop to this was an appeal process with a three-person panel (one member picked by the union, one picked by the city and one picked in some other Byzantine way) that had been completely compromised: 99.9% of these U ratings were upheld on appeal, often with no evidence at all to support them. I can't tell you how many hours I spent in monthly Chapter Leader meetings listening to uncomfortable details about how this process was unfolding. I was CL for exactly two years, and almost half of each meeting back then was devoted to this one topic.
So now, almost eight years later (and six years after Klein lost interest and moved on), the UFT has finally arrived at a defense for these capricious U ratings: the new new new new teacher evaluation system of 2014/2015/2016.
Groups ranging from StudentsFirst to the UFT's MORE caucus are objecting to it, and the DoE and the UFT's Unity Caucus are heralding it.
Both sides are completely wrong. The simple fact is that the newest version of the evaluation is rooted in recent history, the result of scars still carried by the people who run the city union and, at the end of the day, neither good nor bad. It causes neither harm nor pain and will not advance or regress our profession or our students in the slightest way. It is as useless, and as useful, as a teat on a very old bull.
Now if everyone can stop clapping and stop objecting, that'd be great. Thanks.
The essence of the system is two words: Multiple Measures. Once you understand the concept of multiple measures, you'll understand how utterly neutral this new agreement is. The ICEUFT blog, written by a longtime chapter leader who understood the world of the S/U rating system, has objected that the union agreed to too many classroom observations (three times what other districts in the state get). I was surprised at that objection because he, of all chapter leaders, had to deal with high-stakes classroom observations that resulted in unfair U ratings, ratings that many times killed a teacher's whole career. The more classroom visits we have, the more watered down each visit's rating becomes and the harder it is to wreck a teacher's career. Most teachers will have 4-6 observations over the course of the entire year. That's 4-6 measures.
Fact: The more observations that are required, the harder an administrator has to work to hurt a teacher. It's easy to wreck a teacher's career when you have only 40 observations to perform per year. Try keeping on top of that wicked task with 250 observations to perform, data to enter in detail and 'on the record' reports to generate for each and every observation. It's a lot harder to go after a teacher under that process.
That's not to say that it can't be done. Of course it can. But when it is done, the observations had better align with the teacher's test results. Because if the observation ratings don't match the test scores, the teacher still escapes with his or her middle finger fully intact.
And that test may now count for as much as half of the rating. That's unbelievably not good. However, under the previous system, where testing counted for forty percent (just under half), the tests themselves were cut into two different categories. For HS teachers, this meant that how your own students did counted for 20% and (in many cases) how all of the school's students performed on tests in your department counted for another 20%. That's 2 more measures. These were combined, and when they were, it looked really bad for your capricious, abusive administrator if they did not match his or her observations of your teaching.
That's not to say this system isn't bad. The truth is that no one pays attention to Danielson at all until the teacher has been rated "I" for close to a year. Something (like Danielson) that no one understands can easily be used as a stick to beat someone with. But this stick is more reversible. It has a built-in review process, and almost all of the APPR grievances in my building result in the observation being removed. Why? Because the process is so difficult to keep up with that it's almost impossible to follow without breaking some rule of some sort. The best job in the NYC DoE is that of tenured AP. They don't want to ruin all that cushiness just to go after a teacher.
However, I was VERY surprised at the UFT leadership for bragging about the results Matrix used to determine final ratings. They spoke about it as though it was both simple AND fair. Let's be clear: the Matrix is neither simple nor fair. (The whole system is neither simple nor fair.) True, the Matrix doesn't default to ineffective if your test scores suck (and it doesn't default to ineffective if your observations suck). But the Matrix, paired with a Danielson process so complex that no one pays attention to it, does nothing to improve teaching. And teachers who don't understand the process will have a more difficult time preparing to defend their jobs.
Final thought: This is where we'll all be left for a whole generation of teachers. While it doesn't hurt too much and doesn't help at all, the current evaluation system does have the benefit of operating from political consensus up in Albany. Because of this, the current system isn't going anywhere (in any substantial way) for the rest of our careers. This is it, folks. This (finally) is the hand we've been dealt: a system that is difficult to maneuver, where administrators will have to work their asses off to end our careers and where data and test scores (much of which is generated by us) will dictate much of our review scores.
No one's happy. No one's hurt. "Welcome. To the real world."