I don't think the growth scores themselves matter at all. Having said that, there are a few things that do matter. Some matter a great deal; others, perhaps a bit less. But the general gist is this: data that PREVENTS me from seeing how well my teaching helped a particular student is worthless data. Data that PREVENTS me from seeing which category of students I had the least success with on these tests is worthless data. And data that cannot be verified, that does not treat the recipient of the data like an adult and offer the actual numbers, is worthless data.
I accept my crummy Effective (87). Not that it matters, because it doesn't.
What matters is that I am not able to see how each of my students performed against the rest of the students in the state:
What matters is that I was not able to see their growth scores. I was not able to see how each one performed against other similar students across the state:
What matters is that I was not able to see how well (or how poorly) my instructional strategies worked with, say, my ELL students, or with my Latino students who speak English as a first language but come from homes below the poverty line, or with students who have test modifications, or with the ones who do not.
What matters is that I was not able to see which category each of my students was placed in by NYSED in order to verify that NYSED placed them in the correct categories.
I was not able to see MY MGP (my Mean Growth Percentile: the average of the growth percentiles into which all of my students fell), nor am I able to see the MGP of all of the other "US History Teachers" like me throughout the state.
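To make the arithmetic concrete: the MGP described above is nothing more than an average of student growth percentiles. The sketch below illustrates that calculation; the percentile values are made up for illustration, not taken from any real teacher's data.

```python
# Hypothetical sketch of a Mean Growth Percentile (MGP) calculation:
# each student receives a growth percentile (1-99) comparing their
# progress to that of similar students statewide, and the teacher's
# MGP is simply the mean of those percentiles.

def mean_growth_percentile(student_percentiles):
    """Average the students' growth percentiles (assumed to be 1-99)."""
    if not student_percentiles:
        raise ValueError("need at least one student percentile")
    return sum(student_percentiles) / len(student_percentiles)

# Made-up example class of six students:
percentiles = [42, 77, 51, 63, 88, 39]
print(round(mean_growth_percentile(percentiles), 1))  # 60.0
```

The point of the complaint stands out in the sketch: without the individual percentiles (the list above), the published average is unverifiable.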
What matters is that I am not able to see the MGP for teachers in my district (all over NYC). Was my 15/20 and 15/20 the very highest score for all Chemistry teachers across the city? Was I "The Best" Chemistry teacher in NYC? What if I wanted to know (I don't)? Or did my instructional practices leave me the worst in NYC? Sure, my score is Effective when measured against the performance of students all across NYS, but teachers from the city are THE BEST teaching corps in the state. It matters that I cannot measure against only them.
What matters even more is that it tells me nothing about how well my students are doing against other students in NYC (or how well my strategies helped my students succeed as opposed to the strategies used by other teachers in NYC).
You see, without a way to validate the information I have been given, I am excluded from finding ways to improve (or even change) my instructional practices with the students who didn't find success. I'm also excluded from finding teachers whose best practices are, well, best-er than mine. And do you know what that means? It means that NYSED has prevented me from finding further success at my job via improvement. They're not permitted to do that, and it is causing damage to my property (which, in this instance, I identify as my ability to succeed and improve at my job). Because of this, I'm considering a lawsuit to recover the damages taken by NYSED.
These things, to varying degrees, DO matter.
Update: They do promise some of these things by way of a 'detailed workbook' sometime next week, or maybe the week after. But I have my suspicions about their idea of detail.
I may just head over to the blogs and paste this in the comments section as well. Don't think ill of me if I do. (OK, I really don't care what you think of me. I was just trying to be nice.)
How do we fight this? A lawsuit seems the only approach, since the UFT is unresponsive. Syracuse and Rochester have each filed suit. NYC teachers should also file suit. There are some amazing anomalies in the data. Based on recent articles, it seems there would be some awesome testimony from expert witnesses (at the American Statistical Association and the American Mathematical Society). These inaccurate and misleading statistics are damaging teachers' careers. . . And to your point
. . . about not actually getting useful data: I once had a conversation with someone who worked in the Office of Teacher Accountability while at a cocktail reception for Fulbright Scholars. I asked him why we never got useful data in a timeframe that would allow us to make educational decisions. I asked why they don't just give us the raw data. He said, "You and I know you can make data say whatever you want it to [and that's what we're doing]. We don't release it right away because we aren't done with it." So, there you have it. Straight from the horse's mouth.