Since writing this piece about the initial press release, I've had an opportunity to look over the full decision. There is way too much to cover in one blog post, so I'll try to focus on one thing at a time. In this post, I'll focus on how scores are tallied. Here, along with a few screenshots from my iPad, are a few things I noticed about how the scores will be compiled.
Getting from 1 - 4 (HEDI) to actual numbers
We'll all have our Danielson observations, where the administrator will give us scores of 1-4. All those 1s, 2s, 3s and 4s then go back to the main office, where they are averaged and turned into something that will probably be called a Peer Index (PI). From there, it looks like the PIs from each subdomain will be converted into an overall PI, which is then converted into an overall 100-point score using the conversion chart below. That score is your final score for the 60%. (No student surveys this year; those start next year, so the score goes up to 60, not 55.)
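The post doesn't include the actual conversion chart, so the chart step can't be reproduced here, but the averaging step it describes can be sketched. This is just a guess at the mechanics, with a made-up function name:

```python
# Sketch of the averaging step described above: the 1-4 subdomain
# scores get averaged into an overall PI. The chart that converts
# the PI into a point score isn't reproduced here.
from statistics import mean

def overall_pi(subdomain_scores):
    """Average the 1-4 Danielson subdomain scores into one overall PI."""
    assert all(1 <= s <= 4 for s in subdomain_scores), "HEDI scores run 1-4"
    return mean(subdomain_scores)

print(overall_pi([3, 3, 4, 3]))  # 3.25
```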
As you may know, Domain 4 won't be observable in the traditional sense. Because of that, we'll be "allowed" to put time, effort, and energy into creating a portfolio-like collection of teaching artifacts. Those teaching artifacts (which will apply to anything from Domains 1 and 4) will count for only 25% of those 60 points (a total of 15 points). The teaching artifacts can be presented to the principal during a mandatory end-of-year 'summary conference'. We'll get to present only 8 artifacts, and each will have its own individual "score". Domains 2 and 3 will count for 75% of that score (a total of 45 points). Here is a screenshot of some examples of what some of those artifacts might be.
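The 25/75 split works out the way the post says. A minimal sketch of the arithmetic, with hypothetical names for the two pieces (the post doesn't say exactly how each piece gets its points):

```python
# The 60-point "other measures" component, split as described:
# 25% for Domain 1 & 4 artifacts, 75% for Domain 2 & 3 observations.
OTHER_MEASURES_TOTAL = 60   # no student surveys this year, so 60, not 55
ARTIFACT_SHARE = 0.25       # 60 * 0.25 = 15 points
OBSERVATION_SHARE = 0.75    # 60 * 0.75 = 45 points

def other_measures_score(artifact_points, observation_points):
    """Combine the two pieces of the 60-point score.

    artifact_points:    0-15, from the (up to 8) portfolio artifacts
    observation_points: 0-45, from the Domain 2 & 3 observations
    """
    assert 0 <= artifact_points <= OTHER_MEASURES_TOTAL * ARTIFACT_SHARE
    assert 0 <= observation_points <= OTHER_MEASURES_TOTAL * OBSERVATION_SHARE
    return artifact_points + observation_points

print(other_measures_score(15, 45))  # 60, the maximum
```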
Tallying our final score
I don't know how they'll get from whatever our state assessments will be to the final 100-point score, but here are the final point ranges, with the corresponding rating, that each final score will fall within. Update: Using numbers I found in the last appendix of the decision, I came up with different cutoffs for the two local 20-point components. Consensus in the ol' blogosphere seems to point to the fact that I was totally wrong, so I'm changing them but keeping the original graphic (if for no other reason than to show you that the decision was genuinely a bit confusing here).
Other measures (Danielson) (60%)
Highly Effective - 55-60

Total (they all add up)
Highly Effective - 91-100
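Only the Highly Effective cutoffs are given here; the other bands (Effective, Developing, Ineffective) come from the graphic and aren't reproduced, so this sketch only checks the two cutoffs actually stated:

```python
# Checks for the two Highly Effective cutoffs given in the post.
# The remaining bands aren't listed here, so they aren't coded.

def is_highly_effective_other(other_score):
    """Other measures (Danielson), out of 60: Highly Effective is 55-60."""
    return 55 <= other_score <= 60

def is_highly_effective_total(total_score):
    """Total composite, out of 100: Highly Effective is 91-100."""
    return 91 <= total_score <= 100

print(is_highly_effective_other(54))   # False
print(is_highly_effective_total(91))   # True
```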