Sunday, June 2, 2013

Just Some Quick Information About How the Scores Will Be Compiled

Since writing this piece about the initial press release, I've had an opportunity to look over the full decision. There is way too much to cover in one blog post, so I'll try to focus on one thing at a time. In this post, I'll focus on how the scores are tallied. Here, along with a few screenshots from my iPad, are a few things I noticed about how the scores will be compiled.

Getting from 1-4 (HEDI) to actual numbers

We'll all have our Danielson observations, where the administrator will give us scores of 1-4. All those 1s, 2s, 3s and 4s then go back to the main office, where they are averaged and turned into something that will probably be called a Peer Index (PI). From there, it looks like the PIs from each subdomain will be converted into an overall PI, which is then converted into a score on the overall 100-point scale using the conversion chart below. That score is your final score for the 60%. (No student surveys this year; those start next year, so this score goes up to 60, not 55.)

[Screenshot: the decision's chart for converting the overall PI into the point score]

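If it helps to see the arithmetic, here's a rough sketch of that pipeline in Python. To be clear, this is my reading, not the decision's language: the function names are mine, and since the real conversion chart is in the screenshot above, the cutoffs in pi_to_points are made-up placeholders.

```python
# A rough sketch of the 60% pipeline as I understand it. The cutoffs in
# pi_to_points are placeholders; the real mapping is the chart in the
# screenshot above, not anything I've reproduced here.

def peer_index(subdomain_scores):
    """Average the 1-4 Danielson ratings into an overall Peer Index."""
    return sum(subdomain_scores) / len(subdomain_scores)

def pi_to_points(pi):
    """Convert an overall PI into the score for the 60% (hypothetical)."""
    if pi >= 3.5:
        return 60
    elif pi >= 2.5:
        return 50
    elif pi >= 1.5:
        return 40
    else:
        return 25

ratings = [3, 3, 2, 4, 3]                  # 1-4 scores from the subdomains
print(pi_to_points(peer_index(ratings)))   # PI = 3.0 -> 50 (illustrative)
```
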
"Teaching Artifacts"
As you may know, Domain 4 won't be observable in the traditional sense. Because of that, we'll be "allowed" to put time, effort, and energy into creating a portfolio-like collection of teaching artifacts. Those teaching artifacts (which can apply to anything from Domains 1 and 4) will count for only 25% of those 60 points (a total of 15 points). The artifacts can be presented to the principal during a mandatory end-of-year 'summary conference'. We'll get to present only 8 artifacts, and each will have its own individual "score". Domains 2 and 3 will count for 75% of that score (a total of 45 points). Here is a screenshot of some examples of what those artifacts might be.

[Screenshot: examples of possible teaching artifacts]

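To make the 45/15 split concrete, here's a toy calculation. The decision pins down the weighting, but how the eight individual artifact scores combine into the 15 points isn't spelled out, so the scaling below (and the function names) are just one plausible reading.

```python
# Toy illustration of the split inside the 60-point score: 45 points from
# Domains 2 & 3 observations, 15 points from Domain 1/4 artifacts. The way
# the eight artifact scores are scaled here is an assumption, not the rule.

def artifact_points(artifact_scores, max_each=4):
    """Scale up to 8 artifact scores (assumed 1-4 each) into 15 points."""
    return 15 * sum(artifact_scores) / (len(artifact_scores) * max_each)

def other_measures_score(domains_2_3_points, artifact_scores):
    """Combine the 45-point observation piece with the 15-point artifacts."""
    return domains_2_3_points + artifact_points(artifact_scores)

# e.g., 38 of 45 observation points plus eight artifacts each scored 3 of 4:
print(other_measures_score(38, [3] * 8))   # 38 + 11.25 = 49.25
```
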
Tallying our final score
I don't know how they'll go from whatever our state assessments will be to the final 100-point score, but here are the final point ranges, with the corresponding rating, that each subcomponent score will fall within. Update: Using numbers I found in the last appendix of the decision, I originally came up with different cutoffs for the two 20-point measures. Consensus in the ol' blogosphere seems to be that I was totally wrong, so I'm changing them but keeping the original graphic (if for no other reason than to show you that the decision was, genuinely, a bit confusing here).

[Screenshot: the original graphic with my first-pass cutoffs]

State measures (20 points) - my original cutoffs, with the corrected cutoffs in parentheses
Ineffective- 0-12 (corrected: 0-2)
Developing- 13-14 (corrected: 3-8)
Effective- 15-17 (corrected: 9-17)
Highly Effective- 18-20 (corrected: 18-20)

Local measures (20 points) - same correction
Ineffective- 0-12 (corrected: 0-2)
Developing- 13-14 (corrected: 3-8)
Effective- 15-17 (corrected: 9-17)
Highly Effective- 18-20 (corrected: 18-20)

Other measures (Danielson, 60 points)
Ineffective- 0-38
Developing- 39-44
Effective- 45-54
Highly Effective - 55-60

Total (they all add up to 100)
Ineffective- 0-64
Developing- 65-74
Effective- 75-90
Highly Effective- 91-100
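
For anyone who wants to sanity-check a score, here are those bands restated in code, using the corrected cutoffs. Note that this only does the table lookup; it doesn't model anything like the "ineffective on a measure forces ineffective overall" question raised in the comments.

```python
# The HEDI bands above, restated as lookups. Uses the corrected cutoffs.

def rate(points, bands):
    """Return the rating whose (low, high) range contains the points."""
    for label, low, high in bands:
        if low <= points <= high:
            return label
    raise ValueError("score out of range")

STATE_OR_LOCAL = [            # each subcomponent is worth 20 points
    ("Ineffective", 0, 2),
    ("Developing", 3, 8),
    ("Effective", 9, 17),
    ("Highly Effective", 18, 20),
]

OTHER_MEASURES = [            # Danielson, worth 60 points
    ("Ineffective", 0, 38),
    ("Developing", 39, 44),
    ("Effective", 45, 54),
    ("Highly Effective", 55, 60),
]

TOTAL = [                     # composite, out of 100
    ("Ineffective", 0, 64),
    ("Developing", 65, 74),
    ("Effective", 75, 90),
    ("Highly Effective", 91, 100),
]

state, local, other = 13, 14, 50
print(rate(other, OTHER_MEASURES))           # "Effective"
print(rate(state + local + other, TOTAL))    # 77 -> "Effective"
```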

6 comments:

  1. This comment has been removed by a blog administrator.

  2. This comment has been removed by a blog administrator.

  3. In reality, the scores most people care about are those for the poor souls who are rated ineffective. And that will, sadly, be based mostly on student test scores.

  4. If the State and/or Local measures are ineffective, the teacher must be rated ineffective. Is this not part of the decision?

    Replies
    1. Pardon the confusion. The decision is a bit contradictory, and I may have used the wrong chart with the assessment pieces.
      The fact is that the numbers don't allow you to reach a 65 without ranking above the ineffective rating in at least one of those categories.
