Consider this scenario: You've taken a skydiving class. At the end of the course, you are going to jump out of a plane with a parachute. [Note to Reality Police: Yes, beginning parachutists are typically tied to an instructor for their first jump. This is a Scenario. Work with me :) ]
There are 3 professionals who pack all parachutes for course participants. You get to choose which person will pack your chute. Who would you want packing your chute: Packer #1, Packer #2, or Packer #3?
Every time I conduct this scenario, no one wants Packer #1. Sure, this person started out great: initially, scores are consistently above the Competency/Mastery Line. But from Week 6 onward, this packer is unable to pack a chute that is safe for your jump. Choosing this packer is asking for a once-in-a-lifetime experience.
Packer #2 gets several votes. I'm always intrigued about why this person is picked. Here is a packer who is consistently inconsistent: the exam results fall as far below the Competency/Mastery Line as above it. The common response is that, based on the last test in Week 9, this person is due for some well-packed chutes. Choosing Packer #2 is for those looking to gamble, quite literally, with their lives on whether their chute will be the one packed well.
Packer #3 is an interesting story. Looking at the exam results for the first six-plus weeks, you wonder how this person stayed employed. There is no way I'd want this person in the same room as my chute. Yet something happened beginning in Week 7. Not only is this individual packing good chutes, the quality exceeds that of the other packers and keeps climbing to higher standards. Most people choose Packer #3. There is enough gambling with one's life in jumping out of a plane already; having a chute packed by this professional ensures that the weighing of risk and reward rests on what was learned by the end of the course.
Interestingly, if we look under the hood of the chart at the packers' raw scores, a different story is revealed. Traditionally, supported or directed by their schools and districts, teachers use a 100-point scoring system for grades. The scores are averaged to produce the reported grade.
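To make the arithmetic concrete, here is a minimal sketch of that traditional approach. The weekly scores below are invented for illustration; they are not the post's actual Packing Chart values, only numbers that follow each packer's pattern described above.

```python
# A minimal sketch of the traditional approach: average every score,
# then report the result. The weekly scores are hypothetical.

def letter_grade(avg):
    """Map a 100-point average to a typical letter grade."""
    if avg >= 90: return "A"
    if avg >= 80: return "B"
    if avg >= 70: return "C"
    if avg >= 60: return "D"
    return "F"

packers = {
    "Packer #1": [98, 96, 97, 95, 94, 70, 65, 68, 62],  # strong start, failing finish
    "Packer #2": [98, 70, 96, 72, 95, 68, 97, 74, 96],  # consistently inconsistent
    "Packer #3": [40, 45, 50, 55, 58, 60, 95, 98, 99],  # weak start, mastery by the end
}

for name, scores in packers.items():
    avg = sum(scores) / len(scores)
    print(f"{name}: average {avg:.1f} -> {letter_grade(avg)}")
```

With these made-up numbers, the packer you would trust with your life averages about 67 (a "D"), while the two you would not averages land in the low-to-mid 80s (a "B") — the average flattens the trend entirely.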
[Note: One variation is to "weight" categories of grades, such as 30% Tests, 20% Homework, etc. Within each category the scores are averaged. More on weighting grades in a later post.]
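As a quick illustration of that variation, the sketch below averages within each category and then combines the categories by weight. The 30/20/50 split and the scores are placeholders, not a recommended scheme.

```python
# A hedged sketch of the weighted-category variation. Weights and
# scores are illustrative placeholders only.

weights = {"tests": 0.30, "homework": 0.20, "projects": 0.50}
scores  = {"tests": [88, 72], "homework": [95, 90, 100], "projects": [81]}

grade = sum(
    weights[cat] * (sum(vals) / len(vals))   # average within each category...
    for cat, vals in scores.items()          # ...then combine by category weight
)
print(f"Weighted grade: {grade:.1f}")  # 0.3*80 + 0.2*95 + 0.5*81 = 83.5
```

Note that the weighting changes how much each category counts, but inside every category it is still an average doing the work.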
What if the Packing Chart reflected a school course? Plug in the class you teach or are taking.
How do the results differ from a record of what students truly know and understand about the curriculum?
Packer #3, whose competency the vast majority of teachers surveyed would risk their lives on, has a grade of "D", while the others, whom you'd need a psych test to justify choosing to pack your parachute, get a "B".
Assessment and grading practices are complex and should be treated as such in how academic learning is evaluated. Yet one component, the average, is a primary tool, used for its simplicity in making judgments. Averages might not need to be thrown out of the mix; what's needed is to include other factors in the analysis of student competency. Begin with clean assessment data, add a shift in thinking about assessment practices, and add an educator's professional experience with curriculum, learning, and the student, and students' growth and academic achievement will improve.
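As one example of keeping averages in the mix while adding another factor, the sketch below weights recent assessments more heavily than early ones. The exponential decay scheme is an assumption of mine for illustration, not a method proposed in this post.

```python
# One possible refinement, sketched as an assumption: weight recent
# assessments more heavily so the grade reflects where the learner
# ended up, not just where they started.

def recency_weighted(scores, decay=0.7):
    """Later scores get exponentially more weight than earlier ones."""
    n = len(scores)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # newest weight = 1.0
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

packer3 = [40, 45, 50, 55, 58, 60, 95, 98, 99]  # same hypothetical scores as above
print(f"Simple average:   {sum(packer3) / len(packer3):.1f}")  # ~66.7, a "D"
print(f"Recency-weighted: {recency_weighted(packer3):.1f}")    # ~84.3, reflecting the strong finish
```

The point is not this particular formula; it is that a grade can still be computed simply while honoring the trajectory an educator sees in the data.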
If the primary purpose of grading is to measure students' academic skill, one potential long-range outcome is developing and nurturing highly qualified future professionals. Our assessments should indicate and track learners' progress.
At the end of the day, if our approach to assessment is multi-faceted and continuously reviewed, there will be more future professionals like Packer #3, and we can help Packers #1 and #2 become more than a facade.
How does relying solely on averages potentially fog accurate grade reporting?
What elements of good assessment practices do you find important?
How should an educator’s expertise factor in such decision-making?