Cool as McCumber: Making the Grade

Dec. 21, 2015

As a graduate school faculty member, I always had one hard-and-fast rule: no one was allowed to change their grade after the final was administered. I cited that specific rule on every syllabus I ever created. That may sound wholly logical, but you would be surprised how often it was challenged, and how many times it got me, as the instructor, into trouble. I have been dragged before department chairs, deans, and even a college president for not allowing students to change their grades after the final. Most complainants were not students who simply failed the class; they were primarily adult students who received a B or C when they assumed an A was merited.

I always used formal criteria for grading final examinations. That way, I could calmly point out the errors, mistakes, and omissions that justified the grade given. It rarely mattered. Some students would angrily rant about injustice, while others would whine about family, work, and various life problems that had affected their performance. I would do everything within my power to help students understand the material during the semester, but after final grades were posted, I wasn't going to teach the course again (without compensation) or assign and grade extra-credit projects. The course was over. Sadly, this old-school philosophy of mine caused me much grief over many years.

I learned that people take grades seriously, even when they don't really matter. Take the grades the government hands out to agencies and departments for their cybersecurity programs. A voluminous federal mandate requires agencies to submit reports on their programs, which are evaluated by a central agency. Then an A-through-F, school-style grade is assigned. These grades then form the basis for inter- and intra-agency wrangling, arguing, and finger-pointing.

There have also been several government and industry guidelines for rating a security program's maturity. Most of these initiatives outline between five and twelve elements of a security program, then assign a numerical rating based on the maturity and efficacy of each element. I have been involved with several such maturity reviews, and I have been bemused by how incensed and defensive managers and decision-makers become when confronted with a simple one-to-five rating. It's amazing how many people treat a security program maturity rating like a bad grade-school report card. I have sat through countless meetings where these ratings were hotly debated between the raters and the 'ratees'.

Even when a rating is presented alongside objective criteria, many security leaders want to argue about how the numbers were assigned rather than understand how to use these maturity ratings to validate and improve their security posture. My guess is that this is how we have been conditioned to approach grading from earliest childhood. We need to find a better method of evaluating security, because right now we are not making the grade.