Grading is not very different from coding a set of interviews. In coding, you perform an initial “read” and use this to develop a list of distinctive features, words, phrases, or other qualities that indicate a separation or difference in the way the speaker understands something. Technically, this is called discriminant function analysis, but I have always called it coding. You then do a second read in which you apply the list of features to the transcriptions to discover patterns of separations and affinities across the interviewees.
In grading, you perform an initial read of the papers and use this to develop a rubric. This is Ed-speak for that same discriminant function analysis. In a rubric, however, the purpose is not really the discovery of inter-subjectivity across social actors. It is the determination of a grade based on the presence or absence of the features in an essay or research paper. These features occur in the students' work in at least three different ways: readily apparent, inferred or implied, or absent. So, you can think of a rubric as a grid with the features listed on the left and three adjoining columns that indicate whether each feature is readily apparent (++), inferred or implied (+), or absent (-).
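If it helps to picture the grid concretely, here is a minimal sketch in Python; the feature names and the marks in it are purely illustrative, not drawn from any actual rubric.

```python
# A rubric as a grid: features down the left, one of three marks per feature.
# The feature names below are illustrative placeholders.
RUBRIC_FEATURES = [
    "states a clear thesis",
    "supports claims with evidence",
    "engages the relevant theory",
]

MARKS = ("++", "+", "-")  # readily apparent, inferred or implied, absent

# One student's completed grid: each feature gets exactly one mark.
student_grid = {
    "states a clear thesis": "++",
    "supports claims with evidence": "+",
    "engages the relevant theory": "-",
}

# Sanity check: every feature is marked, and only with an allowed mark.
assert set(student_grid) == set(RUBRIC_FEATURES)
assert all(mark in MARKS for mark in student_grid.values())
```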
To apply the rubric, you first have to be confident that you have indeed identified the important features. Often, I change the wording as I read the essays to make it more precise. My rubrics are always evolving. Eventually, I want to use these features as the core comments I supply to students as feedback on their work, so I want them to be as precise as possible. When I'm fairly confident that I have it right, I then read the papers with a blank grid, checking ++, +, or - as I find or fail to find what I am looking for. Walvoord calls this criterion-based grading. Once you get used to your own rubric, it is highly efficient: you can go through a stack very quickly. It is also fair. By having some auto-text entries for each of the ++, +, or - possibilities, you can tell the student exactly what worked and what didn't, as far as you are concerned. It is actually quite easy to convert the assessments to a single grade. More importantly, it is the best antidote against grade inflation: students with very different feature profiles are far less likely to get the same grade.
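To show how that conversion might work, here is a continuation of the sketch above: one possible way to turn a completed grid into a single score plus auto-text comments. The point values and the canned comment wording are my own illustrative assumptions, not a prescribed scheme.

```python
# One way to convert a completed grid into a single grade plus feedback.
# The point values and the auto-text comments are illustrative assumptions.
POINTS = {"++": 2, "+": 1, "-": 0}

AUTO_TEXT = {
    "++": "Clearly present: {feature}.",
    "+": "Implied but not fully developed: {feature}.",
    "-": "Missing: {feature}.",
}

def grade(student_grid):
    """Return a percentage score and a list of feedback comments."""
    earned = sum(POINTS[mark] for mark in student_grid.values())
    possible = 2 * len(student_grid)
    comments = [
        AUTO_TEXT[mark].format(feature=feature)
        for feature, mark in student_grid.items()
    ]
    return 100 * earned / possible, comments

score, comments = grade({
    "states a clear thesis": "++",
    "supports claims with evidence": "+",
    "engages the relevant theory": "-",
})
print(f"{score:.0f}%")
for line in comments:
    print("-", line)
```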
I have written these rubrics for all sorts of learning tasks. The ones for descriptive tasks start out as the longest and most detailed, as you might expect. But I've been surprised at how quickly the ones I write for analytical and interpretive tasks grow over time. Two years ago, my department decided to see if we could develop a rubric that would help build the writing skills of our majors. You can see it at this address. It applies to interpretive and analytical tasks, such as interpretations of theory, research reports, and comparative studies. We each add whatever features pertain to a particular assignment, and we ignore features that are irrelevant to it. It's a good base from which to start.
Anyway, if you would like to try it next term, now is the time to start. What are the features that made a difference in the way you graded your students' written work this term? What did these features look like when they were readily apparent (++), inferred or implied (+), or absent (-)? That's it. You're on your way to fairer and more efficient grading.
By the way, it's perfectly reasonable to put the rubric for the assignment in the syllabus. Students perform better when they understand the criteria for evaluation ahead of time.