Before I came to college, I never understood the phrase constructive criticism. How could an insult “construct” anything? “You can’t wear brown shoes with black pants,” my mother would always tell me, a rebuke that only served to “construct” a sense of irritation, although thankfully it was far less extreme than “Tiger Mom” Amy Chua’s instructions to her children. At Cornell, I’ve seen criticism take more hateful forms. Scroll down to the bottom of this article online, or any Sun opinion column — or for that matter, any online forum that enables anonymous commenting — and you’re likely to see some very harsh words. “You … trying to sound smart makes you sound stupider,” one commenter wrote on another opinion writer’s recent column. These brief, degrading comments don’t construct anything other than an atmosphere of hostility.
So where did I discover the benefits of constructive criticism? In the classroom. The other day, while checking my email, I was startled to see a note from a seminar T.A. containing mid-semester feedback on my performance. A week later, a similar note arrived from my professor herself, offering comments and suggestions on my class participation and the class discussion I had led. Although the content was valuable and informative, what seemed even more significant was that the T.A. and the professor had written these notes at all. In my three years at Cornell, excluding comments on papers, I had never received a single evaluation other than a letter grade.
In seminars, where participation in class discussions and Blackboard posts counts nearly as much as final papers, it's enormously helpful to know how to improve your performance. Yet many professors are elusive about these matters. Maybe they're reluctant to quantify things like participation, which are inherently subjective. But if you're going to put participation on the syllabus and assign it a percentage, then you owe it to your students to be clear about how it is being evaluated — before giving them that final grade.
Professors often espouse the attitude that grades don’t matter, that our classroom experience should be about learning. When it comes to choosing classes, they have a valid point. We should look for subjects that interest us, rather than ways to earn an easy A. But once we’re in the class, we have the right to know how we’re doing. Class participation is as vital to a seminar as taking prelims is to a lecture course. But while sitting for an hour filling out a Scantron sheet allows you to see a quantified measure of how much you’ve learned, doing the readings and thinking up intelligent things to say never results in a single word of feedback. These activities take up just as much time as studying for a prelim, if not more. Some upper-level seminars require students to read an entire book in a week, while many lecture courses can be passed successfully with only occasional moments of frantic cramming.
Colleges are always boasting about how they offer seminars and small classes that teach students to learn, rather than just memorize. But when we're not told what we're doing right and wrong, we don't learn how to learn. Professors who teach seminars should hand out a grading rubric on the first day of class. This rubric should not only indicate, as many class syllabi do, what percentage each activity (e.g. class participation) counts for in the final evaluation; it should also spell out exactly what is required to earn the highest marks. For my class, these criteria included the number of late Blackboard posts (fewer than three for the student's performance to be considered very good) and the level of engagement in class (consistently high). Why don't other professors do the same?
Many professors might be horrified at this suggestion. "But we can't possibly create a set of standardized criteria!" they might cry. "Every student is different; you just know an A student when you see one." But for students who are not familiar with the discipline — non-history majors taking a sophomore seminar, for instance — such guidelines can be enormously helpful. The criteria are not "standardized" in the sense of a multiple-choice test. The class discussion rubric I received did not specify exactly how many words I was supposed to say; rather, it described the general strategy I should employ in presenting the material, from "effectively uses eye contact" to "clearly defines topic." These guidelines help the student construct a good presentation, while the use of words like "effectively" and "clearly" leaves grading ultimately to the professor's discretion.
It's the professor's job to teach the students, and a large part of teaching is, well, constructive criticism. Criticism is constructive when it helps you to "construct" a better course of action: craft better essays, plan better presentations, participate more actively in discussions. The professor should point out how students can do better, offering comments and advice rather than just a letter grade. One professor does not offer standardized criteria but has this to say: "Master the material and then go farther." In other words, to do well in any class you should demonstrate not only that you understand the material, but also that you can apply it.
Professor evaluations are equally problematic. At the end of the semester, students get the chance to turn the tables and rate their professors. But some schools, like the College of Arts and Sciences, do not make those evaluations public. I've previously called on the College of Arts and Sciences to publish its course evaluations so that students can make more informed decisions about which courses to take. If constructive criticism worked both ways, flowing between students and professors alike, it could really construct something solid.
Elisabeth Rosen is a junior in the College of Arts and Sciences. She may be reached at email@example.com. The Critic’s Corner appears alternate Tuesdays this semester.