During pre-enrollment season, many students rely on Rate My Professor to craft their future schedules. Given the cost and prestige of a Cornell education, some expectation of faculty quality is not unreasonable. Despite this, the ways in which students are able to assess their future professors are fundamentally flawed, leaving them with a poor understanding of the true quality of many faculty members.
Currently, for most students, Rate My Professor is the obvious choice: it is easy to access and usually communicates the essential information. Beyond this, many students believe it to be reliable, able to distinguish professors who would be detrimental to their education from those who might help them flourish.
There is good precedent for this belief. Studies have shown a positive correlation between Rate My Professor feedback and university-administered evaluation forms. At face value, this seems like a perfectly reasonable argument for the accuracy of Rate My Professor: official results appear to be reflected in unofficial evaluations.
However, Rate My Professor reviews barely differ in form from course evaluations; both solicit written opinions on course and professor performance. If a student wrote an especially poor review on Rate My Professor, the natural inference is that they would do the same on Cornell’s official course evaluations. It is therefore not particularly surprising that Rate My Professor correlates with course evaluations, since the two solicit identical forms of student feedback.
The issue is that such student evaluations rarely, if ever, correlate with actual teaching effectiveness. Instead, they often reflect student biases, especially against female professors, who receive disproportionate criticism compared to their male counterparts. Even under the most favorable conditions, with minimal prejudice, student evaluations still show little correlation with empirically measured pedagogical efficacy. It should also be noted that those leaving Rate My Professor reviews are a self-selecting group, often representing only the two starkest extremes of any opinion. Students consulting these online resources are searching for an unbiased assessment to guide them, not the bimodal outlook they are usually offered.
This helps explain why so many innovative pedagogical strategies are so heavily, and often misguidedly, frowned upon by the student body. Some of these initiatives, such as flipped classrooms, have been shown to be more effective for student learning. Yet such qualities are hard to capture in Rate My Professor reviews, which too often comment on either the professors themselves or personal outcomes within the course.
Now, this does not imply that course evaluations are completely useless. In the most extreme cases, they help students weed out the worst lecturers, such as individuals who are rude or incompetent in classroom settings. But when it comes to the finer nuances of teaching, separating an average course from a truly exceptional one, course evaluations and Rate My Professor will always fall short.
Even so, course evaluations, if restructured, have a nascent potential that Rate My Professor lacks. Course evaluations should be made semi-mandatory, either by giving bonus credit or by counting toward participation. This, of course, must be done within reason: multiple-choice questions with space for explanation, a relatively short questionnaire and only the most essential points touched on (comments on course materials taking precedence over those on professors). This would eliminate almost all self-selection bias in evaluations, leaving a broad and representative sample of opinions for teachers to examine.
Secondly, course evaluations should be administered twice during the semester. This point is particularly essential, as it has the potential to blunt evaluations’ most conspicuous downsides. Midway through the semester, students are far more attuned to a course’s structural shortcomings than to their final grade, allowing them to provide more actionable feedback. They are also less likely to feel emboldened to pass judgment on teachers personally.
Finally, course evaluations should be made public to the entire student body. This dataset, after appropriate processing, is perhaps the best tool students can have when selecting teachers, providing a reasonable alternative to Rate My Professor. Some universities already do this: MIT, for example, publishes its evaluation datasets for the entire community.
Course evaluations are by no means perfect. Too often they reflect biases and give teachers little guidance on how to improve. Yet, with careful consideration, they can be transformed into powerful educational tools, allowing students and faculty to approach teaching more constructively.
Ayman Abou-Alfa is a second-year student in the College of Arts & Sciences. His fortnightly column Mind & Matter delves into the intersection of culture and science at Cornell University. He can be reached at [email protected].
The Cornell Daily Sun is interested in publishing a broad and diverse set of content from the Cornell and greater Ithaca community. We want to hear what you have to say about this topic or any of our pieces. Here are some guidelines on how to submit. And here’s our email: [email protected].