September 11, 2011

How Do We Rank?


Monday is the eve of the release of U.S. News & World Report’s annual college rankings. And like all great eves — New Year’s, Christmas, Hanukkah, Arbor Day — I will gradually be overcome by a tingling sense of anticipation. Why? Why do I care?

Usually, I care about something because of its value and merit. So what, then, is the value and merit of the rankings? Well, not much. In fact, they’re largely arbitrary. U.S. News uses several general categories to rank colleges, each of which is assigned a relative weight. But, as Malcolm Gladwell points out, the relative weight assigned to each category can substantially influence the overall rankings. And how does U.S. News determine the relative value of each category? Quite arbitrarily. Robert Morse, who heads the U.S. News rankings project, has said, “We’re not saying we’re social scientists, or we’re subjecting our rankings to some peer-review process. We’re just saying we’ve made this judgment. We’re saying we’ve interviewed a lot of experts, we’ve developed these academic indicators, and we think these measures measure quality schools” (emphasis added).

So, not only are the relative weights of each variable arbitrary, the variables themselves are too. Forbes produces its own list of college rankings using almost entirely different criteria to assess institutional quality — including one that gives 17.5 percent overall weight to student evaluations from RateMyProfessor.com. And, no surprise, Forbes gets entirely different results (finding, for instance, Dartmouth, Columbia, UPenn and Cornell all to be inferior to Boston College).

These results also exhibit mysterious yearly swings. Bruce Gottlieb, investigating the meteoric rise of Caltech from ninth to first in the 1999 U.S. News rankings, found these swings to be anything but incidental. He explained that they are caused by changes U.S. News makes to its methodology in a purposeful effort to create “volatility” in the rankings to stoke reader interest and generate sales.

It would be hard to argue that these intentionally manipulated rankings, based on arbitrary variables assigned arbitrary weights, are very valuable or meritorious. Yet still, we care.

This may be due to the consequences of the rankings, not their actual content. Professor Ron Ehrenberg at the ILR School has shown just how consequential these rankings can be. His research suggests that, holding other factors constant, in the year following a jump in a college’s rank, that college could increase its selectivity, increase its matriculation rate and accept students with higher SAT scores — all while awarding less financial assistance. (And vice versa for a drop in rank.) So, better rankings attract better students, potentially bettering rankings again and attracting even better students. This cycle, while clearly of consequence, does not speak to the quality of a school, merely that of its students — not one and the same.

This is even more dramatically clear with the peer assessment category, which constitutes 15 percent of the overall rankings. Peer assessment evaluates institutional quality using top administrators’ opinions of their fellow colleges. However, these opinions usually reflect nothing more than the previous rankings. Colin Diver, the current president of Reed College (and former dean of the UPenn Law School), openly admits that, aside from “badly outdated information” and “fragmentary impressions,” his opinion of many peer institutions is based on the “relative place of a school in the rankings-validated and rankings-influenced pecking order.” This pecking order can become so ingrained in the collective psyche that it comes to define a school across all programs.
Penn State Law School once received a middle-tier ranking from a group of approximately 100 lawyers who were handed a list of several law schools and asked to evaluate their quality. The only problem: When this ranking was made, Penn State didn’t actually have a law school.

Many universities, aware of how rank can define a college’s entire brand, have made it a mission to improve their rankings, often at the cost of all else. Clemson recently announced its goal to become a top-20 public university. In this pursuit, administrators intentionally ranked all peer institutions below average and allowed classes with over 50 students to grow even larger (U.S. News only penalizes schools for the number of classes with over 50 students, not the number of students in them).

Cornell proves no great exception to this trend. In fact, our school’s strategic plan for 2010-2015 explicitly acknowledges an “overarching aspiration for the University: to be widely recognized as a top-ten research university in the world.” (In case you’re wondering, U.S. News also ranks the “world’s best universities.”)

So, the rankings are consequential. Maybe that’s why we care. But they’re only consequential because we give them consequence; we act on them. And why do we act on them?

We can’t help ourselves. They appeal to a base urge to judge ourselves relative to others and others relative to ourselves. The rankings aren’t about the school. They’re about you. It doesn’t matter if Cornell is better than Clemson; it matters that you’re better than a Clemson grad — and there’s a list to prove it! It’s a matter of pride and perceived self-worth, for students, faculty, alumni and prospective students deciding where to go. Until we get over that, the rankings will always have meaning, and I’ll always have that tingling sense of anticipation on Rankings Eve.

Sebastian Deri is a junior in the School of Industrial and Labor Relations. He may be reached at sderi@cornellsun.com. Thought Crimes appears alternate Mondays this semester.
