February 11, 2014

Professor: Columbia Higher Education Study ‘Imprecise’


By JONATHAN LOBEL

Several Cornell professors and University officials are critical of a new study known as the College Educational Study Project, which attempts to quantify the quality of large research universities by assessing academic rigor and teaching quality.

Prof. Corbin M. Campbell, higher education, Columbia University, and a team of graduate students observed 158 courses and analyzed 149 syllabi at two “selective” research institutions last spring, according to the study’s website. The team then scored the educational quality of each institution on the basis of these observations.

The study’s researchers plan to pursue a second, multi-institutional pilot of seven to 10 institutions in the fall of 2014 and ultimately to conduct a national study after that, according to the study’s website.

Susan Murphy, vice president for student and academic services, said she is skeptical of these future studies.

“[I’m] not sure [the study] is scalable to any helpful degree,” she said.

However, Murphy said that the study is “thought provoking” because it does not focus on what many university rankings emphasize, namely, selectivity, SAT scores, and funding.

Although Murphy said attempts to quantify the quality of a university are “very difficult,” she said that they are certainly “worth doing.”

The metrics for academic rigor depended on the complexity of the course, the quantity of work assigned and the level of expectations set for students, according to the Chronicle of Higher Education. Teaching quality was based on how well the instructor introduced major concepts and called forth students’ prior knowledge of the material.

Prof. Donald Viands, plant breeding and genetics, said he was concerned that the research team did not directly account for what students actually learned from their courses.

“The most critical factor [in assessing educational quality] is learning so that students are able to accomplish the learning outcomes for the course,” he said.

As a result of this deficiency, Viands called the study “imprecise” in its approach to quantifying teaching quality.

Viands was also critical of the short duration of the study, saying that meaningful conclusions could not be drawn from such an investigation.

“The [study’s] rating is very subjective,” he said. “Many observations, not just one week, are needed in a course.”

Prof. Malden Nesheim, nutritional sciences, said the study did not measure factors such as interaction with committed peers and the level of student and faculty commitment to learning.

“[It’s] hard to capture all this in a single number,” he said.

Prof. Mark Constas, applied economics and management, also said he was critical of the study’s methodology. According to Constas, the study’s results are suspect because it had a response rate of less than 33 percent and provided no assessment of response bias.

“One of the fundamental weaknesses in social science studies can be attributed to bias associated with non-random sampling procedures or to the failure to adjust estimates that suffer from bias,” he said, adding that the study contained these flaws.

The scores of both studied universities in academic rigor and teaching quality were statistically indistinguishable, according to the Chronicle of Higher Education. Constas attributed this finding to flawed statistical procedures that lacked sensitivity.

“This study, while thought provoking, does little to promote the view that investigations of education are based on strong theoretical foundations and supported by careful analysis,” he said.