The Cornell Faculty Senate discussed the impact of generative artificial intelligence — models that use machine learning to create new and original content — on the ways students and faculty engage with course material during its first meeting of the academic year on Wednesday.
The discussion was led by Prof. Steven Jackson, information science, science and technology studies, who also serves as the Vice-Provost for Academic Innovation, and Rob Vanderlan ’88, executive director for the Center for Teaching Innovation.
Jackson discussed a University report on generative artificial intelligence in education and pedagogy, released this past summer by a committee chaired by Prof. Kavita Bala, computer science, who is also Dean of Cornell Bowers Computing and Information Science, and Prof. Alexander Colvin Ph.D. ’99, industrial and labor relations, who is also Dean of the ILR School. The committee was created last spring to develop guidelines for using GAI at Cornell.
Jackson stressed the importance of the report, noting it was “one of the first of its kind,” with many universities pointing to it to develop their own resources, policies, and recommendations about the future of GAI models.
The report recommends that faculty adopt one of three approaches to GAI, depending on the nature of their courses.
Faculty should prohibit the use of GAI where it can interfere with students’ developing a foundational understanding of course material, allow its use with attribution as a useful resource, or employ it to elevate creative thinking or help students with disparate abilities and needs, the report stated.
Vanderlan referenced the importance of the CTI — a group that assists faculty in the development of curricula — in helping faculty navigate the complexities of the GAI issue.
“[We’ve] created web resources that we’re actually quite proud of; pages on academic integrity, on accessibility, on assignment design, which includes a lot of sample language you can use in your syllabus and with your assignments,” Vanderlan said. “Every day CTI runs Zoom drop-in hours. You can just stop in, bring an assignment, bring questions, bring concerns.”
While there are many ethical concerns about the use of GAI tools in the classroom, particularly their use on assignments and the risk of algorithmic bias, the report lays out a framework for the potential benefits that this technology can bring to campus.
Specifically, the report highlights the customized learning experience that GAI can provide students, while also recognizing the potential harms of the technology.
“Currently, GAI output can include inaccurate information, toxic output, biases embedded in the model through the training process and infringement of copyrights on material and images,” the report stated.
While the report lists numerous recommendations about how faculty should approach the ever-evolving rise of GAI models like ChatGPT, Bard and DALL-E, Jackson stressed that there cannot be a single answer to the issue.
“The whole ethos of the report, I would say, is the importance of tailoring to the specificities of learning objectives, course types, and instructor preference,” Jackson said. “There is not a blanket University policy.”
The report acknowledges that the use of GAI tools will likely grow over time, emphasizing that faculty should use their own judgment and give explicit directions regarding the use of GAI in class.
Jackson noted that despite the comprehensive nature of the report, recommendations may change as GAI technology continues to evolve.
“I think the report has a really nice, circa August 2023 [understanding of the issue],” Jackson said. “I should say six months from now, this may already be a little bit outdated.”
Matthew Kiviat ’27 is a Sun contributor and can be reached at [email protected].