With chatbots like ChatGPT and Bard at our fingertips, it has never been easier to write an essay, solve a math problem or summarize a reading. While many professors despise artificial intelligence for this very reason, others are embracing the technology to facilitate deeper learning.
This academic year, professors across the University are restructuring their academic policies to accommodate the new technology. Artificial intelligence is becoming central to questions about academic integrity and meaningful learning.
The use of AI chatbots in academia became a cause for concern for the University and other institutions earlier this year due to the emergence of one AI engine, ChatGPT. The tool was created by OpenAI, a U.S.-based research laboratory responsible for developing AI software.
OpenAI states on its website that it has repeatedly improved ChatGPT’s engine, strengthening its capabilities of “understand[ing] as well as generat[ing] natural language or code,” using training data derived from Wikipedia and other virtual databases.
Despite ChatGPT’s disclaimer that its data is accurate only up to September 2021 and that it may provide inaccurate information, students across the country have been eager to use the tool for schoolwork. A survey commissioned by Intelligent.com and conducted by SurveyMonkey found that 30 percent of polled students used ChatGPT in the 2022-2023 academic year.
Action taken by universities to manage AI’s use has progressed slowly. ChatGPT rapidly infiltrated the consumer market in late November 2022 — months after Cornell and its professors had publicized their academic integrity policies, which are typically announced once at the start of each semester.
However, the start of the 2023-2024 academic year has given Cornell’s administration and faculty a chance to reexamine and adapt the current system to nascent AI technologies.
The University’s Center for Teaching Innovation, an on-campus organization that provides developmental consultations to teaching community members, held two webinars in early August to discuss generative AI and its classroom applications with interested instructors.
The CTI shared the results of an AI detection study conducted by the University. The report found that detectors used to identify text written by artificial intelligence are not only easily circumventable, and hence ineffective, but also unreliable, as they have been shown to falsely flag original work.
“We do not want to err on the side of falsely accusing anyone,” said CTI Instructional Designer Kim Benowski in one published webinar. “Unfortunately, the generative AI is going to get better at what it does, and detecting it is going to become harder. … Please avoid detection tools.”
ChatGPT’s ability to generate natural language raises concerns for professors teaching writing-centric classes. Educators are beginning to alter their grading criteria and class syllabi, and the modifications vary.
Prof. Magnus Fiskesjö, anthropology, described his difficulties with distinguishing between students’ original work and writing derived from AI.
“I have always relied on asking students to write papers during the semester,” Fiskesjö said. “I don’t feel that I can ask for take-home papers anymore.” He later clarified his statement in an email to The Sun, saying, “I think most students are honest, but there is no way for me to make out the difference.”
Fiskesjö acknowledged that some instructors have implemented alternative uses of AI in their syllabi. However, he said he remains hesitant due to ChatGPT’s possibly biased responses and its demonstrated ability to falsify sources and information — an ability shown in two incidents in which ChatGPT generated fake articles from The Guardian that misled a researcher and a student. A recent report from the misinformation research company NewsGuard found that ChatGPT-4 generated false claims 98 out of 100 times when prompted by the user.
“There is a preconceived idea that goes into it, but what if it is wrong?” Fiskesjö said. “Will [AI] keep us in the box when we ought to be thinking outside of it?”
Prof. Peter Katzenstein, government, has taught at the University for 50 years and said he has witnessed the effects of various technological changes on education.
“A lot hangs on learning how to use the technology, and that can happen in any classroom,” Katzenstein said. “The defensive mentality of saying ‘all students will cheat’ is not going to serve anybody well.”
Katzenstein mentioned that while the University is determining how to implement AI in small classroom settings, difficulties remain regarding using AI in large lecture settings. This semester, he worked with his teaching assistants to create assignments that would require students to produce original writing, in addition to engaging with chatbots for other assignments.
“There are ways of structuring writing assignments that will still force students to do original writing and thinking,” Katzenstein said, noting AI chatbots’ lack of up-to-date databases and inability to write experientially.
At the same time, Katzenstein encourages his TAs to use their discretion in incorporating AI as a supplement in their considerably smaller discussion sections.
Prof. Matthew Evangelista, government, is on leave for the fall semester, but his temporary absence has not prevented him from expressing his indignation towards the technology companies behind generative AI.
“I couldn’t avoid a sense of resentment that Cornell is obliged to devote its resources to coping with a product developed by private, for-profit technology companies that none of us … needed or wanted,” Evangelista wrote in an email to The Sun. “I wish our colleagues at the CTI could devote their attention to pedagogical advances that the faculty value rather than be forced to respond defensively to a technology imposed on us.”
When asked about his feelings toward AI’s development and the obligations placed on CTI, Katzenstein held a differing opinion.
“Technologies happen, and the adoption of technologies in universities is faster than in other institutions,” Katzenstein said. “Am I being forced or coerced? Am I resentful that Cornell has to divert resources for this? No, not at all. I regard this as natural.”
Returning and new students are still adjusting to the University’s sudden change in its academic policies. Oswaldo Grajeda ’26 noted that one of his professors replaced take-home essays with in-class assignments.
“I wish I had the opportunity to write in my dorm so that I have more time to think and analyze … to produce a better essay,” Grajeda said.
Beatrice Perron-Roy ’27 described the differences she observed in AI policy between her high school and Cornell.
“ChatGPT was not really understood by my high school,” Perron-Roy said. “I think it’s fair that professors are changing the writing assignments because [these changes] are helping us. … You’re not expanding your knowledge if you use [chatbots] to write your essays.”
Grajeda shared a similar perspective.
“A lot of students think it’s easier to use AI when writing essays, and [AI] affects their ability to write and think creatively,” Grajeda continued. “I believe that AI should be reduced or avoided in order to grow and develop as a [student].”
Ultimately, professors have yet to reach a consensus about AI’s impact on the future of academia.
“[This] is academia — we are here to solve the world’s problems,” Fiskesjö said. “To do that, we have to stand on the shoulders of those who came before us. AI defeats the purpose of why we’re here.”
Katzenstein, however, summarized his perspective on AI with an opposing view.
“Technological change is so deeply intertwined with the advance of science,” Katzenstein concluded. “That is what the University does — expand knowledge and adapt.”