Gone are the antiquated excuses of dogs eating homework and students selling essays after class. The new enemy of plagiarism and cheating is an artificial intelligence chatbot known as ChatGPT.
ChatGPT was built by OpenAI, the San Francisco technology company also known for GPT-3 and DALL-E 2. The platform, launched on Nov. 30, attracted a million users in its first five days.
ChatGPT uses natural language processing, a branch of machine learning, to generate fluent, human-like text at any level of sophistication.
Previous AI chatbots were capable of specific, explicitly defined jobs, like writing marketing copy, but they failed when tasked outside their areas of expertise. ChatGPT is more flexible and intelligent, able to both write jokes and explain scientific concepts at a high level.
Jose Guridi grad, a Ph.D. student in information science, believes that users are unnerved by the human-like quality of ChatGPT's communication.
“I believe that [what is] most revolutionary about ChatGPT and other AI advancements of the last years is not necessarily the technology itself, but how it openly challenges the boundaries between humans and machines,” Guridi said. “What scares people is how we [accept that] technologies can [now] do things we [previously] believed were exclusively human, which forces us to rethink what is to be human, how we generate value and how we will live with these systems.”
Universities are struggling to maintain academic integrity amidst this technological development. An informal and anonymous poll conducted by The Stanford Daily found that 17 percent of Stanford student respondents admitted to using ChatGPT on their fall quarter assignments and exams.
Prof. Haym Hirsh, computer science and information science, said that while students could always cheat, the introduction of ChatGPT has made plagiarism more accessible.
“Now it becomes possible to just run a program to [complete assignments] — cheaply and as often as you want,” Hirsh said. “And even when it’s not about cheating wholesale, you can use ChatGPT to do elements of assignments — like framing the response to a homework question, letting the software do it rather than the student doing it and learning from that part of the exercise.”
Nick Weising ’24, lead researcher of the Cornell Intellectual Property and Ethics Club, shared concerns that excessive ChatGPT use may make it harder for students in primary and secondary education to acquire foundational skills.
“ChatGPT can blow through multiplication table worksheets, summarize chapters of books and answer historical [and] science questions,” Weising said. “A lot of this work is repetitive to drill core concepts [and] skills into students’ brains so that they can be drawn upon in more advanced courses. This work is also designed to get students acclimated with the process of working through [hard problems which] helps students in the future regardless of their academic or professional career.”
Some professors are also concerned with ChatGPT's spread of misinformation. Prof. Kim Weeden, sociology, said that the average person may struggle to discern whether AI-generated information is reliable.
“AI technologies are in some sense laundering misinformation and biased information. They grab bits of existing content, feed it through an opaque probability model and then spit out ‘new’ content that’s been stripped of information about its sources,” Weeden said.
At universities across the nation, administrations have formed task forces and held campus discussions to address ChatGPT. The University at Buffalo and Furman University plan to incorporate AI discussions into required classes for freshmen. Washington University and the University of Vermont are adapting their academic integrity policies to address generative artificial intelligence.
In other cases, professors are adapting their classes to a post-ChatGPT learning environment. Prof. Aumann, philosophy, of Northern Michigan University decided to tweak his teaching methods after a student confessed to using ChatGPT on an assignment. The changes include requiring students to write first drafts in class.
Over 6,000 instructors from universities including Harvard University, Yale University and the University of Rhode Island have also signed up to use GPTZero, a program created by Princeton University student Edward Tian to detect AI-generated text. Released on Jan. 2, GPTZero turns AI against itself, gauging how involved an AI system likely was in producing a given text.
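Tian has publicly described GPTZero as relying on statistical signals such as perplexity, a measure of how predictable a passage is to a language model, with suspiciously predictable text flagged as possibly machine-generated. The toy sketch below illustrates only that general idea using a character-bigram model; it is an assumption-laden stand-in, not GPTZero's actual method or code.

```python
import math
from collections import Counter

def train_char_bigram(corpus: str):
    """Count character bigrams in a reference corpus (a toy 'language model')."""
    pairs = Counter(zip(corpus, corpus[1:]))
    starts = Counter(corpus[:-1])
    return pairs, starts

def perplexity(text: str, model, vocab_size: int = 128) -> float:
    """Per-character perplexity of `text` under the bigram model (lower = more predictable)."""
    pairs, starts = model
    log_prob, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        # Laplace smoothing gives unseen bigrams a small nonzero probability
        p = (pairs[(a, b)] + 1) / (starts[a] + vocab_size)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / max(n, 1))

# A repetitive toy corpus stands in for "text a model finds predictable"
corpus = "the quick brown fox jumps over the lazy dog. " * 50
model = train_char_bigram(corpus)

predictable = perplexity("the quick brown fox jumps", model)
surprising = perplexity("zxqv jkpw qzlm vxbt wqjz", model)
# A detector in this style would treat very low perplexity as a hint,
# not proof, that text was machine-generated.
```

Real detectors score text against a large neural language model rather than a bigram table, which is part of why, as Mwesigwa cautions below, their judgments are probabilistic and can misfire.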
However, Daniel Mwesigwa grad, a Ph.D. student in information science, warns that universities and professors should take extra care when using ChatGPT detection tools to avoid mistaken accusations.
“[ChatGPT detection tools] should be carefully assessed to limit incidences of erroneous accusations of breach of academic integrity,” Mwesigwa said. “Where there could be substantive evidence of unsanctioned use of ChatGPT in academic settings, clear policies must be set in place for fair judgment and adjudication.”
Mwesigwa instead believes that ChatGPT concerns should inspire fundamental pedagogical shifts.
“From a general standpoint, the professors could begin by appreciating the potential of ChatGPT,” Mwesigwa said. “How might ChatGPT be used constructively and collaboratively?”
The Cornell Center for Teaching Innovation similarly argues that designing authentic assessments and communicating clearly with students are more effective than policing artificial intelligence use.
However, Weeden noted that adapting curricula to ChatGPT requires extra time and resources.
“I’d like to think [embracing ChatGPT as a new tool and teaching students to use it effectively] will win the day, but unfortunately it’s the most labor-intensive approach for instructors,” Weeden said. “Universities would have to reverse the trend toward larger class sizes and less per student pedagogical support.”
Hirsh also noted that educators will need to be cognizant of where allowing ChatGPT may inhibit students’ growth.
“The real challenge is in areas where writing is an essential part of the learning process — such as in learning how to write since the only way to do it is to get experience doing it,” Hirsh said. “I think educators will eventually figure out where it’s ok to use ChatGPT and where it isn’t, just as in how math educators prohibit the use of a calculator when doing so is important to learning something. Perhaps in some settings, having more in-class writing might be the desirable adaptation.”
Despite these challenges, Digital Humanities Club President and Co-Founder Asher Lipman ’23 thinks ChatGPT can break down educational barriers.
“Some of the coolest and most interesting applications of programs like ChatGPT that I’ve seen have been in allowing people with an interest in new fields to get started building things and exploring new ideas right away,” Lipman said. “Isn’t that what academia’s all about?”