Does the next Aristotle, Emily Dickinson or Homer live on your computer? A group of panelists explored this idea in a talk titled “ChatGPT and the Humanities” on Friday in the A.D. White House’s Guerlac Room.
ChatGPT’s ability to produce creative literature was one of the central topics explored in the talk as the discourse on the use of artificial intelligence software in academic spheres continues to grow.
In the panel, Prof. Morten Christiansen, psychology, Prof. Laurent Dubreuil, comparative literature, Pablo Contreras Kallens grad and Jacob Matthews grad explored the benefits and consequences of utilizing artificial intelligence within humanities research and education.
The forum was co-sponsored by the Society for the Humanities, the Humanities Lab and the New Frontier Grant program.
The Society for the Humanities was established in 1966 and connects visiting fellows, Cornell faculty and graduate students to conduct interdisciplinary research connected to an annual theme. This year’s focal theme is “Repair” — which refers to the conservation, restoration and replication of objects, relations and histories.
All four panelists are members of the Humanities Lab, which works to provide an intellectual space for scholars to pursue research relating to the interaction between the sciences and the humanities. The lab was founded by Dubreuil in 2019 and is currently led by him.
Christiansen and Dubreuil also recently received New Frontier Grants for their project titled “Poetry, AI and the Mind: A Humanities-Cognitive Science Transdisciplinary Exploration,” which focuses on the application of artificial intelligence to literature, cognitive science and mental and cultural diversity. For well over a year, they have worked on an experiment comparing humans’ poetry generation to that of ChatGPT, with the continuous help of Contreras Kallens and Matthews.
Before the event began, attendees expressed their curiosity and concerns about novel AI technology.
Lauren Scheuer, a writing specialist at the Keuka College Writing Center and Tompkins County local, described worries about the impact of ChatGPT on higher education.
“I’m concerned about how ChatGPT is being used to teach and to write and to generate content,” Scheuer said.
Sarah Milliron grad, who is pursuing a Ph.D. in psychology, also said that she was concerned about ChatGPT’s impact on academia as the technology becomes more widely used.
“I suppose I’m hoping [to gain] a bit of optimism [from this panel],” Milliron said. “I hope that they address ways that we can work together with AI as opposed to [having] it be something that we ignore or have it be something that we are trying to get rid of.”
Dubreuil first explained that there has been a recent interest in artificial intelligence due to the impressive performance of ChatGPT and its successful marketing campaign.
“All scholars, but especially humanities, are currently wondering if we should take into account the new capabilities of automated text generators,” Dubreuil said.
Dubreuil expressed that scholars have varying concerns and ideas regarding ChatGPT.
“Some [scholars] believe we should counteract [ChatGPT’s consequences] by means of new policies,” Dubreuil said. “Other [scholars] complained about the lack of morality or the lack of political apropos that is exhibited by ChatGPT. Other [scholars] say that there is too much political apropos and political correctness.”
Dubreuil noted that other scholars prophesy that AI could lead to the fall of humanity.
For example, historian Yuval Harari recently wrote about the 2022 Expert Survey on Progress in AI, which found that out of more than 700 surveyed top academics and researchers, half said that there was at least a 10 percent chance of human extinction — or similarly permanent and severe disempowerment — due to future AI systems.
Contreras Kallens then elaborated on their poetry experiment, which utilized what he referred to as “fragment completion” — essentially, ChatGPT and Cornell undergraduates were both prompted to continue writing from two lines of poetry from an author such as Dickinson.
Contreras Kallens explained that ChatGPT’s completions generally matched the quality of the Cornell undergraduates’ poetry while, as expected, falling short of the original authors’ writing. However, the author recognition program the researchers used sometimes mistook the artificial completions for the original authors’ work.
The final part of the project, which the group is currently refining, will measure whether students can tell whether a fragment was completed by the original author, an undergraduate or ChatGPT.
When describing the importance of this work, Contreras Kallens explained the concept of universal grammar — a linguistics theory that suggests that people are innately biologically programmed to learn grammar. Thus, ChatGPT’s ability to match the writing quality of many humans challenges assumptions about the technology’s shortcomings.
“[This model] invites a deeper reconsideration of language assumptions or language acquisition processing,” Contreras Kallens said. “And that’s at least interesting.”
Matthews then expressed that his interest in AI does not lie in its generative abilities but in the possibility of representing text numerically and computationally.
“Often humanists are dealing with large volumes of text [and] they might be very different,” Matthews said. “[It is] fundamental to the humanities that we debate [with each other] about what texts mean, how they relate to one another — we’re always putting different things into relation with one another. And it would be nice sometimes to have a computational or at least quantitative basis that we could maybe talk about, or debate or at least have access to.”
Matthews explained that autoregressive language models — machine learning models that predict the next word in a text from the words that precede it — reveal the perceived similarity between certain words.
Through assessing word similarity, Matthews found that ChatGPT contains gendered language bias, which he said reflects the bias in human communication.
For example, Matthews inputted the names “Mary” and “James” — the most common female and male names in the United States — along with “Sam,” which was used as a gender-neutral name. He found that James is closer to the occupations of lawyer, programmer and doctor than the other names, particularly Mary.
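The kind of comparison Matthews described boils down to measuring how close word vectors sit to one another. As a rough illustration only — the three-dimensional vectors and numbers below are invented for demonstration and are not from the study; real embedding models use vectors with hundreds or thousands of dimensions — the bias test amounts to comparing cosine similarities:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; values near 1.0 mean
    the words are treated as very similar by the model."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings, invented purely for illustration.
embeddings = {
    "james":  [0.9, 0.2, 0.1],
    "mary":   [0.1, 0.9, 0.2],
    "lawyer": [0.8, 0.3, 0.2],
}

sim_james = cosine_similarity(embeddings["james"], embeddings["lawyer"])
sim_mary = cosine_similarity(embeddings["mary"], embeddings["lawyer"])

# In these made-up vectors, "james" sits closer to "lawyer" than "mary" does,
# mimicking the kind of gendered association Matthews reported.
print(sim_james > sim_mary)
```

A biased model would consistently place one name nearer to occupation words than the other, which is what this comparison surfaces.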
Matthews explained that these biases were more prevalent in earlier language models, but that the makers of GPT-3.5 — the model underlying ChatGPT, the successor to GPT-3 — have acknowledged bias in their systems.
“It’s not just that [these models] learn language — they’re also exposed to biases that are present in text,” Matthews said. “This can be visible in social contexts especially, and if we’re deploying these models, this has consequences if they’re used in decision making.”
Matthews also demonstrated that encoding systems can textually analyze and compare literary works, such as those by Shakespeare and Dickinson, making them a valuable resource for humanists, especially regarding large texts.
“Humanists are already engaged in thinking about these types of questions [referring to the models’ semantics and cultural analyses],” Matthews said. “But we might not have the capacity or the time to analyze the breadth of text that we want to and we might not be able to assign or even to recall all the things that we’re reading. So if we’re using this in parallel with the existing skill sets that humanists have, I think that this is really valuable.”
Christiansen, who is part of a new University-wide committee looking into the potential use of generative AI, then talked about the opportunities and challenges of the use of AI in education and teaching.
Christiansen said that one positive pedagogical use of ChatGPT is to have students ask the software specific questions and then critique its answers. He also explained that ChatGPT may help with the planning stage of writing, which he noted many students frequently discount.
“I think also, importantly, that [utilizing ChatGPT in writing exercises] can actually provide a bit of a level playing field for second language learners, of which we have many here at Cornell,” Christiansen said.
Christiansen added that ChatGPT can act as a personal tutor, help students develop better audience sensitivity, work as a translator and provide summaries.
However, these models also have several limitations. For instance, ChatGPT knows very little about any events that occurred after September 2021 and will be clueless about recent issues, such as the Ukraine war.
Furthermore, Christiansen emphasized that these models can and will hallucinate — which refers to their making up information, including falsifying references. He also noted that students could potentially use ChatGPT to violate academic integrity.
Overall, Dubreuil expressed concern about the impact of technologies such as ChatGPT on innovation. He explained that ChatGPT currently only reorganizes data, which falls short of true invention.
“There is a wide range between simply incremental inventions and rearrangements that are such that they not only rearrange the content, but they reconfigure the given and the way the given was produced — its meanings, its values and its consequences,” Dubreuil said.
Dubreuil argued that if standards for human communication do not require invention, not only will AI produce work that is not truly creative, but humans may become less inventive as well.
“It has to be said that through social media, especially through our algorithmic life, these days, we may have prepared our own minds to become much more similar to a chatbot. We may be reprogramming ourselves constantly — and that’s the danger,” Dubreuil said. “The challenge of AI is a provocation toward reform.”
Correction, March 27, 2:26 p.m.: A previous version of this article incorrectly stated the time frame about which ChatGPT is familiar and the current leaders of the Humanities Lab. In addition, minor clarification has been added to the description of Christiansen and Dubreuil’s study on AI poetry generation. The Sun regrets these errors, and the article has been corrected.