By Richard Ballard
Artificial intelligence isn’t new. The term “artificial intelligence” was coined in 1956 during a summer research workshop at Dartmouth College. Since then, AI has gone through cycles of innovation and stagnation, but it wasn’t until the late 2010s that generative AI — capable of producing human-like text — gained traction. Now, tools like ChatGPT, with over 10 million paying subscribers, have turned academic integrity on its head.
At Cornell, one of the most academically rigorous universities in the world, students feel immense pressure to perform. The affordances of AI could provide relief — streamlining assignments, brainstorming ideas and refining prose — but university policy makes it clear: representing AI-generated content as one’s own work is a violation of academic integrity. The ethical questions, though, are murkier. If AI helps with structure and word choice, is that cheating? Where’s the line between assistance and deception?
To help navigate these questions, I spoke with Professor Claire Wardle in the Department of Communication, who worked for organizations including the BBC and the UN before returning to academia. “I think it’s really important that students know how to use ChatGPT,” she said. “But there’s a difference between using it to double-check your writing and grammar versus submitting something entirely AI-generated.”
For professors like Wardle, the concern is less about policing AI use and more about adapting to it. “I don’t want a statement in my syllabus that says you can’t use AI,” she explained. “What I want is for students to say, ‘I’m using this AI in these ways and this is how I’m reflecting on my use.’” Anonymous student responses suggest that AI is already woven into academic life. “I use ChatGPT to generate outlines for my essays. It helps me get past writer’s block,” said Anonymous ’26. “I’ll be honest — I’ve pasted entire prompts into AI and turned in lightly edited responses,” admitted another student, from the Class of ’25. “Sometimes it just feels easier.”
Professors aren’t oblivious to this reality. According to Wardle, faculty members can often tell when a student submits AI-generated work. “There are obvious tells,” she said. “If it’s too polished, too bland, or uses phrasing that doesn’t match how students talk in class, it’s a giveaway.” Detection tools like Turnitin claim to flag AI-generated content with high accuracy, but Wardle believes the more sustainable solution lies in assignment design. “Just saying ‘Don’t use AI’ is like telling someone to eat salad when their house is full of chocolate,” she said. “You have to design assignments where AI use doesn’t make sense — like asking students to apply course concepts to recent events or making them annotate an AI-generated first draft, explaining why it’s weak.”
The broader conversation around AI and academia extends beyond cheating. Wardle pointed out that AI has accessibility benefits, like generating transcripts for students with disabilities or simplifying complex research papers. But these benefits come with environmental and ethical costs. “Is making AI-generated memes a good use of environmental resources?” she asked. “We also have to be mindful of how much energy these systems consume.”
AI is evolving fast, and universities need to keep up. “In five years, we’re going to have to rethink how we teach,” Wardle said. “Why should students read dense academic texts when AI can summarize them? Why attend lectures when AI can generate engaging, customized videos on the same topic? We have to redefine what it means to learn in this new era.”
For now, the conversation needs to shift from punishment to adaptation. “It’s not just about catching people who cheat,” Wardle emphasized. “This technology exists. How can we use it to enhance learning? How can we design assignments that make AI a tool, not a shortcut? It has to come from both sides.”
To test AI’s ability to generate work, the Lifestyle Department gave OpenAI’s ChatGPT the prompt “make a photo of a student asking ChatGPT a question, to write their paper.” ChatGPT struggled to render specific words in the image, leaving a scramble of letters on the student’s screen. It also added details you would not commonly expect in an ordinary photo, such as a robot placed behind the student’s laptop. Still, the picture offers a general glimpse of what a student might experience when using AI as an academic tool, and the gibberish on the screen displays the imperfections of AI in students’ work.
This visual experiment serves as a reminder that while technology can support students, it can’t replace the critical thinking and personal effort that define academic work. As education continues to evolve, the challenge is to strike a balance — using tools to enhance productivity without losing the human element that makes learning meaningful.
More about Cornell University’s artificial intelligence guidelines is available across several university websites, though the rules may vary from professor to professor. As students continue to integrate AI into their studies — sometimes transparently, sometimes not — professors must decide whether to fight the tide or learn to swim. One thing is certain: AI isn’t going away, and education is changing whether universities are ready or not.
Richard Ballard is a sophomore in the College of Agriculture and Life Sciences. He can be reached at rpb233@cornell.edu.