The Cornell Daily Sun
Friday, Jan. 9, 2026

Burzlaff Office Hours

BURZLAFF | The Basics (3): Learning with Machines — Four Principles for Using AI at Cornell


Every semester now begins with the same quiet contradiction. One syllabus declares, “No ChatGPT allowed.” Another encourages it as a research aid. And in between, most of us use it anyway. In a recent national survey, more than half of undergraduates reported using artificial intelligence in the past week — to summarize readings, outline ideas or draft essays at midnight. Faculty use it too, though we’re slower to confess it: to sketch lectures, check citations or polish phrasing before pressing “send.” The technology hums in the background of university life — fast, fluent, available and quietly indispensable. And yet, for all its ubiquity, few feel comfortable with it. We worry it’s doing too much thinking for us or replacing something we can’t quite name. Like calculators in the 1980s or Wikipedia in the 2000s, AI has moved from novelty to necessity before we decided what it should mean for us. On our campus, the question is urgent: how do we use AI wisely without letting it hollow out the very work that makes learning human? At stake is the difference between explanation and engagement, between knowledge that is delivered and knowledge that is made.

Nowhere has that question felt more pressing to me than in the classroom itself. It’s what I explored — and will do again next spring — in my course “The Past and Future of Holocaust Survivor Testimonies.” There, we place AI in the most ethically demanding context imaginable: asking what happens when a machine tries to interpret fear, silence and moral ambiguity in Holocaust survivor testimonies. The first answers — and the many failures along the way — have something to teach every student and every instructor on campus.

AI's greatest temptation is obvious at first glance: its smoothness. It produces sentences that glide, arguments that click neatly into place, and summaries that sound as though they’ve already been edited twice. For students juggling five classes and endless deadlines, this fluency can feel like mercy. But the very polish that makes ChatGPT and its kin so seductive also makes them dangerous. Their answers are rarely wrong — but they are rarely alive. The first time we asked ChatGPT to summarize a textbook chapter and a survivor testimony, the results were impeccable — and empty. Every point was correct, but the heartbeat was gone: no struggle, no doubt, no sense of discovery. AI didn’t invent our obsession with polish; it merely perfected it. For decades, higher education has rewarded fluency over friction and performance over reflection. We praise smooth arguments, clean prose and active participation — the very traits a machine can now automate. As faculty, we can’t blame students for seeking what AI now does to perfection. When learning starts to look too perfect, AI simply holds up the mirror to us.

Teaching with — and against — AI has crystallized what many of us already feel instinctively: it can mimic understanding, but it cannot replace the act of thinking. Part of its allure is that it never hesitates. It never loses its train of thought or misreads a sentence. But that’s also its flaw — and, if we’re honest, ours. Some colleagues worry that AI will make students lazy. I worry it will make them fluent — too fluent — before they’ve truly thought something through. The task ahead is not prohibition, but purpose. AI isn’t a scandal; it’s a design challenge. As my colleague Laurent Dubreuil argues in his excellent new book on AI and the humanities, AI can generate content endlessly; only humans can make meaning. We shouldn’t — and at this point, can’t — outsmart automation by walling it off; we can only outgrow it by teaching both its benefits and its flaws. Learning is not efficient by design; it’s meant to be demanding, uncertain and occasionally slow. On campus, what we need now is not stricter rules but a renewed curiosity about what counts as thought. In other words, we need to teach not resistance but discernment — how to think with AI, not through it.

If AI has shown us what learning looks like when it’s too easy, then our task is to rebuild learning around friction. That begins with a new kind of literacy — one rooted not in coding or compliance but in interpretation. I’ve come to see AI literacy as an ethical and intellectual habit. It means reading machine outputs the way we read texts: asking what’s missing, what’s assumed, and what’s quietly distorted. In my classes, students use AI publicly and reflectively, not secretly. In this way, it becomes a collective inquiry: we see what it gets wrong and why that matters. Over time, I’ve distilled four simple habits that can guide both students and faculty:

1. Curiosity: Start with questions that actually matter for your course or goal, not ones that merely fill a prompt. AI can be a helpful shortcut — a way to summarize readings, organize notes, or brainstorm ideas — but it cannot fulfill the whole assignment.

2. Transparency: Acknowledge what AI helped you see — and what it obscured. Track what it gets right — and the many things it gets wrong. That practice begins before class and continues long after it ends.

3. Interpretation: Treat its answers as beginnings, not conclusions. Learning is full of hesitation — of small confusions that push us toward deeper understanding.

4. Dialogue: Use it to sharpen your own thinking, not to outsource it. Be in dialogue with AI — and, ideally, with others when using it.

What we need now is not more alarm or regulation, but a shared language for how to think with machines. These are not high-tech skills; they are humanistic ones, and they call for not only individual habit changes but also institutional support for faculty workshops and cross-disciplinary conversations on AI. Together they turn AI from an oracle into a companion — a tool for reflection rather than replacement. They remind us that technology is not a threat to learning unless we forget that the thinking still belongs — and will belong — to us.

The longer I teach with AI, the more convinced I am that learning depends on imperfection. A perfect sentence, a polished essay, a neatly packaged answer — these are not signs of intelligence but of aftermath. The real work happens in the mess itself: when ideas clash, when sentences collapse, when silence stretches long enough for something new to emerge. Last semester, I asked students to take an AI-generated argument and critique it in groups. Slowly the sentences began to breathe again — hesitant, personal, alive. Each version carried traces of struggle and discovery, of thinking made visible. AI can replicate the shape of intelligence, but it can’t feel the moment of being seen — the discomfort of being wrong, of revising, of thinking again. That feeling — the discomfort of being wrong together, in a community — is what makes learning human. AI can help us begin, but it cannot finish for us. The goal isn’t mastery; it’s mindfulness — learning to use these tools without letting them use us. Because day-to-day education, at its best, is not about perfection. It’s about attention, reflection, and the courage to remain unfinished.

The Cornell Daily Sun is interested in publishing a broad and diverse set of content from the Cornell and greater Ithaca community. We want to hear what you have to say about this topic or any of our pieces. Here are some guidelines on how to submit. And here’s our email: associate-editor@cornellsun.com.


Jan Burzlaff is an Opinion Columnist and a Postdoctoral Associate in the Program for Jewish Studies. Office Hours (Open Door Edition) is his weekly dispatch to the Cornell community — a professor’s reflections on teaching, learning and the small moments that make a campus feel human. Readers can submit thoughts and questions anonymously through the Tip Sheet here. He can also be reached at profjburzlaff@cornellsun.com.
