AI-Resistant Pedagogy is an instructional approach that accepts the existence of generative AI without surrendering the core work of learning to it. Rather than relying on bans, surveillance, or moral panic, it redesigns courses so that thinking must occur in places machines cannot fully inhabit: live classrooms, oral exchanges, process-based writing, personal reflection, and sustained human presence. This pedagogy emphasizes how ideas are formed—not just what is submitted—by foregrounding drafting, revision, discussion, and decision-making as observable acts. It is not AI-proof, nor does it pretend to be; instead, it makes indiscriminate outsourcing cognitively unrewarding and pedagogically hollow. In doing so, AI-resistant pedagogy treats technology as a background condition rather than the organizing principle of education, restoring friction, accountability, and intellectual agency as non-negotiable features of learning.
***
Carlo Rotella, an English writing instructor at Boston College, refuses to go the way of the dinosaurs in the Age of AI Machines. In his essay “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” he explains that he doesn’t lecture much at all. Instead, he talks with his students—an endangered pedagogical practice—and discovers something that flatly contradicts the prevailing moral panic: his students are not intellectual freeloaders itching to outsource their brains to robot overlords. They are curious. They want to learn how to write. They want to understand how tools work and how thinking happens. This alone punctures the apocalyptic storyline that today’s students will inevitably cheat their way through college with AI while instructors helplessly clutch their blue books like rosary beads.
Rotella is not naïve. He admits that any instructor who continues teaching on autopilot is “sleepwalking in a minefield.” Faced with Big Tech’s frictionless temptations—and humanity’s reliable preference for shortcuts—he argues that teachers must adapt or become irrelevant. But adaptation doesn’t mean surrender. It means recommitting to purposeful reading and writing, dialing back technological dependence, and restoring face-to-face intellectual community. His key distinction is surgical and useful: good teaching isn’t AI-proof; it’s AI-resistant. Resistance comes from three old-school but surprisingly radical moves—pen-and-paper and oral exams, teaching the writing process rather than just collecting finished products, and placing real weight on what happens inside the classroom. In practice, that means in-class quizzes, short handwritten essays, scaffolded drafting, and collaborative discussion—students learning how to build arguments brick by brick instead of passively absorbing a two-hour lecture like academic soup.
Personal narrative becomes another line of defense. As Mark Edmundson notes, even when students lean on AI, reflective writing forces them to feed the machine something dangerously human: their own experience. That act alone creates friction. In my own courses, students write a six-page research paper on whether online entertainment sharpens or corrodes critical thinking. The opening paragraph is a 300-word confession about a habitual screen indulgence—YouTube, TikTok, a favorite creator—and an honest reckoning with whether it educates or anesthetizes. The conclusion demands a final verdict on their own viewing habits: intellectual growth or cognitive decay? To further discourage lazy outsourcing, I show them AI-generated examples in all their hollow, bloodless glory—perfectly grammatical, utterly vacant. Call it AI-shaming if you like. I call it a public service. Nothing cures overreliance on machines faster than seeing what they produce when no human soul is involved.