Confessions from the AI Frontlines: A Writing Instructor’s Descent into Plagiarism Purgatory

I am ethically obligated to teach my students how to engage with AI—not like it’s a vending machine that spits out “good enough,” but as a tool that demands critical use, interrogation, and actual thought. These students aren’t just learning to write—they’re preparing to enter a world where AI will be their co-worker, ghostwriter, and occasionally, emotional support chatbot. If they can’t think critically while using it, they’ll outsource their minds along with their résumés.

So, I build my assignments like fortified bunkers. Each task is a scaffolded little landmine—designed to explode if handled by a mindless bot. Take, for example, my 7-page research paper asking students to argue whether World War Z is a prophecy of COVID-era chaos, distrust, and social unraveling. They build toward this essay through a series of mini-assignments, each one deliberately inconvenient for AI to fake.

Mini Assignment #1: An introductory paragraph based on a live interview. The student must ask seven deeply human questions about pandemic-era psychology—stuff that doesn’t show up in AI training data. These aren’t just prompts; they’re empathy traps. Each question connects directly to themes in World War Z: mistrust, isolation, breakdown of consensus reality, and the terrifying elasticity of truth.

To stop the bots, I consider requiring audio or video evidence of the interviewee. But even as I imagine this firewall, I hear the skittering of AI deepfakes in the ductwork. I know what’s coming. I know my students will find a way to beat me.

And that’s when I begin to spiral.

What started as teaching has now mutated into digital policing. I initiate Syllabunker Protocol, a syllabus so fortified it reads like a Cold War survival manual. My rubric becomes a lie detector. My assignments, booby traps.

But the students evolve faster than I do.

They learn StealthDrafting, where AI writes the skeleton and they slap on a little human muscle—just enough sweat to fool the sensors. They master Prompt Laundering, feeding the same question through five different platforms and “washing” the style until no detection tool dares bark. My countermeasures only teach them how to outwit me better.

And thus I find myself locked in combat with The Plagiarism Hydra. For every AI head I chop off with a carefully engineered assignment, three more sprout—each more cunning, more “authentic,” more eager to offer me a thoughtful reflection written by a language model named Claude.

This isn’t a class anymore. It’s an arms race. A Cold War of Composition. I set traps, they leap them. I raise standards, they outflank them. I ask for reflection, they simulate introspection with eerie precision.

The irony? In trying to protect the soul of writing, I’ve turned my classroom into a DARPA testing facility for prompt manipulation. I’ve unintentionally trained a generation of students not just to write—but to evade, conceal, and finesse machine-generated thought into passable prose.

So here I am, red pen in hand, staring into the algorithmic abyss. And the abyss, of course, has already rewritten my syllabus.
