Tag: artificial-intelligence

  • Acid-Washed Jeans and Artificial Intelligence: The Rise and Fall of Instant Cool

    I have a confession that belongs in the Museum of Bad Decisions: I wore acid-washed jeans in the 80s. Not casually. Not ironically. I wore them to teach college writing at twenty-four, convinced I was the cool professor—the kind of man who could annotate a thesis statement and headline a Duran Duran video without changing outfits.

    The problem, of course, is that everyone thought they were that guy. Acid-washed jeans thrived because they delivered instant mythology. You looked like you had lived—hard, fast, dangerously—when in reality you had simply survived a trip to the mall. They were rebellion by chemical treatment, authenticity by rinse cycle. For a brief, glittering moment, that illusion worked. But illusions collapse under mass adoption. When everyone looks distressed, no one looks interesting. The jeans had nowhere to go; they began at maximum volume and stayed there, screaming. Eventually, the culture regained its hearing, glanced downward, and realized it had dressed itself like survivors of a denim-related explosion. Acid wash didn’t fade—it was exiled.

    I think about that rise and fall when I look at my students’ shifting attitude toward AI. In 2022, AI arrived like those jeans: a miracle fabric promising salvation from drudgery, writer’s block, and the existential dread of the blank page. It offered pre-fabricated brilliance—the intellectual version of showing up to the gym already sweating. Students embraced it with the same breathless certainty that this time, finally, the shortcut would make them exceptional.

    Now? They roll their eyes. They call it cringey.

    What changed is not the technology but the perception of authenticity. Factory-installed insight, like factory-installed distress, has become suspect. My students are not naïve; they have finely tuned detectors for fraud. They live in a world saturated with performance—the influencer selling a life they don’t live, the hollow expert recycling borrowed ideas, the unprepared instructor filling class time by sharing his dreams and domestic dramas while they politely tune him out and read Tolstoy’s War and Peace or the entire oeuvre of J.K. Rowling. 

    AI, at its worst, slots neatly into that ecosystem. It produces language that sounds like thinking without the inconvenience of actually thinking. And my students can hear the hollowness.

    This does not mean AI is useless. At its best, it belongs alongside Word, Google Docs, and Grammarly—a tool, not a personality. But tools do not build a self. They do not generate voice, conviction, or the slow accumulation of insight that makes writing worth reading. Lean on them too heavily, and the result isn’t mastery—it’s dependency dressed up as efficiency.

    My students understand this. That’s why the fever has broken. The early hype—the belief that AI would function as a kind of intellectual superpower—has lost its grip. The spell didn’t shatter because AI failed. It shattered because people learned to recognize the difference between something that helps you think and something that pretends to think for you.

    Acid-washed jeans didn’t disappear because denim stopped working. They disappeared because people grew embarrassed by the shortcut.

    AI isn’t going anywhere.

    But the illusion that it can make you interesting just by wearing it?

    That’s already out of style.

  • The Semester When Students Got Tired of AI Slop

    My critical thinking class this spring has produced something I have not seen in several years: essays that sound like they were written by human beings.

    The first two mini-essays show almost no signs of AI cheating. Students wrote about the theme of optimization without integration in the Black Mirror episode “Joan Is Awful,” and about toxic positivity and infantilization in “Rachel, Jack, and Ashley Too.” These are not easy concepts. Yet the writing has been thoughtful, uneven in places, occasionally clumsy—in other words, unmistakably human.

    Part of the explanation lies in the design of the assignments. I structured them as hybrids. Students begin with a single analytical paragraph about the episode itself. Then they pivot and connect the theme to their own lives. The second step is the key. AI can summarize television episodes all day long, but it has a harder time fabricating the peculiar messiness of someone’s actual life.

    But the assignments alone do not explain the shift.

    Conversations with students suggest something more interesting is happening: they are tired of AI. Not ethically troubled, not philosophically conflicted—simply exhausted. They complain about what they call AI slop: bloated paragraphs that say everything and mean nothing, prose that sounds like a motivational speaker trapped inside a thesaurus.

    They are burned out on the smooth, inflated voice of the machine.

    What they seem to want instead is something refreshingly primitive—authentic expression. The Black Mirror episodes help. The themes are sharp, strange, and slightly disturbing, which gives students something real to react to. They also appreciate that the assignments are short—well under 1,000 words. These essays function as warm-ups before the larger research papers later in the semester.

    The result, at least so far, is encouraging.

    After four years of watching AI creep into every corner of student writing, I may be seeing the beginning of a recalibration. Students appear to be treating AI less like a magic genie that produces instant essays and more like what it actually is: a tool for editing and cleanup.

    I could be misreading the moment. Trends in education are famous for evaporating the second you start feeling optimistic.

    But for now, the classroom sounds different.

    The paragraphs have fingerprints on them again.

  • The Sweet Tooth Age: How We Traded Depth for Dopamine

    In “The Orality Theory of Everything,” Derek Thompson makes a striking observation about human progress. One of civilization’s great turning points was the shift from orality to literacy. In oral cultures, knowledge traveled through speech, storytelling, and shared memory. Communication was social, flexible, and immediate. Literacy changed everything. Once ideas could be recorded, people could think alone, think slowly, and think deeply. Writing made possible the abstract systems—calculus, physics, modern biology, quantum mechanics—that underpin the technological world. The move from orality to literacy didn’t just change communication. It changed the human mind.

    Now the concern is that we may be drifting in the opposite direction.

    As social media expands, sustained reading declines. Attention fragments. Communication becomes faster, louder, and more performative. Thompson explored this shift in a conversation with Joe Weisenthal of the Odd Lots podcast, an exchange that draws heavily on the work of Walter Ong, the Jesuit scholar who wrote Orality and Literacy. Ong’s insight was simple but profound: when ideas are not recorded and preserved, people think differently. They rely on improvisation, memory shortcuts, and conversational instinct. But when ideas live in texts—books, essays, archives—people develop interiority: the capacity for reflection, precision, and layered analysis.

    It would be too simple to say we now live in a post-literate society. We still read. We still write. But the cognitive environment has changed. Our brains increasingly gravitate toward information that is fast, simplified, and emotionally stimulating. The habits required for what Cal Newport calls “deep work” now feel unnatural, even burdensome.

    A useful analogy is food. Literacy is like preparing a slow, nutritious meal. It requires time, effort, and attention, but the nourishment is real and lasting. The current media environment offers something else entirely: intellectual candy. Quick hits. Bright packaging. Strong flavor. Minimal substance. We have entered what might be called the Sweet Tooth Age—a culture that prefers pre-digested, entertaining fragments of ideas over sustained, solitary engagement. The concepts may sound serious, but they arrive in baby-food form: softened, sweetened, and stripped of complexity.

    After forty years of teaching college writing, I’ve watched this shift unfold in real time. In the past six years especially, many instructors have adjusted their expectations. Reading loads have shrunk. Full books are assigned less often. In an effort to get authentic, non-AI responses, more teachers rely on in-class writing. Some have abandoned homework entirely and grade only what students produce under supervision.

    This strategy has practical advantages. It guarantees original work. It keeps students accountable. But it also reflects a quiet surrender to the Sweet Tooth Age. The modern workplace—the environment our students are entering—runs on the same quick-cycle attention economy. Their exposure to slow thinking may be brief and largely confined to the classroom. When they transition to their careers, they may find that on-demand writing is no longer required or relevant. 

    Not just education but politics and culture are being swept up in this new age of dopamine cravings. The Sweet Tooth Age carries a cost, and the bill will come due.

    The content that wins in the attention economy is not the most accurate or thoughtful. It is the most stimulating. It is colorful, simplified, emotionally charged, and designed to produce a quick surge of interest—what the brain experiences as a dopamine reward. But reacting to stimulation is not the same as thinking. Performance is not analysis.

    Performance, in fact, is the preferred tool of the demagogue.

    When audiences lose the habit of slow reading and critical evaluation, they become vulnerable to what might be called Kayfabe personalities—figures who are larger than life, theatrical, and emotionally compelling, but who operate more like entertainers than honest brokers. The message matters less than the performance. Complexity disappears. Nuance becomes weakness. Certainty, outrage, and spectacle take center stage.

    In such an environment, critical thinking doesn’t merely decline. It becomes a competitive disadvantage.

    This is why the Sweet Tooth Age is more than an educational concern. It is a political and cultural risk. A public trained to consume stimulation rather than evaluate evidence becomes easy to mobilize and difficult to inform. Emotion outruns judgment. Identity replaces analysis. The center—built on patience, evidence, and compromise—struggles to hold.

    When literacy weakens, the consequences do not remain confined to the classroom.

    They spread outward—into public discourse, institutional trust, and civic stability. The shift back toward orality is not simply a change in media habits. It is a shift toward immediacy over reflection, reaction over reasoning, spectacle over substance.

    And when a culture begins to prefer performance to thought, chaos is not an accident.

    It is the logical outcome.

  • Obsolescence With Benefits: Life in the Age of Being Unnecessary

    Existential Redundancy is what happens when the world keeps running smoothly—and you slowly realize it no longer needs you to keep the lights on. It isn’t unemployment; it’s obsolescence with benefits. Machines cook your meals, manage your passwords, drive your car, curate your entertainment, and tuck you into nine hours of perfect algorithmic sleep. Your life becomes a spa run by robots: efficient, serene, and quietly humiliating. Comfort increases. Consequence disappears. You are no longer relied upon, consulted, or required—only serviced. Meaning thins because it has always depended on friction: being useful to someone, being necessary somewhere, being the weak link a system cannot afford to lose. Existential Redundancy names the soft panic that arrives when efficiency outruns belonging and you’re left staring at a world that works flawlessly without your fingerprints on anything.

    Picture the daily routine. A robot prepares pasta with basil hand-picked by a drone. Another cleans the dishes before you’ve even tasted dessert. An app shepherds you into perfect sleep. A driverless car ferries you through traffic like a padded cell on wheels. Screens bloom on every wall in the name of safety, insurance, and convenience, until privacy becomes a fond memory you half suspect you invented. You have time—oceans of it. But you are not a novelist or a painter or anyone whose passions demand heroic labor. You are intelligent, capable, modestly ambitious, and suddenly unnecessary. With every task outsourced and every risk eliminated, the old question—What do you do with your life?—mutates into something colder: Where do you belong in a system that no longer needs your hands, your judgment, or your effort?

    So humanity does what it always does when it feels adrift: it forms support groups. Digital circles bloom overnight—forums, wellness pods, existential check-ins—places to talk about the hollow feeling of being perfectly cared for and utterly unnecessary. But even here, the machines step in. AI moderates the sessions. Bots curate the pain. Algorithms schedule the grief and optimize the empathy. Your confession is summarized before it lands. Your despair is tagged, categorized, and gently rerouted toward a premium subscription tier. Therapy becomes another frictionless service—efficient, soothing, and devastating in its implication. You sought human connection to escape redundancy, and found yourself processed by the very systems that made you redundant in the first place. In the end, even your loneliness is automated, and the final insult arrives wrapped in flawless customer service: Thank you for sharing. Your feelings have been successfully handled.

  • Bezel Clicks and Sentence Cuts: On Watches, Writing, and the Discipline of Precision

    I am a connoisseur of fine timepieces. I notice the way a sunray dial catches light like a held breath, the authority of a bezel click that says someone cared. I’ve worn Tudor Black Bays and Omega Planet Oceans as loaners—the horological equivalent of renting a Maserati for a reckless weekend—exhilarating, loud with competence, impossible to forget. My own collection runs to high-end Seiko divers, watches that deliver lapidary excellence at half the tariff: fewer theatrics, just ruthless execution. Precision doesn’t need a luxury tax.

    That same appetite governs my reading. A tight, aphoristic paragraph can spike my pulse the way a Planet Ocean does on the wrist. I collect sentences the way others collect steel and sapphire. Wilde. Pascal. Kierkegaard. La Rochefoucauld. These writers practice compression as a moral discipline. A lapidary writer treats language like stone—cuts until only the hardest facet remains, then stops. Anything extra is vanity.

    I am not, however, a tourist. I have no patience for writers who mistake arch tone for insight, who wear cynicism like a designer jacket and call it wisdom. Aphorisms can curdle into poses. Style without penetration is just a shiny case housing a dead movement.

    This is why I’m unsentimental about AI. Left alone, language models are unruly factories—endless output, hollow shine, fluent nonsense by the ton. Slop with manners. But handled by someone with a lapidary sensibility, they can polish. They can refine. They can help a sentence find its edge. What they cannot do is teach taste.

    Taste precedes tools. Before you let a machine touch your prose, you must have lived with the masters long enough to feel the difference between a gem and its counterfeit. That discernment takes years. There is no shortcut. You become a jeweler by ruining stones, by learning what breaks and what holds.

    Lapidary sensibility is not impressed by abundance or fluency. It responds to compression, inevitability, and bite. It is bodily: a tightening of attention, a flicker of pleasure, the instant you know a sentence could not be otherwise. You don’t acquire it through mimicry or prompts. You acquire it through exposure, failure, and long intimacy with sentences that refuse to waste your time.

    Remember this, then: AI can assist only where judgment already exists. Without that baseline, you are not collaborating with a tool. You are feeding quarters into a very expensive Slop Machine.

  • AI as Tool, Toy, or Idol: A Taxonomy of Belief

    Your attitude toward AI machines is not primarily technical; it is theological—whether you admit it or not. Long before you form an opinion about prompts, models, or productivity gains, you have already decided what you believe about human nature, meaning, and salvation. That orientation quietly determines whether AI strikes you as a tool, a toy, or a temptation. There are three dominant postures.

    If you are a political-sapien, you believe history is the only stage that matters and justice is the closest thing we have to salvation. There is no eternal kingdom waiting in the wings; this world is the whole play, and it must be repaired with human hands. Divine law holds no authority here—only reason, negotiation, and evolving ethical frameworks shaped by shared notions of fairness. Humans, you believe, are essentially good if the scaffolding is sound. Build the right systems and decency will follow. Politics is not mere governance; it is moral engineering. AI machines, from this view, are tools on probation. If they democratize power, flatten hierarchies, and distribute wealth more equitably, they are allies. If they concentrate power, automate inequality, or deepen asymmetry, they are villains in need of constraint or dismantling.

    If you are a hedonist-sapien, you turn away from society’s moral drama and toward the sovereign self. The highest goods are pleasure, freedom, and self-actualization. Politics is background noise; transcendence is unnecessary. Life is about feeling good, living well, and removing friction wherever possible. AI machines arrive not as a problem but as a gift—tools that streamline consumption, curate taste, and optimize comfort. They promise a smoother, more luxurious life with fewer obstacles and more options. Of the three orientations, the hedonist-sapien embraces AI with the least hesitation and the widest grin, welcoming it as the ultimate personal assistant in the lifelong project of maximizing pleasure and minimizing inconvenience.

    If you are a devotional-sapien, you begin with a darker diagnosis. Humanity is fallen, and no amount of policy reform, pleasure, or purchasing power can make it whole. You don’t expect salvation from governments, markets, or optimization schemes; you expect it only from your Maker. You may share the political-sapien’s concern for justice and enjoy the hedonist-sapien’s creature comforts, but you refuse to confuse either with redemption. You are not shopping for happiness; you are seeking restoration. Spiritual health—not efficiency—is the measure that matters. From this vantage, AI machines look less like neutral tools and more like idols-in-training: shiny substitutes promising mastery, insight, or transcendence without repentance or grace. Unsurprisingly, the devotional-sapien is the most skeptical of AI’s expanding role in human life.

    Because your orientation shapes what you think humans need most—justice, pleasure, or redemption—it also shapes how you use AI, how much you trust it, and what you expect it to deliver. Before asking what AI can do for you, it is worth asking a more dangerous question: what are you secretly hoping it will save you from?

  • What Cochinita Pibil Can Teach Us About Learning

    Academic Friction is the intentional reintroduction of difficulty, resistance, and human presence into the learning process as a corrective to academic nihilism. Academic friction rejects the premise that education should be frictionless, efficient, or fully mediated by machines, insisting instead that intellectual growth requires struggle, solitude, and sustained attention. It is created through practices that cannot be outsourced or automated—live writing, oral presentations, performance, slow reading, and protected time for thought—forcing students to confront ideas without the buffer of AI assistance. Far from being punitive, academic friction restores agency, rebuilds cognitive stamina, and reawakens curiosity by making learning consequential again. It treats difficulty not as an obstacle to be removed, but as the very medium through which thinking, meaning, and human development occur.

    Greatness is born from resistance. Depth is what happens when something pushes back. Friction is not an obstacle to meaning; it is the mechanism that creates it. Strip friction away and you don’t get excellence—you get efficiency, speed, and a thin satisfaction that evaporates on contact. This is as true in food as it is in thinking.

    Consider cochinita pibil, a dish that seems to exist for the sole purpose of proving that greatness takes time. Nothing about it is casual. Pork shoulder is marinated overnight in achiote paste, bitter orange juice, garlic, cumin, oregano—an aggressive, staining bath that announces its intentions early. The meat doesn’t just absorb flavor; it surrenders to it. Traditionally, it is wrapped in banana leaves, sealed like contraband, and buried underground in a pit oven. Heat rises slowly. Smoke seeps inward. Hours pass. The pork breaks down molecule by molecule, fibers loosening until resistance gives way to tenderness. This is not cooking as convenience; it is cooking as ordeal. The reward is depth—meat so saturated with flavor it feels ancient, ceremonial, earned.

    Now here’s the confession: as much as I love food, I love convenience more. And convenience is just another word for frictionless. I will eat oatmeal three times a day without hesitation. Not because oatmeal is great, but because it is obedient. It asks nothing of me. Pour, stir, microwave, done. Oatmeal does not resist. It does not demand patience, preparation, or attention. It delivers calories with monk-like efficiency. It is fuel masquerading as a meal, and I choose it precisely because it costs me nothing.

    The life of the intellect follows the same fork in the road. There is the path of cochinita pibil and the path of oatmeal. One requires slow reading, sustained writing, confusion, revision, and the willingness to sit with discomfort until something breaks open. The other offers summaries, shortcuts, prompts, and frictionless fluency—thought calories without intellectual nutrition. Both will keep you alive. Only one will change you.

    The tragedy of our moment is not that people prefer oatmeal. It’s that we’ve begun calling it cuisine. We’ve mistaken smoothness for insight and speed for intelligence. Real thinking, like real cooking, is messy, time-consuming, and occasionally exhausting. It stains the counter. It leaves you unsure whether it will be worth it until it is. But when it works, it produces something dense, resonant, and unforgettable.

    Cochinita pibil does not apologize for the effort it requires. Neither should serious thought. If we want depth, we have to accept friction. Otherwise, we’ll live well-fed on oatmeal—efficient, unchallenged, and never quite transformed.

  • How Cheating with AI Accidentally Taught You How to Write

    Accidental Literacy is what happens when you try to sneak past learning with a large language model and trip directly into it face-first. You fire up the machine hoping for a clean escape—no thinking, no struggling, no soul-searching—only to discover that the output is a beige avalanche of competence-adjacent prose that now requires you to evaluate it, fix it, tone it down, fact-check it, and coax it into sounding like it was written by a person with a pulse. Congratulations: in attempting to outsource your brain, you have activated it. System-gaming mutates into a surprise apprenticeship. Literacy arrives not as a noble quest but as a penalty box—earned through irritation, judgment calls, and the dawning realization that the machine cannot decide what matters, what sounds human, or what won’t embarrass you in front of an actual reader. Accidental literacy doesn’t absolve cheating; it mocks it by proving that even your shortcuts demand work.

    If you insist on using an LLM for speed, there is a smart way and a profoundly dumb way. The smart way is to write the first draft yourself—ugly, human, imperfect—and then let the machine edit, polish, and reorganize after the thinking is done. The dumb way is to dump a prompt into the algorithm and accept the resulting slurry of AI slop, then spend twice as long performing emergency surgery on sentences that have no spine. Editing machine sludge is far more exhausting than editing your own draft, because you’re not just fixing prose—you’re reverse-engineering intention. Either way, literacy sneaks in through the back door, but the human-first method is faster, cleaner, and far less humiliating. The machine can buff the car; it cannot build the engine. Anyone who believes otherwise is just outsourcing frustration at scale.

  • How Real Writing Survives in the Age of ChatGPT

    AI-Resistant Pedagogy is an instructional approach that accepts the existence of generative AI without surrendering the core work of learning to it. Rather than relying on bans, surveillance, or moral panic, it redesigns courses so that thinking must occur in places machines cannot fully inhabit: live classrooms, oral exchanges, process-based writing, personal reflection, and sustained human presence. This pedagogy emphasizes how ideas are formed—not just what is submitted—by foregrounding drafting, revision, discussion, and decision-making as observable acts. It is not AI-proof, nor does it pretend to be; instead, it makes indiscriminate outsourcing cognitively unrewarding and pedagogically hollow. In doing so, AI-resistant pedagogy treats technology as a background condition rather than the organizing principle of education, restoring friction, accountability, and intellectual agency as non-negotiable features of learning.

    ***

    Carlo Rotella, an English writing instructor at Boston College, refuses to go the way of the dinosaurs in the Age of AI Machines. In his essay “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” he explains that he doesn’t lecture much at all. Instead, he talks with his students—an endangered pedagogical practice—and discovers something that flatly contradicts the prevailing moral panic: his students are not freeloading intellectual mercenaries itching to outsource their brains to robot overlords. They are curious. They want to learn how to write. They want to understand how tools work and how thinking happens. This alone punctures the apocalyptic story line that today’s students will inevitably cheat their way through college with AI while instructors helplessly clutch their blue books like rosary beads.

    Rotella is not naïve. He admits that any instructor who continues teaching on autopilot is “sleepwalking in a minefield.” Faced with Big Tech’s frictionless temptations—and humanity’s reliable preference for shortcuts—he argues that teachers must adapt or become irrelevant. But adaptation doesn’t mean surrender. It means recommitting to purposeful reading and writing, dialing back technological dependence, and restoring face-to-face intellectual community. His key distinction is surgical and useful: good teaching isn’t AI-proof; it’s AI-resistant. Resistance comes from three old-school but surprisingly radical moves—pen-and-paper and oral exams, teaching the writing process rather than just collecting finished products, and placing real weight on what happens inside the classroom. In practice, that means in-class quizzes, short handwritten essays, scaffolded drafting, and collaborative discussion—students learning how to build arguments brick by brick instead of passively absorbing a two-hour lecture like academic soup.

    Personal narrative becomes another line of defense. As Mark Edmundson notes, even when students lean on AI, reflective writing forces them to feed the machine something dangerously human: their own experience. That act alone creates friction. In my own courses, students write a six-page research paper on whether online entertainment sharpens or corrodes critical thinking. The opening paragraph is a 300-word confession about a habitual screen indulgence—YouTube, TikTok, a favorite creator—and an honest reckoning with whether it educates or anesthetizes. The conclusion demands a final verdict about their own viewing habits: intellectual growth or cognitive decay? To further discourage lazy outsourcing, I show them AI-generated examples in all their hollow, bloodless glory—perfectly grammatical, utterly vacant. Call it AI-shaming if you like. I call it a public service. Nothing cures overreliance on machines faster than seeing what they produce when no human soul is involved.

  • Everyone in Education Wants Authenticity–Just Not for Themselves

    Reciprocal Authenticity Deadlock names the breakdown of trust that occurs when students and instructors simultaneously demand human originality, effort, and intellectual presence from one another while privately relying on AI to perform that very labor for themselves. In this condition, authenticity becomes a weapon rather than a value: students resent instructors whose materials feel AI-polished and hollow, while instructors distrust students whose work appears frictionless and synthetic. Each side believes the other is cheating the educational contract, even as both quietly violate it. The result is not merely hypocrisy but a structural impasse in which sincerity is expected but not modeled, and education collapses into mutual surveillance—less a shared pursuit of understanding than a standoff over who is still doing the “real work.”

    ***

    If you are a college student today, you are standing in the middle of an undeclared war over AI, with no neutral ground and no clean rules of engagement. Your classmates are using AI in wildly different ways: some are gaming the system with surgical efficiency, some are quietly hollowing out their own education, and others are treating it like a boot camp for future CEOhood. From your desk, you can see every outcome at once. And then there’s the other surprise—your instructors. A growing number of them are now producing course materials that carry the unmistakable scent of machine polish: prose that is smooth but bloodless, competent but lifeless, stuffed with clichés and drained of voice. Students are taking to Rate My Professors to lodge the very same complaints teachers have hurled at student essays for years. The irony is exquisite. The tables haven’t just turned; they’ve flipped.

    What emerges is a slow-motion authenticity crisis. Teachers worry that AI will dilute student learning into something pre-chewed and nutrient-poor, while students worry that their education is being outsourced to the same machines. In the worst version of this standoff, each side wants authenticity only from the other. Students demand human presence, originality, and intellectual risk from their professors—while reserving the right to use AI for speed and convenience. Professors, meanwhile, embrace AI as a labor-saving miracle for themselves while insisting that students do the “real work” the hard way. Both camps believe they are acting reasonably. Both are convinced the other is cutting corners. The result is not collaboration but a deadlock: a classroom defined less by learning than by a mutual suspicion over who is still doing the work that education is supposed to require.