Tag: ai

  • Obsolescence With Benefits: Life in the Age of Being Unnecessary

    Existential Redundancy is what happens when the world keeps running smoothly—and you slowly realize it no longer needs you to keep the lights on. It isn’t unemployment; it’s obsolescence with benefits. Machines cook your meals, manage your passwords, drive your car, curate your entertainment, and tuck you into nine hours of perfect algorithmic sleep. Your life becomes a spa run by robots: efficient, serene, and quietly humiliating. Comfort increases. Consequence disappears. You are no longer relied upon, consulted, or required—only serviced. Meaning thins because it has always depended on friction: being useful to someone, being necessary somewhere, being the weak link a system cannot afford to lose. Existential Redundancy names the soft panic that arrives when efficiency outruns belonging and you’re left staring at a world that works flawlessly without your fingerprints on anything.

    Picture the daily routine. A robot prepares pasta with basil hand-picked by a drone. Another cleans the dishes before you’ve even tasted dessert. An app shepherds you into perfect sleep. A driverless car ferries you through traffic like a padded cell on wheels. Screens bloom on every wall in the name of safety, insurance, and convenience, until privacy becomes a fond memory you half suspect you invented. You have time—oceans of it. But you are not a novelist or a painter or anyone whose passions demand heroic labor. You are intelligent, capable, modestly ambitious, and suddenly unnecessary. With every task outsourced and every risk eliminated, the old question—What do you do with your life?—mutates into something colder: Where do you belong in a system that no longer needs your hands, your judgment, or your effort?

    So humanity does what it always does when it feels adrift: it forms support groups. Digital circles bloom overnight—forums, wellness pods, existential check-ins—places to talk about the hollow feeling of being perfectly cared for and utterly unnecessary. But even here, the machines step in. AI moderates the sessions. Bots curate the pain. Algorithms schedule the grief and optimize the empathy. Your confession is summarized before it lands. Your despair is tagged, categorized, and gently rerouted toward a premium subscription tier. Therapy becomes another frictionless service—efficient, soothing, and devastating in its implication. You sought human connection to escape redundancy, and found yourself processed by the very systems that made you redundant in the first place. In the end, even your loneliness is automated, and the final insult arrives wrapped in flawless customer service: Thank you for sharing. Your feelings have been successfully handled.

  • Optimized to Death: When Improvement Outruns Personal Growth

    Optimization without integration produces a lopsided human being, and the AI age intensifies this distortion by overrewarding what can be optimized, automated, and displayed. Systems built on speed, output, and measurable performance train us to chase visible gains while starving the slower capacities that make those gains usable in real life. The result is a person who can execute flawlessly in one narrow lane yet falters the moment the situation becomes human—ambiguous, emotional, unscripted. The body may be sculpted while the self remains adolescent; the résumé gleams while judgment dulls; productivity accelerates while meaning evaporates. AI tools amplify this imbalance by making optimization cheap and frictionless, encouraging rapid improvement without requiring maturation, reflection, or integration. What emerges is not an unfinished person so much as an unevenly finished one—overdeveloped in what can be measured and underdeveloped in what must be lived. The tragedy is not incompetence but imbalance: strength without wisdom, speed without direction, polish without presence. In an age obsessed with optimization, what looks like progress is often a subtler form of arrested development.

    To encourage you to interrogate your own tendency toward optimization without integration, write a 500-word personal narrative analyzing a period in your life when you aggressively optimized one part of yourself—your body, productivity, grades, skills, image, or output—while neglecting the integration of that growth into a fuller, more functional self.

    Begin by narrating the specific context in which optimization took hold. Describe the routines, metrics, sacrifices, and rewards that drove your improvement. Use concrete, sensory detail to show what was gained: strength, speed, recognition, efficiency, status, or validation. Make the optimization legible through action rather than abstraction.

    Then pivot. Identify the moment—or series of moments—when the imbalance became visible. What failed to develop alongside your optimized trait? Social competence? Emotional maturity? Judgment? Confidence? Meaning? Show how this lack of integration surfaced in a lived encounter: a conversation you couldn’t sustain, an opportunity you mishandled, a relationship you sabotaged, or a realization that exposed the limits of your progress.

    By the end of the essay, articulate what optimization without integration cost you. Do not reduce this to a moral lesson or self-help platitude. Instead, reflect on what this experience taught you about human development itself: why improving a single dimension of the self can create distortion rather than wholeness, and how true growth requires coordination between capacity, character, and context.

    Your goal is not confession or nostalgia but clarity. Show how a life can look impressive on the surface while remaining structurally incomplete—and what it takes to move from optimization toward integration.

    Avoid clichés about “balance” or “being well-rounded.” This essay should demonstrate insight through specificity, humor, and honest self-assessment. Let the reader see the mismatch before you explain it.

    As a model for the assignment, consider the following self-interrogation—a case study in optimization gone feral and integration nowhere to be found.

    At nineteen, I fell into a job at UPS, where they specialized in turning young men into over-caffeinated parcel gladiators. Picture a cardboard coliseum where bubble wrap was treated like a minor deity and the only sacrament was speed. My assignment was simple and brutal: load 1,200 boxes an hour into trailer walls so tight and elegant they could’ve qualified for Olympic Tetris. Five nights a week, from eleven p.m. to three a.m., I lived under fluorescent lights, sprinting on concrete, powered by caffeine, testosterone, and a belief that exhaustion was a personality trait. Without meaning to, I dropped ten pounds and watched my body harden into something out of a comic book—biceps with delusions of automotive lifting.

    This mattered because my early bodybuilding career had been a public embarrassment. At sixteen, I competed in the Mr. Teenage Golden State in Sacramento, smooth as a marble countertop and just as defined. A year later, at the Mr. Teenage California in San Jose, I repeated the humiliation, proving that consistency was my only strength. I refused to let my legacy be “promising kid, zero cuts.” Now, thanks to UPS cardio masquerading as labor, I watched striations appear like divine handwriting. Redemption no longer seemed merely possible; it felt scheduled.

    So I did what any responsible nineteen-year-old bodybuilder would do: I declared war on carbohydrates. I starved myself with religious fervor and trained like a man auditioning for sainthood. By the time the 1981 Mr. Teenage San Francisco rolled around at Mission High School, I had achieved what I believed was human perfection—180 pounds of bronzed, veined, magazine-ready beefcake. The downside was logistical. My clothes no longer fit. They hung off me like a visual apology. This triggered an emergency trip to a Pleasanton mall, where I entered a fitting room that felt like a shrine to Joey Scarbury’s “Theme from The Greatest American Hero,” the soundtrack of peak Reagan-era delusion.

    While changing behind a curtain so thin it offered plausible deniability rather than privacy, I overheard two young women working the store arguing—audibly—about which one should ask me out. Their voices escalated. Stakes rose. I imagined them staging a full WWE brawl among the racks: flying elbows, folding chairs, all for the right to split a breadstick with me at Sbarro. This, I thought, was the payoff. This was what discipline looked like.

    And then—nothing. I froze. I adopted an aloof, icy expression so effective it could’ve extinguished a bonfire. The women scattered, muttering about my arrogance, while I stood there in my Calvin Kleins, immobilized by the very attention I had trained for. I had optimized everything except the part of me required to be human.

    For a brief, shimmering window, I possessed the body of a Greek god and the social competence of a malfunctioning Atari joystick. I looked like James Bond and interacted like a background extra waiting for direction. Beneath the Herculean exterior was a hollow shell—a construction site abandoned mid-project, rusted scaffolding still up, a plywood sign nailed crookedly to the entrance: SORRY, WE’RE CLOSED.

  • The Sleepwalking Student: Why Friction, Not Optimization, Reawakens Learning

    Academic Anhedonia is what it feels like to keep advancing through your education while feeling absolutely nothing about it. The assignments get done. The rubrics are satisfied. The credentials inch closer. And yet curiosity never sparks, pride never arrives, and learning registers as a faint neurological hum—like an appliance left on in another room. You move forward without momentum, effort without appetite. AI language machines make this easier, smoother, quieter. The result is not rebellion but compliance: efficient, bloodless, and hollow.

    When I started teaching college writing in the 1980s, this condition didn’t exist. Back then, I suffered from a different affliction: the conviction that I was destined to be the David Letterman of higher education—a twenty-five-year-old irony specialist armed with a chalkboard, a raised eyebrow, and impeccable timing. For a while, the bit landed. A well-placed joke could levitate a classroom. Students laughed. I mistook that laughter for learning. If I could entertain them, I told myself, I could teach them. For two decades, I confused engagement with applause and thought I was winning.

    That illusion began to crack around 2012. Phones lit up like votive candles. Attention splintered. Students weren’t bored; they were overclocked—curating identities, performing themselves, measuring worth in metrics. They ran hot: anxious, stimulated, desperate for recognition. Teaching became a cage match with the algorithm. Still, those students were alive. Distracted, yes—but capable of obsession, outrage, infatuation. Their pulses were fast. Their temperatures high.

    What we face now is colder. Around 2022, a different creature arrived. Not overstimulated, but under-responsive. Years of screen saturation, pandemic isolation, dopamine-dense apps, and frictionless AI assistance collapsed the internal reward system that once made discovery feel electric. This isn’t laziness. It’s learning-specific anhedonia. Students can assemble essays, follow scaffolds, and march through rubrics—but they do it like sleepwalkers. Curiosity is muted. Persistence is brittle. Critical thinking arrives pre-flattened, shrink-wrapped, and emotionally inert.

    The tragedy isn’t inefficiency; it’s emptiness. Today’s classrooms hum with quiet productivity and emotional frost—cognition without hunger, performance without investment, education stripped of its pulse.

    If there is a way forward, it won’t come from louder performances, cleverer prompts, or better optimization. Those are the same tools that bleached learning in the first place. Academic anhedonia cannot be cured with stimulation. It requires friction: slow reading that refuses to skim, sustained writing that will not autocomplete itself, intellectual solitude that feels mildly wrong, and work that denies the cheap dopamine hit of instant payoff. The cure is not novelty but depth; not entertainment but seriousness. Struggle isn’t a design flaw. It is the design.

    To interrupt academic anhedonia, I use an AI-resistant assignment that reintroduces cost, memory, and embodiment: The Transformative Moment. Students write 400–500 words about an experience that altered the trajectory of their lives. The assignment demands sensory precision—the one domain where AI reliably produces fluent oatmeal. It insists on transformation, which is what education is supposed to enact. And it drags students back into lived experience, away from the anesthetic glow of screens.

    I offer a model from my own life. When I was sixteen, visiting my recently divorced father, he asked what I planned to do after high school. I told him—without irony—that I intended to become a garbage man so I could finish work early and train at the gym all day. He laughed, then calmly informed me that I would go to college and join the professional class because I was far too vain to tell people at cocktail parties that I collected trash for a living. In that instant, I knew two things: my father knew me better than I knew myself, and my future had just been decided. I walked out of that conversation college-bound, whether I liked it or not.

    I tell them about a friend of mine, now a high school principal, who has been a vegetarian since his early twenties. While working at a deli during college, he watched a coworker carve into a bleeding slab of roast beef. In that moment—knife slicing, flesh yielding—something inside him snapped shut. He knew he would never eat meat again. He hasn’t. Transformation can be instantaneous. Conversion doesn’t always send a memo.

    My final example is a fireman I trained with at a gym in the 1970s. He was a recent finalist in the Mr. California bodybuilding contest: blond shag, broom-thick mustache, horn-rimmed glasses—Clark Kent with a bench press habit. One afternoon, after repping over three hundred pounds, he stood before the mirror, flexed his chest, and watched his muscles swell like they were auditioning for their own sitcom. “When I first saw Arnold,” he said, reverent, “I felt I was in the presence of the Lord. ‘There stands the Messiah,’ I said to myself. ‘There stands God Almighty come to bring good cheer to this world.’”

    He wasn’t speaking only for himself. He spoke for all of us. We wanted to be claimed by something larger than our small, awkward lives. Arnold was the messiah—the Pied Piper of Pecs—leading us toward the promised land of biceps, triceps, and quads capable of crushing produce.

    I assign The Transformative Moment because I want students to recreate an experience no machine can counterfeit. I want them to remember that education is not credential management but metamorphosis. And I want them to interrogate the conditions under which real change occurred in their lives—what they were paying attention to, what they risked, what it cost.

    Transformation—actual forward movement—is the antidote to anhedonia. And it cannot be outsourced.

  • Bezel Clicks and Sentence Cuts: On Watches, Writing, and the Discipline of Precision

    I am a connoisseur of fine timepieces. I notice the way a sunray dial catches light like a held breath, the authority of a bezel click that says someone cared. I’ve worn Tudor Black Bays and Omega Planet Oceans as loaners—the horological equivalent of renting a Maserati for a reckless weekend—exhilarating, loud with competence, impossible to forget. My own collection is high-end Seiko divers, watches that deliver lapidary excellence at half the tariff: fewer theatrics, just ruthless execution. Precision doesn’t need a luxury tax.

    That same appetite governs my reading. A tight, aphoristic paragraph can spike my pulse the way a Planet Ocean does on the wrist. I collect sentences the way others collect steel and sapphire. Wilde. Pascal. Kierkegaard. La Rochefoucauld. These writers practice compression as a moral discipline. A lapidary writer treats language like stone—cuts until only the hardest facet remains, then stops. Anything extra is vanity.

    I am not, however, a tourist. I have no patience for writers who mistake arch tone for insight, who wear cynicism like a designer jacket and call it wisdom. Aphorisms can curdle into poses. Style without penetration is just a shiny case housing a dead movement.

    This is why I’m unsentimental about AI. Left alone, language models are unruly factories—endless output, hollow shine, fluent nonsense by the ton. Slop with manners. But handled by someone with a lapidary sensibility, they can polish. They can refine. They can help a sentence find its edge. What they cannot do is teach taste.

    Taste precedes tools. Before you let a machine touch your prose, you must have lived with the masters long enough to feel the difference between a gem and its counterfeit. That discernment takes years. There is no shortcut. You become a jeweler by ruining stones, by learning what breaks and what holds.

    Lapidary sensibility is not impressed by abundance or fluency. It responds to compression, inevitability, and bite. It is bodily: a tightening of attention, a flicker of pleasure, the instant you know a sentence could not be otherwise. You don’t acquire it through mimicry or prompts. You acquire it through exposure, failure, and long intimacy with sentences that refuse to waste your time.

    Remember this, then: AI can assist only where judgment already exists. Without that baseline, you are not collaborating with a tool. You are feeding quarters into a very expensive Slop Machine.

  • AI as Tool, Toy, or Idol: A Taxonomy of Belief

    Your attitude toward AI machines is not primarily technical; it is theological—whether you admit it or not. Long before you form an opinion about prompts, models, or productivity gains, you have already decided what you believe about human nature, meaning, and salvation. That orientation quietly determines whether AI strikes you as a tool, a toy, or a temptation. There are three dominant postures.

    If you are a political-sapien, you believe history is the only stage that matters and justice is the closest thing we have to salvation. There is no eternal kingdom waiting in the wings; this world is the whole play, and it must be repaired with human hands. Divine law holds no authority here—only reason, negotiation, and evolving ethical frameworks shaped by shared notions of fairness. Humans, you believe, are essentially good if the scaffolding is sound. Build the right systems and decency will follow. Politics is not mere governance; it is moral engineering. AI machines, from this view, are tools on probation. If they democratize power, flatten hierarchies, and distribute wealth more equitably, they are allies. If they concentrate power, automate inequality, or deepen asymmetry, they are villains in need of constraint or dismantling.

    If you are a hedonist-sapien, you turn away from society’s moral drama and toward the sovereign self. The highest goods are pleasure, freedom, and self-actualization. Politics is background noise; transcendence is unnecessary. Life is about feeling good, living well, and removing friction wherever possible. AI machines arrive not as a problem but as a gift—tools that streamline consumption, curate taste, and optimize comfort. They promise a smoother, more luxurious life with fewer obstacles and more options. Of the three orientations, the hedonist-sapien embraces AI with the least hesitation and the widest grin, welcoming it as the ultimate personal assistant in the lifelong project of maximizing pleasure and minimizing inconvenience.

    If you are a devotional-sapien, you begin with a darker diagnosis. Humanity is fallen, and no amount of policy reform, pleasure, or purchasing power can make it whole. You don’t expect salvation from governments, markets, or optimization schemes; you expect it only from your Maker. You may share the political-sapien’s concern for justice and enjoy the hedonist-sapien’s creature comforts, but you refuse to confuse either with redemption. You are not shopping for happiness; you are seeking restoration. Spiritual health—not efficiency—is the measure that matters. From this vantage, AI machines look less like neutral tools and more like idols-in-training: shiny substitutes promising mastery, insight, or transcendence without repentance or grace. Unsurprisingly, the devotional-sapien is the most skeptical of AI’s expanding role in human life.

    Because your orientation shapes what you think humans need most—justice, pleasure, or redemption—it also shapes how you use AI, how much you trust it, and what you expect it to deliver. Before asking what AI can do for you, it is worth asking a more dangerous question: what are you secretly hoping it will save you from?

  • What Cochinita Pibil Can Teach Us About Learning

    Academic Friction is the intentional reintroduction of difficulty, resistance, and human presence into the learning process as a corrective to academic nihilism. Academic friction rejects the premise that education should be frictionless, efficient, or fully mediated by machines, insisting instead that intellectual growth requires struggle, solitude, and sustained attention. It is created through practices that cannot be outsourced or automated—live writing, oral presentations, performance, slow reading, and protected time for thought—forcing students to confront ideas without the buffer of AI assistance. Far from being punitive, academic friction restores agency, rebuilds cognitive stamina, and reawakens curiosity by making learning consequential again. It treats difficulty not as an obstacle to be removed, but as the very medium through which thinking, meaning, and human development occur.

    Greatness is born from resistance. Depth is what happens when something pushes back. Friction is not an obstacle to meaning; it is the mechanism that creates it. Strip friction away and you don’t get excellence—you get efficiency, speed, and a thin satisfaction that evaporates on contact. This is as true in food as it is in thinking.

    Consider cochinita pibil, a dish that seems to exist for the sole purpose of proving that greatness takes time. Nothing about it is casual. Pork shoulder is marinated overnight in achiote paste, bitter orange juice, garlic, cumin, oregano—an aggressive, staining bath that announces its intentions early. The meat doesn’t just absorb flavor; it surrenders to it. Traditionally, it is wrapped in banana leaves, sealed like contraband, and buried underground in a pit oven. Heat rises slowly. Smoke seeps inward. Hours pass. The pork breaks down molecule by molecule, fibers loosening until resistance gives way to tenderness. This is not cooking as convenience; it is cooking as ordeal. The reward is depth—meat so saturated with flavor it feels ancient, ceremonial, earned.

    Now here’s the confession: as much as I love food, I love convenience more. And convenience is just another word for frictionless. I will eat oatmeal three times a day without hesitation. Not because oatmeal is great, but because it is obedient. It asks nothing of me. Pour, stir, microwave, done. Oatmeal does not resist. It does not demand patience, preparation, or attention. It delivers calories with monk-like efficiency. It is fuel masquerading as a meal, and I choose it precisely because it costs me nothing.

    The life of the intellect follows the same fork in the road. There is the path of cochinita pibil and the path of oatmeal. One requires slow reading, sustained writing, confusion, revision, and the willingness to sit with discomfort until something breaks open. The other offers summaries, shortcuts, prompts, and frictionless fluency—thought calories without intellectual nutrition. Both will keep you alive. Only one will change you.

    The tragedy of our moment is not that people prefer oatmeal. It’s that we’ve begun calling it cuisine. We’ve mistaken smoothness for insight and speed for intelligence. Real thinking, like real cooking, is messy, time-consuming, and occasionally exhausting. It stains the counter. It leaves you unsure whether it will be worth it until it is. But when it works, it produces something dense, resonant, and unforgettable.

    Cochinita pibil does not apologize for the effort it requires. Neither should serious thought. If we want depth, we have to accept friction. Otherwise, we’ll live well-fed on oatmeal—efficient, unchallenged, and never quite transformed.

  • How Cheating with AI Accidentally Taught You How to Write

    Accidental Literacy is what happens when you try to sneak past learning with a large language model and trip directly into it face-first. You fire up the machine hoping for a clean escape—no thinking, no struggling, no soul-searching—only to discover that the output is a beige avalanche of competence-adjacent prose that now requires you to evaluate it, fix it, tone it down, fact-check it, and coax it into sounding like it was written by a person with a pulse. Congratulations: in attempting to outsource your brain, you have activated it. System-gaming mutates into a surprise apprenticeship. Literacy arrives not as a noble quest but as a penalty box—earned through irritation, judgment calls, and the dawning realization that the machine cannot decide what matters, what sounds human, or what won’t embarrass you in front of an actual reader. Accidental literacy doesn’t absolve cheating; it mocks it by proving that even your shortcuts demand work.

    If you insist on using an LLM for speed, there is a smart way and a profoundly dumb way. The smart way is to write the first draft yourself—ugly, human, imperfect—and then let the machine edit, polish, and reorganize after the thinking is done. The dumb way is to dump a prompt into the algorithm and accept the resulting slurry of AI slop, then spend twice as long performing emergency surgery on sentences that have no spine. Editing machine sludge is far more exhausting than editing your own draft, because you’re not just fixing prose—you’re reverse-engineering intention. Either way, literacy sneaks in through the back door, but the human-first method is faster, cleaner, and far less humiliating. The machine can buff the car; it cannot build the engine. Anyone who believes otherwise is just outsourcing frustration at scale.

  • How Real Writing Survives in the Age of ChatGPT

    AI-Resistant Pedagogy is an instructional approach that accepts the existence of generative AI without surrendering the core work of learning to it. Rather than relying on bans, surveillance, or moral panic, it redesigns courses so that thinking must occur in places machines cannot fully inhabit: live classrooms, oral exchanges, process-based writing, personal reflection, and sustained human presence. This pedagogy emphasizes how ideas are formed—not just what is submitted—by foregrounding drafting, revision, discussion, and decision-making as observable acts. It is not AI-proof, nor does it pretend to be; instead, it makes indiscriminate outsourcing cognitively unrewarding and pedagogically hollow. In doing so, AI-resistant pedagogy treats technology as a background condition rather than the organizing principle of education, restoring friction, accountability, and intellectual agency as non-negotiable features of learning.

    ***

    Carlo Rotella, an English writing instructor at Boston College, refuses to go the way of the dinosaurs in the Age of AI Machines. In his essay “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” he explains that he doesn’t lecture much at all. Instead, he talks with his students—an endangered pedagogical practice—and discovers something that flatly contradicts the prevailing moral panic: his students are not freeloading intellectual mercenaries itching to outsource their brains to robot overlords. They are curious. They want to learn how to write. They want to understand how tools work and how thinking happens. This alone punctures the apocalyptic story line that today’s students will inevitably cheat their way through college with AI while instructors helplessly clutch their blue books like rosary beads.

    Rotella is not naïve. He admits that any instructor who continues teaching on autopilot is “sleepwalking in a minefield.” Faced with Big Tech’s frictionless temptations—and humanity’s reliable preference for shortcuts—he argues that teachers must adapt or become irrelevant. But adaptation doesn’t mean surrender. It means recommitting to purposeful reading and writing, dialing back technological dependence, and restoring face-to-face intellectual community. His key distinction is surgical and useful: good teaching isn’t AI-proof; it’s AI-resistant. Resistance comes from three old-school but surprisingly radical moves—pen-and-paper and oral exams, teaching the writing process rather than just collecting finished products, and placing real weight on what happens inside the classroom. In practice, that means in-class quizzes, short handwritten essays, scaffolded drafting, and collaborative discussion—students learning how to build arguments brick by brick instead of passively absorbing a two-hour lecture like academic soup.

    Personal narrative becomes another line of defense. As Mark Edmundson notes, even when students lean on AI, reflective writing forces them to feed the machine something dangerously human: their own experience. That act alone creates friction. In my own courses, students write a six-page research paper on whether online entertainment sharpens or corrodes critical thinking. The opening paragraph is a 300-word confession about a habitual screen indulgence—YouTube, TikTok, a favorite creator—and an honest reckoning with whether it educates or anesthetizes. The conclusion demands a final verdict about their own personal viewing habits: intellectual growth or cognitive decay? To further discourage lazy outsourcing, I show them AI-generated examples in all their hollow, bloodless glory—perfectly grammatical, utterly vacant. Call it AI-shaming if you like. I call it a public service. Nothing cures overreliance on machines faster than seeing what they produce when no human soul is involved.

  • Everyone in Education Wants Authenticity—Just Not for Themselves

    Reciprocal Authenticity Deadlock names the breakdown of trust that occurs when students and instructors simultaneously demand human originality, effort, and intellectual presence from one another while privately relying on AI to perform that very labor for themselves. In this condition, authenticity becomes a weapon rather than a value: students resent instructors whose materials feel AI-polished and hollow, while instructors distrust students whose work appears frictionless and synthetic. Each side believes the other is cheating the educational contract, even as both quietly violate it. The result is not merely hypocrisy but a structural impasse in which sincerity is expected but not modeled, and education collapses into mutual surveillance—less a shared pursuit of understanding than a standoff over who is still doing the “real work.”

    ***

    If you are a college student today, you are standing in the middle of an undeclared war over AI, with no neutral ground and no clean rules of engagement. Your classmates are using AI in wildly different ways: some are gaming the system with surgical efficiency, some are quietly hollowing out their own education, and others are treating it like a boot camp for future CEOhood. From your desk, you can see every outcome at once. And then there’s the other surprise—your instructors. A growing number of them are now producing course materials that carry the unmistakable scent of machine polish: prose that is smooth but bloodless, competent but lifeless, stuffed with clichés and drained of voice. Students are taking to Rate My Professors to lodge the very same complaints teachers have hurled at student essays for years. The irony is exquisite. The tables haven’t just turned; they’ve flipped.

    What emerges is a slow-motion authenticity crisis. Teachers worry that AI will dilute student learning into something pre-chewed and nutrient-poor, while students worry that their education is being outsourced to the same machines. In the worst version of this standoff, each side wants authenticity only from the other. Students demand human presence, originality, and intellectual risk from their professors—while reserving the right to use AI for speed and convenience. Professors, meanwhile, embrace AI as a labor-saving miracle for themselves while insisting that students do the “real work” the hard way. Both camps believe they are acting reasonably. Both are convinced the other is cutting corners. The result is not collaboration but a deadlock: a classroom defined less by learning than by a mutual suspicion over who is still doing the work that education is supposed to require.

  • The Seductive Assistant

    The Seductive Assistant

    Auxiliary Cognition describes the deliberate use of artificial intelligence as a secondary cognitive system that absorbs routine mental labor—drafting, summarizing, organizing, rephrasing, and managing tone—so that the human mind can conserve energy for judgment, creativity, and higher-order thinking. In this arrangement, the machine does not replace thought but scaffolds it, functioning like an external assistant that carries cognitive weight without claiming authorship or authority. At its best, auxiliary cognition restores focus, reduces fatigue, and enables sustained intellectual work that might otherwise be avoided. At its worst, when used uncritically or excessively, it risks dulling the very capacities it is meant to protect, quietly shifting from support to substitution.

    ***

    Yale creative writing professor Meghan O’Rourke approaches ChatGPT the way a sober adult approaches a suspicious cocktail: curious, cautious, and alert to the hangover. In her essay “I Teach Creative Writing. This Is What A.I. Is Doing to Students,” she doesn’t offer a manifesto so much as a field report. Her conversations with the machine, she writes, revealed a “seductive cocktail of affirmation, perceptiveness, solicitousness, and duplicity”—a phrase that lands like a raised eyebrow. Sometimes the model hallucinated with confidence; sometimes it surprised her with competence. A few of its outputs were polished enough to pass as “strong undergraduate work,” which is both impressive and unsettling, depending on whether you’re grading or paying tuition.

    What truly startled O’Rourke, however, wasn’t the quality of the prose but the way the machine quietly lifted weight from her mind. Because she lives with the long-term effects of Lyme disease and Covid, her energy is a finite resource, and AI nudged her toward tasks she might otherwise postpone. It conserved her strength for what actually mattered: judgment, creativity, and “higher-order thinking.” More than a glorified spell-checker, the system proved tireless and oddly soothing, a calm presence willing to draft, rephrase, and organize without complaint. When she described this relief to a colleague, he joked that she was having an affair with ChatGPT. The joke stuck because it carried a grain of truth. “Without intending it,” she admits, the machine became a partner in shouldering the invisible mental load that so many women professors and mothers carry. Freed from some of that drain, she found herself kinder, more patient, even gentler in her emails.

    What lingers after reading O’Rourke isn’t naïveté but honesty. In academia, we are flooded with essays cataloging AI’s classroom chaos, and rightly so—I live in that turbulence myself. But an exclusive fixation on disaster obscures a quieter fact she names without flinching: used carefully, AI can reduce cognitive load and return time and energy to the work and “higher-order thinking” that actually require a human mind. The challenge ahead isn’t to banish the machine or worship it, but to put a bridle on it—to insist that it serve rather than steer. O’Rourke’s essay doesn’t promise salvation, but it does offer a shaft of light in a dim tunnel: a reminder that if we use these tools deliberately, we might reclaim something precious—attention, stamina, and the capacity to think deeply again.