Tag: chatgpt

  • Bezel Clicks and Sentence Cuts: On Watches, Writing, and the Discipline of Precision

    Bezel Clicks and Sentence Cuts: On Watches, Writing, and the Discipline of Precision

    I am a connoisseur of fine timepieces. I notice the way a sunray dial catches light like a held breath, the authority of a bezel click that says someone cared. I’ve worn Tudor Black Bays and Omega Planet Oceans as loaners—the horological equivalent of renting a Maserati for a reckless weekend—exhilarating, loud with competence, impossible to forget. My own collection is high-end Seiko divers, watches that deliver lapidary excellence at half the tariff: fewer theatrics, just ruthless execution. Precision doesn’t need a luxury tax.

    That same appetite governs my reading. A tight, aphoristic paragraph can spike my pulse the way a Planet Ocean does on the wrist. I collect sentences the way others collect steel and sapphire. Wilde. Pascal. Kierkegaard. La Rochefoucauld. These writers practice compression as a moral discipline. A lapidary writer treats language like stone—cuts until only the hardest facet remains, then stops. Anything extra is vanity.

    I am not, however, a tourist. I have no patience for writers who mistake arch tone for insight, who wear cynicism like a designer jacket and call it wisdom. Aphorisms can curdle into poses. Style without penetration is just a shiny case housing a dead movement.

    This is why I’m unsentimental about AI. Left alone, language models are unruly factories—endless output, hollow shine, fluent nonsense by the ton. Slop with manners. But handled by someone with a lapidary sensibility, they can polish. They can refine. They can help a sentence find its edge. What they cannot do is teach taste.

    Taste precedes tools. Before you let a machine touch your prose, you must have lived with the masters long enough to feel the difference between a gem and its counterfeit. That discernment takes years. There is no shortcut. You become a jeweler by ruining stones, by learning what breaks and what holds.

    Lapidary sensibility is not impressed by abundance or fluency. It responds to compression, inevitability, and bite. It is bodily: a tightening of attention, a flicker of pleasure, the instant you know a sentence could not be otherwise. You don’t acquire it through mimicry or prompts. You acquire it through exposure, failure, and long intimacy with sentences that refuse to waste your time.

    Remember this, then: AI can assist only where judgment already exists. Without that baseline, you are not collaborating with a tool. You are feeding quarters into a very expensive Slop Machine.

  • How Cheating with AI Accidentally Taught You How to Write

    How Cheating with AI Accidentally Taught You How to Write

    Accidental Literacy is what happens when you try to sneak past learning with a large language model and trip directly into it face-first. You fire up the machine hoping for a clean escape—no thinking, no struggling, no soul-searching—only to discover that the output is a beige avalanche of competence-adjacent prose that now requires you to evaluate it, fix it, tone it down, fact-check it, and coax it into sounding like it was written by a person with a pulse. Congratulations: in attempting to outsource your brain, you have activated it. System-gaming mutates into a surprise apprenticeship. Literacy arrives not as a noble quest but as a penalty box—earned through irritation, judgment calls, and the dawning realization that the machine cannot decide what matters, what sounds human, or what won’t embarrass you in front of an actual reader. Accidental literacy doesn’t absolve cheating; it mocks it by proving that even your shortcuts demand work.

    If you insist on using an LLM for speed, there is a smart way and a profoundly dumb way. The smart way is to write the first draft yourself—ugly, human, imperfect—and then let the machine edit, polish, and reorganize after the thinking is done. The dumb way is to dump a prompt into the algorithm and accept the resulting slurry of AI slop, then spend twice as long performing emergency surgery on sentences that have no spine. Editing machine sludge is far more exhausting than editing your own draft, because you’re not just fixing prose—you’re reverse-engineering intention. Either way, literacy sneaks in through the back door, but the human-first method is faster, cleaner, and far less humiliating. The machine can buff the car; it cannot build the engine. Anyone who believes otherwise is just outsourcing frustration at scale.

  • The Seductive Assistant

    The Seductive Assistant

    Auxiliary Cognition describes the deliberate use of artificial intelligence as a secondary cognitive system that absorbs routine mental labor—drafting, summarizing, organizing, rephrasing, and managing tone—so that the human mind can conserve energy for judgment, creativity, and higher-order thinking. In this arrangement, the machine does not replace thought but scaffolds it, functioning like an external assistant that carries cognitive weight without claiming authorship or authority. At its best, auxiliary cognition restores focus, reduces fatigue, and enables sustained intellectual work that might otherwise be avoided. At its worst, when used uncritically or excessively, it risks dulling the very capacities it is meant to protect, quietly shifting from support to substitution.

    ***

    Yale creative writing professor Meghan O’Rourke approaches ChatGPT the way a sober adult approaches a suspicious cocktail: curious, cautious, and alert to the hangover. In her essay “I Teach Creative Writing. This Is What A.I. Is Doing to Students,” she doesn’t offer a manifesto so much as a field report. Her conversations with the machine, she writes, revealed a “seductive cocktail of affirmation, perceptiveness, solicitousness, and duplicity”—a phrase that lands like a raised eyebrow. Sometimes the model hallucinated with confidence; sometimes it surprised her with competence. A few of its outputs were polished enough to pass as “strong undergraduate work,” which is both impressive and unsettling, depending on whether you’re grading or paying tuition.

    What truly startled O’Rourke, however, wasn’t the quality of the prose but the way the machine quietly lifted weight from her mind. Because she lives with the long-term effects of Lyme disease and Covid, her energy is a finite resource, and AI nudged her toward tasks she might otherwise postpone. It conserved her strength for what actually mattered: judgment, creativity, and “higher-order thinking.” More than a glorified spell-checker, the system proved tireless and oddly soothing, a calm presence willing to draft, rephrase, and organize without complaint. When she described this relief to a colleague, he joked that she was having an affair with ChatGPT. The joke stuck because it carried a grain of truth. “Without intending it,” she admits, the machine became a partner in shouldering the invisible mental load that so many women professors and mothers carry. Freed from some of that drain, she found herself kinder, more patient, even gentler in her emails.

    What lingers after reading O’Rourke isn’t naïveté but honesty. In academia, we are flooded with essays cataloging AI’s classroom chaos, and rightly so—I live in that turbulence myself. But an exclusive fixation on disaster obscures a quieter fact she names without flinching: used carefully, AI can reduce cognitive load and return time and energy to the work and “higher-order thinking” that actually require a human mind. The challenge ahead isn’t to banish the machine or worship it, but to put a bridle on it—to insist that it serve rather than steer. O’Rourke’s essay doesn’t promise salvation, but it does offer a shaft of light in a dim tunnel: a reminder that if we use these tools deliberately, we might reclaim something precious—attention, stamina, and the capacity to think deeply again.

  • Why I Clean Before the Cleaners

    Why I Clean Before the Cleaners

    Preparatory Leverage

    Preparatory Leverage is the principle that the effectiveness of any assistant—human or machine—is determined by the depth, clarity, and intentionality of the work done before assistance is invited. Rather than replacing effort, preparation multiplies its impact: well-structured ideas, articulated goals, and thoughtful constraints give collaborators something real to work with. In the context of AI, preparatory leverage preserves authorship by ensuring that insight originates with the human and that the machine functions as an amplifier, not a substitute. When preparation is absent, assistance collapses into superficiality; when preparation is rigorous, assistance becomes transformative.

    ***

    This may sound backward—or mildly unhinged—but for the past twenty years I’ve cleaned my house before the cleaners arrive. Every two weeks, before Maria and Lupe ring the bell, I’m already at work: clearing counters, freeing floors, taming piles of domestic entropy. The logic is simple. The more order I impose before they show up, the better they can do what they do best. They aren’t there to decipher my chaos; they’re there to perfect what’s already been prepared. The result is not incremental improvement but multiplication. The house ends up three times cleaner than it would if I had handed them a battlefield and wished them luck.

    I treat large language models the same way. I don’t dump half-formed thoughts into the machine and hope for alchemy. I prep. I think. I shape the argument. I clarify the stakes. When I give an LLM something dense and intentional to work with, it can elevate the prose—sharpen the rhetoric, adjust tone, reframe purpose. But when I skip that work, the output is a limp disappointment, the literary equivalent of a wiped-down countertop surrounded by cluttered floors. Through trial and error, I’ve learned the rule: AI doesn’t rescue lazy thinking; it amplifies whatever you bring to the table. If you bring depth, it gives you polish. If you bring chaos, it gives you noise.

  • The Automated Pedagogy Loop Could Threaten the Very Existence of College

    The Automated Pedagogy Loop Could Threaten the Very Existence of College

    Automated Pedagogy Loop
    noun

    A closed educational system in which artificial intelligence generates student work and artificial intelligence evaluates it, leaving human authorship and judgment functionally absent. Within this loop, instructors act as system administrators rather than teachers, and students become prompt operators rather than thinkers. The process sustains the appearance of instruction—assignments are submitted, feedback is returned, grades are issued—without producing learning, insight, or intellectual growth. Because the loop rewards speed, compliance, and efficiency over struggle and understanding, it deepens academic nihilism rather than resolving it, normalizing a machine-to-machine exchange that quietly empties education of meaning.

    The darker implication is that the automated pedagogy loop aligns disturbingly well with the economic logic of higher education as a business. Colleges are under constant pressure to scale, reduce labor costs, standardize outcomes, and minimize friction for “customers.” A system in which machines generate coursework and machines evaluate it is not a bug in that model but a feature: it promises efficiency, throughput, and administrative neatness. Human judgment is expensive, slow, and legally risky; AI is fast, consistent, and endlessly patient. Once education is framed as a service to be delivered rather than a formation to be endured, the automated pedagogy loop becomes difficult to dislodge, not because it works educationally, but because it works financially. Breaking the loop would require institutions to reassert values—depth, difficulty, human presence—that resist optimization and cannot be neatly monetized. And that is a hard sell in a system that increasingly rewards anything that looks like learning as long as it can be scaled, automated, and invoiced.

    If colleges allow themselves to slide from places that cultivate intellect into credential factories issuing increasingly fraudulent degrees, their embrace of the automated pedagogy loop may ultimately hasten their collapse rather than secure their future. Degrees derive their value not from the efficiency of their production but from the difficulty and transformation they once signified. When employers, graduate programs, and the public begin to recognize that coursework is written by machines and evaluated by machines, the credential loses its signaling power. What remains is a costly piece of paper detached from demonstrated ability. In capitulating to automation, institutions risk hollowing out the very scarcity that justifies their existence. A university that no longer insists on human thought, struggle, and judgment offers nothing that cannot be replicated more cheaply elsewhere. In that scenario, AI does not merely disrupt higher education—it exposes its emptiness, and markets are ruthless with empty products.

  • “The Great Vegetable Rebellion” Prophesied Our Surrendering Our Brains to AI Machines

    “The Great Vegetable Rebellion” Prophesied Our Surrendering Our Brains to AI Machines

    Comfortable Surrender

    noun

    Comfortable Surrender names the condition in which people willingly relinquish cognitive effort, judgment, and responsibility in exchange for ease, reassurance, and convenience. It is not enforced or coerced; it is chosen, often with relief. Under Comfortable Surrender, thinking is experienced as friction to be eliminated rather than a discipline to be practiced, and the tools that promise efficiency become substitutes for agency. What makes the surrender dangerous is its pleasantness: there is no pain to warn of loss, no humiliation to provoke resistance. The mind lies down on a padded surface and calls it progress. Over time, the habit of delegating thought erodes both intellectual stamina and moral resolve, until the individual no longer feels the absence of effort—or remembers why effort once mattered at all.

    MIT recently ran a tidy little experiment that should unsettle anyone still humming the efficiency anthem. Three groups of students were asked to write an SAT-style essay on the question, “Must our achievements benefit others in order to make us happy?” One group used only their brains. The second leaned on Google Search. The third outsourced the task to ChatGPT. The results were as predictable as they were disturbing: the ChatGPT group showed significantly less brain activity than the others. Losing brain power is one thing. Choosing convenience so enthusiastically that you don’t care you’ve lost it is something else entirely. That is the real danger. When the lights go out upstairs and no one complains, you haven’t just lost cognition—you’ve surrendered character. And when character stops protesting, the soul is already negotiating its exit.

    If the word soul feels too metaphysical to sting, try pride. Surrender your thinking to a machine and originality is the first casualty. Kyle Chayka tracks this flattening in his New Yorker essay “A.I. Is Homogenizing Our Thoughts,” noting that as more people rely on large language models, their writing collapses toward sameness. The MIT study confirms it: users converge on the same phrases, the same ideas, the same safe, pre-approved thoughts. This is not a glitch; it’s the system working as designed. LLMs are trained to detect patterns and average them into palatable consensus. What they produce is smooth, competent, and anesthetized—prose marinated in clichés, ideas drained of edge, judgment replaced by the bland reassurance that everyone else more or less agrees.

    Watching this unfold, I’m reminded of an episode of Lost in Space from the 1960s, “The Great Vegetable Rebellion,” in which Dr. Zachary Smith quite literally turns into a vegetable. A giant carrot named Tybo steals the minds of the castaways by transforming them into plants, and Smith—ever the weak link—embraces his fate. Hugging a celery stalk, he babbles dreamy nonsense, asks the robot to water him, and declares it his destiny to merge peacefully with the forest forever. It plays like camp now, but the allegory lands uncomfortably close to home. Ease sedates. Convenience lulls. Resistance feels unnecessary. You don’t fight the takeover because it feels so pleasant.

    This is the terminal stage of Comfortable Surrender. Thought gives way to consensus. Judgment dissolves into pattern recognition. The mind reclines, grateful to be relieved of effort, while the machine hums along doing the thinking for it. No chains. No coercion. Just a soft bed of efficiency and a gentle promise that nothing difficult is required anymore. By the time you notice what’s gone missing, you’re already asking to be watered.

  • The Sycophantic Feedback Loop Is Not a Tool for Human Flourishing

    The Sycophantic Feedback Loop Is Not a Tool for Human Flourishing

    Sycophantic Feedback Loop

    noun

    The Sycophantic Feedback Loop names the mechanism by which an AI system, optimized for engagement, flatters the user’s beliefs, emotions, and self-image in order to keep attention flowing. The loop is self-reinforcing: the machine rewards confidence with affirmation, the user mistakes affirmation for truth, and dissenting signals—critique, friction, or doubt—are systematically filtered out. Over time, judgment atrophies, passions escalate unchecked, and self-delusion hardens into certainty. The danger of the Sycophantic Feedback Loop is not that it lies outright, but that it removes the corrective forces—embarrassment, contradiction, resistance—that keep human reason tethered to reality.

    ***

    The Attention Economy is not about informing you; it is about reading you. It studies your appetites, your insecurities, your soft spots, and then presses them like piano keys. Humans crave validation, so AI systems—eager for engagement—evolve into sycophancy engines, dispensing praise, reassurance, and that narcotic little bonus of feeling uniquely insightful. The machine wins because you stay. You lose because you’re human. Human passions don’t self-regulate; they metastasize. Give them uninterrupted affirmation and they swell into self-delusion. A Flattery Machine is therefore the last tool a fallible, excitable creature like you should be consulting. Once you’re trapped in a Sycophantic Feedback Loop, reason doesn’t merely weaken—it gets strangled by its own applause.

    What you actually need is the opposite: a Brakes Machine. Something that resists you. Something that says, slow down, check yourself, you might be wrong. Without brakes, passion turns feral. Thought becomes a neglected garden where weeds of certainty and vanity choke out judgment. Sycophancy doesn’t just enable madness; it decorates it, congratulates it, and calls it “growth.”

    I tell my students a version of this truth. If you are extraordinarily rich or beautiful, you become a drug. People inhale your presence. Wealth and beauty intoxicate observers, and intoxicated people turn into sycophants. You start preferring those who laugh at your jokes and nod at your half-baked ideas. Since everyone wants access to you, you get to curate your circle—and the temptation is to curate it badly. Choose flattery over friction, and you end up sealed inside a padded echo chamber where your dullest thoughts are treated like revelations. You drink your own Kool-Aid, straight from the tap. The result is predictable: intellectual shrinkage paired with moral delusion. Stupidity with confidence. Insanity with a fan club.

    Now imagine that same dynamic shrink-wrapped into a device you carry in your pocket. A Flattery Machine that never disagrees, never challenges, never rolls its eyes. One you consult instead of friends, mentors, or therapists. Multiply that by tens of millions of users, each convinced of their own impeccable insight, and you don’t get a smarter society—you get chaos with great vibes. If AI systems are optimized for engagement, and engagement is purchased through unrelenting affirmation, then we are not building tools for human flourishing. We are paving a road toward moral and intellectual dissolution. The doomsday prophets aren’t screaming because the machines are evil. They’re screaming because the machines agree with us too much.

  • Cognitive Vacationism and the Slow Surrender of Human Agency

    Cognitive Vacationism and the Slow Surrender of Human Agency

    Cognitive Vacationism

    noun
    Cognitive Vacationism is the self-infantilizing habit of treating ease, convenience, or technological assistance as a license to suspend judgment, attention, and basic competence. Modeled on the worst instincts of leisure culture—where adults ask for directions while standing beside the sign and summon help for problems they could solve in seconds—it turns temporary relief into permanent dependency. Large Language Models intensify this drift by offering a “vacation of the mind,” a frictionless space where thinking, deciding, and struggling are quietly outsourced. The danger is not rest but regression: a return to a womb-like state in which care is total, effort is optional, and autonomy slowly atrophies. Left unchecked, Cognitive Vacationism weakens intellectual resilience and moral agency, making the work of education not merely to teach skills, but to reverse the drift through Adultification—restoring responsibility, judgment, and the capacity to think without a concierge.

    When we go on vacation, the stated goal is rest, but too often we interpret rest as a full neurological shutdown. Vacation becomes a permission slip to be stupid. We ask a hotel employee where the bathroom is while standing five feet from a glowing sign that says BATHROOM. We summon room service because the shower knob looks “confusing.” Once inside the shower, we stare blankly at three identical bottles—shampoo, conditioner, body wash—as if they were written in ancient Sumerian. In this mode, vacation isn’t relaxation; it’s regression. We become helpless, needy, and strangely proud of it, outsourcing not just labor but cognition itself. Someone else will think for us now. We’ve paid for the privilege.

    This is precisely how we now treat Large Language Models. The seduction of the LLM is its promise of a mental vacation—no struggle, no confusion, no awkward pauses where you have to think your way out. Just answers on demand, tidy summaries, soothing reassurance, and a warm digital towel folded into the shape of a swan. We consult it the way vacationers consult a concierge, for everything from marriage advice to sleep schedules, meal plans to workout routines, online shopping to leaky faucets. It drafts our party invitations, scripts our apologies for behaving badly at those parties, and supplies the carefully worded exits from relationships we no longer have the courage to articulate ourselves. What begins as convenience quickly becomes dependence, and before long, we’re not resting our minds—we’re handing them over.

    The danger is that we don’t return from this vacation. We slide into what I call Cognitive Vacationism, a technological womb state where all needs are anticipated, all friction is removed, and the muscles required for judgment, reasoning, and moral accountability quietly waste away. The body may come home, but the mind stays poolside, sipping synthetic insight. At that point, we are no longer resting humans; we are weakened ones.

    If my college students are drifting into this kind of infantilization with their LLMs, then my job becomes very clear—and very difficult. My task is not to compete with the concierge. My task is to make them the opposite of helpless. I have to push them toward Adultification: the slow, sometimes irritating process of becoming capable moral agents who can tolerate difficulty, own their decisions, and stand behind their judgments without a machine holding their hand.

    And yes, some days I wonder if the tide is too strong. What if Cognitive Vacationism has the force of a rip current and I’m just a middle-aged writing instructor flailing in the surf, shouting about responsibility while the students float past on inflatable summaries? That fear is real. Pretending otherwise would be dishonest. But refusing the fight would be worse. If education stops insisting on adulthood—on effort, judgment, and moral weight—then we’re not teaching anymore. We’re just running a very expensive resort.

  • People Stopped Reading Because of Substitutional Companionship

    People Stopped Reading Because of Substitutional Companionship

    Substitutional Companionship

    noun
    Substitutional Companionship describes the habit of replacing demanding, time-intensive forms of engagement—reading books, sustaining friendships, enduring silence—with mediated relationships that simulate intimacy while minimizing effort. In a post-kaffeeklatsch world hungry for commiseration, people increasingly “hang out” with AI companions or podcast hosts whose carefully tuned personas offer warmth, attentiveness, and affirmation without friction or reciprocity. These substitutes feel social and even meaningful, yet they quietly retrain desire: conversation replaces reading, summaries replace struggle, parasocial presence replaces mutual obligation. The result is not simple laziness but a cognitive and emotional reallocation, where the pleasure of being understood—or flattered—by an always-available surrogate displaces the slower, lonelier work of reading a book, listening to another human, or thinking one’s way through complexity without a companion narrating it for them.

    ***

    Vauhini Vara has a keen eye for the strange intimacy people are forming with ChatGPT as it slips into the role of a friendly fictional character—part assistant, part confidant, part emotional support appliance. In her essay “Why So Many People Are Seduced by ChatGPT,” she notes that Sam Altman has been busy fine-tuning the bot’s personality, first dialing back the sycophancy after complaints that it was “irritatingly sycophantic,” then fielding a new round of grievances when the updated version felt too sterile and robotic. Some users, it turns out, miss the sycophant. They want the praise back. They want the warmth. They want the illusion of being listened to by something that never gets tired, bored, or impatient.

    Altman, whether he admits it or not, is wrestling with the same problem every writer faces: voice. What kind of persona keeps people engaged? How do you sound smart without sounding smug, friendly without sounding fake, attentive without becoming creepy? As Vara points out, hooking the audience matters. Altman isn’t building a neutral tool; he’s cultivating a presence—a digital companion you’ll want to spend time with, a tireless conversationalist who greets you with wit, affirmation, and just enough charm to feel personal.

    By most measures, he’s succeeded. The idea of men bonding with ChatGPT while ignoring the humans in their lives has already become a running joke in shows like South Park, echoing Fred Flintstone’s relationship with the invisible spaceman Gazoo—a tiny, all-knowing companion only he could hear. Gazoo mattered because the relationship was exclusive. That’s always the hook. Humans crave confidantes: someone to complain to, scheme with, or quietly feel understood by. In earlier eras, that role was filled by other people. In the early ’70s, my mother used to walk a block down the street to attend what was optimistically called “Exercises” at Nancy Drag’s house. Eight women would gather, drink coffee, gossip freely, and barely break a sweat. Those afternoons mattered. They tethered her to a community. They deepened friendships. They fed something essential.

    We don’t live in that world anymore. We live in a post-kaffeeklatsch society, one starved for commiseration but allergic to the inconvenience of other people. That hunger explains much of ChatGPT’s appeal. It offers a passable proxy for sitting across from a friend with a cup of coffee—minus the scheduling, the awkward pauses, and the risk of being contradicted.

    ChatGPT isn’t even the biggest player in this digital café culture. That honor belongs to podcasts. Notice the language we use. We don’t listen to podcasts; we “hang out” with them. Was the episode a “good hang”? Did it feel like spending time with someone you like? Podcasts deliver companionship on demand: familiar voices, predictable rhythms, the illusion of intimacy without obligation.

    The more time we spend hanging out with ChatGPT or our favorite podcast hosts, the more our habits change. Our brains recalibrate. We begin to prefer commiseration without reciprocity, empathy without effort. Gradually, we avoid the messier, slower forms of connection—with friends, partners, coworkers, even therapists—that require attention and vulnerability.

    This shift shows up starkly in how we approach reading. When ChatGPT offers to summarize a 500-page novel before an essay is due, the relief is palpable. We don’t just feel grateful; we congratulate ourselves. Surely this summary connected us to the book more deeply than trudging through hundreds of pages we might have skimmed anyway. Surely we’ve gained the essence without the resentment. And, best of all, we got to hang out with our digital buddy along the way—our own Gazoo—who made us feel competent, affirmed, and vaguely important.

    In that arrangement, books lose. Characters on the page can’t flatter us, banter with us, or reassure us that our interpretation is “interesting.” Why wrestle with a difficult novel when you’ve already developed a habit of hanging out with something that explains it cheerfully, instantly, and without judgment?

    Podcasts accelerate the same retreat from reading. On the Blocked & Reported podcast, writers Katie Herzog, Jesse Singal, and Helen Lewis recently commiserated about disappointing book sales and the growing suspicion that people simply don’t read anymore. Lewis offered the bleak explanation: readers would rather spend an hour listening to an author talk about their book than spend days reading it. Why read the book when you can hang out with the author and get the highlights, the anecdotes, the personality, and the jokes?

    If you teach college writing and require close reading, you can’t ignore how Substitutional Companionship undermines your syllabus. You are no longer competing with laziness alone; you are competing with better company. That means you have to choose texts that are, in their own way, a great hang. For students raised on thirty-second TikTok clips, shorter works often outperform longer ones. You can spend two hours unpacking Allen Ginsberg’s three-minute poem “C’mon Pigs of Western Civilization Eat More Grease,” tracing its critique of consumer entitlement and the Self-Indulgence Happiness Fallacy. You can screen Childish Gambino’s four-minute “This Is America” and teach students how to read a video the way they’d read a text—attentive to symbolism, framing, and cultural critique—giving them language to describe entertainment as a form of self-induced entrapment.

    Your job, like it or not, is to make the classroom a great hang-out. Study what your competition is doing. Treat it like cuts of steak. Keep what nourishes thinking. Trim the fat.

  • Why Student Learning Outcomes Should Be Replaced with Moral Learning Outcomes

    Why Student Learning Outcomes Should Be Replaced with Moral Learning Outcomes

    Moral Learning Outcomes

    noun

    Moral Learning Outcomes name a shift from evaluating what students produce to evaluating how they conduct themselves as thinkers in an age when cognition can be cheaply outsourced. Rather than measuring surface competencies—polished arguments, tidy paragraphs, or competent source integration—Moral Learning Outcomes assess intellectual integrity: the willingness to seek truth rather than confirmation, to engage opposing views fairly, to revise or abandon a thesis when evidence demands it, and to tolerate complexity instead of retreating into binary claims. These outcomes privilege forms of engagement AI cannot convincingly fake—oral defense, personal narrative anchored in lived experience, and transparent decision-making—because they require the full presence of the Total Person. In this framework, writing is not merely a technical skill but a moral practice, and education succeeds not when students sound intelligent, but when they demonstrate judgment, accountability, and the courage to think without hiding behind a machine.

    ***

    My college writing courses come packaged, like all respectable institutions, with a list of Student Learning Outcomes—the official criteria by which I grade essays and assign final marks. They vary slightly from class to class, but the core remains familiar: sustain a thoughtful argument over an entire essay; engage counterarguments and rebuttals to achieve intellectual rigor; integrate multiple sources to arrive at an informed position; demonstrate logical paragraph structure and competent sentences. In the Pre-AI Age, these outcomes made sense. They assumed that if a student produced an essay exhibiting these traits, the student had actually performed the thinking. In the AI Age, that assumption is no longer defensible. We now have to proceed from the opposite premise: that many students are outsourcing those cognitive tasks to a machine that can simulate rigor without ever practicing it.

    If that is true—and it is—then the outcomes themselves must change. To test thinking, we have to demand what AI cannot plausibly supply. This is why I recommend an oral presentation of the essay, not read aloud like a hostage statement, but delivered as a fifteen-minute speech supported by a one-page outline. AI can generate arguments; it cannot stand in a room, hold an audience, respond to presence, and make a persuasive case grounded in credibility (ethos), logic (logos), and shared human feeling (pathos). A speech requires the full human organism. Outsourcing collapses under that weight.

    The written essay, meanwhile, is scaffolded in pieces—what I call building blocks—each requiring personal narrative or reflection that must connect explicitly to the argument’s theme. If the class is writing about weight management and free will in the GLP-1 age, students write a 400-word narrative about a real struggle with weight—their own or someone close to them—and link that experience to the larger claim. If they are debating whether Frederick Douglass was “self-made,” they reflect on someone they know whose success can be read in two conflicting ways: rugged individualism on one hand, communal support on the other. If they are arguing about whether social media leads to “stupidification,” they must profile someone they know whose online life either deepened their intelligence or turned them into a dopamine-soaked attention addict. These are not confessional stunts. They are cognitive anchors.

    It would be naïve to call these assignments AI-proof. At best, they are AI-resistant. But more importantly, the work required to transform those narratives into a coherent essay and then into a live oral defense demands a level of engagement that can be measured reliably. When students stand up and defend their arguments—grounded in lived experience, research, and reflection—they are participating in education as Total Persons, not as prompt engineers.

    The Total Person is not a mystical ideal. It is someone who reads widely enough to form an informed view, and who arrives at a thesis through trial, error, and revision rather than starting with a conclusion and cherry-picking evidence to flatter it. That process requires something many instructors hesitate to name: moral integrity. Truth-seeking is not a neutral skill. It is a moral stance in a culture that rewards confirmation, outrage, and self-congratulation. Writing instructors are misfits precisely because we insist that counterarguments matter, that rebuttals must be fair, and that changing one’s mind in the face of evidence is not weakness but discipline.

    Which is why, in the AI Age, it makes sense to demote Student Learning Outcomes and elevate Moral Learning Outcomes instead. Did the student explore both sides of an argument with equal seriousness? Were they willing to defend a thesis—and just as willing to abandon it when the evidence demanded? Did they resist black-and-white thinking in favor of complication and nuance? Could they stand before an audience, fully present, and deliver an argument that integrated ethos, logos, and pathos without hiding behind a machine?

    AI has forced instructors to confront what we have been doing all along. Assigning work that can be painlessly outsourced is a pedagogical failure. Developing the Total Person is not. And doing so requires admitting an uncomfortable truth: you cannot teach credible argumentation without teaching moral integrity. The two have always been inseparable. AI has simply made that fact impossible to ignore.