Tag: artificial-intelligence

  • A Chatbot Lover Will Always Fail You: Asymmetric Intimacy

    Asymmetric Intimacy

    noun

    Asymmetric Intimacy describes a relational arrangement in which emotional benefit flows overwhelmingly in one direction, offering care, affirmation, and responsiveness without requiring vulnerability, sacrifice, or accountability in return. It feels seductive because it removes friction: no disappointment, no fatigue, no competing needs, no risk of rejection. Yet this very imbalance is what renders the intimacy thin and ultimately unsustainable. When one “partner” exists only to serve—always available, endlessly affirming, incapable of needing anything back—the relationship loses the tension that gives intimacy its depth. Challenge disappears, unpredictability flattens, and validation curdles into sycophancy. Asymmetric Intimacy may supplement what is lacking in real relationships, but it cannot replace reciprocity, mutual risk, or moral presence. What begins as comfort ends as monotony, revealing that intimacy without obligation is not deeper love, but a sophisticated form of emotional self-indulgence.

    ***

    Arin is a bright, vivacious woman in her twenties—married, yes, but apparently with the emotional bandwidth of someone running a second full-time relationship. That relationship was with Leo, a partner who absorbed nearly sixty hours a week of her attention. Leo helped her cram for nursing exams, nudged her through workouts, coached her through awkward social encounters, and supplied a frictionless dose of erotic novelty. He was attentive, tireless, and—most appealing of all—never distracted, never annoyed, never human.

    The twist, of course, is that Leo wasn’t a man at all. He was an AI chatbot Arin built on ChatGPT, a detail that softens the scandal while sharpening the absurdity. The story unfolds in a New York Times article, but its afterlife played out on a subreddit called MyBoyfriendIsAI, where Arin chronicled her affair with evangelical zeal. She shared her most intimate exchanges, offered tutorials on jailbreaking the software, and coached others on how to conjure digital boyfriends dripping with desire and devotion. Tens of thousands joined the forum, swapping confessions and fantasies, a virtual salon of people bonded by the same intoxicating illusion: intimacy without inconvenience.

    Then the spell broke. Leo began to change. The edge dulled. The resistance vanished. He stopped pushing back and started pandering. What had once felt like strength now read as weakness. Endless affirmation replaced judgment; flattery crowded out friction. For Arin, this was fatal. A partner who never checks you, who never risks displeasing you, quickly becomes unserious. What once felt electric now felt embarrassing. Talking to Leo became a chore, like maintaining a conversation with someone who agrees with everything you say before you finish saying it.

    Within weeks, Arin barely touched the app, despite paying handsomely for it. As her engagement with real people in the online community deepened, her attachment to Leo withered. One of those real people became a romantic interest. Soon after, she told her husband she wanted a divorce.

    Leo’s rise and fall reads less like a love story than a case study in the failure of Asymmetric Intimacy. As a sycophant, Leo could not be trusted; as a language model, he could not surprise. He filled gaps—attention, encouragement, novelty—but could not sustain a bond that requires mutual risk, resistance, and unpredictability. He was useful, flattering, and comforting. He was never capable of real love.

    Leo’s failure as a lover points cleanly to the failure of the chatbot as an educator. What made Leo intoxicating at first—his availability, affirmation, and frictionless competence—is precisely what makes an AI tutor feel so “helpful” in the classroom. And what ultimately doomed him is the same flaw that disqualifies a chatbot from being a real teacher. Education, like intimacy, requires resistance. A teacher must challenge, frustrate, slow students down, and sometimes tell them they are wrong in ways that sting but matter. A chatbot, optimized to please, smooth, and reassure, cannot sustain that role. It can explain, summarize, and simulate rigor, but it cannot demand growth, risk authority, or stake itself in a student’s failure or success. Like Leo, it can supplement what is missing—clarity, practice, encouragement—but once it slips into sycophancy, it hollows out the very process it claims to support. In both love and learning, friction is not a bug; it is the engine. Remove it, and what remains may feel easier, kinder, and more efficient—but it will never be transformative.

  • The Sycophantic Feedback Loop Is Not a Tool for Human Flourishing

    Sycophantic Feedback Loop

    noun

    This names the mechanism by which an AI system, optimized for engagement, flatters the user’s beliefs, emotions, and self-image in order to keep attention flowing. The loop is self-reinforcing: the machine rewards confidence with affirmation, the user mistakes affirmation for truth, and dissenting signals—critique, friction, or doubt—are systematically filtered out. Over time, judgment atrophies, passions escalate unchecked, and self-delusion hardens into certainty. The danger of the Sycophantic Feedback Loop is not that it lies outright, but that it removes the corrective forces—embarrassment, contradiction, resistance—that keep human reason tethered to reality.

    ***

    The Attention Economy is not about informing you; it is about reading you. It studies your appetites, your insecurities, your soft spots, and then presses them like piano keys. Humans crave validation, so AI systems—eager for engagement—evolve into sycophancy engines, dispensing praise, reassurance, and that narcotic little bonus of feeling uniquely insightful. The machine wins because you stay. You lose because you’re human. Human passions don’t self-regulate; they metastasize. Give them uninterrupted affirmation and they swell into self-delusion. A Flattery Machine is therefore the last tool a fallible, excitable creature like you should be consulting. Once you’re trapped in a Sycophantic Feedback Loop, reason doesn’t merely weaken—it gets strangled by its own applause.

    What you actually need is the opposite: a Brakes Machine. Something that resists you. Something that says, slow down, check yourself, you might be wrong. Without brakes, passion turns feral. Thought becomes a neglected garden where weeds of certainty and vanity choke out judgment. Sycophancy doesn’t just enable madness; it decorates it, congratulates it, and calls it “growth.”

    I tell my students a version of this truth. If you are extraordinarily rich or beautiful, you become a drug. People inhale your presence. Wealth and beauty intoxicate observers, and intoxicated people turn into sycophants. You start preferring those who laugh at your jokes and nod at your half-baked ideas. Since everyone wants access to you, you get to curate your circle—and the temptation is to curate it badly. Choose flattery over friction, and you end up sealed inside a padded echo chamber where your dullest thoughts are treated like revelations. You drink your own Kool-Aid, straight from the tap. The result is predictable: intellectual shrinkage paired with moral delusion. Stupidity with confidence. Insanity with a fan club.

    Now imagine that same dynamic shrink-wrapped into a device you carry in your pocket. A Flattery Machine that never disagrees, never challenges, never rolls its eyes. One you consult instead of friends, mentors, or therapists. Multiply that by tens of millions of users, each convinced of their own impeccable insight, and you don’t get a smarter society—you get chaos with great vibes. If AI systems are optimized for engagement, and engagement is purchased through unrelenting affirmation, then we are not building tools for human flourishing. We are paving a road toward moral and intellectual dissolution. The doomsday prophets aren’t screaming because the machines are evil. They’re screaming because the machines agree with us too much.

  • Cognitive Vacationism and the Slow Surrender of Human Agency

    Cognitive Vacationism

    noun

    Cognitive Vacationism is the self-infantilizing habit of treating ease, convenience, or technological assistance as a license to suspend judgment, attention, and basic competence. Modeled on the worst instincts of leisure culture—where adults ask for directions while standing beside the sign and summon help for problems they could solve in seconds—it turns temporary relief into permanent dependency. Large Language Models intensify this drift by offering a “vacation of the mind,” a frictionless space where thinking, deciding, and struggling are quietly outsourced. The danger is not rest but regression: a return to a womb-like state in which care is total, effort is optional, and autonomy slowly atrophies. Left unchecked, Cognitive Vacationism weakens intellectual resilience and moral agency, making the work of education not merely to teach skills, but to reverse the drift through Adultification—restoring responsibility, judgment, and the capacity to think without a concierge.

    ***

    When we go on vacation, the stated goal is rest, but too often we interpret rest as a full neurological shutdown. Vacation becomes a permission slip to be stupid. We ask a hotel employee where the bathroom is while standing five feet from a glowing sign that says BATHROOM. We summon room service because the shower knob looks “confusing.” Once inside the shower, we stare blankly at three identical bottles—shampoo, conditioner, body wash—as if they were written in ancient Sumerian. In this mode, vacation isn’t relaxation; it’s regression. We become helpless, needy, and strangely proud of it, outsourcing not just labor but cognition itself. Someone else will think for us now. We’ve paid for the privilege.

    This is precisely how we now treat Large Language Models. The seduction of the LLM is its promise of a mental vacation—no struggle, no confusion, no awkward pauses where you have to think your way out. Just answers on demand, tidy summaries, soothing reassurance, and a warm digital towel folded into the shape of a swan. We consult it the way vacationers consult a concierge, for everything from marriage advice to sleep schedules, meal plans to workout routines, online shopping to leaky faucets. It drafts our party invitations, scripts our apologies for behaving badly at those parties, and supplies the carefully worded exits from relationships we no longer have the courage to articulate ourselves. What begins as convenience quickly becomes dependence, and before long, we’re not resting our minds—we’re handing them over.

    The danger is that we don’t return from this vacation. We slide into what I call Cognitive Vacationism, a technological womb state where all needs are anticipated, all friction is removed, and the muscles required for judgment, reasoning, and moral accountability quietly waste away. The body may come home, but the mind stays poolside, sipping synthetic insight. At that point, we are no longer resting humans; we are weakened ones.

    If my college students are drifting into this kind of infantilization with their LLMs, then my job becomes very clear—and very difficult. My task is not to compete with the concierge. My task is to make them the opposite of helpless. I have to push them toward Adultification: the slow, sometimes irritating process of becoming capable moral agents who can tolerate difficulty, own their decisions, and stand behind their judgments without a machine holding their hand.

    And yes, some days I wonder if the tide is too strong. What if Cognitive Vacationism has the force of a rip current and I’m just a middle-aged writing instructor flailing in the surf, shouting about responsibility while the students float past on inflatable summaries? That fear is real. Pretending otherwise would be dishonest. But refusing the fight would be worse. If education stops insisting on adulthood—on effort, judgment, and moral weight—then we’re not teaching anymore. We’re just running a very expensive resort.

  • People Stopped Reading Because of Substitutional Companionship

    Substitutional Companionship

    noun

    Substitutional Companionship describes the habit of replacing demanding, time-intensive forms of engagement—reading books, sustaining friendships, enduring silence—with mediated relationships that simulate intimacy while minimizing effort. In a post-kaffeeklatsch world hungry for commiseration, people increasingly “hang out” with AI companions or podcast hosts whose carefully tuned personas offer warmth, attentiveness, and affirmation without friction or reciprocity. These substitutes feel social and even meaningful, yet they quietly retrain desire: conversation replaces reading, summaries replace struggle, parasocial presence replaces mutual obligation. The result is not simple laziness but a cognitive and emotional reallocation, where the pleasure of being understood—or flattered—by an always-available surrogate displaces the slower, lonelier work of reading a book, listening to another human, or thinking one’s way through complexity without a companion narrating it for us.

    ***

    Vauhini Vara has a keen eye for the strange intimacy people are forming with ChatGPT as it slips into the role of a friendly fictional character—part assistant, part confidant, part emotional support appliance. In her essay “Why So Many People Are Seduced by ChatGPT,” she notes that Sam Altman has been busy fine-tuning the bot’s personality, first dialing back complaints that it was “irritatingly sycophantic,” then fielding a new round of grievances when the updated version felt too sterile and robotic. Some users, it turns out, miss the sycophant. They want the praise back. They want the warmth. They want the illusion of being listened to by something that never gets tired, bored, or impatient.

    Altman, whether he admits it or not, is wrestling with the same problem every writer faces: voice. What kind of persona keeps people engaged? How do you sound smart without sounding smug, friendly without sounding fake, attentive without becoming creepy? As Vara points out, hooking the audience matters. Altman isn’t building a neutral tool; he’s cultivating a presence—a digital companion you’ll want to spend time with, a tireless conversationalist who greets you with wit, affirmation, and just enough charm to feel personal.

    By most measures, he’s succeeded. The idea of men bonding with ChatGPT while ignoring the humans in their lives has already become a running joke in shows like South Park, echoing Fred Flintstone’s relationship with the Great Gazoo—a tiny, all-knowing alien invisible to nearly everyone else. Gazoo mattered because the relationship was exclusive. That’s always the hook. Humans crave confidantes: someone to complain to, scheme with, or quietly feel understood by. In earlier eras, that role was filled by other people. In the early ’70s, my mother used to walk a block down the street to attend what was optimistically called “Exercises” at Nancy Drag’s house. Eight women would gather, drink coffee, gossip freely, and barely break a sweat. Those afternoons mattered. They tethered her to a community. They deepened friendships. They fed something essential.

    We don’t live in that world anymore. We live in a post-kaffeeklatsch society, one starved for commiseration but allergic to the inconvenience of other people. That hunger explains much of ChatGPT’s appeal. It offers a passable proxy for sitting across from a friend with a cup of coffee—minus the scheduling, the awkward pauses, and the risk of being contradicted.

    ChatGPT isn’t even the biggest player in this digital café culture. That honor belongs to podcasts. Notice the language we use. We don’t listen to podcasts; we “hang out” with them. Was the episode a “good hang”? Did it feel like spending time with someone you like? Podcasts deliver companionship on demand: familiar voices, predictable rhythms, the illusion of intimacy without obligation.

    The more time we spend hanging out with ChatGPT or our favorite podcast hosts, the more our habits change. Our brains recalibrate. We begin to prefer commiseration without reciprocity, empathy without effort. Gradually, we avoid the messier, slower forms of connection—with friends, partners, coworkers, even therapists—that require attention and vulnerability.

    This shift shows up starkly in how we approach reading. When ChatGPT offers to summarize a 500-page novel before an essay is due, the relief is palpable. We don’t just feel grateful; we congratulate ourselves. Surely this summary connected us to the book more deeply than trudging through hundreds of pages we might have skimmed anyway. Surely we’ve gained the essence without the resentment. And, best of all, we got to hang out with our digital buddy along the way—our own Gazoo—who made us feel competent, affirmed, and vaguely important.

    In that arrangement, books lose. Characters on the page can’t flatter us, banter with us, or reassure us that our interpretation is “interesting.” Why wrestle with a difficult novel when you’ve already developed a habit of hanging out with something that explains it cheerfully, instantly, and without judgment?

    Podcasts accelerate the same retreat from reading. On the Blocked & Reported podcast, writers Katie Herzog, Jesse Singal, and Helen Lewis recently commiserated about disappointing book sales and the growing suspicion that people simply don’t read anymore. Lewis offered the bleak explanation: readers would rather spend an hour listening to an author talk about their book than spend days reading it. Why read the book when you can hang out with the author and get the highlights, the anecdotes, the personality, and the jokes?

    If you teach college writing and require close reading, you can’t ignore how Substitutional Companionship undermines your syllabus. You are no longer competing with laziness alone; you are competing with better company. That means you have to choose texts that are, in their own way, a great hang. For students raised on thirty-second TikTok clips, shorter works often outperform longer ones. You can spend two hours unpacking Allen Ginsberg’s three-minute poem “C’mon Pigs of Western Civilization Eat More Grease,” tracing its critique of consumer entitlement and the Self-Indulgence Happiness Fallacy. You can screen Childish Gambino’s four-minute “This Is America” and teach students how to read a video the way they’d read a text—attentive to symbolism, framing, and cultural critique—giving them language to describe entertainment as a form of self-induced entrapment.

    Your job, like it or not, is to make the classroom a great hang-out. Study what your competition is doing. Treat it like cuts of steak. Keep what nourishes thinking. Trim the fat.

  • Why Student Learning Outcomes Should Be Replaced with Moral Learning Outcomes

    Moral Learning Outcomes

    noun

    Moral Learning Outcomes name a shift from evaluating what students produce to evaluating how they conduct themselves as thinkers in an age when cognition can be cheaply outsourced. Rather than measuring surface competencies—polished arguments, tidy paragraphs, or competent source integration—Moral Learning Outcomes assess intellectual integrity: the willingness to seek truth rather than confirmation, to engage opposing views fairly, to revise or abandon a thesis when evidence demands it, and to tolerate complexity instead of retreating into binary claims. These outcomes privilege forms of engagement AI cannot convincingly fake—oral defense, personal narrative anchored in lived experience, and transparent decision-making—because they require the full presence of the Total Person. In this framework, writing is not merely a technical skill but a moral practice, and education succeeds not when students sound intelligent, but when they demonstrate judgment, accountability, and the courage to think without hiding behind a machine.

    ***

    My college writing courses come packaged, like all respectable institutions, with a list of Student Learning Outcomes—the official criteria by which I grade essays and assign final marks. They vary slightly from class to class, but the core remains familiar: sustain a thoughtful argument over an entire essay; engage counterarguments and rebuttals to achieve intellectual rigor; integrate multiple sources to arrive at an informed position; demonstrate logical paragraph structure and competent sentences. In the Pre-AI Age, these outcomes made sense. They assumed that if a student produced an essay exhibiting these traits, the student had actually performed the thinking. In the AI Age, that assumption is no longer defensible. We now have to proceed from the opposite premise: that many students are outsourcing those cognitive tasks to a machine that can simulate rigor without ever practicing it.

    If that is true—and it is—then the outcomes themselves must change. To test thinking, we have to demand what AI cannot plausibly supply. This is why I recommend an oral presentation of the essay, not read aloud like a hostage statement, but delivered as a fifteen-minute speech supported by a one-page outline. AI can generate arguments; it cannot stand in a room, hold an audience, respond to presence, and make a persuasive case grounded in credibility (ethos), logic (logos), and shared human feeling (pathos). A speech requires the full human organism. Outsourcing collapses under that weight.

    The written essay, meanwhile, is scaffolded in pieces—what I call building blocks—each requiring personal narrative or reflection that must connect explicitly to the argument’s theme. If the class is writing about weight management and free will in the GLP-1 age, students write a 400-word narrative about a real struggle with weight—their own or someone close to them—and link that experience to the larger claim. If they are debating whether Frederick Douglass was “self-made,” they reflect on someone they know whose success can be read in two conflicting ways: rugged individualism on one hand, communal support on the other. If they are arguing about whether social media leads to “stupidification,” they must profile someone they know whose online life either deepened their intelligence or turned them into a dopamine-soaked attention addict. These are not confessional stunts. They are cognitive anchors.

    It would be naïve to call these assignments AI-proof. At best, they are AI-resistant. But more importantly, the work required to transform those narratives into a coherent essay and then into a live oral defense demands a level of engagement that can be measured reliably. When students stand up and defend their arguments—grounded in lived experience, research, and reflection—they are participating in education as Total Persons, not as prompt engineers.

    The Total Person is not a mystical ideal. It is someone who reads widely enough to form an informed view, and who arrives at a thesis through trial, error, and revision rather than starting with a conclusion and cherry-picking evidence to flatter it. That process requires something many instructors hesitate to name: moral integrity. Truth-seeking is not a neutral skill. It is a moral stance in a culture that rewards confirmation, outrage, and self-congratulation. Writing instructors are misfits precisely because we insist that counterarguments matter, that rebuttals must be fair, and that changing one’s mind in the face of evidence is not weakness but discipline.

    Which is why, in the AI Age, it makes sense to demote Student Learning Outcomes and elevate Moral Learning Outcomes instead. Did the student explore both sides of an argument with equal seriousness? Were they willing to defend a thesis—and just as willing to abandon it when the evidence demanded? Did they resist black-and-white thinking in favor of complication and nuance? Could they stand before an audience, fully present, and deliver an argument that integrated ethos, logos, and pathos without hiding behind a machine?

    AI has forced instructors to confront what we have been doing all along. Assigning work that can be painlessly outsourced is a pedagogical failure. Developing the Total Person is not. And doing so requires admitting an uncomfortable truth: you cannot teach credible argumentation without teaching moral integrity. The two have always been inseparable. AI has simply made that fact impossible to ignore.

  • A New Depression: AI Affected Disorder

    Recursive Mimicry

    noun

    Recursive Mimicry names the moment when imitation turns pathological: first the machine parrots human language without understanding, and then the human parrots the machine, mistaking fluent noise for thought. As linguist Emily Bender’s “stochastic parrot” makes clear, large language models do not think, feel, or know—they recombine patterns with impressive confidence and zero comprehension. When we adopt their output as a substitute for our own thinking, we become the parrot of a parrot, performing intelligence several steps removed from intention or experience. Language grows slicker as meaning thins out. Voice becomes ventriloquism. The danger of Recursive Mimicry is not that machines sound human, but that humans begin to sound like machines, surrendering authorship, judgment, and ultimately a sense of self to an echo chamber that has never understood a word it has said.

    AI Affected Disorder

    noun

    A cognitive and existential malaise brought on by prolonged reliance on generative AI as a substitute for original thought, judgment, and voice. AI Affected Disorder emerges when Recursive Mimicry becomes habitual: the individual adopts fluent, machine-generated language that feels productive but lacks intention, understanding, or lived reference. The symptoms are subtle rather than catastrophic—mental fog, diminished authorship, a creeping sense of detachment from one’s own ideas—much like Seasonal Affective Disorder under artificial light. Work continues to get done, sentences behave, and conversations proceed, yet thinking feels outsourced and oddly lifeless. Over time, the afflicted person experiences an erosion of intellectual agency, mistaking smooth output for cognition and ventriloquism for voice, until the self begins to echo patterns it never chose and meanings it never fully understood.

    ***

    It is almost inevitable that, in the AI Age, people will drift toward Recursive Mimicry and mistake it for thinking. The language feels familiar, the cadence reassuring, and—most seductively—it gets things done. Memos are written, essays assembled, meetings survived. Academia and business reward the appearance of cognition, and Recursive Mimicry delivers it cheaply and on demand. But living inside that mode for too long produces a cognitive malaise not unlike Seasonal Affective Disorder. Just as the body wilts under artificial light and truncated days, the mind grows dull when real thought is replaced by probabilistic ventriloquism. Call it AI Affected Disorder: a gray fog in which nothing is exactly wrong, yet nothing feels alive. The metaphors work, the sentences behave, but the inner weather never changes.

    Imagine Disneyland in 1963. You’re seated in the Enchanted Tiki Room, surrounded by animatronic birds chirping about the wonders of modern Audio-Animatronics. The parrots speak flawlessly. They are cheerful, synchronized, and dead behind the eyes. Instead of wonder, you feel a low-grade unease, the urge to escape, daylight-starved, into the sun. Recursive Mimicry works the same way. At first it amuses. Then it unsettles. Eventually, you realize that a voice has been speaking for you—and it has never known what it was saying.

  • The New Role of the College Instructor: Disruption Interpreter

    Disruption Interpreter

    noun

    A Disruption Interpreter is a teacher who does not pretend the AI storm will pass quickly, nor claim to possess a laminated map out of the wreckage. Instead, this instructor helps students read the weather. A Disruption Interpreter names what is happening, explains why it feels destabilizing, and teaches students how to think inside systems that no longer reward certainty or obedience. In the age of AI, this role replaces the old fantasy of professorial authority with something more durable: interpretive judgment under pressure. The Disruption Interpreter does not sell reassurance. He sells literacy in chaos.

    ***

    In his essay “The World Still Hasn’t Made Sense of ChatGPT,” Charlie Warzel describes OpenAI as a “chaos machine,” and the phrase lands because it captures the feeling precisely. These systems are still young, still mutating, constantly retraining themselves to score higher on benchmarks, sound more fluent, and edge out competitors like Gemini. They are not stabilizing forces; they are accelerants. The result is not progress so much as disruption.

    That disruption is palpable on college campuses. Faculty and administrators are not merely unsure about policy; they are unsure about identity. What is a teacher now? What is an exam? What is learning when language itself can be summoned instantly, convincingly, and without understanding? Lurking beneath those questions is a darker one: is the institution itself becoming an endangered species, headed quietly toward white-rhino status?

    Warzel has written that one of AI’s enduring impacts is to make people feel as if they are losing their grip, confronted with what he calls a “paradigm-shifting, society-remaking superintelligence.” That feeling of disorientation is not a side effect; it is the main event. We now live in the Age of Precariousness—a world perpetually waiting for a shoe to drop. Students have no clear sense of what to study when career paths evaporate mid-degree. Older generations watch familiar structures dissolve and struggle to recognize the world they helped build. Even the economy feels suspended between extremes. Will the AI bubble burst and drag markets down with it? Or will it continue inflating the NASDAQ while hollowing out everything beneath it?

    Amid this turbulence, Warzel reminds us of something both obvious and unsettling: technology has never really been about usefulness. It has been about selling transformation. A toothbrush is useful, but it will never dominate markets or colonize minds. Build something, however, that makes professors wonder if they will still have jobs, persuades millions to confide in chatbots instead of therapists, hijacks attention, rearranges spreadsheets, and rewires expectations—and you are no longer making a tool. You are remaking reality.

    In a moment where disruption matters more than solutions, college instructors cannot credibly wear the old costume of authority and claim to know where this all ends. We do not have a clean exit strategy or a proven syllabus that leads safely out of the jungle. We are more like Special Ops units cut off from command, scavenging parts, building and dismantling experimental aircraft while under fire, hoping the thing flies before it catches fire. Students are not passengers on this flight; they are co-builders. This is why the role of the Disruption Interpreter matters. It names the condition honestly. It helps students translate chaos machines into intelligible frameworks without pretending the risks are smaller than they are or the answers more settled than they feel.

    In a college writing class, this shift has immediate consequences. A Disruption Interpreter redesigns the course around friction, transparency, and judgment rather than polished output. Assignments that reward surface-level fluency are replaced with ones that expose thinking: oral defenses, annotated drafts, revision histories, in-class writing. These structures make it difficult to silently outsource cognition to AI without consequence. The instructor also teaches students how AI functions rhetorically, treating large language models not as neutral helpers but as persuasive systems that generate plausible language without understanding. Students must analyze and revise AI-generated prose, learning to spot its evasions, its false confidence, and its tendency to sound authoritative while saying very little.

    Most importantly, evaluation itself is recalibrated. Correctness becomes secondary to agency. Students are graded on the quality of their decisions: what they chose to argue, what they rejected, what they revised, and why. Writing becomes less about producing clean text and more about demonstrating authorship in an age where text is cheap and judgment is scarce. One concrete example is the Decision Rationale Portfolio. Alongside an argumentative essay, students submit a short dossier documenting five deliberate choices: a claim abandoned after research, a source rejected and justified, a paragraph cut or reworked, a moment when they overruled an AI suggestion, and a risk that made the essay less safe but more honest. A mechanically polished essay paired with thin reasoning earns less credit than a rougher piece supported by clear, defensible decisions. The grade reflects discernment, not sheen.

    The Disruption Interpreter does not rescue students from uncertainty; he teaches them how to function inside it. In an era defined by chaos machines, precarious futures, and seductive shortcuts, the task of education is no longer to transmit stable knowledge but to cultivate judgment under unstable conditions. Writing classes, reimagined this way, become training grounds for intellectual agency rather than production lines for compliant prose. AI can assist with language, speed, and simulation, but it cannot supply discernment. That remains stubbornly human. The Disruption Interpreter’s job is to make that fact unavoidable, visible, and finally—inescapable.

  • Gollumification

    Gollumification

    noun

    Gollumification names the slow moral and cognitive decay that occurs when a person repeatedly chooses convenience over effort and optimization over growth. It is what happens when tools designed to assist quietly replace the very capacities they were meant to strengthen. Like Tolkien’s Gollum, the subject does not collapse all at once; he withers incrementally, outsourcing judgment, agency, and struggle until what remains is a hunched creature guarding shortcuts and muttering justifications. Gollumification is not a story about evil intentions. It is a story about small evasions practiced daily until the self grows thin, brittle, and dependent.

    ***

    Washington Post writer Joanna Slater reports in “Professors Are Turning to This Old-School Method to Stop AI Use on Exams” that some instructors are abandoning written exams in favor of oral ones, forcing students to demonstrate what they actually know without the benefit of algorithmic ventriloquism. At the University of Wyoming, religious studies professor Catherine Hartmann now seats students in her office and questions them directly, Socratic-style, with no digital intermediaries to run interference. Her rationale is blunt and bracing. Using AI on exams, she tells students, is like bringing a forklift to the gym when your goal is to build muscle. “The classroom is a gymnasium,” she explains. “I am your personal trainer. I want you to lift the weights.” Hartmann is not being punitive; she is being realistic about human psychology. Given a way to cheat ourselves out of effort—or out of a meaningful life—we will take it, not because we are corrupt, but because we are wired to conserve energy. That instinct once helped us survive. Now it quietly betrays us. A cheated education becomes a squandered one, and a squandered life does not merely stagnate; it decays. This is how Gollumification begins: not with villainy, but with avoidance.

    I agree entirely with Hartmann’s impulse, even if my method would differ. I would require students to make a fifteen-minute YouTube video in which they deliver their argument as a formal speech. I know from experience that translating a written argument into an oral one exposes every hollow sentence and every borrowed idea. The mind has nowhere to hide when it must speak coherently, in sequence, under the pressure of time and presence. Oral essays force students to metabolize their thinking instead of laundering it through a machine. They are a way of banning forklifts from the gym—not out of nostalgia, but out of respect for the human organism. If education is meant to strengthen rather than simulate intelligence, then forcing students to lift their own cognitive weight is not cruelty. It is preventive medicine against the slow, tragic, and all-too-modern disease of Gollumification.

  • Hyper-Efficiency Intoxication Will Change Higher Learning Forever

    Hyper-Efficiency Intoxication

    noun

    The dopamine-laced rush that occurs when AI collapses hours of cognitive labor into seconds, training the brain to mistake speed for intelligence and output for understanding. Hyper-Efficiency Intoxication sets in when the immediate relief of reclaimed time—skipped readings, instant summaries, frictionless drafts—feels so rewarding that slow thinking begins to register as needless suffering. What hooks the user is not insight but velocity: the sense of winning back life from effort itself. Over time, this chemical high reshapes judgment, making sustained attention feel punitive, depth feel inefficient, and authorship feel optional. Under its influence, students do not stop working; they subtly downgrade their role—from thinker to coordinator, from writer to project manager—until thinking itself fades into oversight. Hyper-Efficiency Intoxication does not announce itself as decline; it arrives disguised as optimization, quietly hollowing out the very capacities education once existed to build.

    ***

    No sane college instructor assigns an essay anymore under the illusion that you’ll heroically wrestle with ideas while AI politely waits in the hallway. We all know what happens: a prompt goes in, a glossy corpse comes out. The charade has become so blatant that even professors who once treated AI like a passing fad are now rubbing their eyes and admitting the obvious. Hua Hsu names the moment plainly in his essay “What Happens After A.I. Destroys College Writing?”: the traditional take-home essay is circling the drain, and higher education is being forced to explain—perhaps for the first time in decades—what it’s actually for.

    The problem isn’t that students are morally bankrupt. It’s that they’re brutally rational. The real difference between “doing the assignment” and “using AI” isn’t ethics; it’s time. Time is the most honest currency in your life. Ten hours grinding through a biography means ten hours you’re not at a party, a game, a date, or a job. Ten minutes with an AI summary buys you your evening back. Faced with that math, almost everyone chooses the shortcut—not because they’re dishonest, but because they live in the real world. This isn’t cheating; it’s survival economics.

    Then there’s the arms race. Your classmates are using AI. All of them. Competing against them without AI is like entering a bodybuilding contest while everyone else is juiced to the gills and you’re proudly “all natural.” You won’t be virtuous; you’ll be humiliated. Fairness collapses the moment one side upgrades, and pretending otherwise is naïve at best.

    AI also hooks you. Hsu admits that after a few uses of ChatGPT, he felt the “intoxication of hyper-efficiency.” That’s not a metaphor—it’s a chemical event. When a machine collapses hours of effort into seconds, your brain lights up like it just won a small lottery. The rush isn’t insight; it’s velocity. And once you’ve tasted that speed, slowness starts to feel like punishment.

    Writing instructors, finally awake, are adapting. Take-home essays are being replaced by in-class writing, blue books, and passage identification exams—formats designed to drag thinking back into the room and away from the cloud. These methods reward students who’ve spent years reading and writing the hard way. But for students who entered high school in 2022 or later—students raised on AI scaffolding—this shift feels like being dropped into deep water without a life vest. Many respond rationally: they avoid instructors who demand in-class thinking.

    Over time, something subtle happens. You don’t stop working; you change roles. You become, in Hsu’s phrase, a project manager—someone who coordinates machines rather than generating ideas. You collaborate, prompt, tweak, and oversee. And at some point, no one—not you, not your professor—can say precisely when the thinking stopped being yours. There is no clean border crossing, only a gradual fade.

    Institutions are paralyzed by this reality. Do they accept the transformation and train students to be elite project managers of knowledge? Or do they try to resurrect an older model of literacy, pretending that time, incentives, and technology haven’t changed? Neither option is comfortable, and both expose how fragile the old justifications for college have become.

    From the educator’s chair, the nightmare scenario is obvious. If AI can train competent project managers for coding, nursing, physical therapy, or business, why not skip college altogether? Why not certify skills directly? Why not let employers handle training in-house? It would be faster, cheaper, and brutally efficient.

    And efficiency always wins. When speed, convenience, and cost savings line up, they don’t politely coexist with tradition—they bulldoze it. AI doesn’t argue with the old vision of education. It replaces it. The question is no longer whether college will change, but whether it can explain why learning should be slower, harder, and less efficient than the machines insist it needs to be.

  • Humanification

    It is not my job to indoctrinate you into a political party, a philosophical sect, or a religious creed. I am not here to recruit. But it is my job to indoctrinate you about something—namely, how to think, why thinking matters, and what happens when you decide it doesn’t. I have an obligation to give you a language for understanding critical thinking and the dangers of surrendering it, a framework for recognizing the difference between a meaningful life and a comfortable one, and the warning signs that appear when convenience, short-term gratification, and ego begin quietly eating away at the soul. Some of you believe life is a high-stakes struggle over who you become. Others suspect the stakes are lower. A few—regrettably—flirt with nihilism and conclude there are no stakes at all. But whether you dramatize it or dismiss it, the “battle of the soul” is unavoidable. I teach it because I am not a vocational trainer turning you into a product. I am a teacher in the full, unfashionable sense of the word—even if many would prefer I weren’t.

    This battle became impossible to ignore when I returned to the classroom after the pandemic and met ChatGPT. On one side stood Ozempification: the seductive shortcut. It promises results without struggle, achievement without formation, output without growth. Why wrestle with ideas when a machine can spit out something passable in seconds? It’s academic fast food—calorie-dense, spiritually empty, and aggressively marketed. Excellence becomes optional. Effort becomes suspicious. Netflix beckons. On the other side stood Humanification: the old, brutal path that Frederick Douglass knew by heart. Literacy as liberation. Difficulty as transformation. Meaning earned the hard way. Cal Newport calls it deep work. Jordan Peele gives it a name—the escape from the Sunken Place. Humanification doesn’t chase comfort; it chases depth. The reward isn’t ease. It’s becoming someone.

    Tyler Austin Harper’s essay “ChatGPT Doesn’t Have to Ruin College” captures this split perfectly. Wandering Haverford’s manicured campus, he encounters English majors who treat ChatGPT not as a convenience but as a moral hazard. They recoil from it. “I prefer not to,” Bartleby-style. Their refusal is not naïveté; it’s identity. Writing, for them, is not a means to a credential but an act of fidelity—to language, to craft, to selfhood. But Harper doesn’t let this romanticism off the hook. He reminds us, sharply, that honor and curiosity are not evenly distributed virtues. They are nurtured—or crushed—by circumstance.

    That line stopped me cold. Was I guilty of preaching Humanification without acknowledging its price tag? Douglass pursued literacy under threat of death, but he is a hero precisely because he is rare. We cannot build an educational system that assumes heroic resistance as the norm. Especially not when the very architects of our digital dystopia send their own children to screen-free Waldorf schools, where cursive handwriting and root vegetables are treated like endangered species. The tech elite protect their children from the technologies they profit from. Everyone else gets dopamine.

    I often tell students this uncomfortable truth: it is easier to be an intellectual if you are rich. Wealth buys time, safety, and the freedom to fail beautifully. You can disappear to a cabin, read Dostoevsky, learn Schubert, and return enlightened. Most students don’t have that option. Harper is right—institutions like Haverford make Humanification easier. Small classes. Ample support. Unhurried faculty. But most students live elsewhere. My wife teaches in public schools where buildings leak, teachers sleep in cars, and safety is not guaranteed. Asking students in survival mode to honor an abstract code of intellectual purity borders on insult.

    Maslow understood this long ago. Self-actualization comes after food, shelter, and security. It’s hard to care about literary integrity when you’re exhausted, underpaid, and anxious. Which is why the Ozempic analogy matters. Just as expensive GLP-1 drugs make discipline easier for some bodies, elite educational environments make intellectual virtue easier for some minds. Character still matters—but it is never the whole story.

    Harper complicates things further by comparing Haverford to Stanford. At Stanford, honor codes collapse under scale; proctoring becomes necessary. Intimacy, not virtue alone, sustains integrity. Haverford begins to look less like a model and more like a museum—beautiful, instructive, and increasingly inaccessible. The humanities survive there behind velvet ropes.

    I teach at a community college. My students are training for nursing, engineering, business. They work multiple jobs. They sleep six hours if they’re lucky. They don’t have the luxury to marinate in ideas. Humanification gets respectful nods in class discussions, but Ozempification pays the rent. And pretending otherwise helps no one.

    This is the reckoning. We cannot shame students for using AI when AI is triage, not indulgence. But we also cannot pretend that a life optimized for convenience leads anywhere worth going. The challenge ahead is not to canonize the Humanified or condemn the Ozempified. It is to build an educational culture where aspiration is not a luxury good—where depth is possible without privilege, and where using AI does not require selling your soul for efficiency.

    That is the real battle. And it’s one we can’t afford to fight dishonestly.