Tag: education

  • How to Resist Academic Nihilism

    Academic Nihilism and Academic Rejuvenation

    Academic Nihilism names the moment when college instructors recognize—often with a sinking feeling—that the conditions students need to thrive are perfectly misaligned with the conditions they actually inhabit. Students need solitude, friction, deep reading and writing, and the slow burn of intellectual curiosity. What they get instead is a reward system that celebrates the surrender of agency to AI machines; peer pressure to eliminate effort; and a hypercompetitive, zero-sum academic culture where survival matters more than understanding. Time scarcity all but forces students to offload thinking to tools that generate pages while quietly draining cognitive stamina. Add years of screen-saturated distraction and a near-total deprivation of deep reading during formative stages, and you end up with students who lack the literacy baseline to engage meaningfully with writing prompts—or even to use AI well. When instructors capitulate to this reality, they cease being teachers in any meaningful sense. They become functionaries who comply with institutional “AI literacy policies,” which increasingly translate to a white-flag admission: we give up. Students submit AI-generated work; instructors “assess” it with AI tools; and the loop closes in a fog of futility. The emptiness of the exchange doesn’t resolve Academic Nihilism—it seals it shut.

    The only alternative is resistance—something closer to Academic Rejuvenation. That resistance begins with a deliberate reintroduction of friction. Instructors must design moments that demand full human presence: oral presentations, performances, and live writing tasks that deny students the luxury of hiding behind a machine. Solitude must be treated as a scarce but essential resource, to be rationed intentionally—sometimes as little as a protected half-hour of in-class writing can feel revolutionary. Curiosity must be reawakened by tethering coursework to the human condition itself. And here the line is bright: if you believe life is a low-stakes, nihilistic affair summed up by a faded 1980s slogan—“Life’s a bitch; then you die”—you are probably in the wrong profession. But if you believe human lives can either wither into Gollumification or rise toward higher purpose, and you are willing to let that belief inform your teaching, then Academic Rejuvenation is still possible. Even in the age of AI machines.

  • Why College Writing Instructors Must Teach the Self-Interrogation Principle

    Self-Interrogation Principle

    noun

    The Self-Interrogation Principle holds that serious writing inevitably becomes a moral act because precise language exposes self-deception and forces individuals to confront their own motives, evasions, and contradictions. Rather than treating personal narrative as therapeutic indulgence or sentimental “enrichment,” this principle treats it as an instrument of clarity: when students articulate their behavior accurately, dysfunctional patterns lose their charm and become difficult to sustain. The aim is not confession for its own sake, nor a classroom turned talk show, but disciplined self-examination that collapses euphemism and replaces clever rationalization with honest reckoning. In this view, education cannot operate in a moral vacuum; teaching students how to think, argue, and write necessarily involves teaching them how to see themselves clearly. In the AI Age—when both cognitive labor and moral discomfort can be outsourced—the Self-Interrogation Principle insists that growth requires personal presence, linguistic precision, and the courage to endure what one discovers once illusion gives way to understanding.

    ***

    Thirty years ago, I assigned what now feels like a reckless little time bomb: a five-page extended definition essay on the term passive-aggressive. Students had to begin with a single, unsparing sentence—passive-aggressive behavior as an immature, cowardly, indirect way of expressing hostility—then unpack four or five defining traits and, finally, illustrate the concept with a personal chronicle. The goal was not linguistic finesse. It was exposure. I wanted students to stop admiring passive aggression as coy, clever, or emotionally sophisticated and see it instead for what it is: dysfunction with good PR.

    One essay has stayed with me for three decades. It came from a stunning nineteen-year-old who could have acquired a respectable boyfriend as easily as most people order coffee. Instead, she chose the town slob. He was twenty-six, unemployed by conviction, and committed to the craft of professional bumming. He was proudly unwashed, insufferably pungent, and permanently horizontal. He spent his days in her parents’ living room—drinking her father’s favorite beer, eating his snacks, parking himself in his favorite chair, and monopolizing the television like a hostile takeover. He belched. He cackled. He stank. And all the while, his girlfriend watched with satisfaction as her father’s misery fermented. She resented her father—another strong-willed soul who refused to bend—and rather than confront him directly, she opted for a scorched-earth tactic: ruin her own romantic prospects to punish him. Cutting off her nose to spite her face, weaponized.

    I remember her sitting across from me in my office as I read the essay, half-imagining it as a dark sitcom pilot. But there was nothing cute about it. When we talked, she told me that writing the essay forced her to see the ugliness of what she was doing with unbearable clarity. The realization filled her with such self-disgust that she ejected the boyfriend from her parents’ house and attempted, awkwardly but honestly, to confront her father directly. The assignment did two things no rubric could measure. It made her interrogate her own character, and it precipitated a real, irreversible change in her life.

    Thirty years later, I’m still unsure what to make of that. I’m gratified, of course—but uneasy. Is it my job to turn a writing class into a daytime talk show, where students inventory their neuroses and emerge “healed”? Is moral reckoning an accidental side effect of good pedagogy, or an unavoidable one?

    My answer, uncomfortable though it may be, is that a writing class cannot exist in a moral vacuum. Character matters. The courage to examine one’s own failures matters. Writing things down with enough precision that self-deception collapses under its own weight matters. Whether I like it or not, I have to endorse what I now call the Self-Interrogation Principle. Students do not come to class as blank slates hungry only for skills. They arrive starved for moral clarity—about the world and about themselves. And when language sharpens perception, perception sometimes demands change.

    I’m reminded of a department meeting in the early nineties where faculty were arguing over the value of assigning personal narratives. One professor defended them by saying they led to “personal enrichment.” A colleague—an infamous alcoholic who sulked at meetings in his black leather jacket and appeared to be drunk at the table—exploded. “Personal enrichment? What the hell does that even mean?” he shouted as his spittle flew across the room. “Just another woeful cliché. Are you not ashamed?” The woman shrank into her chair, the meeting moved on, and the phrase personal enrichment was quietly banished. Today, in the AI Age, I will defend it without apology. That student’s essay was enriching in the only sense that matters: it helped a young adult grow up.

    I am not proposing that every assignment resemble an episode of Oprah. But one or two assignments that force honest self-examination have enormous value. They remind us that writing is not merely a transferable skill or a vocational tool. It is a means of moral reckoning. You cannot outsource that reckoning to a machine, and you cannot teach writing while pretending it doesn’t exist. If we are serious about education, we have to teach the Total Person—or admit we are doing something else entirely.

  • Why Student Learning Outcomes Should Be Replaced with Moral Learning Outcomes

    Moral Learning Outcomes

    noun

    Moral Learning Outcomes name a shift from evaluating what students produce to evaluating how they conduct themselves as thinkers in an age when cognition can be cheaply outsourced. Rather than measuring surface competencies—polished arguments, tidy paragraphs, or competent source integration—Moral Learning Outcomes assess intellectual integrity: the willingness to seek truth rather than confirmation, to engage opposing views fairly, to revise or abandon a thesis when evidence demands it, and to tolerate complexity instead of retreating into binary claims. These outcomes privilege forms of engagement AI cannot convincingly fake—oral defense, personal narrative anchored in lived experience, and transparent decision-making—because they require the full presence of the Total Person. In this framework, writing is not merely a technical skill but a moral practice, and education succeeds not when students sound intelligent, but when they demonstrate judgment, accountability, and the courage to think without hiding behind a machine.

    ***

    My college writing courses come packaged, as at all respectable institutions, with a list of Student Learning Outcomes—the official criteria by which I grade essays and assign final marks. They vary slightly from class to class, but the core remains familiar: sustain a thoughtful argument over an entire essay; engage counterarguments and rebuttals to achieve intellectual rigor; integrate multiple sources to arrive at an informed position; demonstrate logical paragraph structure and competent sentences. In the Pre-AI Age, these outcomes made sense. They assumed that if a student produced an essay exhibiting these traits, the student had actually performed the thinking. In the AI Age, that assumption is no longer defensible. We now have to proceed from the opposite premise: that many students are outsourcing those cognitive tasks to a machine that can simulate rigor without ever practicing it.

    If that is true—and it is—then the outcomes themselves must change. To test thinking, we have to demand what AI cannot plausibly supply. This is why I recommend an oral presentation of the essay, not read aloud like a hostage statement, but delivered as a fifteen-minute speech supported by a one-page outline. AI can generate arguments; it cannot stand in a room, hold an audience, respond to presence, and make a persuasive case grounded in credibility (ethos), logic (logos), and shared human feeling (pathos). A speech requires the full human organism. Outsourcing collapses under that weight.

    The written essay, meanwhile, is scaffolded in pieces—what I call building blocks—each requiring personal narrative or reflection that must connect explicitly to the argument’s theme. If the class is writing about weight management and free will in the GLP-1 age, students write a 400-word narrative about a real struggle with weight—their own or someone close to them—and link that experience to the larger claim. If they are debating whether Frederick Douglass was “self-made,” they reflect on someone they know whose success can be read in two conflicting ways: rugged individualism on one hand, communal support on the other. If they are arguing about whether social media leads to “stupidification,” they must profile someone they know whose online life either deepened their intelligence or turned them into a dopamine-soaked attention addict. These are not confessional stunts. They are cognitive anchors.

    It would be naïve to call these assignments AI-proof. At best, they are AI-resistant. But more importantly, the work required to transform those narratives into a coherent essay and then into a live oral defense demands a level of engagement that can be measured reliably. When students stand up and defend their arguments—grounded in lived experience, research, and reflection—they are participating in education as Total Persons, not as prompt engineers.

    The Total Person is not a mystical ideal. It is someone who reads widely enough to form an informed view, and who arrives at a thesis through trial, error, and revision rather than starting with a conclusion and cherry-picking evidence to flatter it. That process requires something many instructors hesitate to name: moral integrity. Truth-seeking is not a neutral skill. It is a moral stance in a culture that rewards confirmation, outrage, and self-congratulation. Writing instructors are misfits precisely because we insist that counterarguments matter, that rebuttals must be fair, and that changing one’s mind in the face of evidence is not weakness but discipline.

    Which is why, in the AI Age, it makes sense to demote Student Learning Outcomes and elevate Moral Learning Outcomes instead. Did the student explore both sides of an argument with equal seriousness? Were they willing to defend a thesis—and just as willing to abandon it when the evidence demanded? Did they resist black-and-white thinking in favor of complication and nuance? Could they stand before an audience, fully present, and deliver an argument that integrated ethos, logos, and pathos without hiding behind a machine?

    AI has forced instructors to confront what we have been doing all along. Assigning work that can be painlessly outsourced is a pedagogical failure. Developing the Total Person is not. And doing so requires admitting an uncomfortable truth: you cannot teach credible argumentation without teaching moral integrity. The two have always been inseparable. AI has simply made that fact impossible to ignore.

  • The New Role of the College Instructor: Disruption Interpreter

    Disruption Interpreter

    noun

    A Disruption Interpreter is a teacher who does not pretend the AI storm will pass quickly, nor claim to possess a laminated map out of the wreckage. Instead, this instructor helps students read the weather. A Disruption Interpreter names what is happening, explains why it feels destabilizing, and teaches students how to think inside systems that no longer reward certainty or obedience. In the age of AI, this role replaces the old fantasy of professorial authority with something more durable: interpretive judgment under pressure. The Disruption Interpreter does not sell reassurance. He sells literacy in chaos.

    ***

    In his essay “The World Still Hasn’t Made Sense of ChatGPT,” Charlie Warzel describes OpenAI as a “chaos machine,” and the phrase lands because it captures the feeling precisely. These systems are still young, still mutating, constantly retraining themselves to score higher on benchmarks, sound more fluent, and edge out competitors like Gemini. They are not stabilizing forces; they are accelerants. The result is not progress so much as disruption.

    That disruption is palpable on college campuses. Faculty and administrators are not merely unsure about policy; they are unsure about identity. What is a teacher now? What is an exam? What is learning when language itself can be summoned instantly, convincingly, and without understanding? Lurking beneath those questions is a darker one: is the institution itself becoming an endangered species, headed quietly toward white-rhino status?

    Warzel has written that one of AI’s enduring impacts is to make people feel as if they are losing their grip, confronted with what he calls a “paradigm-shifting, society-remaking superintelligence.” That feeling of disorientation is not a side effect; it is the main event. We now live in the Age of Precariousness—a world perpetually waiting for a shoe to drop. Students have no clear sense of what to study when career paths evaporate mid-degree. Older generations watch familiar structures dissolve and struggle to recognize the world they helped build. Even the economy feels suspended between extremes. Will the AI bubble burst and drag markets down with it? Or will it continue inflating the NASDAQ while hollowing out everything beneath it?

    Amid this turbulence, Warzel reminds us of something both obvious and unsettling: technology has never really been about usefulness. It has been about selling transformation. A toothbrush is useful, but it will never dominate markets or colonize minds. Build something, however, that makes professors wonder if they will still have jobs, persuades millions to confide in chatbots instead of therapists, hijacks attention, rearranges spreadsheets, and rewires expectations—and you are no longer making a tool. You are remaking reality.

    In a moment where disruption matters more than solutions, college instructors cannot credibly wear the old costume of authority and claim to know where this all ends. We do not have a clean exit strategy or a proven syllabus that leads safely out of the jungle. We are more like Special Ops units cut off from command, scavenging parts, building and dismantling experimental aircraft while under fire, hoping the thing flies before it catches fire. Students are not passengers on this flight; they are co-builders. This is why the role of the Disruption Interpreter matters. It names the condition honestly. It helps students translate chaos machines into intelligible frameworks without pretending the risks are smaller than they are or the answers more settled than they feel.

    In a college writing class, this shift has immediate consequences. A Disruption Interpreter redesigns the course around friction, transparency, and judgment rather than polished output. Assignments that reward surface-level fluency are replaced with ones that expose thinking: oral defenses, annotated drafts, revision histories, in-class writing. These structures make it difficult to silently outsource cognition to AI without consequence. The instructor also teaches students how AI functions rhetorically, treating large language models not as neutral helpers but as persuasive systems that generate plausible language without understanding. Students must analyze and revise AI-generated prose, learning to spot its evasions, its false confidence, and its tendency to sound authoritative while saying very little.

    Most importantly, evaluation itself is recalibrated. Correctness becomes secondary to agency. Students are graded on the quality of their decisions: what they chose to argue, what they rejected, what they revised, and why. Writing becomes less about producing clean text and more about demonstrating authorship in an age where text is cheap and judgment is scarce. One concrete example is the Decision Rationale Portfolio. Alongside an argumentative essay, students submit a short dossier documenting five deliberate choices: a claim abandoned after research, a source rejected and justified, a paragraph cut or reworked, a moment when they overruled an AI suggestion, and a risk that made the essay less safe but more honest. A mechanically polished essay paired with thin reasoning earns less credit than a rougher piece supported by clear, defensible decisions. The grade reflects discernment, not sheen.

    The Disruption Interpreter does not rescue students from uncertainty; he teaches them how to function inside it. In an era defined by chaos machines, precarious futures, and seductive shortcuts, the task of education is no longer to transmit stable knowledge but to cultivate judgment under unstable conditions. Writing classes, reimagined this way, become training grounds for intellectual agency rather than production lines for compliant prose. AI can assist with language, speed, and simulation, but it cannot supply discernment. That remains stubbornly human. The Disruption Interpreter’s job is to make that fact unavoidable, visible, and finally—inescapable.

  • Transactional Transformation Fallacy

    Transactional Transformation Fallacy

    noun

    The Transactional Transformation Fallacy is the belief that personal change can be purchased rather than practiced. It treats growth as a commercial exchange: pay the fee, swipe the card, enroll in the program, and improvement will arrive as a deliverable. Effort becomes optional, discipline a quaint accessory. In this logic, money substitutes for resolve, proximity replaces participation, and the hard interior work of becoming someone else is quietly delegated to a service provider. It is a comforting fantasy, and a profitable one, because it promises results without inconvenience.

    ***

    I once had a student who worked as a personal trainer. She earned decent money, but she disliked the job for reasons that had nothing to do with exercise science and everything to do with human nature. Her clients were not untrained so much as uncommitted. She gave them solid programs, explained the movements, laid out sensible menus, and checked in faithfully. Then she watched them vanish between sessions. They skipped workouts on non-training days. They treated nutrition guidelines as aspirational literature. They arrived at the gym exhaling whiskey and nicotine, their pores broadcasting last night’s bad decisions like a public service announcement. They paid her, showed up once or twice a week, and mistook attendance for effort. Many were lonely. Others liked telling friends they “had a trainer,” as if that phrase itself conferred seriousness, discipline, or physical virtue. They believed that money applied to a problem was the same thing as resolve applied to a life.

    The analogy to college is unavoidable. If a student enters higher education with the same mindset—pay tuition, outsource thinking to AI, submit algorithmically polished assignments, and expect to emerge transformed—they are operating squarely within the Transactional Transformation Fallacy. They imagine education as a vending machine: insert payment, press degree, receive wisdom. Like the Scarecrow awaiting his brain from the Wizard of Oz, they expect character and intelligence to be bestowed rather than built. This fantasy has always haunted consumer culture, but AI supercharges it by making the illusion briefly convincing. The greatest challenge facing higher education in the years ahead will not be cheating per se, but this deeper delusion: the belief that knowledge, discipline, and selfhood can be bought wholesale, without friction, struggle, or sustained effort.

  • Gollumification

    Gollumification

    noun

    Gollumification names the slow moral and cognitive decay that occurs when a person repeatedly chooses convenience over effort and optimization over growth. It is what happens when tools designed to assist quietly replace the very capacities they were meant to strengthen. Like Tolkien’s Gollum, the subject does not collapse all at once; he withers incrementally, outsourcing judgment, agency, and struggle until what remains is a hunched creature guarding shortcuts and muttering justifications. Gollumification is not a story about evil intentions. It is a story about small evasions practiced daily until the self grows thin, brittle, and dependent.

    ***

    Washington Post writer Joanna Slater reports in “Professors Are Turning to This Old-School Method to Stop AI Use on Exams” that some instructors are abandoning written exams in favor of oral ones, forcing students to demonstrate what they actually know without the benefit of algorithmic ventriloquism. At the University of Wyoming, religious studies professor Catherine Hartmann now seats students in her office and questions them directly, Socratic-style, with no digital intermediaries to run interference. Her rationale is blunt and bracing. Using AI on exams, she tells students, is like bringing a forklift to the gym when your goal is to build muscle. “The classroom is a gymnasium,” she explains. “I am your personal trainer. I want you to lift the weights.” Hartmann is not being punitive; she is being realistic about human psychology. Given a way to cheat ourselves out of effort—or out of a meaningful life—we will take it, not because we are corrupt, but because we are wired to conserve energy. That instinct once helped us survive. Now it quietly betrays us. A cheated education becomes a squandered one, and a squandered life does not merely stagnate; it decays. This is how Gollumification begins: not with villainy, but with avoidance.

    I agree entirely with Hartmann’s impulse, even if my method would differ. I would require students to make a fifteen-minute YouTube video in which they deliver their argument as a formal speech. I know from experience that translating a written argument into an oral one exposes every hollow sentence and every borrowed idea. The mind has nowhere to hide when it must speak coherently, in sequence, under the pressure of time and presence. Oral essays force students to metabolize their thinking instead of laundering it through a machine. They are a way of banning forklifts from the gym—not out of nostalgia, but out of respect for the human organism. If education is meant to strengthen rather than simulate intelligence, then forcing students to lift their own cognitive weight is not cruelty. It is preventive medicine against the slow, tragic, and all-too-modern disease of Gollumification.

  • Hyper-Efficiency Intoxication Will Change Higher Learning Forever

    Hyper-Efficiency Intoxication

    noun

    The dopamine-laced rush that occurs when AI collapses hours of cognitive labor into seconds, training the brain to mistake speed for intelligence and output for understanding. Hyper-Efficiency Intoxication sets in when the immediate relief of reclaimed time—skipped readings, instant summaries, frictionless drafts—feels so rewarding that slow thinking begins to register as needless suffering. What hooks the user is not insight but velocity: the sense of winning back life from effort itself. Over time, this chemical high reshapes judgment, making sustained attention feel punitive, depth feel inefficient, and authorship feel optional. Under its influence, students do not stop working; they subtly downgrade their role—from thinker to coordinator, from writer to project manager—until thinking itself fades into oversight. Hyper-Efficiency Intoxication does not announce itself as decline; it arrives disguised as optimization, quietly hollowing out the very capacities education once existed to build.

    ***

    No sane college instructor assigns an essay anymore under the illusion that you’ll heroically wrestle with ideas while AI politely waits in the hallway. We all know what happens: a prompt goes in, a glossy corpse comes out. The charade has become so blatant that even professors who once treated AI like a passing fad are now rubbing their eyes and admitting the obvious. Hua Hsu names the moment plainly in his essay “What Happens After A.I. Destroys College Writing?”: the traditional take-home essay is circling the drain, and higher education is being forced to explain—perhaps for the first time in decades—what it’s actually for.

    The problem isn’t that students are morally bankrupt. It’s that they’re brutally rational. The real difference between “doing the assignment” and “using AI” isn’t ethics; it’s time. Time is the most honest currency in your life. Ten hours grinding through a biography means ten hours you’re not at a party, a game, a date, or a job. Ten minutes with an AI summary buys you your evening back. Faced with that math, almost everyone chooses the shortcut—not because they’re dishonest, but because they live in the real world. This isn’t cheating; it’s survival economics.

    Then there’s the arms race. Your classmates are using AI. All of them. Competing against them without AI is like entering a bodybuilding contest while everyone else is juiced to the gills and you’re proudly “all natural.” You won’t be virtuous; you’ll be humiliated. Fairness collapses the moment one side upgrades, and pretending otherwise is naïve at best.

    AI also hooks you. Hsu admits that after a few uses of ChatGPT, he felt the “intoxication of hyper-efficiency.” That’s not a metaphor—it’s a chemical event. When a machine collapses hours of effort into seconds, your brain lights up like it just won a small lottery. The rush isn’t insight; it’s velocity. And once you’ve tasted that speed, slowness starts to feel like punishment.

    Writing instructors, finally awake, are adapting. Take-home essays are being replaced by in-class writing, blue books, and passage identification exams—formats designed to drag thinking back into the room and away from the cloud. These methods reward students who’ve spent years reading and writing the hard way. But for students who entered high school in 2022 or later—students raised on AI scaffolding—this shift feels like being dropped into deep water without a life vest. Many respond rationally: they avoid instructors who demand in-class thinking.

    Over time, something subtle happens. You don’t stop working; you change roles. You become, in Hsu’s phrase, a project manager—someone who coordinates machines rather than generating ideas. You collaborate, prompt, tweak, and oversee. And at some point, no one—not you, not your professor—can say precisely when the thinking stopped being yours. There is no clean border crossing, only a gradual fade.

    Institutions are paralyzed by this reality. Do they accept the transformation and train students to be elite project managers of knowledge? Or do they try to resurrect an older model of literacy, pretending that time, incentives, and technology haven’t changed? Neither option is comfortable, and both expose how fragile the old justifications for college have become.

    From the educator’s chair, the nightmare scenario is obvious. If AI can train competent project managers for coding, nursing, physical therapy, or business, why not skip college altogether? Why not certify skills directly? Why not let employers handle training in-house? It would be faster, cheaper, and brutally efficient.

    And efficiency always wins. When speed, convenience, and cost savings line up, they don’t politely coexist with tradition—they bulldoze it. AI doesn’t argue with the old vision of education. It replaces it. The question is no longer whether college will change, but whether it can explain why learning should be slower, harder, and less efficient than the machines insist it needs to be.

  • Academic Anhedonia: A Tale in 3 Parts

    Academic Anhedonia

    noun

    Academic Anhedonia is the condition in which students retain the ability to do school but lose the capacity to feel anything about it. Assignments are completed, boxes are checked, credentials are pursued, yet curiosity never lights up and satisfaction never arrives. Learning no longer produces pleasure, pride, or even frustration—just a flat neurological neutrality. These students aren’t rebellious or disengaged; they’re compliant and hollow, moving through coursework like factory testers pressing buttons to confirm the machine still turns on. Years of algorithmic overstimulation, pandemic detachment, and frictionless AI assistance have numbed the internal reward system that once made discovery feel electric. The result is a classroom full of quiet efficiency and emotional frost: cognition without appetite, performance without investment, education stripped of its pulse.

    ***

    I started teaching college writing in the 80s under the delusion that I was destined to be the David Letterman of higher education—a twenty-five-year-old ham with a chalkboard, half-professor and half–late-night stand-up. For a while, the act actually worked. A well-timed deadpan joke could mesmerize a room of eighteen-year-olds and soften their outrage when I saddled them with catastrophically ill-chosen books (Ron Rosenbaum’s Explaining Hitler—a misfire so spectacular it deserves its own apology tour). My stories carried the class, and for decades I thought the laughter was evidence of learning. If I could entertain them, I told myself, I could teach them.

    Then 2012 hit like a change in atmospheric pressure. Engagement thinned. Phones glowed. Students behaved as though they were starring in their own prestige drama, and my classroom was merely a poorly lit set. I was no longer battling boredom—I was competing with the algorithm. This was the era of screen-mediated youth, the 2010–2021 cohort raised on the oxygen of performance. Their identities were curated in Instagram grids, maintained through Snapstreaks, and measured in TikTok microfame points. The students were not apathetic; they were overstimulated. Their emotional bandwidth was spent on self-presentation, comparison loops, and the endless scoreboard of online life. They were exhausted but wired, longing for authenticity yet addicted to applause. I felt my own attention-capture lose potency, but I still recognized those students. They were distracted, yes, but still alive.

    But in 2025, we face a darker beast: the academically anhedonic student. The screen-mediated generation ran hot; this one runs cold. Around 2022, a new condition surfaced—a collapse of the internal reward system that makes learning feel good, or at least worthwhile. Years of over-curation, pandemic detachment, frictionless AI answers, and dopamine-dense apps hollowed out the very circuits that spark curiosity. This isn’t laziness; it’s a neurological shrug. These students can perform the motions—fill in a template, complete a scaffold, assemble an essay like a flat-pack bookshelf—but they move through the work like sleepwalkers. Their curiosity is muted. Their persistence is brittle. Their critical thinking arrives pre-flattened.

    My colleagues tell me their classrooms are filled with compliant but joyless learners checking boxes on their march toward a credential. The Before-Times students wrestled with ideas. The After-Times students drift through them without contact. It breaks our hearts because the contrast is stark: what was once noisy and performative has gone silent. Academic anhedonia names that silence—a crisis not of ability, but of feeling.

  • Humanification

    Humanification

    It is not my job to indoctrinate you into a political party, a philosophical sect, or a religious creed. I am not here to recruit. But it is my job to indoctrinate you about something—namely, how to think, why thinking matters, and what happens when you decide it doesn’t. I have an obligation to give you a language for understanding critical thinking and the dangers of surrendering it, a framework for recognizing the difference between a meaningful life and a comfortable one, and the warning signs that appear when convenience, short-term gratification, and ego begin quietly eating away at the soul. Some of you believe life is a high-stakes struggle over who you become. Others suspect the stakes are lower. A few—regrettably—flirt with nihilism and conclude there are no stakes at all. But whether you dramatize it or dismiss it, the “battle of the soul” is unavoidable. I teach it because I am not a vocational trainer turning you into a product. I am a teacher in the full, unfashionable sense of the word—even if many would prefer I weren’t.

    This battle became impossible to ignore when I returned to the classroom after the pandemic and met ChatGPT. On one side stood Ozempification: the seductive shortcut. It promises results without struggle, achievement without formation, output without growth. Why wrestle with ideas when a machine can spit out something passable in seconds? It’s academic fast food—calorie-dense, spiritually empty, and aggressively marketed. Excellence becomes optional. Effort becomes suspicious. Netflix beckons. On the other side stood Humanification: the old, brutal path that Frederick Douglass knew by heart. Literacy as liberation. Difficulty as transformation. Meaning earned the hard way. Cal Newport calls it deep work. Jordan Peele gives it a name—the escape from the Sunken Place. Humanification doesn’t chase comfort; it chases depth. The reward isn’t ease. It’s becoming someone.

    Tyler Austin Harper’s essay “ChatGPT Doesn’t Have to Ruin College” captures this split perfectly. Wandering Haverford’s manicured campus, he encounters English majors who treat ChatGPT not as a convenience but as a moral hazard. They recoil from it. “I prefer not to,” Bartleby-style. Their refusal is not naïveté; it’s identity. Writing, for them, is not a means to a credential but an act of fidelity—to language, to craft, to selfhood. But Harper doesn’t let this romanticism off the hook. He reminds us, sharply, that honor and curiosity are not evenly distributed virtues. They are nurtured—or crushed—by circumstance.

    That line stopped me cold. Was I guilty of preaching Humanification without acknowledging its price tag? Douglass pursued literacy under threat of death, but he is a hero precisely because he is rare. We cannot build an educational system that assumes heroic resistance as the norm. Especially not when the very architects of our digital dystopia send their own children to screen-free Waldorf schools, where cursive handwriting and root vegetables are treated like endangered species. The tech elite protect their children from the technologies they profit from. Everyone else gets dopamine.

    I often tell students this uncomfortable truth: it is easier to be an intellectual if you are rich. Wealth buys time, safety, and the freedom to fail beautifully. You can disappear to a cabin, read Dostoevsky, learn Schubert, and return enlightened. Most students don’t have that option. Harper is right—institutions like Haverford make Humanification easier. Small classes. Ample support. Unhurried faculty. But most students live elsewhere. My wife teaches in public schools where buildings leak, teachers sleep in cars, and safety is not guaranteed. Asking students in survival mode to honor an abstract code of intellectual purity borders on insult.

    Maslow understood this long ago. Self-actualization comes after food, shelter, and security. It’s hard to care about literary integrity when you’re exhausted, underpaid, and anxious. Which is why the Ozempic analogy matters. Just as expensive GLP-1 drugs make discipline easier for some bodies, elite educational environments make intellectual virtue easier for some minds. Character still matters—but it is never the whole story.

    Harper complicates things further by comparing Haverford to Stanford. At Stanford, honor codes collapse under scale; proctoring becomes necessary. Intimacy, not virtue alone, sustains integrity. Haverford begins to look less like a model and more like a museum—beautiful, instructive, and increasingly inaccessible. The humanities survive there behind velvet ropes.

    I teach at a community college. My students are training for nursing, engineering, business. They work multiple jobs. They sleep six hours if they’re lucky. They don’t have the luxury to marinate in ideas. Humanification gets respectful nods in class discussions, but Ozempification pays the rent. And pretending otherwise helps no one.

    This is the reckoning. We cannot shame students for using AI when AI is triage, not indulgence. But we also cannot pretend that a life optimized for convenience leads anywhere worth going. The challenge ahead is not to canonize the Humanified or condemn the Ozempified. It is to build an educational culture where aspiration is not a luxury good—where depth is possible without privilege, and where using AI does not require selling your soul for efficiency.

    That is the real battle. And it’s one we can’t afford to fight dishonestly.

  • Cognitive Thinning and Cognitive Load-Bearing Capacity

    Cognitive Thinning and Cognitive Load-Bearing Capacity

    In his bracing essay “Colleges Are Preparing to Self-Lobotomize,” Michael Clune accuses higher education of handling AI with the institutional equivalent of a drunk chainsaw. The subtitle gives away the indictment: the skills students will need in an age of automation are precisely those that are eroded by inserting AI into the educational process. Colleges, Clune argues, spent the first three years of generative AI staring at the floor. Now they’re overcorrecting—embedding AI everywhere as if saturation were the same thing as competence. It isn’t. It’s panic dressed up as innovation.

    The prevailing fantasy is that if AI is everywhere, mastery will seep into students by osmosis. But the opposite is happening. Colleges are training students to rely on frictionless services while quietly abandoning the capacities that make AI usable in any serious way: judgment, learning agility, and flexible analysis. The tools are getting smarter. The users are getting thinner.

    That thinning has a name. Cognitive Thinning is the gradual erosion of critical thinking that occurs when sustained mental effort is replaced by convenience. It sets in when institutions assume that constant exposure to powerful tools will produce competence, even as they dismantle the practices that build it. As AI grows more capable, students are asked to do less thinking, tolerate less uncertainty, and carry less intellectual weight. The result is a widening imbalance: smarter systems paired with slimmer minds—efficient, polished, and increasingly unable to move beyond the surface of what machines provide.

    Clune wants students to avoid this fate, but he faces a rhetorical problem. He keeps insisting on abstractions—critical thinking, intellectual flexibility, judgment—in a culture trained to distrust anything abstract. Telling a screen-saturated society to imagine thinking outside screens is like telling a fish to imagine life outside water. The first task isn’t instruction. It’s translation.

    The fish analogy holds. A fish is aquatic; water isn’t a preference—it’s a prison. A young person raised entirely on screens, prompts, and optimization tools treats that ecosystem as reality itself. Like the fish, they know only one environment. We can name this condition precisely. They are cognitively outsourced, trained to delegate thought as if it were healthy. They are algovorous, endlessly stimulated by systems that quietly erode attention and resilience. They are digitally obligate, unable to function without mediation. By definition, these orientations crowd out critical thinking. They produce people who function smoothly inside digital systems and falter everywhere else.

    Drop such a person into a college that recklessly embeds AI into every course in the name of being “future-proof,” and you don’t produce adaptability—you produce fragility. In some fields, this fragility is fatal. Clune cites a telling statistic: history majors now have roughly half the unemployment rate of recent computer science graduates. The implication is blunt. Liberal education builds range. Narrow technical training builds specialists who snap when the environment shifts. As the New York Times put it in a headline Clune references: “Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle.” AI is replacing coders. Life inside a tiny digital ecosystem does not prepare you for a world that mutates.

    Is AI the cause of this dysfunction? No. The damage predates ChatGPT. I use AI constantly—and enjoy it. It sharpens my curiosity. It helps me test ideas. It makes me smarter because I am not trapped inside it. I have a life beyond screens. I’ve read thousands of books. I can zoom in and out—trees and forest—without panic. I have language for my inner life, which means I can catch myself when I become maudlin, entropic, dissolute, misanthropic, lugubrious, or vainglorious. I have history, philosophy, and religion as reference points. We call this bundle “critical thinking,” but what it really amounts to is being fully human.

    Someone who has outsourced thought and imagination since childhood cannot suddenly use AI well. They aren’t liberated. They’re brittle—dependent, narrow, and easily replaced.

    Because I’m a lifelong weightlifter, let me be concrete. AI is a massive, state-of-the-art gym: barbells, dumbbells, Smith machines, hack squats, leg presses, lat pulldowns, pec decks, cable rows—the works. Now imagine you’ve never trained. You’re twenty-eight, inspired by Instagram physiques, vaguely determined to “get in shape.” You walk into this cathedral of iron with no plan, no understanding of recovery, nutrition, progressive overload, or discipline. You’re surrounded by equipment—and completely lost. Within a month, you quit. You join the annual migration of January optimists who vanish by February, leaving the gym to the regulars.

    AI is that gym. It doesn’t eject users out of malice. It ejects them because it demands capacities they never built. Some people learn isolated tricks—prompting here, automating there—but only the way someone learns to push a toaster lever. When these tasks define a person, the result is a Non-Player Character (NPC): reactive, scripted, interchangeable.

    Students already understand what an NPC is. That’s why they fear becoming one.

    If colleges embed AI everywhere without building the human capacities required to use it, they aren’t educating thinkers. They’re manufacturing NPCs—and they deserve to be called out for it.

    Don’t wait for your institution to save you. Approach education the way you’d approach a gym. Learn how bodies actually grow before touching the weights. Know the muscle groups. Respect recovery. Understand volume, exhaustion, and nutrition. Do the homework so the gym doesn’t spit you out.

    The same rule applies to AI. To use it well, you need a specific kind of mental strength: Cognitive Load-Bearing Capacity. This is the ability to use AI without surrendering your thinking. You can see it in ordinary behaviors: reading before summarizing, drafting before prompting, distrusting answers that sound too smooth, and revising because an idea is weak—not because a machine suggested a synonym. It’s the capacity to sit with confusion, compare sources, and arrive at judgment rather than outsource it.

    This capacity isn’t innate, and it isn’t fast. It’s built through resistance: sustained reading, outlining by hand, struggling with unfamiliar ideas, revising after failure. Students with cognitive load-bearing capacity use AI to pressure-test their thinking. Students without it use AI to replace thinking. One group grows stronger and more adaptable. The other becomes dependent—and replaceable.

    Think of AI like a piano. You can sit down and bang out notes immediately, but you won’t produce music. Beautiful playing requires trained fingers, disciplined ears, and years of wrong notes. AI works the same way. Without cognitive load-bearing capacity, you get noise—technically correct, emotionally dead. With it, the tool becomes expressive. The difference isn’t the instrument. It’s the musician.

    If you want to build this capacity, forget grand reforms. Choose consistent resistance. Read an hour a day with no tabs open. Write before prompting. Ask AI to attack your argument instead of finishing it. Keep a notebook where you explain ideas in your own words, badly at first. Sit with difficulty instead of dodging it. These habits feel inefficient—and that’s the point. They’re the mental equivalent of scales and drills. Over time, they give you the strength to use powerful tools without being used by them.