Category: culture

  • Transactional Transformation Fallacy

    noun

    The Transactional Transformation Fallacy is the belief that personal change can be purchased rather than practiced. It treats growth as a commercial exchange: pay the fee, swipe the card, enroll in the program, and improvement will arrive as a deliverable. Effort becomes optional, discipline a quaint accessory. In this logic, money substitutes for resolve, proximity replaces participation, and the hard interior work of becoming someone else is quietly delegated to a service provider. It is a comforting fantasy, and a profitable one, because it promises results without inconvenience.

    ***

    I once had a student who worked as a personal trainer. She earned decent money, but she disliked the job for reasons that had nothing to do with exercise science and everything to do with human nature. Her clients were not untrained so much as uncommitted. She gave them solid programs, explained the movements, laid out sensible menus, and checked in faithfully. Then she watched them vanish between sessions. They skipped workouts on non-training days. They treated nutrition guidelines as aspirational literature. They arrived at the gym exhaling whiskey and nicotine, their pores broadcasting last night’s bad decisions like a public service announcement. They paid her, showed up once or twice a week, and mistook attendance for effort. Many were lonely. Others liked telling friends they “had a trainer,” as if that phrase itself conferred seriousness, discipline, or physical virtue. They believed that money applied to a problem was the same thing as resolve applied to a life.

    The analogy to college is unavoidable. If a student enters higher education with the same mindset—pay tuition, outsource thinking to AI, submit algorithmically polished assignments, and expect to emerge transformed—they are operating squarely within the Transactional Transformation Fallacy. They imagine education as a vending machine: insert payment, press degree, receive wisdom. Like the Scarecrow awaiting his brain from the Wizard of Oz, they expect character and intelligence to be bestowed rather than built. This fantasy has always haunted consumer culture, but AI supercharges it by making the illusion briefly convincing. The greatest challenge facing higher education in the years ahead will not be cheating per se, but this deeper delusion: the belief that knowledge, discipline, and selfhood can be bought wholesale, without friction, struggle, or sustained effort.

  • Academic Anhedonia: A Tale in Three Parts

    Academic Anhedonia

    noun

    Academic Anhedonia is the condition in which students retain the ability to do school but lose the capacity to feel anything about it. Assignments are completed, boxes are checked, credentials are pursued, yet curiosity never lights up and satisfaction never arrives. Learning no longer produces pleasure, pride, or even frustration—just a flat neurological neutrality. These students aren’t rebellious or disengaged; they’re compliant and hollow, moving through coursework like factory testers pressing buttons to confirm the machine still turns on. Years of algorithmic overstimulation, pandemic detachment, and frictionless AI assistance have numbed the internal reward system that once made discovery feel electric. The result is a classroom full of quiet efficiency and emotional frost: cognition without appetite, performance without investment, education stripped of its pulse.

    ***

    I started teaching college writing in the 80s under the delusion that I was destined to be the David Letterman of higher education—a twenty-five-year-old ham with a chalkboard, half-professor and half–late-night stand-up. For a while, the act actually worked. A well-timed deadpan joke could mesmerize a room of eighteen-year-olds and soften their outrage when I saddled them with catastrophically ill-chosen books (Ron Rosenbaum’s Explaining Hitler—a misfire so spectacular it deserves its own apology tour). My stories carried the class, and for decades I thought the laughter was evidence of learning. If I could entertain them, I told myself, I could teach them.

    Then 2012 hit like a change in atmospheric pressure. Engagement thinned. Phones glowed. Students behaved as though they were starring in their own prestige drama, and my classroom was merely a poorly lit set. I was no longer battling boredom—I was competing with the algorithm. This was the era of screen-mediated youth, the 2010–2021 cohort raised on the oxygen of performance. Their identities were curated in Instagram grids, maintained through Snapstreaks, and measured in TikTok microfame points. The students were not apathetic; they were overstimulated. Their emotional bandwidth was spent on self-presentation, comparison loops, and the endless scoreboard of online life. They were exhausted but wired, longing for authenticity yet addicted to applause. I felt my own powers of attention-capture losing potency, but I still recognized those students. They were distracted, yes, but still alive.

    But in 2025, we face a darker beast: the academically anhedonic student. The screen-mediated generation ran hot; this one runs cold. Around 2022, a new condition surfaced—a collapse of the internal reward system that makes learning feel good, or at least worthwhile. Years of over-curation, pandemic detachment, frictionless AI answers, and dopamine-dense apps hollowed out the very circuits that spark curiosity. This isn’t laziness; it’s a neurological shrug. These students can perform the motions—fill in a template, complete a scaffold, assemble an essay like a flat-pack bookshelf—but they move through the work like sleepwalkers. Their curiosity is muted. Their persistence is brittle. Their critical thinking arrives pre-flattened. 

    My colleagues tell me their classrooms are filled with compliant but joyless learners checking boxes on their march toward a credential. The Before-Times students wrestled with ideas. The After-Times students drift through them without contact. It breaks our hearts because the contrast is stark: what was once noisy and performative has gone silent. Academic anhedonia names that silence—a crisis not of ability, but of feeling.

  • Good-Enoughers

    In the fall of 2023, I was standing in front of thirty bleary-eyed college students, halfway through a lesson on how to spot a ChatGPT essay—mainly by its fondness for lifeless phrases that sound like they were scraped from a malfunctioning inspirational calendar. That’s when a business major raised his hand with the calm confidence of someone revealing a trade secret and said, “I can guarantee you everyone on this campus uses ChatGPT. We don’t submit it raw. We tweak a few sentences, paraphrase a little, and boom—no one can tell.”

    Before I could respond, a computer science student piled on. “It’s not just for essays,” he said. “It’s my life coach. I ask it about everything—career moves, crypto, even dating.” Dating advice. From ChatGPT. Somewhere, right now, a romance is unfolding on AI-generated pillow talk and a bullet-pointed list of conversation starters.

    That was the moment I realized I was staring at the biggest educational rupture of my thirty-year career. Tools like ChatGPT have three superpowers: obscene convenience, instant availability, and blistering speed. In a world where time is money and most writing does not need to summon the ghost of James Baldwin, AI is already good enough for about 95 percent of professional communication. And there it is—the phrase that should make educators break out in hives: good enough.

    “Good enough” is convenience’s love language. Imagine waking up groggy and choosing between two breakfasts. Option one is a premade smoothie: beige, foamy, nutritionally ambiguous, and available immediately. Option two is a transcendent, handcrafted masterpiece—organic fruit, thick Greek yogurt, chia seeds, almond milk—but to get it you must battle orb spiders in your backyard, dodge your neighbor’s possessed Belgian dachshund, and then spend quality time scrubbing a Vitamix before fighting traffic. Which one do most people choose?

    Exactly. The premade sludge. Because who has time for spider diplomacy and blender maintenance before a commute? Convenience wins, quality loses, and you console yourself with the time you saved. Eventually, you stop missing the better option altogether. That slow adjustment—lowering your standards until mediocrity feels normal—is attenuation.

    Now swap smoothies for writing. Writing is far harder than breakfast, and millions of people are quietly recalibrating their expectations. Why labor over sentences when the world will happily accept algorithmic mush? Polished prose is becoming the artisanal smoothie of communication: admirable, expensive, and increasingly optional. AI delivers something passable in seconds, and passable is the new benchmark.

    For educators, this is not a quirky inconvenience. It’s a five-alarm fire. I did not enter this profession to train students to become connoisseurs of adequacy. I wanted to cultivate thinkers, stylists, arguers—people whose sentences had backbone and intent. Instead, I find myself in a dystopia where “good enough” is the new gospel and I’m preaching craft like a monk selling calligraphy at a tech startup demo day.

    In medicine, the Hippocratic Oath boils down to “Do no harm.” In teaching, the unspoken oath is blunter and less forgiving: never train your students to become Good-Enoughers—those half-awake intellectual zombies who mistake adequacy for achievement and turn mediocrity into a permanent way of life.

    Whatever role AI plays in my classroom, one line is nonnegotiable. The moment I use it to help students settle for less—to speed them toward adequacy instead of depth—I’m no longer teaching. I’m committing educational malpractice.

  • Cognitive Thinning and Cognitive Load-Bearing Capacity

    In his bracing essay “Colleges Are Preparing to Self-Lobotomize,” Michael Clune accuses higher education of handling AI with the institutional equivalent of a drunk wielding a chainsaw. The subtitle gives away the indictment: “The skills that students will need in an age of automation are precisely those that are eroded by inserting AI into the educational process.” Colleges, Clune argues, spent the first three years of generative AI staring at the floor. Now they’re overcorrecting—embedding AI everywhere as if saturation were the same thing as competence. It isn’t. It’s panic dressed up as innovation.

    The prevailing fantasy is that if AI is everywhere, mastery will seep into students by osmosis. But the opposite is happening. Colleges are training students to rely on frictionless services while quietly abandoning the capacities that make AI usable in any serious way: judgment, learning agility, and flexible analysis. The tools are getting smarter. The users are getting thinner.

    That thinning has a name. Cognitive Thinning is the gradual erosion of critical thinking that occurs when sustained mental effort is replaced by convenience. It sets in when institutions assume that constant exposure to powerful tools will produce competence, even as they dismantle the practices that build it. As AI grows more capable, students are asked to do less thinking, tolerate less uncertainty, and carry less intellectual weight. The result is a widening imbalance: smarter systems paired with slimmer minds—efficient, polished, and increasingly unable to move beyond the surface of what machines provide.

    Clune wants students to avoid this fate, but he faces a rhetorical problem. He keeps insisting on abstractions—critical thinking, intellectual flexibility, judgment—in a culture trained to distrust anything abstract. Telling a screen-saturated society to imagine thinking outside screens is like telling a fish to imagine life outside water. The first task isn’t instruction. It’s translation.

    The fish analogy holds. A fish is aquatic; water isn’t a preference—it’s a prison. A young person raised entirely on screens, prompts, and optimization tools treats that ecosystem as reality itself. Like the fish, they know only one environment. We can name this condition precisely. They are cognitively outsourced, trained to delegate thought as if it were healthy. They are algovorous, endlessly stimulated by systems that quietly erode attention and resilience. They are digitally obligate, unable to function without mediation. By definition, these orientations crowd out critical thinking. They produce people who function smoothly inside digital systems and falter everywhere else.

    Drop such a person into a college that recklessly embeds AI into every course in the name of being “future-proof,” and you don’t produce adaptability—you produce fragility. In some fields, this fragility is fatal. Clune cites a telling statistic: history majors now have roughly half the unemployment rate of recent computer science graduates. The implication is blunt. Liberal education builds range. Narrow technical training builds specialists who snap when the environment shifts. As the New York Times put it in a headline Clune references: “Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle.” AI is replacing coders. Life inside a tiny digital ecosystem does not prepare you for a world that mutates.

    Is AI the cause of this dysfunction? No. The damage predates ChatGPT. I use AI constantly—and enjoy it. It sharpens my curiosity. It helps me test ideas. It makes me smarter because I am not trapped inside it. I have a life beyond screens. I’ve read thousands of books. I can zoom in and out—trees and forest—without panic. I have language for my inner life, which means I can catch myself when I become maudlin, entropic, dissolute, misanthropic, lugubrious, or vainglorious. I have history, philosophy, and religion as reference points. We call this bundle “critical thinking,” but what it really amounts to is being fully human.

    Someone who has outsourced thought and imagination since childhood cannot suddenly use AI well. They aren’t liberated. They’re brittle—dependent, narrow, and easily replaced.

    Because I’m a lifelong weightlifter, let me be concrete. AI is a massive, state-of-the-art gym: barbells, dumbbells, Smith machines, hack squats, leg presses, lat pulldowns, pec decks, cable rows—the works. Now imagine you’ve never trained. You’re twenty-eight, inspired by Instagram physiques, vaguely determined to “get in shape.” You walk into this cathedral of iron with no plan, no understanding of recovery, nutrition, progressive overload, or discipline. You’re surrounded by equipment—and completely lost. Within a month, you quit. You join the annual migration of January optimists who vanish by February, leaving the gym to the regulars.

    AI is that gym. It doesn’t eject users out of malice. It ejects them because it demands capacities they never built. Some people learn isolated tricks—prompting here, automating there—but only the way someone learns to push a toaster lever. When these tasks define a person, the result is a Non-Player Character (NPC): reactive, scripted, interchangeable.

    Students already understand what an NPC is. That’s why they fear becoming one.

    If colleges embed AI everywhere without building the human capacities required to use it, they aren’t educating thinkers. They’re manufacturing NPCs—and they deserve to be called out for it.

    Don’t wait for your institution to save you. Approach education the way you’d approach a gym. Learn how bodies actually grow before touching the weights. Know the muscle groups. Respect recovery. Understand volume, exhaustion, and nutrition. Do the homework so the gym doesn’t spit you out.

    The same rule applies to AI. To use it well, you need a specific kind of mental strength: Cognitive Load-Bearing Capacity. This is the ability to use AI without surrendering your thinking. You can see it in ordinary behaviors: reading before summarizing, drafting before prompting, distrusting answers that sound too smooth, and revising because an idea is weak—not because a machine suggested a synonym. It’s the capacity to sit with confusion, compare sources, and arrive at judgment rather than outsource it.

    This capacity isn’t innate, and it isn’t fast. It’s built through resistance: sustained reading, outlining by hand, struggling with unfamiliar ideas, revising after failure. Students with cognitive load-bearing capacity use AI to pressure-test their thinking. Students without it use AI to replace thinking. One group grows stronger and more adaptable. The other becomes dependent—and replaceable.

    Think of AI like a piano. You can sit down and bang out notes immediately, but you won’t produce music. Beautiful playing requires trained fingers, disciplined ears, and years of wrong notes. AI works the same way. Without cognitive load-bearing capacity, you get noise—technically correct, emotionally dead. With it, the tool becomes expressive. The difference isn’t the instrument. It’s the musician.

    If you want to build this capacity, forget grand reforms. Choose consistent resistance. Read an hour a day with no tabs open. Write before prompting. Ask AI to attack your argument instead of finishing it. Keep a notebook where you explain ideas in your own words, badly at first. Sit with difficulty instead of dodging it. These habits feel inefficient—and that’s the point. They’re the mental equivalent of scales and drills. Over time, they give you the strength to use powerful tools without being used by them.

  • The Hamster Wheel of Optimization

    In “AI Has Broken High School and College,” Damon Beres stages a conversation between Ian Bogost and Lila Shroff that lands like a diagnosis no one wants but everyone recognizes. Beres opens with a blunt observation: today’s high school seniors are being told—implicitly and explicitly—that their future success rests on their fluency with chatbots. School is no longer primarily about learning. It has become a free-for-all, and teachers are watching from the sidelines with a whistle that no longer commands attention.

    Bogost argues that educators have responded by sprinting toward one of two unhelpful extremes: panic or complacency. Neither posture grapples with reality. There is no universal AI policy, and students are not using ChatGPT only to finish essays. They are using it for everything. We have already entered a state of AI normalization, where reliance is no longer an exception but a default. To explain the danger, Bogost borrows a concept from software engineering: technical debt—the seductive habit of choosing short-term convenience while quietly accruing long-term catastrophe. You don’t fix the system; you keep postponing the reckoning. It’s like living on steak, martinis, and banana splits while assuring yourself you’ll start jogging next year.

    Higher education, Bogost suggests, has compounded the problem by accumulating what might be called pedagogical debt. Colleges never solved the hard problems: smaller class sizes, meaningful writing assignments, sustained feedback, practical skill-building, or genuine pipelines between students and employers. Instead, they slapped bandages over these failures and labeled them “innovation.” AI didn’t create these weaknesses; it simply makes it easier to ignore them. The debt keeps compounding, and the interest is brutal.

    Bogost introduces a third and more existential liability: the erosion of sacred time. Some schools still teach this—places where students paint all day, rebuild neighborhoods, or rescue animals, learning that a meaningful life requires attention, patience, and presence. Sacred time resists the modern impulse to finish everything as fast as possible so you can move on to the next task. AI dependence belongs to a broader pathology: the hamster wheel of deadlines, productivity metrics, and permanent distraction. In that world, AI is not liberation. It is a turbocharger for a life without meaning.

    AI also accelerates another corrosive force: cynicism. Students tell Bogost that in the real world, their bosses don’t care how work gets done—only that it gets done quickly and efficiently. Bogost admits they are not wrong. They are accurately describing a society that prizes output over meaning and speed over reflection. Sacred time loses every time it competes with the rat race.

    The argument, then, is not a moral panic about whether to use AI. The real question is what kind of culture is doing the using. In a system already bloated with technical and pedagogical debt, AI does not correct course—it traps us in what I call the Hamster Wheel of Optimization: a cultural condition in which speed, efficiency, and constant output are mistaken for progress, locking individuals and institutions into endless motion without direction or meaning. On the wheel, short-term convenience is endlessly prioritized while long-term costs—intellectual, moral, and human—quietly accumulate. Learning becomes task completion, education becomes workflow management, and sacred time is crowded out by deadlines, metrics, and permanent distraction. AI does not create this condition; it accelerates it, serving as a turbocharger for a system already addicted to doing more, faster, and cheaper, even as depth, reflection, and purpose steadily erode.

  • Mediocrity Amplification Effect

    As you watch your classmates use AI for every corner of their lives—summarizing, annotating, drafting, thinking—you may feel a specific kind of demoralization set in. A sinking question forms: What does anything matter anymore? Is life now just a game of cheating the system efficiently? Is this where all the breathless hype about “the future” has landed us—an economy of shortcuts and plausible fraud?

    High school student Ashanty Rosario feels this acutely. She gives voice to the heartbreak in her essay “I’m a High Schooler. AI Is Demolishing My Education,” a lament not about laziness but about loss. She doesn’t want to cheat. But the tools are everywhere, glowing like emergency exit signs in a burning building. Some temptations, she understands, are structural.

    Her Exhibit A is devastating. A classmate uses ChatGPT to annotate Narrative of the Life of Frederick Douglass. These annotations—supposed proof of engaged reading—are nothing more than copy-paste edu-lard: high in calories, low in nutrition, and utterly empty of struggle. The form is there. The thinking is not.

    Rosario’s frustration echoes a moment from my own classroom. On the last day of the semester, one of my brightest students sat in my office and casually admitted that he uses ChatGPT to summarize all his reading. His father is a professor. He wakes up at five for soccer practice. He takes business calculus for fun. This is not a slacker. This is a time-management pragmatist surviving the twenty-first century. He reads the summaries, synthesizes the ideas, and writes excellent essays. Of course I wish he spent slow hours wrestling with books—but he is not living in 1954. He is living in a culture where time is scarce and AI functions as an oxygen mask.

    My daughters and their classmates face the same dilemma with Macbeth. Shakespeare’s language might as well be Martian for a generation raised on TikTok compression and dopamine drip-feeds. They watch film adaptations. They use AI to decode plot points so they can answer study questions without sounding like they slept through the Renaissance. Purists howl that this is cheating. But as a writing instructor, I suspect teachers benefit from students who at least know what’s happening—even if the knowledge arrives via chatbot. Expecting a fifteen-year-old to read Macbeth cold is like assigning tensor calculus to a preschooler. They haven’t built the priors. So AI becomes a prosthetic. A flotation device. A translation machine dropped into classrooms years overdue.

    Blaming AI for educational decline is tempting—but it’s also lazy. We live in a society where reading is a luxury good and the leisure class quietly guards the gates.

    In the 1970s, I graduated from a public high school with literacy skills so thin you could read the room through them. I took remedial English my freshman year of college. If I were a student today, dropped into 2025 with those same deficits, I would absolutely lean on AI just to keep my head above water. The difference now is scale. Today’s students aren’t just supplementing—they’re optimizing. They tell me this openly. Over ninety percent of my students use AI because their skills don’t match the workload and because everyone else is doing it. This isn’t a moral collapse. It’s an arms race of survival.

    Still, Rosario is right about the aftermath. “AI has softened the consequences of procrastination,” she writes, “and led many students to avoid doing any work at all. There is little intensity anymore.” When thinking becomes optional, students drift into algorithmic sleepwalking. They outsource cognition until they resemble NPCs in a glitching video game—avatars performing the motions of thought without the effort. My colleagues and I see it every semester: the fade-out, the disengagement, the slow zombification.

    Colleges are scrambling. Should we police AI with plagiarism detectors? Ban laptops? Force students to write essays in blue books under watchful eyes like parolees in a literary halfway house? Should we pretend the flood can be held back with a beach towel?

    Reading Rosario’s complaint about “cookie-cutter AI arguments,” I thought of my lone visit to Applebee’s in the early 2000s. The menu photos promised ambrosia. The food tasted like something engineered in a lab to be technically edible yet spiritually vacant. Applebee’s was AI before AI—an assembly line of flavorless simulacra. Humanity has always gravitated toward the easy, the prepackaged, the frictionless. AI didn’t invent mediocrity. It just handed it a megaphone.

    Rosario is no Applebee’s soul. She’s Michelin-level in a world eager to microwave Hot Pockets. Of course her heart sinks when classmates settle for fast-food literacy. I want to tell her this: had she been in high school in the 1970s, she would have witnessed the same hunger for shortcuts. The tools would be clumsier. The prose less polished. But the gravitational pull would be identical. The urge to bypass difficulty is not technological—it’s ancestral.

    What’s new is scale and speed. In the AI age, that ancient hunger is supercharged by what I call the Mediocrity Amplification Effect: the phenomenon by which AI accelerates and magnifies our long-standing temptation to dilute effort and settle for the minimally sufficient. Under this effect, tools meant to assist learning become accelerants of shortcut culture. Procrastination carries fewer consequences. Intensity drains away. Thinking becomes optional.

    This is not a new moral failure. It is an old one, industrialized—private compromise transformed into public default, mediocrity polished, normalized, and broadcast at scale. AI doesn’t make us lazy. It makes laziness louder.

  • Books Aren’t Dead—They’ve Just Lost Their Monopoly

    Are young people being vacuum-sealed into their screens, slowly zombified by AI and glowing rectangles? This is the reigning panic narrative of our moment, a familiar sermon about dehumanization and decline. In his essay “My Students Use AI. So What?” linguist John McWhorter asks us to ease off the apocalypse pedal and consider a less hysterical possibility: the world has changed, and our metaphors haven’t caught up.

    McWhorter opens close to home. His tween daughters, unlike him, are not bookworms. They are screenworms. He once spent his leisure hours buried in books; now he, too, spends much of his reading life hunched over a phone. He knows what people expect from him—a professor clutching pearls over students who read less, write with AI, and allegedly let their critical thinking rot. Instead, he disappoints the doom merchants. Screens replacing books, he argues, is not evidence of “communal stupidity.” It is evidence of migration.

    Yes, young people read fewer books for pleasure. McWhorter cites a 1976 study showing that 40 percent of high school seniors had read at least six books for fun in the previous year—a number that has since cratered. But this does not mean young people have abandoned language. Words are everywhere. Print no longer monopolizes thought. Screens now host essays, debates, Substack newsletters, podcasts, and long-form conversations that reveal not a hunger deficit but a format shift. As McWhorter puts it, the explosion of thoughtful digital writing signals demand for ideas, not their extinction.

    He is not naïve about online slop. He limits the digital junk his daughters would otherwise inhale all day. Still, he resists the snobbery that treats ubiquity as proof of worthlessness. “The ubiquity of some content doesn’t mean it lacks art,” he writes—a useful reminder in an age that confuses popularity with emptiness. Much online culture is disposable. Some of it is sharp, inventive, and cognitively demanding.

    McWhorter also dismantles a familiar prejudice: that books are inherently superior because they “require imagination.” He calls this argument a retroactive justification for bias. Reading his rebuttal, I’m reminded that Childish Gambino’s four-minute video “This Is America,” watched tens of millions of times on YouTube, is so dense with political symbolism and cultural critique that it could easily spawn a 300-page monograph. Imagination is not a function of page count.

    He takes aim at another antique claim—that radio was more imaginative than television. Citing Severance, McWhorter argues that contemporary TV can engage the imagination and critical thinking as effectively as any golden-age broadcast. Medium does not determine depth. Craft does.

    McWhorter also punctures our nostalgia. Were people really reading as much as we like to believe? When he was in college, most students avoided assigned texts just as enthusiastically as students do now. The pre-digital world had CliffsNotes. Avoidance is not a TikTok invention.

    He reserves particular scorn for recklessly designed syllabi: professors assigning obscure philosophical fragments they never explain, using difficulty as décor. The syllabus looks impressive; students are left bewildered. McWhorter learned from this and streamlined his own reading lists, favoring coherence over intimidation.

    AI, however, has forced real change. The five-paragraph essay is finished; machines devour it effortlessly. McWhorter has responded by designing prompts meant to outrun AI’s comfort zone and by leaning harder on in-class writing. One of his questions—“How might we push society to embrace art that initially seems ugly?”—aims to provoke judgment rather than summary. I’m less confident than he is that such prompts are AI-proof, but I take his point. A philosophically demanding question tethered to specific texts still forces students to synthesize, even if AI hovers nearby. He also emphasizes graded participation, returning thinking to the room rather than the cloud.

    McWhorter’s larger argument is pragmatic, not permissive. Technology will keep changing. Education always lags behind it. The task of instructors is not to reverse technological history but to adapt intelligently—to identify what new tools erode, what they amplify, and how to redesign teaching accordingly. Panic is lazy. Nostalgia is misleading. The real work is harder: staying alert, flexible, and honest about both the costs and the gains.

  • A College Instructor’s Biggest Challenge Is Closing the Abstraction Resistance Gap

    Abstraction Resistance Gap
    noun

    There is a widening cultural mismatch between the need for abstract intellectual capacities—critical thinking, judgment, conceptual flexibility—and a population trained to expect concrete, instant, screen-mediated results. The abstraction resistance gap opens when societies raised on prompts and outputs lose the ability to value thought that cannot be immediately displayed, optimized, or monetized. Ideas that require time, silence, and struggle arrive speaking a language the audience no longer understands. Teaching fails not because the ideas are wrong, but because they speak a cognitive dialect that has gone extinct.

    If you are a college writing instructor facing students who spent four years of high school outsourcing their homework to AI, you are standing on the front lines of this gap. Your task is not merely to assign essays. It is to supply a framework for critical thinking, a vocabulary for understanding that framework, and—hardest of all—a reason to choose it over frictionless delegation. You are asking students to resist the gravitational pull of machines and to decline the comfortable role of Non-Player Character.

    Your enemy is not ignorance. It is time. No one becomes a critical thinker overnight. It takes years of sustained reading and what Cal Newport calls deep work: long stretches of attention without dopamine rewards. When you pause long enough to consider the difficulty of this task—and the odds stacked against it—it can drain the optimism from even the most committed instructor. You are not teaching a skill. You are trying to resurrect a way of thinking in a culture that has already moved on.

  • The Doomed Defiance of the Promethean Delusion

    Promethean Delusion
    noun

    The Promethean impulse—named for the mythic thief who stole fire from the gods—now animates the fantasy that technological optimization can transform humans into frictionless, quasi-divine beings without cost or consequence. In this delusion, machines are no longer tools that extend human capacity; they are ladders to transcendence. Power is mistaken for wisdom. Speed for meaning. Anything that resists optimization is treated as a design flaw waiting to be patched.

    Limits become intolerable. Slowness is framed as inefficiency. Mortality is treated as a bug. Kairos—the lived, sacred time through which meaning actually forms—is dismissed as waste, an obstacle to throughput. What emerges is not liberation but derangement: expanding capability paired with a shrinking sense of what a human life is for.

    So what does it mean to be human? The answer depends on which story you choose to inhabit. The Promethean tech evangelist sees the human being as an unfinished machine—upgradeable, indefinitely extendable, and perhaps immortal if the right knobs are turned. All problems reduce to engineering: tighten this screw, loosen that one, eliminate friction, repeat.

    The Christian story is harsher and more honest. It begins with brokenness, not optimization—with mortal creatures who cannot save themselves and who long for reconciliation with their Maker. To reject this account is to rebel, to attempt demigodhood by force of will and code. As John Moriarty observed, “The story of Christianity is the story of humanity’s rebellion against God.” The dream of becoming frictionless and divine is not progress; it is a doomed defiance. It does not end in transcendence but in collapse—moral, spiritual, and eventually civilizational.

  • Kairos vs. Chronos: The Battle for Human Time

    Kairos names a rare kind of time—the moment when life thickens and becomes meaningful. It is the time of attention and presence, when learning actually happens, when a sentence suddenly makes sense, when an idea lands with the force of revelation. Kairos is not counted; it is entered. You don’t measure it. You feel it. It is the time of epiphany, imagination, and inward transformation.

    Chronos, by contrast, is time broken into units and put to work. It is the time of clocks, calendars, deadlines, and dashboards. Chronos asks how long something took, how efficiently it was completed, and whether it can be done faster next time. It governs offices, classrooms, and productivity apps. Chronos is indispensable—but it is also merciless.

    Kairos belongs to myth, enchantment, and meaning. Chronos belongs to business, logistics, and quarterly reports. We need both. But when life tilts too far toward chronos, we find ourselves strapped to the Hamster Wheel of Optimization, mistaking motion for progress. The cost is steep. We don’t just lose kairos—the sacred time of depth and presence. We lose vitality, interiority, and eventually our sense of being fully alive.

    This tension animates the work of Paul Kingsnorth, particularly in Against the Machine: On the Unmaking of Humanity. Kingsnorth’s project is not nostalgia but boundary-setting. He argues that preserving our humanity requires limits—lines we refuse to cross. The dream of using machines to become demigods is not liberation; it is derangement. The fantasy of the uberhuman, endlessly optimized and frictionless, is a story told by technologists whose ambition for profit and control is vast, but whose understanding of human nature is alarmingly thin.

    Machines can extend our reach. They cannot supply meaning. That still requires kairos—time that cannot be optimized without being destroyed.