Tag: chatgpt

  • Hyper-Efficiency Intoxication Will Change Higher Learning Forever

    Hyper-Efficiency Intoxication

    noun

    The dopamine-laced rush that occurs when AI collapses hours of cognitive labor into seconds, training the brain to mistake speed for intelligence and output for understanding. Hyper-Efficiency Intoxication sets in when the immediate relief of reclaimed time—skipped readings, instant summaries, frictionless drafts—feels so rewarding that slow thinking begins to register as needless suffering. What hooks the user is not insight but velocity: the sense of winning back life from effort itself. Over time, this chemical high reshapes judgment, making sustained attention feel punitive, depth feel inefficient, and authorship feel optional. Under its influence, students do not stop working; they subtly downgrade their role—from thinker to coordinator, from writer to project manager—until thinking itself fades into oversight. Hyper-Efficiency Intoxication does not announce itself as decline; it arrives disguised as optimization, quietly hollowing out the very capacities education once existed to build.

    ***

    No sane college instructor assigns an essay anymore under the illusion that you’ll heroically wrestle with ideas while AI politely waits in the hallway. We all know what happens: a prompt goes in, a glossy corpse comes out. The charade has become so blatant that even professors who once treated AI like a passing fad are now rubbing their eyes and admitting the obvious. Hua Hsu names the moment plainly in his essay “What Happens After A.I. Destroys College Writing?”: the traditional take-home essay is circling the drain, and higher education is being forced to explain—perhaps for the first time in decades—what it’s actually for.

    The problem isn’t that students are morally bankrupt. It’s that they’re brutally rational. The real difference between “doing the assignment” and “using AI” isn’t ethics; it’s time. Time is the most honest currency in your life. Ten hours grinding through a biography means ten hours you’re not at a party, a game, a date, or a job. Ten minutes with an AI summary buys you your evening back. Faced with that math, almost everyone chooses the shortcut—not because they’re dishonest, but because they live in the real world. This isn’t cheating; it’s survival economics.

    Then there’s the arms race. Your classmates are using AI. All of them. Competing against them without AI is like entering a bodybuilding contest while everyone else is juiced to the gills and you’re proudly “all natural.” You won’t be virtuous; you’ll be humiliated. Fairness collapses the moment one side upgrades, and pretending otherwise is naïve at best.

    AI also hooks you. Hsu admits that after a few uses of ChatGPT, he felt the “intoxication of hyper-efficiency.” That’s not a metaphor—it’s a chemical event. When a machine collapses hours of effort into seconds, your brain lights up like it just won a small lottery. The rush isn’t insight; it’s velocity. And once you’ve tasted that speed, slowness starts to feel like punishment.

    Writing instructors, finally awake, are adapting. Take-home essays are being replaced by in-class writing, blue books, and passage identification exams—formats designed to drag thinking back into the room and away from the cloud. These methods reward students who’ve spent years reading and writing the hard way. But for students who entered high school in 2022 or later—students raised on AI scaffolding—this shift feels like being dropped into deep water without a life vest. Many respond rationally: they avoid instructors who demand in-class thinking.

    Over time, something subtle happens. You don’t stop working; you change roles. You become, in Hsu’s phrase, a project manager—someone who coordinates machines rather than generating ideas. You collaborate, prompt, tweak, and oversee. And at some point, no one—not you, not your professor—can say precisely when the thinking stopped being yours. There is no clean border crossing, only a gradual fade.

    Institutions are paralyzed by this reality. Do they accept the transformation and train students to be elite project managers of knowledge? Or do they try to resurrect an older model of literacy, pretending that time, incentives, and technology haven’t changed? Neither option is comfortable, and both expose how fragile the old justifications for college have become.

    From the educator’s chair, the nightmare scenario is obvious. If AI can train competent project managers for coding, nursing, physical therapy, or business, why not skip college altogether? Why not certify skills directly? Why not let employers handle training in-house? It would be faster, cheaper, and brutally efficient.

    And efficiency always wins. When speed, convenience, and cost savings line up, they don’t politely coexist with tradition—they bulldoze it. AI doesn’t argue with the old vision of education. It replaces it. The question is no longer whether college will change, but whether it can explain why learning should be slower, harder, and less efficient than the machines insist it needs to be.

  • Humanification

    It is not my job to indoctrinate you into a political party, a philosophical sect, or a religious creed. I am not here to recruit. But it is my job to indoctrinate you about something—namely, how to think, why thinking matters, and what happens when you decide it doesn’t. I have an obligation to give you a language for understanding critical thinking and the dangers of surrendering it, a framework for recognizing the difference between a meaningful life and a comfortable one, and the warning signs that appear when convenience, short-term gratification, and ego begin quietly eating away at the soul. Some of you believe life is a high-stakes struggle over who you become. Others suspect the stakes are lower. A few—regrettably—flirt with nihilism and conclude there are no stakes at all. But whether you dramatize it or dismiss it, the “battle of the soul” is unavoidable. I teach it because I am not a vocational trainer turning you into a product. I am a teacher in the full, unfashionable sense of the word—even if many would prefer I weren’t.

    This battle became impossible to ignore when I returned to the classroom after the pandemic and met ChatGPT. On one side stood Ozempification: the seductive shortcut. It promises results without struggle, achievement without formation, output without growth. Why wrestle with ideas when a machine can spit out something passable in seconds? It’s academic fast food—calorie-dense, spiritually empty, and aggressively marketed. Excellence becomes optional. Effort becomes suspicious. Netflix beckons. On the other side stood Humanification: the old, brutal path that Frederick Douglass knew by heart. Literacy as liberation. Difficulty as transformation. Meaning earned the hard way. Cal Newport calls it deep work. Jordan Peele gives it a name—the escape from the Sunken Place. Humanification doesn’t chase comfort; it chases depth. The reward isn’t ease. It’s becoming someone.

    Tyler Austin Harper’s essay “ChatGPT Doesn’t Have to Ruin College” captures this split perfectly. Wandering Haverford’s manicured campus, he encounters English majors who treat ChatGPT not as a convenience but as a moral hazard. They recoil from it. “I prefer not to,” Bartleby-style. Their refusal is not naïveté; it’s identity. Writing, for them, is not a means to a credential but an act of fidelity—to language, to craft, to selfhood. But Harper doesn’t let this romanticism off the hook. He reminds us, sharply, that honor and curiosity are not evenly distributed virtues. They are nurtured—or crushed—by circumstance.

    That line stopped me cold. Was I guilty of preaching Humanification without acknowledging its price tag? Douglass pursued literacy under threat of death, but he is a hero precisely because he is rare. We cannot build an educational system that assumes heroic resistance as the norm. Especially not when the very architects of our digital dystopia send their own children to screen-free Waldorf schools, where cursive handwriting and root vegetables are treated like endangered species. The tech elite protect their children from the technologies they profit from. Everyone else gets dopamine.

    I often tell students this uncomfortable truth: it is easier to be an intellectual if you are rich. Wealth buys time, safety, and the freedom to fail beautifully. You can disappear to a cabin, read Dostoevsky, learn Schubert, and return enlightened. Most students don’t have that option. Harper is right—institutions like Haverford make Humanification easier. Small classes. Ample support. Unhurried faculty. But most students live elsewhere. My wife teaches in public schools where buildings leak, teachers sleep in cars, and safety is not guaranteed. Asking students in survival mode to honor an abstract code of intellectual purity borders on insult.

    Maslow understood this long ago. Self-actualization comes after food, shelter, and security. It’s hard to care about literary integrity when you’re exhausted, underpaid, and anxious. Which is why the Ozempic analogy matters. Just as expensive GLP-1 drugs make discipline easier for some bodies, elite educational environments make intellectual virtue easier for some minds. Character still matters—but it is never the whole story.

    Harper complicates things further by comparing Haverford to Stanford. At Stanford, honor codes collapse under scale; proctoring becomes necessary. Intimacy, not virtue alone, sustains integrity. Haverford begins to look less like a model and more like a museum—beautiful, instructive, and increasingly inaccessible. The humanities survive there behind velvet ropes.

    I teach at a community college. My students are training for nursing, engineering, business. They work multiple jobs. They sleep six hours if they’re lucky. They don’t have the luxury to marinate in ideas. Humanification gets respectful nods in class discussions, but Ozempification pays the rent. And pretending otherwise helps no one.

    This is the reckoning. We cannot shame students for using AI when AI is triage, not indulgence. But we also cannot pretend that a life optimized for convenience leads anywhere worth going. The challenge ahead is not to canonize the Humanified or condemn the Ozempified. It is to build an educational culture where aspiration is not a luxury good—where depth is possible without privilege, and where using AI does not require selling your soul for efficiency.

    That is the real battle. And it’s one we can’t afford to fight dishonestly.

  • Ozempification: A Cautionary Tale

    The 2025 Los Angeles wildfires, blazing with apocalyptic fury, prompted me to do something I hadn’t done in years: dust off one of my radios and tune into live local news. The live broadcast brought with it not just updates but epiphanies. Two of them, in fact. First, I realized that deep down, I despise my streaming devices—their algorithm-driven content is like an endless conveyor belt of lukewarm leftovers, a numbing backdrop of music and chatter that feels canned, impersonal, and incurably distant. Worst of all, these devices have pushed me into a solipsistic bubble, a navel-gazing universe where I am the sole inhabitant. Streaming has turned my listening into an isolating, insidious form of solitary confinement, and I hadn’t even noticed.

    The second epiphany came when I flipped on the radio in my kitchen and the warmth of its live immediacy hit me like a long-lost friend. My heart ached as memories of radio’s golden touch from my youth came flooding back. As a nine-year-old, after watching Diahann Carroll in Julia and Sally Field in The Flying Nun, I’d crawl into bed, armed with my trusty transistor radio and earbuds, ready for the night to truly begin. Tuned to KFRC 610 AM, I’d be transported into the shimmering world of Sly and the Family Stone’s “Hot Fun in the Summertime,” Tommy James and the Shondells’ “Crystal Blue Persuasion,” and The Friends of Distinction’s “Grazing in the Grass.” The knowledge that thousands of others in my community were swaying to the same beats made the experience electric, communal, alive—so unlike the deadening isolation of my curated streaming playlists.

    The fires didn’t just torch the city—they laid bare the fault lines in my craving for connection. Nostalgia hit like a sucker punch, sending me down an online rabbit hole in search of a high-performance radio, convinced it could resurrect the magic of my youth. Deep down, a sardonic voice heckled me: was this really about better reception, or just another pitiful attempt by a sixty-something man trying to outrun mortality? Did I honestly believe a turbo-charged radio could beam me back to those transistor nights and warm kitchen conversations, or was I just tuning into the static of my own existential despair?

    Streaming had wrecked my relationship with music, plain and simple. The irony wasn’t lost on me either. While I warned my college students not to let ChatGPT lull them into embracing mediocre writing, I had let technology seduce me into a lazy, soulless listening experience. Hypocrisy alert: I had become the very cautionary tale I preached against.

    Enter what I now call “Ozempification,” inspired by that magical little injection, Ozempic, which promises a sleek body with zero effort. It’s the tech-age fantasy in full force: the belief that convenience can deliver instant gratification without any downside. Spoiler alert—it doesn’t. The price of that fantasy is steep: convenience kills effort, and with it, the things that actually make life rich and rewarding. Bit by bit, it hollows you out like a bad remix, leaving you a shell of passive consumption.

    Over time, you become an emotionally numb, passive tech junkie—a glorified NPC on autopilot, scrolling endlessly through algorithms that decide your taste for you. The worst part? You stop noticing. The soundtrack to your life is reduced to background noise, and you can’t even remember when you lost control of the plot.

    But not all Ozempification is a one-way ticket to spiritual bankruptcy. Sometimes, it’s a lifeline. GLP-1 drugs like Ozempic can literally save lives, keeping people with severe diabetes from joining the ranks of organ donors earlier than planned. Meanwhile, overworked doctors are using AI to diagnose patients with an accuracy that beats the pre-AI days of frantic guesswork and “Let’s Google that rash.” That’s Necessary Ozempification—the kind that keeps you alive or at least keeps your doctor from prescribing antidepressants instead of antibiotics.

    The true menace isn’t just technology—it’s Mindless Ozempification, where convenience turns into a full-blown addiction. Everything—your work, your relationships, even your emotional life—gets flattened into a cheap, prepackaged blur of instant gratification and hollow accomplishment. Suddenly, you’re just a background NPC in your own narrative, endlessly scrolling for a dopamine hit like a lab rat stuck in a particularly bleak Skinner box experiment.

    As the fires in L.A. fizzled out, I had a few weeks to prep my writing courses. While crafting my syllabus and essay prompts, Mindless Ozempification loomed large in my mind. Why? Because I was facing the greatest challenge of my teaching career: staying relevant when my students had a genie—otherwise known as ChatGPT—at their beck and call, ready to crank out essays faster than you can nuke a frozen burrito.

    After four years of wrestling with AI-assisted essays and thirty-five years in the classroom, I’ve learned something unflattering about human nature—especially my own. We are exquisitely vulnerable to comfort, shortcuts, and the soft seduction of the path of least resistance. Given enough convenience, we don’t just cut corners; we slowly anesthetize ourselves. That quiet slide—where effort feels offensive and difficulty feels unnecessary—is the endgame of Ozempification: not improvement, but a gentle, smiling drift toward spiritual atrophy.

  • Good-Enoughers

    In the fall of 2023, I was standing in front of thirty bleary-eyed college students, halfway through a lesson on how to spot a ChatGPT essay—mainly by its fondness for lifeless phrases that sound like they were scraped from a malfunctioning inspirational calendar. That’s when a business major raised his hand with the calm confidence of someone revealing a trade secret and said, “I can guarantee you everyone on this campus uses ChatGPT. We don’t submit it raw. We tweak a few sentences, paraphrase a little, and boom—no one can tell.”

    Before I could respond, a computer science student piled on. “It’s not just for essays,” he said. “It’s my life coach. I ask it about everything—career moves, crypto, even dating.” Dating advice. From ChatGPT. Somewhere, right now, a romance is unfolding on AI-generated pillow talk and a bullet-pointed list of conversation starters.

    That was the moment I realized I was staring at the biggest educational rupture of my thirty-year career. Tools like ChatGPT have three superpowers: obscene convenience, instant availability, and blistering speed. In a world where time is money and most writing does not need to summon the ghost of James Baldwin, AI is already good enough for about 95 percent of professional communication. And there it is—the phrase that should make educators break out in hives: good enough.

    “Good enough” is convenience’s love language. Imagine waking up groggy and choosing between two breakfasts. Option one is a premade smoothie: beige, foamy, nutritionally ambiguous, and available immediately. Option two is a transcendent, handcrafted masterpiece—organic fruit, thick Greek yogurt, chia seeds, almond milk—but to get it you must battle orb spiders in your backyard, dodge your neighbor’s possessed Belgian dachshund, and then spend quality time scrubbing a Vitamix before fighting traffic. Which one do most people choose?

    Exactly. The premade sludge. Because who has time for spider diplomacy and blender maintenance before a commute? Convenience wins, quality loses, and you console yourself with the time you saved. Eventually, you stop missing the better option altogether. That slow adjustment—lowering your standards until mediocrity feels normal—is attenuation.

    Now swap smoothies for writing. Writing is far harder than breakfast, and millions of people are quietly recalibrating their expectations. Why labor over sentences when the world will happily accept algorithmic mush? Polished prose is becoming the artisanal smoothie of communication: admirable, expensive, and increasingly optional. AI delivers something passable in seconds, and passable is the new benchmark.

    For educators, this is not a quirky inconvenience. It’s a five-alarm fire. I did not enter this profession to train students to become connoisseurs of adequacy. I wanted to cultivate thinkers, stylists, arguers—people whose sentences had backbone and intent. Instead, I find myself in a dystopia where “good enough” is the new gospel and I’m preaching craft like a monk selling calligraphy at a tech startup demo day.

    In medicine, the Hippocratic Oath boils down to “Do no harm.” In teaching, the unspoken oath is blunter and less forgiving: never train your students to become Good-Enoughers—those half-awake intellectual zombies who mistake adequacy for achievement and turn mediocrity into a permanent way of life.

    Whatever role AI plays in my classroom, one line is nonnegotiable. The moment I use it to help students settle for less—to speed them toward adequacy instead of depth—I’m no longer teaching. I’m committing educational malpractice.

  • Cognitive Thinning and Cognitive Load-Bearing Capacity

    In his bracing essay “Colleges Are Preparing to Self-Lobotomize,” Michael Clune accuses higher education of handling AI with the institutional equivalent of a drunk chainsaw. The subtitle gives away the indictment: “The skills that students will need in an age of automation are precisely those that are eroding by inserting AI into the educational process.” Colleges, Clune argues, spent the first three years of generative AI staring at the floor. Now they’re overcorrecting—embedding AI everywhere as if saturation were the same thing as competence. It isn’t. It’s panic dressed up as innovation.

    The prevailing fantasy is that if AI is everywhere, mastery will seep into students by osmosis. But the opposite is happening. Colleges are training students to rely on frictionless services while quietly abandoning the capacities that make AI usable in any serious way: judgment, learning agility, and flexible analysis. The tools are getting smarter. The users are getting thinner.

    That thinning has a name. Cognitive Thinning is the gradual erosion of critical thinking that occurs when sustained mental effort is replaced by convenience. It sets in when institutions assume that constant exposure to powerful tools will produce competence, even as they dismantle the practices that build it. As AI grows more capable, students are asked to do less thinking, tolerate less uncertainty, and carry less intellectual weight. The result is a widening imbalance: smarter systems paired with slimmer minds—efficient, polished, and increasingly unable to move beyond the surface of what machines provide.

    Clune wants students to avoid this fate, but he faces a rhetorical problem. He keeps insisting on abstractions—critical thinking, intellectual flexibility, judgment—in a culture trained to distrust anything abstract. Telling a screen-saturated society to imagine thinking outside screens is like telling a fish to imagine life outside water. The first task isn’t instruction. It’s translation.

    The fish analogy holds. A fish is aquatic; water isn’t a preference—it’s a prison. A young person raised entirely on screens, prompts, and optimization tools treats that ecosystem as reality itself. Like the fish, they know only one environment. We can name this condition precisely. They are cognitively outsourced, trained to delegate thought as if it were healthy. They are algovorous, endlessly stimulated by systems that quietly erode attention and resilience. They are digitally obligate, unable to function without mediation. By definition, these orientations crowd out critical thinking. They produce people who function smoothly inside digital systems and falter everywhere else.

    Drop such a person into a college that recklessly embeds AI into every course in the name of being “future-proof,” and you don’t produce adaptability—you produce fragility. In some fields, this fragility is fatal. Clune cites a telling statistic: history majors now have roughly half the unemployment rate of recent computer science graduates. The implication is blunt. Liberal education builds range. Narrow technical training builds specialists who snap when the environment shifts. As the New York Times put it in a headline Clune references: “Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle.” AI is replacing coders. Life inside a tiny digital ecosystem does not prepare you for a world that mutates.

    Is AI the cause of this dysfunction? No. The damage predates ChatGPT. I use AI constantly—and enjoy it. It sharpens my curiosity. It helps me test ideas. It makes me smarter because I am not trapped inside it. I have a life beyond screens. I’ve read thousands of books. I can zoom in and out—trees and forest—without panic. I have language for my inner life, which means I can catch myself when I become maudlin, entropic, dissolute, misanthropic, lugubrious, or vainglorious. I have history, philosophy, and religion as reference points. We call this bundle “critical thinking,” but what it really amounts to is being fully human.

    Someone who has outsourced thought and imagination since childhood cannot suddenly use AI well. They aren’t liberated. They’re brittle—dependent, narrow, and easily replaced.

    Because I’m a lifelong weightlifter, let me be concrete. AI is a massive, state-of-the-art gym: barbells, dumbbells, Smith machines, hack squats, leg presses, lat pulldowns, pec decks, cable rows—the works. Now imagine you’ve never trained. You’re twenty-eight, inspired by Instagram physiques, vaguely determined to “get in shape.” You walk into this cathedral of iron with no plan, no understanding of recovery, nutrition, progressive overload, or discipline. You’re surrounded by equipment—and completely lost. Within a month, you quit. You join the annual migration of January optimists who vanish by February, leaving the gym to the regulars.

    AI is that gym. It doesn’t eject users out of malice. It ejects them because it demands capacities they never built. Some people learn isolated tricks—prompting here, automating there—but only the way someone learns to push a toaster lever. When these tasks define a person, the result is a Non Player Character: reactive, scripted, interchangeable.

    Students already understand what an NPC is. That’s why they fear becoming one.

    If colleges embed AI everywhere without building the human capacities required to use it, they aren’t educating thinkers. They’re manufacturing NPCs—and they deserve to be called out for it.

    Don’t wait for your institution to save you. Approach education the way you’d approach a gym. Learn how bodies actually grow before touching the weights. Know the muscle groups. Respect recovery. Understand volume, exhaustion, and nutrition. Do the homework so the gym doesn’t spit you out.

    The same rule applies to AI. To use it well, you need a specific kind of mental strength: Cognitive Load-Bearing Capacity. This is the ability to use AI without surrendering your thinking. You can see it in ordinary behaviors: reading before summarizing, drafting before prompting, distrusting answers that sound too smooth, and revising because an idea is weak—not because a machine suggested a synonym. It’s the capacity to sit with confusion, compare sources, and arrive at judgment rather than outsource it.

    This capacity isn’t innate, and it isn’t fast. It’s built through resistance: sustained reading, outlining by hand, struggling with unfamiliar ideas, revising after failure. Students with cognitive load-bearing capacity use AI to pressure-test their thinking. Students without it use AI to replace thinking. One group grows stronger and more adaptable. The other becomes dependent—and replaceable.

    Think of AI like a piano. You can sit down and bang out notes immediately, but you won’t produce music. Beautiful playing requires trained fingers, disciplined ears, and years of wrong notes. AI works the same way. Without cognitive load-bearing capacity, you get noise—technically correct, emotionally dead. With it, the tool becomes expressive. The difference isn’t the instrument. It’s the musician.

    If you want to build this capacity, forget grand reforms. Choose consistent resistance. Read an hour a day with no tabs open. Write before prompting. Ask AI to attack your argument instead of finishing it. Keep a notebook where you explain ideas in your own words, badly at first. Sit with difficulty instead of dodging it. These habits feel inefficient—and that’s the point. They’re the mental equivalent of scales and drills. Over time, they give you the strength to use powerful tools without being used by them.

  • On the Importance of Cultivating a New Lexicon for Education in the Machine Age

    If you’re a college student who used AI all through high school, you’ve probably already heard the horror stories. Professors who ban AI outright. They pass out photocopied essays and poems for you to annotate with pens and pencils. The only graded writing happens in class, in blue books dragged out like museum artifacts. Class participation grades hover over your head like a parole officer. You quietly avoid these instructors. Sitting in their classrooms would feel like being a fish dropped onto dry land.

    You grew up with screens. These Boomer professors grew up in a Pre-Screen Universe—a world that shaped their intellect, habits, and philosophy before the internet rewired everything. Now they want to haul you back there, convinced that salvation lies in reenactment. You can smell the desperation. You can also smell the futility. This is a waiting game, and you know how it ends. AI is not going away. The toothpaste is not going back in the tube. The genie is not returning to the bottle. You will use AI after graduation because the world you are entering already runs on it. These Pre-Screen Professors will eventually retire, ranting into the void. You don’t have time to wait them out. You’re here to get an education, and you’re not going to turn your back on AI now, not after it helped you make the Dean’s List in high school. 

    And yet—here’s the part you can’t ignore—you’re not wrong to be uneasy. You know what happens when AI use goes overboard. When thinking is outsourced wholesale, something essential atrophies. The inner fire dims. Judgment weakens. Agency erodes. Your sense of self vanishes. You become an NPC: responsive but not reflective, efficient but hollow. A form of hell with good grammar and polished syntax, but hell nevertheless.

    So the problem isn’t whether to use AI. The problem is how to use it without surrendering yourself to it. You need a balance. You need to work effectively with machines while remaining unmistakably human. That requires more than rules or bans. It requires a new language—terms that help you recognize the traps, name the tradeoffs, and choose deliberately rather than drift.

    That’s what this lexicon is for. It is not a manifesto against technology or a nostalgic plea to return to chalk and silence. It’s a survival guide for the Machine Age—realistic, unsentimental, and shared by students and instructors alike. On one hand, you must learn how to navigate AI to build a future. On the other, you must learn how not to lose yourself in the process. This lexicon exists to help you do both.

  • Stupidification Didn’t Start with AI—It Just Got Faster

    What if AI is just the most convenient scapegoat for America’s long-running crisis of stupidification? What if blaming chatbots is simply easier than admitting that we have been steadily accommodating our own intellectual decline? In “Stop Trying to Make the Humanities ‘Relevant,’” Thomas Chatterton Williams argues that weakness, cowardice, and a willing surrender to mediocrity—not technology alone—are the forces hollowing out higher education.

    Williams opens with a bleak inventory of the damage. Humanities departments are in permanent crisis. Enrollment is collapsing. Political hostility is draining funding. Smartphones and social media are pulverizing attention spans, even at elite schools. Students and parents increasingly question the economic value of any four-year degree, especially one rooted in comparative literature or philosophy. Into this already dire landscape enters AI, a ready-made proxy for writing instructors, discussion leaders, and tutors. Faced with this pressure, colleges grow desperate to make the humanities “relevant.”

    Desperation, however, produces bad decisions. Departments respond by accommodating shortened attention spans with excerpts instead of books, by renaming themselves with bloated, euphemistic titles like “The School of Human Expression” or “Human Narratives and Creative Expression,” as if Orwellian rebranding might conjure legitimacy out of thin air. These maneuvers are not innovations. They are cost-cutting measures in disguise. Writing, speech, film, philosophy, psychology, and communications are lumped together under a single bureaucratic umbrella—not because they belong together, but because consolidation is cheaper. It is the administrative equivalent of hospice care.

    Williams, himself a humanities professor, argues that such compromises worsen what he sees as the most dangerous threat of all: the growing belief that knowledge should be cheap, easy, and frictionless. In this worldview, learning is a commodity, not a discipline. Difficulty is treated as a design flaw.

    And of course this belief feels natural. We live in a world saturated with AI tutors, YouTube lectures, accelerated online courses, and productivity hacks promising optimization without pain. It is a brutal era—lonely, polarized, economically unforgiving—and frictionless education offers quick solace. We soothe ourselves with dashboards, streaks, shortcuts, and algorithmic reassurance. But this mindset is fundamentally at odds with the humanities, which demand slowness, struggle, and attention.

    There exists a tiny minority of people who love this struggle. They read poetry, novels, plays, and polemics with the obsessive intensity of a scientist peering into a microscope. For them, the intellectual life supplies meaning, irony, moral vocabulary, civic orientation, and a deep sense of interiority. It defines who they are. These people often teach at colleges or work on novels while pulling espresso shots at Starbucks. They are misfits. They do not align with the 95 percent of the world running on what I call the Hamster Wheel of Optimization.

    Most people are busy optimizing everything—work, school, relationships, nutrition, exercise, entertainment—because optimization feels like survival. So why wouldn’t education submit to the same logic? Why take a Shakespeare class that assigns ten plays in a language you barely understand when you can take one that assigns a single movie adaptation? One professor is labeled “out of touch,” the other “with the times.” The movie-based course leaves more time to work, to earn, to survive. The reading-heavy course feels indulgent, even irresponsible.

    This is the terrain Williams refuses to romanticize. The humanities, he argues, will always clash with a culture devoted to speed, efficiency, and frictionless existence. The task of the humanities is not to accommodate this culture but to oppose it. Their most valuable lesson is profoundly countercultural: difficulty is not a bug; it is the point.

    Interestingly, this message thrives elsewhere. Fitness and Stoic influencers preach discipline, austerity, and voluntary hardship to millions on YouTube. They have made difficulty aspirational. They sell suffering as meaning. Humanities instructors, despite possessing language and ideas, have largely failed at persuasion. Perhaps they need to sell the life of the mind with the same ferocity that fitness influencers sell cold plunges and deadlifts.

    Williams, however, offers a sobering reality check. At the start of the semester, his students are electrified by the syllabus—exploring the American Dream through Frederick Douglass and James Baldwin. The idea thrills them. The practice does not. Close reading demands effort, patience, and discomfort. Within weeks, enthusiasm fades, and students quietly outsource the labor to AI. They want the identity of intellectual rigor without submitting to its discipline.

    After forty years of teaching college writing, this pattern is painfully familiar to me. Students begin buoyant and curious. Then comes the reading. Then comes the checkout.

    Early in my career, I sustained myself on the illusion that I could shape students in my own image—cultivated irony, wit, ruthless critical thinking. I wanted them to desire those qualities and to mistake my charisma for proof of their power. That fantasy lasted about a decade. Eventually, realism took over. I stopped needing them to become like me. I just wanted them to pass, transfer, get a job, and survive.

    Over time, I learned something paradoxical. Most of my students are as intelligent as I am in raw terms. They possess sharp BS detectors and despise being talked down to. They crave authenticity. And yet most of them submit to the Hamster Wheel of Optimization—not out of shallowness, but necessity. Limited time, money, and security force them onto the wheel. For me to demand a life of intellectual rigor from them often feels like Don Quixote charging a windmill: noble, theatrical, and disconnected from reality.

    Writers like Thomas Chatterton Williams are right to insist that AI is not the root cause of stupidification. The wheel would exist with or without chatbots. AI merely makes it easier to climb aboard—and makes it spin faster than ever before.

  • The Copy-Paste Generation and the Myth of the Fallen Classroom

    There is no ambiguity in Ashanty Rosario’s essay title: “I’m a High Schooler. AI Is Demolishing My Education.” If you somehow miss the point, the subtitle elbows you in the ribs: “The end of critical thinking in the classroom.” Rosario opens by confessing what every honest student now admits: she doesn’t want to cheat with AI, but the tools are everywhere, glowing like emergency exits in a burning building. Some temptations are structural.

    Her Exhibit A is a classmate who used ChatGPT to annotate Narrative of the Life of Frederick Douglass. These annotations—supposed evidence of engaged reading—were nothing more than “copy-paste edu-lard,” a caloric substitute for comprehension. Rosario’s frustration reminds me of a conversation with one of my brightest students. On the last day of class, he sat in my office and casually admitted that he uses ChatGPT to summarize all his reading. His father is a professor; he wakes up at five for soccer practice; he takes business calculus for fun. He is not a slacker. He is a time-management pragmatist surviving the 21st century. He reads the AI summaries, synthesizes them, and writes excellent essays. Of course I’d love for him to spend slow hours with books, but he is not living in 1954. He is living in a culture where time is a scarce resource, and AI is his oxygen mask.

    My daughters and their classmates face the same problem with Macbeth. Shakespeare’s language might as well be Martian for a generation raised on TikTok compression and dopamine trickle-feeds. They watch film versions of the play and use AI to decode plot points so they can answer the teacher’s study questions without sounding like they slept through the Renaissance. Some purists will howl that this is intellectual cheating. But as a writing instructor, I suspect the teacher benefits from students who at least know what’s happening—even if their knowledge comes from a chatbot. Expecting a 15-year-old to read Macbeth cold is like assigning tensor calculus to a preschooler: they haven’t built the prerequisites. So AI becomes a prosthetic. A flotation device. A translation machine dropped into a classroom years overdue. To blame AI for the degradation of education is tempting, but it’s also lazy. We live in a society where reading is a luxury good and the leisure class quietly guards the gates.

    In the 1970s, I graduated from a public high school with literacy skills so thin you could read the room through them. I took remedial English my freshman year of college. If I were a student today, dropped into 2025 with those same deficits, I would almost certainly lean on AI just to keep my head above water. The difference is that today’s students aren’t just supplementing—they’re optimizing. They tell me this openly: over ninety percent of my students use AI because their skills don’t match the workload and because, frankly, everyone else is doing it. It’s an arms race of survival, not a moral collapse.

    Still, Rosario is right about the aftermath. She writes: “AI has softened the consequences of procrastination and led many students to avoid doing any work at all. There is little intensity anymore.” When thinking becomes optional, students drift into a kind of algorithmic sleepwalking. They outsource cognition until they resemble NPCs in a glitching video game—avatars performing human imitation rather than human thought. My colleagues and I see it, semester after semester: the fade-out, the disengagement, the slow zombification.

    Colleges are scrambling to respond. Should we police AI with plagiarism detectors? Should we ban laptops and force students to write essays in composition books under watchful eyes like parolees in a literary halfway house? Should we pretend the flood can be stopped with a beach towel?

    Reading Rosario’s lament about “cookie-cutter AI arguments,” I thought of my one visit to Applebee’s in the early 2000s. The menu photos promised ambrosia. The food tasted like something engineered in a lab to be technically edible yet spiritually vacant. Applebee’s was AI before AI—an assembly line of flavorless simulacra. Humanity gravitates toward the easy, the prepackaged, the frictionless. AI didn’t invent mediocrity. It merely handed it a megaphone.

    Rosario, clearly, is not an Applebee’s soul. She’s Michelin-level in a world eager to eat microwaved Hot Pockets. Of course her heart sinks when classmates settle for fast-food literacy. I want to tell her that if she were in high school in the 1970s, she’d still witness an appetite for shortcut learning. The tools would be different, the essays less slick, but the gravitational pull toward mediocrity would be the same. The human temptation to bypass difficulty is not technological—it’s ancestral. AI simply automates the old hunger.

  • Artificial Intelligence and the Collapse of Classroom Thinking (college essay prompt)

    Artificial intelligence now drafts thesis statements, outlines arguments, rewrites weak prose, and gives students a shortcut past the cognitive struggle that learning used to require. Some critics warn that AI corrodes motivation, weakens mastery, and turns students into spectators of their own minds. Others argue that AI is merely revealing the truth we refused to confront: that modern education was already driven by templates, disengagement, and shallow assessment long before ChatGPT arrived. Still others suggest the two forces interact in a feedback loop—an educational system already limping is now asked to carry a technological weight it cannot bear.

    Write an argumentative essay in which you address the following question:

    To what extent is AI responsible for the erosion of student learning, and to what extent does it merely amplify the structural weaknesses already embedded in contemporary education?

    Your position may argue that:

    • AI is the primary driver of decline,
    • systemic failures are the primary driver,
    • or both forces interact in a way that cannot be separated.
      This is not a binary assignment—your task is to map the relationship between these forces with precision and evidence.

    Assigned Readings

    You must use at least four writers from the following list as central sources in your essay.
    You may also draw from additional credible sources.

    Critics who argue AI is damaging education

    1. Ashanty Rosario — “I’m a High Schooler. AI Is Demolishing My Education.”
    2. Lila Shroff — “The AI Takeover of Education Is Just Getting Started.”
    3. Damon Beres — “AI Has Broken High School and College.”
    4. Michael Clune — “Colleges Are Preparing to Self-Lobotomize.”

    Writers who shift the crisis away from AI

    1. Ian Bogost — “College Students Have Already Changed Forever.”
    2. Tyler Austin Harper — “The Question All Colleges Should Ask Themselves About AI.”
    3. Tyler Austin Harper — “ChatGPT Doesn’t Have to Ruin College.”
    4. John McWhorter — “My Students Use AI. So What?”

    Your Essay Must Include the Following Components

    1. Analyze one critic who argues AI is corrosive.

    Choose one writer who describes how AI erodes motivation, mastery, identity, intellectual struggle, or authentic thinking.
    Identify the mechanism of harm:
    How does AI disrupt learning—and where, exactly, does the breakdown occur?

    2. Analyze one writer who shifts blame away from AI.

    Choose a writer who argues that the crisis originates in curriculum design, academic culture, standardized writing templates, disengagement, or institutional inertia.
    Explain their diagnosis:
    What was broken before AI entered the classroom?

    3. Develop your own argument that maps the relationship between these forces.

    Your task is to explain how AI and the educational system interact.
    Does AI accelerate a decline already underway?
    Does it expose weaknesses the system refuses to address?
    Or does it create problems the system is too brittle to manage?
    Define the threshold:
    When does AI function as a constructive learning tool, and when does it become a crutch that erases struggle and depth?

    4. Include a substantial counterargument and rebuttal.

    Address the strongest opposing viewpoint—not a caricature—and respond with evidence and reasoning.

    Requirements

    • A minimum of four credible sources, cited in MLA style
    • At least four of the assigned essays among your sources
    • An MLA Works Cited page
    • An essay that argues, rather than summarizes

    Guiding Question

    What kind of intellectual culture emerges when AI becomes normal—and who (or what) is ultimately responsible for shaping that culture?