Tag: teaching

  • The New Role of the College Instructor: Disruption Interpreter


    Disruption Interpreter

    noun

    A Disruption Interpreter is a teacher who does not pretend the AI storm will pass quickly, nor claim to possess a laminated map out of the wreckage. Instead, this instructor helps students read the weather. A Disruption Interpreter names what is happening, explains why it feels destabilizing, and teaches students how to think inside systems that no longer reward certainty or obedience. In the age of AI, this role replaces the old fantasy of professorial authority with something more durable: interpretive judgment under pressure. The Disruption Interpreter does not sell reassurance. He sells literacy in chaos.

    ***

    In his essay “The World Still Hasn’t Made Sense of ChatGPT,” Charlie Warzel describes OpenAI as a “chaos machine,” and the phrase lands because it captures the feeling precisely. These systems are still young, still mutating, constantly retraining themselves to score higher on benchmarks, sound more fluent, and edge out competitors like Gemini. They are not stabilizing forces; they are accelerants. The result is not progress so much as disruption.

    That disruption is palpable on college campuses. Faculty and administrators are not merely unsure about policy; they are unsure about identity. What is a teacher now? What is an exam? What is learning when language itself can be summoned instantly, convincingly, and without understanding? Lurking beneath those questions is a darker one: is the institution itself becoming an endangered species, headed quietly toward white-rhino status?

    Warzel has written that one of AI’s enduring impacts is to make people feel as if they are losing their grip, confronted with what he calls a “paradigm-shifting, society-remaking superintelligence.” That feeling of disorientation is not a side effect; it is the main event. We now live in the Age of Precariousness—a world perpetually waiting for a shoe to drop. Students have no clear sense of what to study when career paths evaporate mid-degree. Older generations watch familiar structures dissolve and struggle to recognize the world they helped build. Even the economy feels suspended between extremes. Will the AI bubble burst and drag markets down with it? Or will it continue inflating the NASDAQ while hollowing out everything beneath it?

    Amid this turbulence, Warzel reminds us of something both obvious and unsettling: technology has never really been about usefulness. It has been about selling transformation. A toothbrush is useful, but it will never dominate markets or colonize minds. Build something, however, that makes professors wonder if they will still have jobs, persuades millions to confide in chatbots instead of therapists, hijacks attention, rearranges spreadsheets, and rewires expectations—and you are no longer making a tool. You are remaking reality.

    In a moment where disruption matters more than solutions, college instructors cannot credibly wear the old costume of authority and claim to know where this all ends. We do not have a clean exit strategy or a proven syllabus that leads safely out of the jungle. We are more like Special Ops units cut off from command, scavenging parts, building and dismantling experimental aircraft while under fire, hoping the thing flies before it comes apart. Students are not passengers on this flight; they are co-builders. This is why the role of the Disruption Interpreter matters. It names the condition honestly. It helps students translate chaos machines into intelligible frameworks without pretending the risks are smaller than they are or the answers more settled than they feel.

    In a college writing class, this shift has immediate consequences. A Disruption Interpreter redesigns the course around friction, transparency, and judgment rather than polished output. Assignments that reward surface-level fluency are replaced with ones that expose thinking: oral defenses, annotated drafts, revision histories, in-class writing. These structures make it difficult to silently outsource cognition to AI without consequence. The instructor also teaches students how AI functions rhetorically, treating large language models not as neutral helpers but as persuasive systems that generate plausible language without understanding. Students must analyze and revise AI-generated prose, learning to spot its evasions, its false confidence, and its tendency to sound authoritative while saying very little.

    Most importantly, evaluation itself is recalibrated. Correctness becomes secondary to agency. Students are graded on the quality of their decisions: what they chose to argue, what they rejected, what they revised, and why. Writing becomes less about producing clean text and more about demonstrating authorship in an age where text is cheap and judgment is scarce. One concrete example is the Decision Rationale Portfolio. Alongside an argumentative essay, students submit a short dossier documenting five deliberate choices: a claim abandoned after research, a source rejected and justified, a paragraph cut or reworked, a moment when they overruled an AI suggestion, and a risk that made the essay less safe but more honest. A mechanically polished essay paired with thin reasoning earns less credit than a rougher piece supported by clear, defensible decisions. The grade reflects discernment, not sheen.

    The Disruption Interpreter does not rescue students from uncertainty; he teaches them how to function inside it. In an era defined by chaos machines, precarious futures, and seductive shortcuts, the task of education is no longer to transmit stable knowledge but to cultivate judgment under unstable conditions. Writing classes, reimagined this way, become training grounds for intellectual agency rather than production lines for compliant prose. AI can assist with language, speed, and simulation, but it cannot supply discernment. That remains stubbornly human. The Disruption Interpreter’s job is to make that fact visible, undeniable, and finally inescapable.

  • Gollumification


    Gollumification

    noun

    Gollumification names the slow moral and cognitive decay that occurs when a person repeatedly chooses convenience over effort and optimization over growth. It is what happens when tools designed to assist quietly replace the very capacities they were meant to strengthen. Like Tolkien’s Gollum, the subject does not collapse all at once; he withers incrementally, outsourcing judgment, agency, and struggle until what remains is a hunched creature guarding shortcuts and muttering justifications. Gollumification is not a story about evil intentions. It is a story about small evasions practiced daily until the self grows thin, brittle, and dependent.

    ***

    Washington Post writer Joanna Slater reports in “Professors Are Turning to This Old-School Method to Stop AI Use on Exams” that some instructors are abandoning written exams in favor of oral ones, forcing students to demonstrate what they actually know without the benefit of algorithmic ventriloquism. At the University of Wyoming, religious studies professor Catherine Hartmann now seats students in her office and questions them directly, Socratic-style, with no digital intermediaries to run interference. Her rationale is blunt and bracing. Using AI on exams, she tells students, is like bringing a forklift to the gym when your goal is to build muscle. “The classroom is a gymnasium,” she explains. “I am your personal trainer. I want you to lift the weights.” Hartmann is not being punitive; she is being realistic about human psychology. Given a way to cheat ourselves out of effort—or out of a meaningful life—we will take it, not because we are corrupt, but because we are wired to conserve energy. That instinct once helped us survive. Now it quietly betrays us. A cheated education becomes a squandered one, and a squandered life does not merely stagnate; it decays. This is how Gollumification begins: not with villainy, but with avoidance.

    I agree entirely with Hartmann’s impulse, even if my method would differ. I would require students to make a fifteen-minute YouTube video in which they deliver their argument as a formal speech. I know from experience that translating a written argument into an oral one exposes every hollow sentence and every borrowed idea. The mind has nowhere to hide when it must speak coherently, in sequence, under the pressure of time and presence. Oral essays force students to metabolize their thinking instead of laundering it through a machine. They are a way of banning forklifts from the gym—not out of nostalgia, but out of respect for the human organism. If education is meant to strengthen rather than simulate intelligence, then forcing students to lift their own cognitive weight is not cruelty. It is preventive medicine against the slow, tragic, and all-too-modern disease of Gollumification.

  • Academic Anhedonia: A Tale in 3 Parts


    Academic Anhedonia

    noun

    Academic Anhedonia is the condition in which students retain the ability to do school but lose the capacity to feel anything about it. Assignments are completed, boxes are checked, credentials are pursued, yet curiosity never lights up and satisfaction never arrives. Learning no longer produces pleasure, pride, or even frustration—just a flat neurological neutrality. These students aren’t rebellious or disengaged; they’re compliant and hollow, moving through coursework like factory testers pressing buttons to confirm the machine still turns on. Years of algorithmic overstimulation, pandemic detachment, and frictionless AI assistance have numbed the internal reward system that once made discovery feel electric. The result is a classroom full of quiet efficiency and emotional frost: cognition without appetite, performance without investment, education stripped of its pulse.

    ***

    I started teaching college writing in the 80s under the delusion that I was destined to be the David Letterman of higher education—a twenty-five-year-old ham with a chalkboard, half-professor and half–late-night stand-up. For a while, the act actually worked. A well-timed deadpan joke could mesmerize a room of eighteen-year-olds and soften their outrage when I saddled them with catastrophically ill-chosen books (Ron Rosenbaum’s Explaining Hitler—a misfire so spectacular it deserves its own apology tour). My stories carried the class, and for decades I thought the laughter was evidence of learning. If I could entertain them, I told myself, I could teach them.

    Then 2012 hit like a change in atmospheric pressure. Engagement thinned. Phones glowed. Students behaved as though they were starring in their own prestige drama, and my classroom was merely a poorly lit set. I was no longer battling boredom—I was competing with the algorithm. This was the era of screen-mediated youth, the 2010–2021 cohort raised on the oxygen of performance. Their identities were curated in Instagram grids, maintained through Snapstreaks, and measured in TikTok microfame points. The students were not apathetic; they were overstimulated. Their emotional bandwidth was spent on self-presentation, comparison loops, and the endless scoreboard of online life. They were exhausted but wired, longing for authenticity yet addicted to applause. I felt my own attention-capture lose potency, but I still recognized those students. They were distracted, yes, but still alive.

    But in 2025, we face a darker beast: the academically anhedonic student. The screen-mediated generation ran hot; this one runs cold. Around 2022, a new condition surfaced—a collapse of the internal reward system that makes learning feel good, or at least worthwhile. Years of over-curation, pandemic detachment, frictionless AI answers, and dopamine-dense apps hollowed out the very circuits that spark curiosity. This isn’t laziness; it’s a neurological shrug. These students can perform the motions—fill in a template, complete a scaffold, assemble an essay like a flat-pack bookshelf—but they move through the work like sleepwalkers. Their curiosity is muted. Their persistence is brittle. Their critical thinking arrives pre-flattened.

    My colleagues tell me their classrooms are filled with compliant but joyless learners checking boxes on their march toward a credential. The Before-Times students wrestled with ideas. The After-Times students drift through them without contact. It breaks our hearts because the contrast is stark: what was once noisy and performative has gone silent. Academic anhedonia names that silence—a crisis not of ability, but of feeling.

  • Pedagogical Liminality


    Lila Shroff argues that education has entered its Wild West phase in her essay “The AI Takeover of Education Is Just Getting Started,” and she’s right in the way that makes administrators nervous and instructors quietly exhausted. Most of you are not stumbling innocents. You are veterans of four full years of AI high school. You no longer engage in crude copy-and-paste plagiarism. That’s antique behavior. You’ve learned to stitch together outputs from multiple models, then instruct the chatbot to scuff the prose with a few grammatical imperfections so it smells faintly human and slips past detection software. This is not cheating as shortcut; it is cheating as workflow optimization.

    Meanwhile, many high school teachers congratulate themselves for assigning Shakespeare, Keats, and Dostoevsky while willfully ignoring the obvious. Students are using AI constantly—for summaries, study guides, feedback, and comprehension scaffolding. AI is CliffsNotes on growth hormones, and pretending otherwise is an exercise in institutional denial.

    Educators, of course, are not standing outside the saloon wagging a finger. We are inside, ordering fizzy drinks. Shroff notes that teachers now use AI to design assignments, align curriculum to standards, grade against rubrics, and complete the paperwork that keeps schools legally hydrated. Nearly a third of K–12 teachers reported weekly AI use last year, and that number has only climbed as profession-specific tools like MagicSchool AI churn out rubrics, worksheets, and report-card comments on demand. The teacher as craftsman is quietly mutating into the teacher as editor.

    AI tightens its grip most aggressively where schools are already bleeding resources. In districts short on tutors and counselors, AI steps in as a substitute for services that were never funded in the first place. This is not reform; it is triage. And once institutions develop a taste for saving money by not hiring tutors and counselors, it is naïve to think teaching positions will remain sacred. Cost-cutting rarely stops at the first ethical boundary it crosses.

    That is why this moment feels like the Wild West. There is no shared map. Some schools welcome AI like a messiah. Others quarantine it like a contagious disease. Many simply shrug and admit they are baffled. Policy is reactive, inconsistent, and often written by people who do not understand the technology well enough to regulate it intelligently.

    I see the consequences every week in my college classroom. I read plenty of AI slop—essays with flawless grammar and no pulse, paragraphs that gesture toward ideas they never quite touch. Some students have checked out entirely, outsourcing not just sentences but thinking itself. And yet AI is also an undeniable equalizer. Students emerging from underfunded schools with sixth-grade literacy levels are now submitting essays with clean syntax and logical structure. They use AI to outline arguments, test thesis ideas, and stabilize skills they were never taught. The tool giveth, and the tool holloweth out.

    People like to invoke “too big to fail,” but the analogy doesn’t hold. We don’t know which AI—ChatGPT, Gemini, Claude, or some yet-unseen contender—will dominate. What we do know is that AI is already embedded in education, culture, and the economy. There is no reversing this process. The toothpaste is not going back in the tube, no matter how sternly we lecture it.

    So understand this about me and my fellow instructors: we don’t know what we’re doing. Our roles are unsettled. Our identities are unstable. We are feeling our way through a dark cave without a map and without guarantees. There may be light ahead, or there may not.

    The only sane posture is humility—paired with curiosity, caution, and a sober gratitude that even a force this disruptive may yield benefits we are not yet wise enough to recognize. The name for this condition is Pedagogical Liminality: the in-between state educators now inhabit as teaching crosses from the pre-AI world into an uncharted machine age. Old rules no longer hold. New ones have not yet solidified. The ground keeps shifting under our feet.

    In this state, arrogance is dangerous. Despair is paralyzing. Certainty is counterfeit. Pedagogical Liminality is not failure; it is the honest middle passage—awkward, uncertain, and unavoidable—before a new educational order can be named.

  • Mediocrity Amplification Effect


    As you watch your classmates use AI for every corner of their lives—summarizing, annotating, drafting, thinking—you may feel a specific kind of demoralization set in. A sinking question forms: What does anything matter anymore? Is life now just a game of cheating the system efficiently? Is this where all the breathless hype about “the future” has landed us—an economy of shortcuts and plausible fraud?

    High school student Ashanty Rosario feels this acutely. She gives voice to the heartbreak in her essay “I’m a High Schooler. AI Is Demolishing My Education,” a lament not about laziness but about loss. She doesn’t want to cheat. But the tools are everywhere, glowing like emergency exit signs in a burning building. Some temptations, she understands, are structural.

    Her Exhibit A is devastating. A classmate uses ChatGPT to annotate Narrative of the Life of Frederick Douglass. These annotations—supposed proof of engaged reading—are nothing more than copy-paste edu-lard: high in calories, low in nutrition, and utterly empty of struggle. The form is there. The thinking is not.

    Rosario’s frustration echoes a moment from my own classroom. On the last day of the semester, one of my brightest students sat in my office and casually admitted that he uses ChatGPT to summarize all his reading. His father is a professor. He wakes up at five for soccer practice. He takes business calculus for fun. This is not a slacker. This is a time-management pragmatist surviving the twenty-first century. He reads the summaries, synthesizes the ideas, and writes excellent essays. Of course I wish he spent slow hours wrestling with books—but he is not living in 1954. He is living in a culture where time is scarce and AI functions as an oxygen mask.

    My daughters and their classmates face the same dilemma with Macbeth. Shakespeare’s language might as well be Martian for a generation raised on TikTok compression and dopamine drip-feeds. They watch film adaptations. They use AI to decode plot points so they can answer study questions without sounding like they slept through the Renaissance. Purists howl that this is cheating. But as a writing instructor, I suspect teachers benefit from students who at least know what’s happening—even if the knowledge arrives via chatbot. Expecting a fifteen-year-old to read Macbeth cold is like assigning tensor calculus to a preschooler. They haven’t done their priors. So AI becomes a prosthetic. A flotation device. A translation machine dropped into classrooms years overdue.

    Blaming AI for educational decline is tempting—but it’s also lazy. We live in a society where reading is a luxury good and the leisure class quietly guards the gates.

    In the 1970s, I graduated from a public high school with literacy skills so thin you could read the room through them. I took remedial English my freshman year of college. If I were a student today, dropped into 2025 with those same deficits, I would absolutely lean on AI just to keep my head above water. The difference now is scale. Today’s students aren’t just supplementing—they’re optimizing. They tell me this openly. Over ninety percent of my students use AI because their skills don’t match the workload and because everyone else is doing it. This isn’t a moral collapse. It’s an arms race of survival.

    Still, Rosario is right about the aftermath. “AI has softened the consequences of procrastination,” she writes, “and led many students to avoid doing any work at all. There is little intensity anymore.” When thinking becomes optional, students drift into algorithmic sleepwalking. They outsource cognition until they resemble NPCs in a glitching video game—avatars performing the motions of thought without the effort. My colleagues and I see it every semester: the fade-out, the disengagement, the slow zombification.

    Colleges are scrambling. Should we police AI with plagiarism detectors? Ban laptops? Force students to write essays in blue books under watchful eyes like parolees in a literary halfway house? Should we pretend the flood can be held back with a beach towel?

    Reading Rosario’s complaint about “cookie-cutter AI arguments,” I thought of my lone visit to Applebee’s in the early 2000s. The menu photos promised ambrosia. The food tasted like something engineered in a lab to be technically edible yet spiritually vacant. Applebee’s was AI before AI—an assembly line of flavorless simulacra. Humanity has always gravitated toward the easy, the prepackaged, the frictionless. AI didn’t invent mediocrity. It just handed it a megaphone.

    Rosario is no Applebee’s soul. She’s Michelin-level in a world eager to microwave Hot Pockets. Of course her heart sinks when classmates settle for fast-food literacy. I want to tell her this: had she been in high school in the 1970s, she would have witnessed the same hunger for shortcuts. The tools would be clumsier. The prose less polished. But the gravitational pull would be identical. The urge to bypass difficulty is not technological—it’s ancestral.

    What’s new is scale and speed. In the AI age, that ancient hunger is supercharged by what I call the Mediocrity Amplification Effect: the phenomenon by which AI accelerates and magnifies our long-standing temptation to dilute effort and settle for the minimally sufficient. Under this effect, tools meant to assist learning become accelerants of shortcut culture. Procrastination carries fewer consequences. Intensity drains away. Thinking becomes optional.

    This is not a new moral failure. It is an old one, industrialized—private compromise transformed into public default, mediocrity polished, normalized, and broadcast at scale. AI doesn’t make us lazy. It makes laziness louder.

  • AI High School Graduates Are Here—and They’re Better Cheaters Than We Are Teachers


    Lila Shroff argues that education has entered its Wild West phase in her essay “The AI Takeover of Education Is Just Getting Started,” and she’s right in the way that makes administrators nervous and instructors tired. Our incoming college students are not stumbling innocents. They are veterans of four full years of AI high school. They no longer dabble in crude copy-and-paste plagiarism. That’s antique behavior. Today’s students stitch together outputs from multiple AI models, then instruct the chatbot to scuff the prose with a few grammatical missteps so it smells faintly human and slips past detection software. This is not cheating as shortcut; it is cheating as workflow optimization.

    Meanwhile, high school teachers may be congratulating themselves for assigning Shakespeare, Keats, and Dostoevsky, but many are willfully ignoring the obvious. Students are using AI constantly—for summaries, study guides, feedback, and comprehension scaffolding. AI is CliffsNotes on growth hormones, and pretending otherwise is an exercise in institutional denial.

    Educators, of course, are not standing outside the saloon wagging a finger. They are inside, ordering drinks. Shroff notes that teachers now use AI to design assignments, align curriculum to standards, grade against rubrics, and complete the paperwork that keeps schools legally hydrated. Nearly a third of K–12 teachers reported weekly AI use last year, and that number has only climbed as profession-specific tools like MagicSchool AI churn out rubrics, worksheets, and report-card comments on demand. The teacher as craftsman is quietly becoming the teacher as editor.

    AI’s grip tightens most aggressively where schools are already bleeding resources. In districts short on tutors and counselors, AI steps in as a substitute for services that were never funded in the first place. It is not reform; it is triage. And once institutions get a taste of saving money by not hiring tutors and counselors, it is naïve to think that teaching positions will remain untouchable. Cost-saving rarely stops at the first ethical boundary it crosses.

    That is why this feels like the Wild West. There is no shared map. Some schools welcome AI like a messiah. Others quarantine it like a contagious disease. Many simply shrug and admit they are baffled. Policy is reactive, inconsistent, and often written by people who do not understand the technology well enough to regulate it intelligently.

    I see the consequences weekly in my college classroom. I read plenty of AI slop—essays with perfect grammar and no pulse, paragraphs that gesture toward ideas they never quite touch. Some students have clearly checked out, outsourcing not just sentences but thinking itself. And yet AI is also an undeniable equalizer. Students emerging from underfunded schools with sixth-grade literacy levels are submitting essays with clean syntax and logical structure. They are using AI to outline arguments, test thesis ideas, and stabilize skills they were never taught. The tool giveth and the tool holloweth out.

    People like to invoke “too big to fail,” but the analogy is incomplete. We do not know which AI—ChatGPT, Gemini, Claude, or some yet-unseen contender—will dominate. What we do know is that AI is already embedded in education, culture, and the economy. There is no reversing this process. The toothpaste is not going back in the tube, no matter how sternly we lecture it.

    So I tell my students the only honest thing left to say: we don’t know what we’re doing. Our roles are unsettled. Our identities are unstable. We are feeling our way through a dark cave without a map and without guarantees. There may be light ahead, or there may not. The only sane posture is humility—paired with curiosity, caution, and a sober gratitude that even a force this disruptive may yield benefits we are not yet wise enough to recognize.

  • The Rotator Cuff, the Honda Dealership, and the Human Soul


    Life has a way of mocking our plans. You stride in with a neat blueprint, and the universe responds by flinging marbles under your feet. My shoulder rehab, for instance, was supposed to be a disciplined, daily ritual: the holy grail of recovering from a torn rotator cuff. Instead, after one enthusiastic session, both shoulders flared with the kind of throbbing soreness reserved for muscles resurrected from the dead (though after walking home from Honda, it occurred to me that my right shoulder soreness is probably the result of a tetanus shot). So much for the doctor’s handouts of broomstick rotations and wall flexions. Today, the new fitness plan is modest: drop off the Honda for service, walk two miles home, and declare that my workout. Tomorrow: to be determined by the whims of my tendons and sore muscles.

    Teaching is no different. I’ve written my entire Spring 2026 curriculum, but then I read about humanities professor Alan Jacobs—our pedagogical monk—who has ditched computers entirely. Students handwrite every assignment in composition books; they read photocopied essays with wide margins, scribbling annotations in ink. According to Jacobs, with screens removed and the “LLM demons” exorcised, students rediscover themselves as human beings. They think again. They care again. I can see the appeal. They’re no longer NPCs feeding essays into the AI maw.

    But then I remembered who I am. I’m not a parchment-and-fountain-pen professor any more than I’m a pure vegan. I am a creature of convenience, pragmatism, and modern constraints. My students live in a world of laptops, apps, and algorithms; teaching them only quills and notebooks would be like handing a medieval knight a lightsaber and insisting he fight with a broomstick. I will honor authenticity another way—through the power of my prompts, the relevance of my themes, and the personal narratives that force students to confront their own thoughts rather than outsource them. My job is to balance the human soul with the tools of the age, not to bury myself—and my students—in nostalgia cosplay.

  • Has AI Broken Education—or Did We Break It First?


    Argumentative Essay Prompt: AI, Education, and the Future of Human Thinking (1,700 words)

    Artificial intelligence has entered classrooms, study sessions, and homework routines with overwhelming speed. Some commentators argue that this shift is not just disruptive but disastrous. Ashanty Rosario, a high school student, warns in “I’m a High Schooler. AI Is Demolishing My Education” that AI encourages passivity, de-skills students, and replaces authentic learning with the illusion of competence. Lila Shroff, in “The AI Takeover of Education Is Just Getting Started,” argues that teachers and institutions are unprepared, leaving students to navigate a digital transformation with no guardrails. Damon Beres claims in “AI Has Broken High School and College” that classrooms are devolving into soulless content factories in which students outsource both thought and identity. These writers paint a bleak picture: AI is not just a tool—it is a force accelerating the decay of intellectual life.

    Other commentators take a different approach. Ian Bogost’s “College Students Have Already Changed Forever” argues that the real transformation happened long before AI—students have already become transactional, disengaged, and alienated, and AI simply exposes a preexisting wound. Meanwhile, Tyler Austin Harper offers two counterpoints: in “The Question All Colleges Should Ask Themselves About AI,” he insists that institutions must rethink how assignments function in the age of automation; and in “ChatGPT Doesn’t Have to Ruin College,” he suggests that AI could amplify human learning if courses are redesigned to reward original thinking, personal insight, and intellectual ambition rather than formulaic output.

    In a 1,700-word argumentative essay, defend, refute, or complicate the claim that AI is fundamentally damaging education. Your essay must:

    • Take a clear position on whether AI erodes learning, enhances it, or transforms it in ways that require new pedagogical strategies.
    • Analyze how Rosario, Shroff, and Beres frame the dangers of AI for intellectual development, motivation, and classroom culture.
    • Compare their views with Bogost and Harper, who argue that education itself—not AI—is the root of the crisis, or that educators must adapt rather than resist.
    • Include a counterargument–rebuttal section that addresses the strongest argument you disagree with.
    • Use at least four credible sources in MLA format, including at least three of the essays listed above.

    Your goal is not to summarize the articles but to evaluate what they reveal about the future of learning: Is AI the villain, the scapegoat, or a tool we have not yet learned to use wisely?

  • Does AI Destroy or Redefine Learning?

    Does AI Destroy or Redefine Learning?

    Argumentative Essay Prompt: The Effects of AI on Education (1,700 words)

    Artificial intelligence has raised alarm bells in education. Critics argue that students now rely so heavily on AI tools that they are becoming users rather than thinkers—outsourcing curiosity, creativity, and problem-solving to machines. In this view, the classroom is slowly deteriorating into a culture of passivity, distraction, and what some call a form of “communal stupidity.”

    In his Atlantic essay “My Students Use AI. So What?” linguist and educator John McWhorter challenges this narrative. Instead of treating AI as a threat to intelligence, he examines the everyday media consumption of his tween daughters. They spend little time reading traditional books, yet their time online exposes them to sophisticated humor, stylized language, and clever cultural references. Rather than dulling their minds, McWhorter argues, certain forms of media sharpen them—and occasionally reach the level of genuine artistic expression.

    McWhorter anticipates objections. Books demand imagination, concentration, and patience. He does not deny this. But he asks whether we have elevated books into unquestioned sacred objects. Human creativity has always existed in visual, auditory, and performative arts—not exclusively on the printed page.

    Like many educators, McWhorter also acknowledges that schooling must adapt. Just as no teacher today would demand that students calculate square roots without a calculator, he recognizes that assigning a formulaic five-paragraph essay invites AI to automate it. Teaching must evolve, not retreat. He concludes that educators and parents must create new forms of engagement that work within the technological environment students actually inhabit.

    Is McWhorter persuasive? In a 1,700-word argumentative essay, defend, refute, or complicate his central claim that AI is not inherently corrosive to thinking, and that education must evolve rather than resist technological realities. Your essay should:
    • State a clear, debatable thesis about AI’s influence on learning, creativity, and critical thinking.
    • Analyze how McWhorter defines intelligence, skill, and engagement in digital environments.
    • Include a counterargument–rebuttal section in which you address why some technologies may be so disruptive that adapting to them becomes impossible—or whether that fear misunderstands how students actually learn.
    • Use evidence from McWhorter and at least two additional credible sources.
    • Include a Works Cited page in MLA format with at least four sources total.

    Your goal is not simply to summarize McWhorter but to weigh his claims against reality. Does AI open new modes of literacy, or does it train us into passive consumption? What does responsible adaptation look like, and where do we draw the line between embracing tools and surrendering agency?

    Building Block 1: Introduction Paragraph

    Write a 300-word paragraph describing a non-book activity—such as a specific YouTube channel, a TikTok creator, an online gaming stream, or a subreddit—that entertains you while also requiring real engagement and intellectual effort. Do not speak in broad generalities; focus on one example. Describe what drew you to that content and what makes it more than passive consumption. If you choose a subreddit, explain how it operates: Do members debate technical details, challenge arguments, post layered memes that reference politics or philosophy, or analyze social behavior that demands you understand context and nuance? If you choose a video or stream, describe how its pacing, humor, visual cues, or language force you to track patterns, notice subtle callbacks, or recognize sarcasm and satire.

    Show how your brain works to interpret signals, anticipate moves, decode cultural references, or evaluate whether the creator is being sincere, ironic, or manipulative. Explain how this activity cultivates cognitive skills—pattern recognition, strategic thinking, language sensitivity, humor literacy, or cultural analysis—that are not identical to reading but still intellectually substantial.

    Then connect your experience to John McWhorter’s argument in “My Students Use AI. So What?” by explaining how your engagement challenges the assumption that screen-based media turns young people into passive consumers. McWhorter claims that digital content can sharpen minds by exposing viewers to stylized language, comedic timing, and creative expression; show how your chosen activity illustrates (or complicates) this point. Conclude by reflecting on whether the skills you are developing—whether from decoding layered Reddit discussions or following complex video essays—are simply different from the skills cultivated by books, or whether they offer alternative paths to intelligence that schools and parents should take seriously.

    Building Block 2: Conclusion

    Write a 250-word conclusion in which you step back from your argument and explain what your thesis reveals about the broader social implications of online entertainment. Do not summarize your paper. Instead, reflect on how your analysis has changed the way you think about digital media and your own habits as a viewer, gamer, or participant. Explain how your chosen example—whether a subreddit, a content creator, a gaming channel, or another digital space—demonstrates that online entertainment is not automatically a form of distraction or intellectual decay. Discuss how interacting with this media has trained you to interpret tone, decode humor or irony, follow complex narratives, or understand cultural signals that are easy to miss if you are not paying attention.

    Then consider what this means for society: If students are learning language, timing, persuasion, and nuance in digital environments, how should teachers, parents, and institutions respond? Should they continue to treat online entertainment as a threat to literacy, or as an alternate path to it? Draw a connection between your growth as a thinker and the larger question of where intelligence is cultivated in the 21st century. End your paragraph with a reflection on how your relationship to digital media has changed: Do you now view certain forms of online entertainment as trivial distractions, or as unexpected arenas where people practice rhetorical agility, cultural awareness, and cognitive skill?

  • The Age of Academic Anhedonia

    The Age of Academic Anhedonia

    I started teaching college writing in the 80s under the delusion that I was destined to be the David Letterman of higher education—a twenty-five-year-old ham with a chalkboard, half-professor and half–late-night stand-up. For a while, the act actually worked. A well-timed deadpan joke could mesmerize a room of eighteen-year-olds and soften their outrage when I saddled them with catastrophically ill-chosen books (Ron Rosenbaum’s Explaining Hitler—a misfire so spectacular it deserves its own apology tour). My stories carried the class, and for decades I thought the laughter was evidence of learning. If I could entertain them, I told myself, I could teach them.

    Then 2012 hit like a change in atmospheric pressure. Engagement thinned. Phones glowed. Students behaved as though they were starring in their own prestige drama, and my classroom was merely a poorly lit set. I was no longer battling boredom—I was competing with the algorithm. This was the era of screen-mediated youth, the 2010–2021 cohort raised on the oxygen of performance. Their identities were curated in Instagram grids, maintained through Snapstreaks, and measured in TikTok microfame points. The students were not apathetic; they were overstimulated. Their emotional bandwidth was spent on self-presentation, comparison loops, and the endless scoreboard of online life. They were exhausted but wired, longing for authenticity yet addicted to applause. I felt my own attention-capture lose potency, but I still recognized those students. They were distracted, yes, but still alive.

    But in 2025, we face a darker beast: the academically anhedonic student. The screen-mediated generation ran hot; this one runs cold. Around 2022, a new condition surfaced—a collapse of the internal reward system that makes learning feel good, or at least worthwhile. Years of over-curation, pandemic detachment, frictionless AI answers, and dopamine-dense apps hollowed out the very circuits that spark curiosity. This isn’t laziness; it’s a neurological shrug. These students can perform the motions—fill in a template, complete a scaffold, assemble an essay like a flat-pack bookshelf—but they move through the work like sleepwalkers. Their curiosity is muted. Their persistence is brittle. Their critical thinking arrives pre-flattened. 

    My colleagues tell me their classrooms are filled with compliant but joyless learners checking boxes on their march toward a credential. The Before-Times students wrestled with ideas. The After-Times students drift through them without contact. It breaks our hearts because the contrast is stark: what was once noisy and performative has gone silent. Academic anhedonia names that silence—a crisis not of ability, but of feeling.