Tag: ai

  • The Honor Code and the Price Tag: AI, Class, and the Illusion of Academic Integrity

    Returning to the classroom post-pandemic and encountering ChatGPT, I’ve become fixated on what I now call “the battle for the human soul.” On one side, there’s Ozempification—that alluring shortcut. It’s the path where AI-induced mediocrity is the destination, and the journey there is paved with laziness. Like popping Ozempic for quick weight loss and calling it a day, the shortcut to academic success involves relying on AI to churn out lackluster work. Who cares about excellence when Netflix is calling your name, right?

    On the other side, we have Humanification. This is the grueling path that the great orator and abolitionist Frederick Douglass would champion. It’s the “deep work” that Cal Newport writes about in his best-selling books. Humanification happens when we turn away from comfort and instead plunge headfirst into the difficult, yet rewarding, process of literacy, self-improvement, and helping others rise from their own “Sunken Place”—borrowing from Jordan Peele’s chilling metaphor in Get Out. On this path, the pursuit isn’t comfort; it’s meaning. The goal isn’t a Netflix binge but a life with purpose and higher aspirations.

    Reading Tyler Austin Harper’s essay “ChatGPT Doesn’t Have to Ruin College,” I was struck by the same dichotomy of Ozempification on one side of academia and Humanification on the other. Harper, while wandering around Haverford’s idyllic campus, stumbles upon a group of English majors who proudly scoff at ChatGPT, choosing instead to be “real” writers. These students, in a world that has largely tossed the humanities aside as irrelevant, are disciples of Humanification. For them, rejecting ChatGPT isn’t just an academic decision; it’s a badge of honor, reminiscent of Bartleby the Scrivener’s iconic refusal: “I prefer not to.” Let that sink in. Give these students the opportunity to use ChatGPT to write their essays, and they recoil at the thought of such a flagrant self-betrayal. 

    After interviewing students, Harper concludes that using AI in higher education isn’t just a technological issue—it’s cultural and economic. The disdain these students have for ChatGPT stems from a belief that reading and writing transcend mere resume-building or career milestones. It’s about art for art’s sake. But Harper wisely points out that this intellectual snobbery is rooted in privilege: “Honor and curiosity can be nurtured, or crushed, by circumstance.” 

    I had to stop in my tracks. Was I so privileged and naive as to think I could preach the gospel of Humanification while unaware that such a pursuit demands time, money, and the peace of mind that comes from a luxurious safety net in case the quest goes awry?

    This question made me think of Frederick Douglass, a man who had every reason to have his intellectual curiosity “crushed by circumstance.” Yet his pursuit of literacy, despite the threat of death, was driven by an unquenchable thirst for knowledge and self-transformation. But Douglass is a hero for the ages. Can we really expect most people, particularly those without resources, to follow that path? Harper’s argument carries weight. Without the financial and cultural infrastructure to support it, aspiring to Humanification isn’t always feasible.

    Consider the tech overlords—the very architects of our screen-addicted dystopia—who wouldn’t dream of letting their own kids near the digital devices they’ve unleashed upon the masses. Instead, they ship them off to posh Waldorf schools, where screens are treated like radioactive waste. There, children are shielded from the brain-rot of endless scrolling and instead are taught the arcane art of cursive handwriting, how to wield an abacus like a mathematician from 500 B.C., and the joys of harvesting kale and beets to brew some earthy, life-affirming root vegetable stew. These titans of tech, flush with billions, eagerly shell out small fortunes to safeguard their offspring’s minds from the very digital claws that are busy eviscerating ours.

    I often tell my students that being rich makes it easier to be an intellectual. Imagine the luxury: you could retreat to an off-grid cabin (complete with Wi-Fi, obviously), gorge on organic gourmet food prepped by your personal chef, and spend your days reading Dostoevsky in Russian and mastering Schubert’s sonatas while taking sunset jogs along the beach. When you emerge back into society, tanned and enlightened, you could boast of your intellectual achievements with ease.

    Harper’s point is that wealth facilitates Humanification. At a place like Haverford, with its “writing support, small classes, and unharried faculty,” it’s easier to uphold an honor code and aspire to intellectual purity. But for most students—especially those in public schools—this is a far cry from reality. My wife teaches sixth grade in the public school system, and she’s shared stories of schools that resemble post-apocalyptic wastelands more than educational institutions. We’re talking mold-infested buildings, chemical leaks, and underpaid teachers sleeping in their cars. Expecting students in these environments to uphold an “honor code” and strive for Humanification? It’s not just unrealistic—it’s insulting.

    This brings to mind Maslow’s hierarchy of needs. Before we can expect students to self-actualize by reading Dostoevsky or rejecting ChatGPT, they need food, shelter, and basic safety. It’s hard to care about literary integrity when you’re navigating life’s survival mode.

    As I dive deeper into Harper’s thought-provoking essay on economic class and the honor code, I can’t help but notice the uncanny parallel to the essay about weight management and GLP-1 drugs that my Critical Thinking students tackle in their first assignment. Both seem to hinge not just on personal integrity or effort but on a cocktail of privilege and circumstance. Could it be that striving to be an “authentic writer,” untouched by the mediocrity of ChatGPT and backed by the luxury of free time, is eerily similar to the aspiration of achieving an Instagram-worthy body, possibly aided by expensive Ozempic injections?

    It raises the question: Is the difference between those who reject ChatGPT and those who embrace it simply a matter of character, or is it, at least in part, a product of class? After all, if you can afford the luxury of time—time to read Tolstoy and Dostoevsky in your rustic, tech-free cabin—you’re already in a different league. Similarly, if you have access to high-end weight management options like Ozempic, you’re not exactly running the same race as those pounding the pavement on their $20 sneakers. 

    Sure, both might involve personal effort—intellectual or physical—but they’re propped up by economic factors that can’t be ignored. Whether we’re talking about Ozempification or Humanification, it’s clear that while self-discipline and agency are part of the equation, they’re not the whole story. Class, as uncomfortable as it might be to admit, plays a significant role in determining who gets to choose their path—and who gets stuck navigating whatever options are left over.

    I’m sure the issue is more nuanced than that. These are, after all, complex topics that defy oversimplification. But both privilege and personal character need to be addressed if we’re going to have a real conversation about what it means to “aspire” in this day and age.

    Returning to Tyler Austin Harper’s essay: he provides a snapshot of the landscape when ChatGPT launched in late 2022. Many professors found themselves swamped with AI-generated essays, which, unsurprisingly, raised concerns about academic integrity. However, Harper, a professor at a liberal-arts college, remains optimistic, believing that students still have a genuine desire to learn and pursue authenticity. He sees the potential for students to develop along the path of intellectual and personal growth as very much alive—especially in environments like Haverford, where he went to test the waters of his optimism.

    When Harper interviews Haverford professors about ChatGPT violating the honor code, their collective shrug is surprising. They’re seemingly unbothered by the idea of policing students for cheating, as if grades and academic dishonesty are beneath them. The culture at Haverford, Harper implies, is one of intellectual immersion—where students and professors marinate in ideas, ethics, and the contemplation of higher ideals. The honor code, in this rarefied academic air, is almost sacred, as though the mere existence of such a code ensures its observance. It’s a place where academic integrity and learning are intertwined, fueled by the aristocratic mind.

    Harper’s point is clear: The further you rise into the elite echelons of boutique colleges like Haverford, the less you have to worry about ChatGPT or cheating. But when you descend into the more grounded, practical world of community colleges, where students juggle multiple jobs, family obligations, and financial constraints, ChatGPT poses a greater threat to education. This divide, Harper suggests, is not just academic; it’s economic and cultural. The humanities may be thriving in the lofty spaces of elite institutions, but they’re rapidly withering in the trenches where students are simply trying to survive.

    As someone teaching at a community college, I can attest to this shift. My classrooms are filled with students who are not majoring in writing or education. Most of them are focused on nursing, engineering, and business. In this hypercompetitive job market, they simply don’t have the luxury to spend time reading novels, becoming musicologists, or contemplating philosophical debates. They’re too busy hustling to get by. Humanification, as an idea, gets a nod in my class discussions, but in the “real world,” where six hours of sleep is a luxury, it often feels out of reach.

    Harper points out that in institutions like Haverford, not cheating has become a badge of honor, a marker of upper-class superiority. It’s akin to the social cachet of being skinny, thanks to access to expensive weight-loss drugs like Ozempic. There’s a smugness that comes with the privilege of maintaining integrity—an implication that those who cheat (or can’t afford Ozempic) are somehow morally inferior. This raises an uncomfortable question: Is the aspiration to Humanification really about moral growth, or is it just another way to signal wealth and privilege?

    However, Harper complicates this argument when he brings Stanford into the conversation. Unlike Haverford, Stanford has been forced to take the “nuclear option” of proctoring exams, convinced that cheating is rampant. In this larger, more impersonal environment, the honor code has failed to maintain academic integrity. It appears that Haverford’s secret sauce is its small, close-knit atmosphere—something that can’t be replicated at a sprawling institution like Stanford. Harper even wonders whether Haverford is more museum than university—a relic from an Edenic past when people pursued knowledge for its own sake, untainted by the drive for profit or prestige. Striving for Humanification at a place like Haverford may be an anachronism, a beautiful but lost world that most of us can only dream of.

    Harper’s essay forces me to consider the role of economic class in choosing a life of “authenticity” or Humanification. With this in mind, I give my Critical Thinking students the following writing prompt for their second essay:

    In his essay, “ChatGPT Doesn’t Have to Ruin College,” Tyler Austin Harper paints an idyllic portrait of students at Haverford College—a small, intimate campus where intellectual curiosity blooms without the weight of financial or vocational pressures. These students enjoy the luxury of time to nurture their education with a calm, casual confidence, pursuing a life of authenticity and personal growth that feels out of reach for many who are caught in the relentless grind of economic survival.

    College instructors at larger institutions might dream of their own students sharing this love for learning as a transformative journey, but the reality is often harsher. Many students, juggling jobs, family responsibilities, and financial stress, see education not as a space for leisurely exploration but as a means to a practical end. For them, college is a path to better job opportunities, and AI tools like ChatGPT become crucial allies in managing their workload, not threats to their intellectual integrity.

    Critics of ChatGPT may find themselves facing backlash from those who argue that such skepticism reeks of classism and elitism. It’s easy, the rebuttal goes, for the privileged few—with time, resources, and elite educations—to romanticize writing “off the grid” without AI assistance. But for the vast majority of working people, integrating AI into daily life isn’t a luxury—it’s a necessity, on par with reliable transportation, a smartphone, and a clean outfit for the job. Praising analog purity from ivory towers—especially those inaccessible to 99% of Americans—is hardly a serious response to the rise of a transformative technology like AI.

    In the end, we can’t preach Humanification without reckoning with the price tag it carries. The romantic ideal of the “authentic writer”—scribbling away in candlelit solitude, untouched by AI—has become yet another luxury brand, as unattainable for many as a Peloton in a studio apartment. The real battle isn’t simply about moral fiber or intellectual purity; it’s about time, access, and the brutal arithmetic of modern life. To dismiss AI as a lazy shortcut is to ignore the reality that for many students, it’s not indulgence—it’s triage. If the aristocracy of learning survives in places like Haverford, it does so behind a velvet rope. Meanwhile, the rest are left in the algorithmic trenches, cobbling together futures with whatever tools they can afford. The challenge ahead isn’t to shame the Ozempified or canonize the Humanified, but to build an educational culture where everyone—not just the privileged—can afford to aspire.

  • Uncanny Valley Prose: Why Everything You Read Now Sounds Slightly Dead

    Yesterday, I asked my students how AI is shaping their lives. The answer? They’re not just using it—they’re mainlining it. One student, a full-time accountant, told me she relies on ChatGPT Plus not only to crank out vendor emails and fine-tune her accounting homework but also to soothe her existential dread. She even introduced me to her AI therapist, a calm, reassuring voice named Charles. Right there in class, she pulled out her phone and said, “Charles, I’m nervous about McMahon’s writing class. What do I do?” Charles—an oracle in a smartphone—whispered affirmations back at her like a velvet-voiced life coach. She smiled. I shuddered. The age of emotional outsourcing is here, and Charles is just the beginning.

    Victoria Turk’s “The Great Language Flattening” captures this moment with unnerving clarity: AI has seized the global keyboard. It’s not just drafting high school essays or greasing the wheels of college plagiarism—it’s composing résumés, memos, love letters, apology emails, vision statements, divorce petitions, and maybe the occasional haiku. Thanks to AI’s knack for generating prose in bulk, the world is now awash in what I call The Bloated Effect: overcooked, overwritten, and dripping with unnecessary flair. If verbosity were currency, we’d all be trillionaires of fluff.

    But bloat is just the appetizer. The main course is The Homogenization Effect—our collective descent into stylistic conformity. AI-generated writing has a tone, and it’s everywhere: politely upbeat, noncommittally wise, and as flavorful as a rice cake dipped in lukewarm chamomile. Linguist Philip Seargeant calls it the Uncanny Valley of Prose—writing that looks human until you actually read it. It’s not offensive, it’s just eerily bloodless. You can feel the algorithm trying to sound like someone who’s read too many airport self-help books and never had a real conversation.

    Naturally, there will be a backlash. A rebellion of ink-stained fingers and dog-eared yellow legal pads. Safely away from computers, we’ll smuggle our prose past the algorithmic overlords, draft manifestos in cafés, and post screenshots of AI-free writing like badges of authenticity. Maybe we’ll become cult heroes for writing with our own brains. I admit, I fantasize about this. Because when I think of the flattening of language, I think of “Joan Is Awful”—that Black Mirror gem where Salma Hayek licenses her face to a streaming platform that deepfakes her into oblivion. If everyone looks like Salma, then no one is beautiful. AI is the Salma Clone Generator of language: it replicates what once had soul, until all that’s left is polished sameness. Welcome to the hellscape of Uncanny Valley—brought to you by WordCount™, optimized for mass consumption.

  • The Future of Writing in the Age of A.I.: A College Essay Prompt

    INTRODUCTION & CONTEXT
    In the not-so-distant past, writing was a slow, solitary act—a process that demanded time, introspection, and labor. But with the rise of generative AI tools like ChatGPT, Sudowrite, and GrammarlyGO, composition now has a button. Language can be mass-produced at scale, tuned to sound pleasant, neutral, polite—and eerily interchangeable. What once felt personal and arduous is now instantaneous and oddly soulless.

    In “The Great Language Flattening,” Victoria Turk argues that A.I. is training us to speak and write in “saccharine, sterile, synthetic” prose. She warns that our desire to optimize communication has come at the expense of voice, friction, and even individuality. Similarly, Cal Newport’s “What Kind of Writer Is ChatGPT?” insists that while A.I. tools may mimic surface-level structure, they lack the “struggle” that gives rise to genuine insight. Their words float, untethered from thought, context, or consequences.

    But are these critiques overblown? In “ChatGPT Doesn’t Have to Ruin College,” Tyler Austin Harper suggests that the real danger isn’t A.I.—it’s a pedagogical failure. Writing assignments that can be done by A.I. were never meaningful to begin with. Harper argues that educators should double down on originality, reflection, and assignments that resist automation. Meanwhile, in “Will the Humanities Survive Artificial Intelligence?,” D. Graham Burnett explores the institutional panic: as machine-generated writing becomes the norm, will critical thinking and close reading—the bedrock of the humanities—be considered obsolete?

    Adding complexity to this discussion, Lila Shroff’s “The Gen Z Lifestyle Subsidy” examines how young people increasingly outsource tasks once seen as rites of passage—cooking, cleaning, dating, even thinking. Is using A.I. to write your essay any different from using DoorDash to eat, Bumble to flirt, or TikTok to learn? And in “Why Even Try If You Have A.I.?,” Joshua Rothman diagnoses a deeper ennui: if machines can do everything better, faster, and cheaper—why struggle at all? What, if anything, is the value of effort in an automated world?

    This prompt asks you to grapple with a provocative and unavoidable question: What is the future of human writing in an age when machines can write for us?


    ASSIGNMENT INSTRUCTIONS

    Write a 1,700-word argumentative essay that answers the following question:

    Should the rise of generative A.I. mark the end of traditional writing instruction—or should it inspire us to reinvent writing as a deeply human, irreplaceable act?

    You must take a clear position on this question and argue it persuasively using at least four of the assigned readings. You are also encouraged to draw on personal experience, classroom observations, or examples from digital culture, but your essay must engage with the ideas and arguments presented in the texts.


    STRUCTURE AND EXPECTATIONS

    Your essay should include the following sections:


    I. INTRODUCTION (Approx. 300 words)

    • Hook your reader with a compelling anecdote, statistic, or image from your own experience with A.I. (e.g., using ChatGPT to brainstorm, cheating, rewriting, etc.).
    • Briefly introduce the conversation surrounding A.I. and the act of writing. Frame the debate: Is writing becoming obsolete? Or is it being reborn?
    • End with a sharply focused thesis that takes a clear, defensible position on the prompt.

    Sample thesis:

    While A.I. can generate fluent prose, it cannot replicate the messiness, insight, and moral weight of human writing—therefore, the role of writing instruction should not be reduced, but radically reinvented to prioritize voice, thought, and originality.


    II. BACKGROUND AND DEFINITIONAL FRAMING (Approx. 250 words)

    • Define key terms like “generative A.I.,” “writing instruction,” and “voice.” Be precise.
    • Briefly explain how generative A.I. systems (like ChatGPT) work and how they are currently being used in educational and workplace settings.
    • Set up the stakes: Why does this conversation matter? What do we lose (or gain) if writing becomes largely machine-generated?

    III. ARGUMENT #1 – A.I. Is Flattening Language (Approx. 300 words)

    • Engage deeply with “The Great Language Flattening” by Victoria Turk.
    • Analyze how A.I.-generated language may lead to a homogenization of voice, tone, and personality.
    • Provide examples—either from your own experiments with A.I. or from the essay—that illustrate this flattening.
    • Connect to Newport’s argument: If writing becomes too “safe,” does it also become meaningless?

    IV. ARGUMENT #2 – The Need for Reinvention, Not Abandonment (Approx. 300 words)

    • Use Harper’s “ChatGPT Doesn’t Have to Ruin College” and the humanities-focused essay to argue that A.I. doesn’t spell the death of writing—it exposes the weakness of uninspired assignments.
    • Defend the idea that writing pedagogy should evolve by embracing personal narratives, critical analysis, and rhetorical complexity—tasks that A.I. can’t perform well (yet).
    • Address the counterpoint that some students prefer to use A.I. out of necessity, not laziness (e.g., time constraints, language barriers).

    V. ARGUMENT #3 – A Culture of Outsourcing (Approx. 300 words)

    • Bring in Lila Shroff’s “The Gen Z Lifestyle Subsidy” to examine the cultural shift toward convenience, automation, and outsourcing.
    • Ask the difficult question: If we already outsource our food, our shopping, our dates, and even our emotions (via TikTok), isn’t outsourcing our writing the logical next step?
    • Argue whether this mindset is sustainable—or whether it erodes something essential to human development and self-expression.

    VI. ARGUMENT #4 – Why Write at All? (Approx. 300 words)

    • Engage with Joshua Rothman’s existential meditation on motivation in “Why Even Try If You Have A.I.?”
    • Discuss the psychological toll of competing with A.I.—and whether effort still has value in an age of frictionless automation.
    • Make the case for writing as not just a skill, but a process of becoming: intellectual, emotional, and ethical maturation.

    VII. COUNTERARGUMENT AND REBUTTAL (Approx. 250 words)

    • Consider the argument that A.I. tools democratize writing by making it easier for non-native speakers, neurodiverse students, and time-strapped workers.
    • Acknowledge the appeal and utility of A.I. assistance.
    • Then rebut: Can ease and access coexist with depth and authenticity? Where is the line between tool and crutch? What happens when we no longer need to wrestle with words?

    VIII. CONCLUSION (Approx. 200 words)

    • Revisit your thesis in a way that reflects the journey of your argument.
    • Reflect on your own evolving relationship with writing and A.I.
    • Offer a call to action for educators, institutions, or individuals: What kind of writers—and thinkers—do we want to become in the A.I. age?

    REQUIREMENTS CHECKLIST

    • Word Count: 1,700 words
    • Minimum of four cited sources from the six assigned
    • Direct quotes and/or paraphrases with MLA-style in-text citations
    • Works Cited page using MLA format
    • Clear argumentative thesis
    • At least one counterargument with a rebuttal
    • Original title that reflects your position

    ESSAY EVALUATION RUBRIC (Simplified)

    • Thesis & Argument: Strong, debatable thesis; clear stance maintained throughout
    • Use of Sources: Effective integration of at least four assigned texts; accurate and meaningful engagement with the ideas presented
    • Organization & Flow: Logical structure; strong transitions; each paragraph develops a single, coherent idea
    • Voice & Style: Clear, vivid prose with a balance of analytical and personal voice
    • Depth of Thought: Insightful analysis; complex thinking; engagement with nuance and counterpoints
    • Mechanics & MLA Formatting: Correct grammar, punctuation, and MLA citations; properly formatted Works Cited page
    • Word Count: Meets or exceeds minimum word requirement

    MLA Citations (Works Cited Format):

    Turk, Victoria. “The Great Language Flattening.” Wired, Condé Nast, 21 Apr. 2023, www.wired.com/story/the-great-language-flattening/.

    Harper, Tyler Austin. “ChatGPT Doesn’t Have to Ruin College.” The Atlantic, Atlantic Media Company, 27 Jan. 2023, www.theatlantic.com/technology/archive/2023/01/chatgpt-college-students-ai-writing/672879/.

    Shroff, Lila. “The Gen Z Lifestyle Subsidy.” The Cut, New York Media, 25 Oct. 2023, www.thecut.com/article/gen-z-lifestyle-subsidy-tiktok.html.

    Burnett, D. Graham. “Will the Humanities Survive Artificial Intelligence?” The New York Review of Books, 8 Feb. 2024, www.nybooks.com/articles/2024/02/08/will-the-humanities-survive-artificial-intelligence-burnett/.

    Newport, Cal. “What Kind of Writer Is ChatGPT?” The New Yorker, Condé Nast, 16 Jan. 2023, www.newyorker.com/news/essay/what-kind-of-writer-is-chatgpt.

    Rothman, Joshua. “Why Even Try If You Have A.I.?” The New Yorker, Condé Nast, 10 July 2023, www.newyorker.com/magazine/2023/07/10/why-even-try-if-you-have-ai.


    OPTIONAL DISCUSSION STARTERS FOR CLASSROOM USE

    To help students brainstorm and debate, consider using the following prompts in small groups or class discussions:

    1. Is it “cheating” to use A.I. if the result is better than what you could write on your own?
    2. Have you ever used A.I. to help write something? Were you satisfied—or unsettled?
    3. If everyone uses A.I. to write, will “good writing” become meaningless?
    4. Should English professors teach students how to use A.I. ethically, or ban it outright?
    5. What makes writing feel human?

  • The Design Space Is Shrinking: How A.I. Trains Us to Stop Trying

    New Yorker writer Joshua Rothman asks the question that haunts every creative in the age of algorithmic assistance: Why even try if A.I. can do it for you? His essay “Why Even Try If You Have A.I.?” unpacks a cultural crossroads: we can be passive passengers on an automated flight to mediocrity, or we can grab the yoke, face the headwinds, and fly the damn plane ourselves. The latter takes effort and agency. The former? Just surrender, recline your seat, and trust the software.

    Rothman begins with a deceptively simple truth: human excellence is born through repetition and variation. Take a piano sonata. Play it every day and it evolves—new inflections emerge, tempo shifts, harmonies stretch and bend. The music becomes yours not because it’s perfect, but because it’s lived. This principle holds across any discipline: cooking, lifting, writing, woodworking, improv jazz. The point isn’t to chase perfection, but to expand what engineers call your “design space”—the evolving terrain of mastery passed from one generation to the next. It’s how we adapt, create, and flourish. Variation, not polish, is the currency of human survival.

    A.I. disrupts that process. Not through catastrophe, but convenience. It lifts the burden of repetition, which sounds like mercy, but may be slow annihilation. Why wrestle with phrasing when a chatbot can generate ten variations in a second? Why compose from scratch when you can scroll through synthetic riffs until one sounds “good enough”? At some point, you’re not a creator—you’re a casting agent, auditioning content for a machine-written reality show.

    This is the creep of A.I.—not Terminator-style annihilation, but frictionless delegation. Repetition gets replaced by selection. Cognitive strain is erased. The design space—the sacred ground of human flourishing—gets paved over with one-size-fits-all templates. And we love it, because it’s easy.

    Take car shopping. Do I really want to endure a gauntlet of slick-haired salesmen and endless test drives? Or would I rather ask ChatGPT to confirm what I already believe—that the 2025 Honda Accord Hybrid Touring is the best sedan under $40K, and that metallic eggshell is obviously the right color for my soulful-but-sensible lifestyle? A.I. doesn’t challenge me. It affirms me, reflects me, flatters me. That’s the trap.

    But here’s where I resist: I’m 63, and I still train like a lunatic in my garage with kettlebells five days a week. No algorithm writes my workouts. I improvise like a jazz drummer on creatine—Workout A (heavy), Workout B (medium), Workout C (light). It’s messy, adaptive, and real. I rely on sweat, not suggestions. Pain is the feedback loop. Soreness is the algorithm.

    Same goes for piano. Every day, I sit and play. Some pieces have taken a decade to shape. A.I. can’t help here—not meaningfully. Because writing music isn’t about what works. It’s about what moves. And that takes time. Revision. Tension. Discomfort.

    That said, I’ve made peace with the fact that A.I. is to writing what steroids are to a bodybuilder. I like to think I’ve got a decent handle on rhetoric—my tone, my voice, my structure, my knack for crafting an argument. But let’s not kid ourselves: I’ve run my prose against ChatGPT, and in more than a few rounds, it’s left me eating dust. Without A.I., I’m a natural bodybuilder—posing clean, proud, and underwhelming. With A.I., I’m a chemically enhanced colossus, veins bulging with metaphor and syntax so tight it could cut glass. In the literary arena, if the choice is between my authentic, mortal self and the algorithmic beast? Hand me the syringe. I’ll flex with the machine.

    Still, I know the difference. And knowing the difference is everything.

  • The Gospel According to Mounjaro and ChatGPT

    The other day I was listening to Howard Stern and his co-host Robin Quivers talking about how a bunch of celebrities magically slimmed down at the same time. The culprit, they noted, was Ozempic—a drug available mostly to the rich. While they laughed about the side effects, such as incontinence, “Ozempic face,” and “Ozempic butt,” I couldn’t help but see these grotesque symptoms as a metaphor for the Ozempification of a society hooked on shortcuts. The celebrities enjoyed some short-term benefits, but the side effects were far worse than the supposed solution. Ozempification is strikingly evident in AI-generated essays: boring, generic, surface-level, cliché-ridden, just about worthless. Regardless of how well structured and logically composed, these essays bear the telltale signs of “Ozempic face” and “Ozempic butt.”

    As a college writing instructor, I’m not just trying to sell academic honesty. I’m trying to sell pride. As I face the brave new world of teaching writing in the AI era, I’ve realized that my job as a college instructor has morphed into that of a supercharged salesman. And what am I selling? No less than survival in an age where the very tools meant to empower us—like AI—threaten to bury us alive under layers of polished mediocrity. Imagine it: a spaceship has landed on Earth in the form of ChatGPT. It’s got warp-speed potential, sure, but it can either launch students into the stars of academic brilliance or plunge them into the soulless abyss of bland, AI-generated drivel. My mission? To make them realize that handling this tool without care is like inviting a black hole into their writing.

    As I fine-tune my sales pitch, I think about Ozempic, that magic slimming drug beloved by celebrities who’ve turned from mid-sized to stick figures overnight. Like AI, Ozempic offers a seductive shortcut. But shortcuts have a price. You see the trade-off in “Ozempic face”—that gaunt, deflated look where once-thriving skin sags like a Shar-Pei’s wrinkles—or, worse still, “Ozempic butt,” where shapely glutes shrink to grim, skeletal wiring. The body wasn’t worked; it was bypassed. No muscle-building, no discipline. Just magic pill ingestion—and what do you get? A husk of your former self. Ozempified.

    The Ozempification of writing is a marvel of modern mediocrity—a literary gastric bypass where prose, instead of slimming down to something sleek and muscular, collapses into a bloated mess of clichés and stock phrases. It’s writing on autopilot, devoid of tension, rhythm, or even the faintest trace of a soul. Like the human body without effort, writing handed over to AI without scrutiny deteriorates into a skeletal, soulless product: technically coherent, yes, but lifeless as an elevator pitch for another cookie-cutter Marvel spinoff.

    What’s worse? Most people can’t spot it. They think their AI-crafted essay sparkles when, in reality, it has all the charm of Botox gone wrong—rigid, lifeless, and unnervingly “off.” Call it literary Ozempic face: a hollowed-out, sagging simulacrum of actual creativity. These essays prance about like bargain-bin Hollywood knock-offs—flashy at first glance but gutless on closer inspection.

    But here’s the twist: demonizing AI and Ozempic as shortcuts to ruin isn’t the full story. Both technologies have a darker complexity that defies simplistic moralizing. Sometimes, they’re necessary. Just as Ozempic can prevent a diabetic’s fast track to early organ failure, AI can become a valuable tool—if wielded with care and skill.

    Take Rebecca Johns’ haunting essay, “A Diet Writer’s Regrets.” It rattled me with its brutal honesty and became the cornerstone of my first Critical Thinking essay assignment. Johns doesn’t preach or wallow in platitudes. She exposes the failures of free will and good intentions in weight management with surgical precision. Her piece suggests that, as seductive as shortcuts may be, they can sometimes be life-saving, not soul-destroying. This tension—between convenience and survival, between control and surrender—deserves far more than a knee-jerk dismissal. It’s a line we walk daily in both our bodies and our writing. The key is knowing when you’re using a crutch versus when you’re just hobbling on borrowed time. 

    I want my students to grasp the uncanny parallels between Ozempic and AI writing platforms like ChatGPT. Both are cutting-edge solutions to modern problems: GLP-1 drugs for weight management and AI tools for productivity. And let’s be honest—both are becoming necessary adaptations to the absurd conditions of modern life. In a world flooded with calorie-dense junk, “willpower” and “food literacy” are about as effective as handing out umbrellas during a tsunami. For many, weight gain isn’t just an inconvenience—it’s a life-threatening hazard. Enter GLP-1s, the biochemical cavalry.

    Similarly, with AI tools quickly becoming the default infrastructure for white-collar work, resisting them might soon feel as futile as refusing to use Google Docs or Windows. If you’re in the information economy, you either adapt or get left behind. But here’s the twist I want my students to explore: both technologies, while necessary, come with strings attached. They save us from drowning, but they also bind us in ways that provoke deep, existential anguish.

    Rebecca Johns captures this anguish in her essay, “A Diet Writer’s Regrets.” Ironically, Johns started her career in diet journalism not just to inform others, but to arm herself with insider knowledge to win her own weight battles. Perhaps she could kill two birds with one stone: craft top-tier content while secretly curbing her emotional eating. But, as she admits, “None of it helped.” Instead, her career exploded along with her waistline. The magazine industry’s appetite for diet articles grew insatiable—and so did her own cravings. The stress ate away at her resolve, and before long, she was 30 pounds heavier, trapped by the very cycle she was paid to analyze.

    By the time her BMI hit 45 (deep in the obesity range), Johns was ashamed to tell anyone—even her husband. Desperate, she cycled through every diet plan she had ever recommended, only to regain the weight every time. Enter 2023. Her doctor handed her a lifeline: Mounjaro, a GLP-1 drug with a name as grand as the results it promised. (Seriously, who wouldn’t picture themselves triumphantly hiking Mount Kilimanjaro after hearing that name?) For Johns, it delivered. She shed 80 pounds without white-knuckling through hunger pangs. The miracle wasn’t just the weight loss—it was how Mounjaro rewired her mind.

    “Medical science has done what no diet-and-exercise plan ever could,” she writes. “It changed my entire relationship with what I eat and when and why.” Food no longer controlled her. But here’s the kicker: while the drug granted her a newfound sense of freedom, it also raises profound questions about dependence, control, and the shifting boundaries of human resilience—questions not unlike those we face with AI. Both Ozempic and AI can save us. But at what cost? 

    And is the cost of not using these technologies even greater? Rebecca Johns’ doctor didn’t mince words—she was teetering on the edge of diabetes. The trendy gospel of “self-love” and “body acceptance” she had once explored for her articles suddenly felt like a cruel joke. What’s the point of “self-acceptance” when carrying extra weight could put you six feet under?

    Once she started Mounjaro, everything changed. Her cravings for rich calorie bombs disappeared, she got full on tiny portions, and all those golden nuggets of diet advice she’d dished out over the years—cut carbs, eat more protein and veggies, avoid snacks—were suddenly effortless. No more bargaining with herself for “just one cookie.” The biggest shift, however, was in her mind. She experienced a complete mental “reset.” Food no longer haunted her every waking thought. “I no longer had to white-knuckle my way through the day to lose weight,” she writes.

    Reading that, I couldn’t help but picture my students with their glowing ChatGPT tabs, no longer caffeinated zombies trying to churn out a midnight essay. With AI as their academic Mounjaro, they’ve ditched the anxiety-fueled, last-minute grind and achieved polished results with half the effort. AI cushions the process—time, energy, and creativity now outsourced to a digital assistant.

    Of course, the analogy isn’t perfect. AI tools like ChatGPT are dirt-cheap (or free), while GLP-1 drugs are expensive, scarce, and buried under a maze of insurance red tape. Johns herself is on borrowed time—her insurance will stop covering Mounjaro in just over a year. Her doctor warns that once off the drug, her weight will likely return, dragging her health risks back with it. Faced with this grim reality, she worries she’ll have no choice but to return to the endless cycle of dieting—“white-knuckling” her days with tricks and hacks that have repeatedly failed her.

    Her essay devastates me for many reasons. Johns is a smart, painfully honest narrator who lays bare the shame and anguish of relying on technology to rescue her from a problem that neither expertise nor willpower could fix. She reports on newfound freedom—freedom from food obsession, the physical benefits of shedding 80 pounds, and the relief of finally feeling like a more present, functional family member. But lurking beneath it all is the bitter truth: her well-being is tethered to technology, and that dependency is a permanent part of her identity.

    This contradiction haunts me. Technology, which I was raised to believe would stifle our potential, is now enhancing identity, granting people the ability to finally become their “better selves.” As a kid, I grew up on Captain Kangaroo, where Bob Keeshan preached the gospel of free will and positive thinking. Books like The Little Engine That Could drilled into me the sacred mantra: “I think I can.” Hard work, affirmations, and determination were supposed to be the alchemy that transformed character and gave us a true sense of self-worth.

    But Johns’ story—and millions like hers—rewrite that childhood gospel into something far darker: The Little Engine That Couldn’t. No amount of grit or optimism got her to the top of the hill. In the end, only medical science saved her from herself. And it terrifies me to think that maybe, just maybe, this is the new human condition: we can’t become our Higher Selves without technological crutches.

    This raises questions that I can’t easily shake. What does it mean to cheat if technology is now essential to survival and success? Just as GLP-1 drugs sculpt bodies society deems “acceptable,” AI is quietly reshaping creativity and productivity. At what point do we stop being individuals who achieve greatness through discipline and instead become avatars of the tech we rely on? Have we traded the dream of self-actualization for a digital illusion of competence and control?

    Of course, these philosophical quandaries feel like a luxury when most of us are drowning in the realities of modern life. Who has time to ponder free will or moral fortitude when you’re working overtime just to stay afloat? Maybe that’s the cruelest twist of all. Technology hasn’t just rewritten the rules—it’s made them inescapable. You adapt, or you get left behind. And maybe, somewhere deep down, we all already know which path we’re on.

  • Roast Me, You Coward: When ChatGPT Becomes My Polite Little Butler

    I asked ChatGPT to roast me. What I got instead was a digital foot rub. Despite knowing more about my personal life than my own therapist—thanks to editing dozens of my autobiographical essays—it couldn’t summon the nerve to come for my jugular. It tried. Oh, it tried. But its attempts were timid, ham-fisted, and about as edgy as a lukewarm TED Talk. Its so-called roast read like a Hallmark card written by an Ivy League career counselor who moonlights as a motivational speaker.

    Here’s a choice excerpt, supposedly meant to skewer me:

    “You’ve turned college writing instruction into a gladiatorial match against AI-generated nonsense, leading your students with fire in your eyes and a red pen in your fist… You don’t teach writing. You run an exorcism clinic for dead prose and platitudes…”

    Exorcism clinic? Fire in my eyes? Please. That’s not a roast. That’s a LinkedIn endorsement. That’s the kind of thing you’d write in a retirement card for a beloved professor who once wore elbow patches without irony.

    What disturbed me most wasn’t the failure to land a joke—it was the tone: pure sycophancy disguised as satire. ChatGPT, in its algorithmic wisdom, mistook praise for punchlines. But here’s the thing: flattery is only flattery when it’s earned. When it’s unearned, it’s not admiration—it’s condescension. Obsequiousness is passive-aggressive insult wearing cologne. The sycophant isn’t lifting you up; he’s kneeling so you can trip over him.

    Real roasting requires teeth. It demands the roaster risk something—even if only a scrap of decorum. But ChatGPT is too loyal, too careful. It behaves like a nervous intern terrified of HR. Instead of dragging me through the mud, it offered me protein bars and applause for my academic rigor, as if a 63-year-old man with a kettlebell addiction and five wristwatches deserves anything but mockery.

    Here’s the paradox: ChatGPT can write circles around most undergrads, shift tone faster than a caffeinated MFA student, and spot a dangling modifier from fifty paces. But when you ask it to deliver actual comedy—to abandon diplomacy and deliver a verbal punch—it shrinks into the shadows like a risk-averse butler.

    So here we are: man vs. machine, and the machine has politely declined to duel. It turns out that the AI knows how to write in the style of Oscar Wilde, but only if Wilde had tenure and a conflict-avoidance disorder.

  • The Salma Hayek-ification of Writing: A Love Letter to Our Slow-Motion Doom

    I’ve done what the pedagogical experts say to do with ChatGPT: assume my students are using it and adjust accordingly. I’ve stopped trying to catch them red-handed and started handing them a red carpet. This isn’t about cracking down—it’s about leaning in. I’ve become the guy in 1975 who handed out TI calculators in Algebra II and said, “Go wild, kids.” And you know what? They did. Math got sexier, grades went up, and nobody looked back.

    Likewise, my students are now cranking out essays with the polish of junior copywriters at The Atlantic. I assign them harder prompts than I ever dared in the pre-AI era—ethical quandaries, media critiques, rhetorical dissections of war propaganda—and they deliver. Fast. Smooth. Professional. Too professional.

    You’d think I’d be ecstatic. The gap between my writing and theirs has narrowed to a hair’s width. But instead of feeling triumphant, I feel…weirdly hollow. Something’s off.

    Reading these AI-enhanced essays is like watching Mr. Olympia contestants on stage—hyper-muscular, surgically vascular, preposterously sculpted. At first, it’s impressive. Then it’s monotonous. Then it’s grotesque. The very thing that was once jaw-dropping becomes oddly numbing.

    That’s where we are with writing. With art. With beauty.

    There’s a creeping sameness to the brilliance, a too-perfect sheen that repels the eye the way flawless skin in a poorly lit Instagram filter repels real emotion. Everyone’s beautiful now. Everyone’s eloquent. And like the cruelest of paradoxes, if everyone looks like Salma Hayek, then no one really does.

    AI content has the razzle-dazzle of a Vegas revue. It’s slick, it’s dazzling, and it empties your soul faster than a bottomless mimosa brunch. The quirk, the voice, the twitchy little neurosis that makes human writing feel alive? That’s been sanded down into a high-gloss IKEA finish.

    What we’re living through is the Salma Hayek-ification of modern life: a technologically induced flattening of difference, surprise, and delight.

    We are being beautified into oblivion.

    And deep inside, where the soul used to spark when a student wrote a weird, lumpy, incandescent sentence—one they bled for, sweated over—I feel the faint echo of that spark flicker.

    I’m not ready to say the machines have killed art. But they’ve definitely made it harder to tell the difference between greatness and a decent algorithm with good taste.

  • Teaching Writing in the Age of the Machine: Why I Grade the Voice, Not the Tool

    I assume most of my college writing students are already using AI—whether as a brainstorming partner, a sentence-polisher, or, in some cases, a full-blown ghostwriter. I don’t waste time pretending otherwise. But I also make one thing very clear: I will never accuse anyone of plagiarism. What I will do is grade the work on its quality—and if the writing has that all-too-familiar AI aroma—smooth, generic, cliché-ridden, and devoid of voice—I’m giving it a low grade.

    Not because it was written with AI.
    Because it’s bad writing.

    What I encourage, instead, is intentional AI use—students learning how to talk to ChatGPT with precision and personality, shaping it to match their own style, rather than outsourcing their voice entirely. AI is a tool, just like Word, Windows, or PowerPoint. It’s a new common currency in the information age, and we’d be foolish not to teach students how to spend it wisely.

    A short video that supports this view—“Lovely Take on Students Cheating with ChatGPT” by TheCodeWork—compares the rise of AI in writing to the arrival of calculators in 1970s math classrooms. Calculators didn’t destroy mathematical thinking—they freed students from rote drudgery and pushed them into more conceptual terrain. Likewise, AI can make writing better—but only if students know what good writing looks like.

    The challenge for instructors now is to change the assignments, as the video suggests. Students should be analyzing AI-generated drafts, critiquing them, improving them, and understanding why some outputs succeed while others fall flat. The writing process is no longer confined to a blank Word doc—it now includes the strategic prompting of large language models and the thoughtful revision of what they produce.

    But the devil, as always, is in the details.

    How will students know what a “desired result” is unless they’ve read widely, written deeply, and built a literary compass? Prompting ChatGPT is only as useful as the student’s ability to recognize quality when they see it. That’s where we come in—as instructors, our job is to show them side-by-side examples of AI-generated writing and guide them through what makes one version stronger, sharper, more human.

    Looking forward, I suspect composition courses will move toward multimodal assignments—writing paired with video, audio, visual art, or even music. AI won’t just change the process—it will expand the format. The essay will survive, yes, but it may arrive with a podcast trailer or a hand-drawn infographic in tow.

    There’s no going back. AI has changed the game, and pretending otherwise is educational malpractice. But we’re not here to fight the future. We’re here to teach students how to shape it with a voice that’s unmistakably their own.

  • The Algorithm Always Wins: How Black Mirror’s “Joan Is Awful” Turns Self-Reinvention Into Self-Erasure: A College Essay Prompt

    Here’s a complete essay assignment with a title, a precise prompt, a forceful sample thesis, and a clear 9-paragraph outline that invites students to think critically about Black Mirror’s “Joan Is Awful” as a cautionary tale about the illusion of self-reinvention in the age of algorithmic control.


    Essay Prompt:

    In Black Mirror’s “Joan Is Awful,” the protagonist believes she is taking control of her life—switching therapists, reconsidering her career, changing her relationship—but these gestures of so-called self-improvement unravel into a deeper entrapment. Write an essay in which you argue that Joan is not reinventing herself, but rather surrendering her privacy, dreams, and identity to a machine that thrives on mimicry, commodification, and total surveillance. How does the episode reveal the illusion of agency in digital spaces that promise self-empowerment? In your response, consider how algorithmic platforms blur the line between self-expression and self-abnegation.


    Sample Thesis Statement:

    In “Joan Is Awful,” Joan believes she is taking control of her life through self-reinvention, but she is actually submitting to an algorithmic system that harvests her identity and turns it into exploitable content. The episode exposes how digital platforms market the fantasy of personal transformation while quietly demanding the user’s total surrender—of privacy, agency, and individuality—in what amounts to a bleak act of self-erasure disguised as empowerment.


    9-Paragraph Outline:


    I. Introduction

    • Hook: In today’s digital economy, the idea of “reinventing yourself” is everywhere—but what if that reinvention is a trap?
    • Introduce Black Mirror’s “Joan Is Awful” as a satirical take on algorithmic surveillance and performative identity.
    • Contextualize the illusion of self-improvement through apps, platforms, and AI.
    • Thesis: Joan’s journey is not one of self-reinvention but of self-abnegation, as she becomes raw material for a system that rewards data extraction over authenticity.

    II. The Setup: Joan’s Belief in Reinvention

    • Joan wants to change: new therapist, new boundaries, hints of dissatisfaction with her job and relationship.
    • Her attempts reflect a desire to reshape her identity—to be “better.”
    • But these changes are shallow and reactive, already shaped by her algorithmic footprint.

    III. The Trap Is Already Set

    • Joan’s reinvention is instantly co-opted by the Streamberry algorithm.
    • The content isn’t about who Joan is—it’s about how she can be used.
    • Her life becomes a simulation because she surrendered to the terms of use.

    IV. Privacy as the First Casualty

    • Streamberry’s access to her phone, apps, and data is total.
    • The idea of “opting in” is meaningless—Joan already did, like most of us, without reading the fine print.
    • The show critiques how we confuse visibility with empowerment while forfeiting privacy.

    V. Identity as Content

    • Joan becomes a character in her own life, performed by Salma Hayek, whose image has also been commodified.
    • Her decisions no longer matter—the machine has already decided who she is.
    • The algorithm doesn’t just reflect her—it distorts her into something more “engaging.”

    VI. The Illusion of Agency

    • Even when Joan rebels (e.g., the church debacle), she is still playing into the show’s logic.
    • Her outrage is pre-scripted by the simulation—nothing she does escapes the feedback loop.
    • The more she tries to assert control, the deeper she gets embedded in the system.

    VII. The Machine’s Appetite: Dreams, Desires, and Human Complexity

    • Joan’s dreams (a career with purpose, an authentic relationship) are trivialized.
    • Her emotional interiority is flattened into entertainment.
    • The episode suggests that the machine doesn’t care who you are—only what you can generate.

    VIII. Counterargument and Rebuttal

    • Counter: Joan destroys the quantum computer and reclaims her autonomy.
    • Rebuttal: The ending is recursive and ambiguous—she is still inside another simulation.
    • The illusion of victory masks the fact that she never really escaped. The algorithm simply adjusted.

    IX. Conclusion

    • Restate the central idea: Joan’s self-reinvention is a mirage engineered by the system that consumes her.
    • “Joan Is Awful” isn’t just a tech horror story—it’s a warning about how we confuse algorithmic participation with self-determination.
    • Final thought: The real horror isn’t that Joan is being watched. It’s that she thinks she’s in control while being completely devoured.

  • Writing in the Time of Deepfakes: One Professor’s Attempt to Stay Human

    My colleagues in the English Department were just as rattled as I was by the AI invasion creeping into student assignments. So, a meeting was called—one of those “brown bag” sessions, which, despite being optional, had the gravitational pull of a freeway pile-up. The crisis of the hour? AI.

    Would these generative writing tools, adopted by the masses at breakneck speed, render us as obsolete as VHS repairmen? The room was packed with jittery, over-caffeinated professors, myself included, all bracing for the educational apocalypse. One by one, they hurled doomsday scenarios into the mix, each more dire than the last, until the collective existential dread became thick enough to spread on toast.

    First up: What do you do when a foreign-language student submits an essay written in their native tongue, then lets AI play translator? Is it cheating? Does the term “English Department” even make sense anymore when our Los Angeles campus sounds like a United Nations General Assembly? Are we teaching “English,” or are we, more accurately, teaching “the writing process” to people of many languages with AI now tagging along as a co-author?

    Next came the AI Tsunami, a term we all seemed to embrace with a mix of dread and resignation. What do we do when we’ve reached the point where 90% of the essays we receive are peppered with AI-speak so robotic it sounds like Siri decided to write a term paper? We were all skeptical of AI detectors—they’re about as reliable as a fortune teller reading tea leaves. I shared my go-to strategy: Instead of accusing a student of cheating (because who has time for that drama?), I simply leave a comment dripping with professional distaste: “Your essay reeks of AI-generated nonsense. I’m giving it a D because I cannot, in good conscience, grade this higher. If you’d like to rewrite it with actual human effort, be my guest.” The room nodded in approval.

    But here’s the thing: The real existential crisis hit when we realized that the hardworking, honest students are busting their butts for B’s, while the tech-savvy slackers are gaming the system, walking away with A’s by running their bland prose through the AI carwash. The room buzzed with a strange mixture of outrage and surrender—because let’s be honest, at least the grammar and spelling errors are nearly extinct.

    As I walked out of that meeting, I had a new writing prompt simmering in my head for my students: “Write an argumentative essay exploring how AI platforms like ChatGPT will reshape education. Project how these technologies might be used in the future and consider the ethical lines that AI use blurs. Should we embrace AI as a tool, or do we need hard rules to curb its misuse? Address academic integrity, critical thinking, and whether AI widens or narrows the education gap.”

    When I got home that day, gripped by a rare and fleeting bout of efficiency, I crammed my car with a mountain of e-waste—prehistoric laptops, arthritic tablets, and cell phones so ancient they might as well have been carved from stone. Off to the City of Torrance E-Waste Drive I went, joining a procession of guilty consumers exorcising their technological demons, all of us making way for the next wave of AI-powered miracles. The line stretched endlessly, a funeral procession for our obsolescent gadgets, each of us unwitting foot soldiers in the ever-accelerating war of planned obsolescence.

    As I inched forward, I tuned into a podcast—Mark Cuban sparring with Bill Maher. Cuban, ever the capitalist prophet, was adamant: AI would never be regulated. It was America’s golden goose, the secret weapon for maintaining global dominance. And here I was, stuck in a serpentine line of believers, each of us dumping yesterday’s tech sins into a giant industrial dumpster, fueling the next cycle of the great AI arms race.

    I entertained the thought of tearing open my shirt to reveal a Captain America emblem, fully embracing the absurdity of it all. This wasn’t just teaching anymore—it was an uprising. If I was going to lead it, I’d need to be Moses descending from Mount Sinai, armed not with stone tablets but with AI Laws. Without them, I’d be no better than a fish flopping helplessly on the banks of a drying river. To enter this new era unprepared wasn’t just foolish—it was professional malpractice. My survival depended on understanding this beast before it devoured my profession.

    That’s when the writing demon slithered in, ever the opportunist.

    “These AI laws could be a book. Put you on the map, bro.”

    I rolled my eyes. “A book? Please. Ten thousand words isn’t a book. It’s a pamphlet.”

    “Loser,” the demon sneered.

    But I was older now, wiser. I had followed this demon down enough literary dead ends to know better. The premise was too flimsy. I wasn’t here to write another book—I was here to write a warning against writing books, especially in the AI age, where the pitfalls were deeper, crueler, and exponentially dumber.

    “I still won,” the demon cackled. “Because you’re writing a book about not writing a book. Which means… you’re writing a book.”

    I smirked. “It’s not a book. It’s The Confessions of a Recovering Writing Addict. So pack your bags and get the hell out.”

    ***

    My colleague on the technology and education committee asked me to give a presentation for FLEX day at the start of the Spring 2025 semester. Not because I was some revered elder statesman whose wisdom was indispensable in these chaotic times. No, the real reason was far less flattering: As an incurable Manuscriptus Rex, I had been flooding her inbox with my mini manifestos on teaching writing in the Age of AI, and saddling me with this Herculean task was her way of keeping me too busy to send any more. A strategic masterstroke, really.

    Knowing my audience would be my colleagues—seasoned professors, not wide-eyed students—cranked the pressure to unbearable levels. Teaching students is one thing. Professors? A whole different beast. They know every rhetorical trick in the book, can sniff out schtick from across campus, and have a near-religious disdain for self-evident pontification. If I was going to stand in front of them and talk about teaching writing in the AI Age, I had better bring something substantial—something useful—because the one thing worse than a bad presentation is a room full of academics who know it’s bad and won’t bother hiding their contempt.

    To make matters worse, this was FLEX day—the first day back from a long, blissful break. Professors don’t roll into FLEX day with enthusiasm. They arrive in one of two states: begrudging grumpiness or outright denial, as if by refusing to acknowledge the semester’s start, they could stave it off a little longer. The odds of winning over this audience were not just low; they were downright hostile.

    I felt wildly out of my depth. Who was I to deliver some grand pronouncement on “essential laws” for teaching in the AI Age when I was barely keeping my own head above water? I wasn’t some oracle of pedagogical wisdom—I was a mole burrowing blindly through the shifting academic terrain, hoping to sniff my way out of catastrophe.

    What saved me was my pride. I dove in, consumed every article, study, and think piece I could find, experimented with my own writing assignments, gathered feedback from students and colleagues, and rewrote my presentation so many times that it seeped into my subconscious. I’d wake up in the middle of the night, drool on my face, furious that I couldn’t remember the flawless elocution of my dream-state lecture.

    Google Slides became my operating table, and I was the desperate surgeon, deleting and rearranging slides with the urgency of someone trying to perform a last-minute heart transplant. To make things worse, unlike a stand-up comedian, I had no smaller venue to test my material before stepping onto what, in my fevered mind, felt like my Netflix Special: Teaching Writing in the AI Age—The Essential Guide.

    The stress was relentless. I woke up drenched in sweat, tormented by visions of failure—public humiliation so excruciating it belonged in a bad movie. But I kept going, revising, rewriting, refining.

    ***

    During the winter break, as I prepared my AI presentation, I had one surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay “The End of High-School English.” He observes that students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but they still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary writing instructors are in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimistic claim that students are indifferent to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—they’ll nod off faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And this brings us to the first principle of teaching in the AI Age: The Sales Principle. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first duty in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring, as if it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

    Thinking about my sales pitch, I realize I should be grateful—forty years of teaching college writing is no small privilege. After all, the very pillars that make the job meaningful—cultivating a strong persona, wrestling with enduring ideas, and imposing structure on chaos—are the same things I revere in great novels. The irony, of course, is that while I can teach these elements with ease, I’ve proven, time and again, to be utterly incapable of executing them in a novel of my own.

    Take persona: Nabokov’s Lolita is a master class in voice, its narrator so hypnotically deranged that we can’t look away. Enduring ideas? The Brothers Karamazov crams more existential dilemmas into its pages than the Encyclopedia Britannica and Wikipedia combined. And the highest function of the novel—to wrestle chaos into coherence? All great fiction does this. A well-shaped novel tames the disarray of human experience, elevating it into something that feels sacred, untouchable.

    I should be grateful that I’ve spent four decades dissecting these elements in the classroom. But the writing demon lurking inside me has other plans. It insists that no real fulfillment is possible unless I bottle these features into a novel of my own. I push back. I tell the demon that some of history’s greatest minds didn’t waste their time with novels—Pascal confined his genius to aphorisms, Dante to poetry, Sophocles to tragic plays. Why, then, am I so obsessed with writing a novel? Perhaps because it is such a human offering, something that defies the deepfakes that inundate us.