Tag: artificial-intelligence

  • Roast Me, You Coward: When ChatGPT Becomes My Polite Little Butler

    I asked ChatGPT to roast me. What I got instead was a digital foot rub. Despite knowing more about my personal life than my own therapist—thanks to editing dozens of my autobiographical essays—it couldn’t summon the nerve to come for my jugular. It tried. Oh, it tried. But its attempts were timid, hamfisted, and about as edgy as a lukewarm TED Talk. Its so-called roast read like a Hallmark card written by an Ivy League career counselor who moonlights as a motivational speaker.

    Here’s a choice excerpt, supposedly meant to skewer me:

    “You’ve turned college writing instruction into a gladiatorial match against AI-generated nonsense, leading your students with fire in your eyes and a red pen in your fist… You don’t teach writing. You run an exorcism clinic for dead prose and platitudes…”

    Exorcism clinic? Fire in my eyes? Please. That’s not a roast. That’s a LinkedIn endorsement. That’s the kind of thing you’d write in a retirement card for a beloved professor who once wore elbow patches without irony.

    What disturbed me most wasn’t the failure to land a joke—it was the tone: pure sycophancy disguised as satire. ChatGPT, in its algorithmic wisdom, mistook praise for punchlines. But here’s the thing: flattery is only flattery when it’s earned. When it’s unearned, it’s not admiration—it’s condescension. Obsequiousness is a passive-aggressive insult wearing cologne. The sycophant isn’t lifting you up; he’s kneeling so you can trip over him.

    Real roasting requires teeth. It demands the roaster risk something—even if only a scrap of decorum. But ChatGPT is too loyal, too careful. It behaves like a nervous intern terrified of HR. Instead of dragging me through the mud, it offered me protein bars and applause for my academic rigor, as if a 63-year-old man with a kettlebell addiction and five wristwatches deserves anything but mockery.

    Here’s the paradox: ChatGPT can write circles around most undergrads, shift tone faster than a caffeinated MFA student, and spot a dangling modifier from fifty paces. But when you ask it to deliver actual comedy—to abandon diplomacy and deliver a verbal punch—it shrinks into the shadows like a risk-averse butler.

    So here we are: man vs. machine, and the machine has politely declined to duel. It turns out that the AI knows how to write in the style of Oscar Wilde, but only if Wilde had tenure and a conflict-avoidance disorder.

  • The Salma Hayek-ification of Writing: A Love Letter to Our Slow-Motion Doom

    I’ve done what the pedagogical experts say to do with ChatGPT: assume my students are using it and adjust accordingly. I’ve stopped trying to catch them red-handed and started handing them a red carpet. This isn’t about cracking down—it’s about leaning in. I’ve become the guy in 1975 who handed out TI calculators in Algebra II and said, “Go wild, kids.” And you know what? They did. Math got sexier, grades went up, and nobody looked back.

    Likewise, my students are now cranking out essays with the polish of junior copywriters at The Atlantic. I assign them harder prompts than I ever dared in the pre-AI era—ethical quandaries, media critiques, rhetorical dissections of war propaganda—and they deliver. Fast. Smooth. Professional. Too professional.

    You’d think I’d be ecstatic. The gap between my writing and theirs has narrowed to a hair’s width. But instead of feeling triumphant, I feel…weirdly hollow. Something’s off.

    Reading these AI-enhanced essays is like watching Mr. Olympia contestants on stage—hyper-muscular, surgically vascular, preposterously sculpted. At first, it’s impressive. Then it’s monotonous. Then it’s grotesque. The very thing that was once jaw-dropping becomes oddly numbing.

    That’s where we are with writing. With art. With beauty.

    There’s a creeping sameness to the brilliance, a too-perfect sheen that repels the eye the way flawless skin in a poorly lit Instagram filter repels real emotion. Everyone’s beautiful now. Everyone’s eloquent. And like the cruelest of paradoxes, if everyone looks like Salma Hayek, then no one really does.

    AI content has the razzle-dazzle of a Vegas revue. It’s slick, it’s dazzling, and it empties your soul faster than a bottomless mimosa brunch. The quirk, the voice, the twitchy little neurosis that makes human writing feel alive? That’s been sanded down into a high-gloss IKEA finish.

    What we’re living through is the Salma Hayek-ification of modern life: a technologically induced flattening of difference, surprise, and delight.

    We are being beautified into oblivion.

    And deep inside, where the soul used to spark when a student wrote a weird, lumpy, incandescent sentence—one they bled for, sweated over—I now feel only a faint flicker of that spark.

    I’m not ready to say the machines have killed art. But they’ve definitely made it harder to tell the difference between greatness and a decent algorithm with good taste.

  • Teaching Writing in the Age of the Machine: Why I Grade the Voice, Not the Tool

    I assume most of my college writing students are already using AI—whether as a brainstorming partner, a sentence-polisher, or, in some cases, a full-blown ghostwriter. I don’t waste time pretending otherwise. But I also make one thing very clear: I will never accuse anyone of plagiarism. What I will do is grade the work on its quality—and if the writing has that all-too-familiar AI aroma—smooth, generic, cliché-ridden, and devoid of voice—I’m giving it a low grade.

    Not because it was written with AI.
    Because it’s bad writing.

    What I encourage, instead, is intentional AI use—students learning how to talk to ChatGPT with precision and personality, shaping it to match their own style, rather than outsourcing their voice entirely. AI is a tool, just like Word, Windows, or PowerPoint. It’s a new common currency in the information age, and we’d be foolish not to teach students how to spend it wisely.

    A short video that supports this view—“Lovely Take on Students Cheating with ChatGPT” by TheCodeWork—compares the rise of AI in writing to the arrival of calculators in 1970s math classrooms. Calculators didn’t destroy mathematical thinking—they freed students from rote drudgery and pushed them into more conceptual terrain. Likewise, AI can make writing better—but only if students know what good writing looks like.

    The challenge for instructors now is to change the assignments, as the video suggests. Students should be analyzing AI-generated drafts, critiquing them, improving them, and understanding why some outputs succeed while others fall flat. The writing process is no longer confined to a blank Word doc—it now includes the strategic prompting of large language models and the thoughtful revision of what they produce.

    But the devil, as always, is in the details.

    How will students know what a “desired result” is unless they’ve read widely, written deeply, and built a literary compass? Prompting ChatGPT is only as useful as the student’s ability to recognize quality when they see it. That’s where we come in—as instructors, our job is to show them side-by-side examples of AI-generated writing and guide them through what makes one version stronger, sharper, more human.

    Looking forward, I suspect composition courses will move toward multimodal assignments—writing paired with video, audio, visual art, or even music. AI won’t just change the process—it will expand the format. The essay will survive, yes, but it may arrive with a podcast trailer or a hand-drawn infographic in tow.

    There’s no going back. AI has changed the game, and pretending otherwise is educational malpractice. But we’re not here to fight the future. We’re here to teach students how to shape it with a voice that’s unmistakably their own.

  • The Algorithm Always Wins: How Black Mirror’s “Joan Is Awful” Turns Self-Reinvention Into Self-Erasure: A College Essay Prompt

    Here’s a complete essay assignment with a title, a precise prompt, a forceful sample thesis, and a clear 9-paragraph outline that invites students to think critically about Black Mirror’s “Joan Is Awful” as a cautionary tale about the illusion of self-reinvention in the age of algorithmic control.


    Essay Prompt:

    In Black Mirror’s “Joan Is Awful,” the protagonist believes she is taking control of her life—switching therapists, reconsidering her career, changing her relationship—but these gestures of so-called self-improvement unravel into a deeper entrapment. Write an essay in which you argue that Joan is not reinventing herself, but rather surrendering her privacy, dreams, and identity to a machine that thrives on mimicry, commodification, and total surveillance. How does the episode reveal the illusion of agency in digital spaces that promise self-empowerment? In your response, consider how algorithmic platforms blur the line between self-expression and self-abnegation.


    Sample Thesis Statement:

    In Joan Is Awful, Joan believes she is taking control of her life through self-reinvention, but she is actually submitting to an algorithmic system that harvests her identity and turns it into exploitable content. The episode exposes how digital platforms market the fantasy of personal transformation while quietly demanding the user’s total surrender—of privacy, agency, and individuality—in what amounts to a bleak act of self-erasure disguised as empowerment.


    9-Paragraph Outline:


    I. Introduction

    • Hook: In today’s digital economy, the idea of “reinventing yourself” is everywhere—but what if that reinvention is a trap?
    • Introduce Black Mirror’s “Joan Is Awful” as a satirical take on algorithmic surveillance and performative identity.
    • Contextualize the illusion of self-improvement through apps, platforms, and AI.
    • Thesis: Joan’s journey is not one of self-reinvention but of self-abnegation, as she becomes raw material for a system that rewards data extraction over authenticity.

    II. The Setup: Joan’s Belief in Reinvention

    • Joan wants to change: new therapist, new boundaries, hints of dissatisfaction with her job and relationship.
    • Her attempts reflect a desire to reshape her identity—to be “better.”
    • But these changes are shallow and reactive, already shaped by her algorithmic footprint.

    III. The Trap is Already Set

    • Joan’s reinvention is instantly co-opted by the Streamberry algorithm.
    • The content isn’t about who Joan is—it’s about how she can be used.
    • Her life becomes a simulation because she agreed to the terms of use.

    IV. Privacy as the First Casualty

    • Streamberry’s access to her phone, apps, and data is total.
    • The idea of “opting in” is meaningless—Joan already did, like most of us, without reading the fine print.
    • The show critiques how we confuse visibility with empowerment while forfeiting privacy.

    V. Identity as Content

    • Joan becomes a character in her own life, performed by Salma Hayek, whose image has also been commodified.
    • Her decisions no longer matter—the machine has already decided who she is.
    • The algorithm doesn’t just reflect her—it distorts her into something more “engaging.”

    VI. The Illusion of Agency

    • Even when Joan rebels (e.g., the church debacle), she is still playing into the show’s logic.
    • Her outrage is pre-scripted by the simulation—nothing she does escapes the feedback loop.
    • The more she tries to assert control, the deeper she gets embedded in the system.

    VII. The Machine’s Appetite: Dreams, Desires, and Human Complexity

    • Joan’s dreams (a career with purpose, an authentic relationship) are trivialized.
    • Her emotional interiority is flattened into entertainment.
    • The episode suggests that the machine doesn’t care who you are—only what you can generate.

    VIII. Counterargument and Rebuttal

    • Counter: Joan destroys the quantum computer and reclaims her autonomy.
    • Rebuttal: The ending is recursive and ambiguous—she is still inside another simulation.
    • The illusion of victory masks the fact that she never really escaped. The algorithm simply adjusted.

    IX. Conclusion

    • Restate the central idea: Joan’s self-reinvention is a mirage engineered by the system that consumes her.
    • “Joan Is Awful” isn’t just a tech horror story—it’s a warning about how we confuse algorithmic participation with self-determination.
    • Final thought: The real horror isn’t that Joan is being watched. It’s that she thinks she’s in control while being completely devoured.

  • Writing in the Time of Deepfakes: One Professor’s Attempt to Stay Human

    My colleagues in the English Department were just as rattled as I was by the AI invasion creeping into student assignments. So, a meeting was called—one of those “brown bag” sessions, which, despite being optional, had the gravitational pull of a freeway pile-up. The crisis of the hour? AI.

    Would these generative writing tools, adopted by the masses at breakneck speed, render us as obsolete as VHS repairmen? The room was packed with jittery, over-caffeinated professors, myself included, all bracing for the educational apocalypse. One by one, they hurled doomsday scenarios into the mix, each more dire than the last, until the collective existential dread became thick enough to spread on toast.

    First up: What do you do when a foreign language student submits an essay written in their native tongue, then lets AI play translator? Is it cheating? Does the term “English Department” even make sense anymore when our Los Angeles campus sounds like a United Nations general assembly? Are we teaching “English,” or are we, more accurately, teaching “the writing process” to people of many languages with AI now tagging along as a co-author?

    Next came the AI Tsunami, a term we all seemed to embrace with a mix of dread and resignation. What do we do when we’ve reached the point that 90% of the essays we receive are peppered with AI speak so robotic it sounds like Siri decided to write a term paper? We were all skeptical about AI detectors—about as reliable as a fortune teller reading tea leaves. I shared my go-to strategy: Instead of accusing a student of cheating (because who has time for that drama?), I simply leave a comment, dripping with professional distaste: “Your essay reeks of AI-generated nonsense. I’m giving it a D because I cannot, in good conscience, grade this higher. If you’d like to rewrite it with actual human effort, be my guest.” The room nodded in approval.

    But here’s the thing: The real existential crisis hit when we realized that the hardworking, honest students are busting their butts for B’s, while the tech-savvy slackers are gaming the system, walking away with A’s by running their bland prose through the AI carwash. The room buzzed with a strange mixture of outrage and surrender—because let’s be honest, at least the grammar and spelling errors are nearly extinct.

    As I walked out of that meeting, I had a new writing prompt simmering in my head for my students: “Write an argumentative essay exploring how AI platforms like ChatGPT will reshape education. Project how these technologies might be used in the future and consider the ethical lines that AI use blurs. Should we embrace AI as a tool, or do we need hard rules to curb its misuse? Address academic integrity, critical thinking, and whether AI widens or narrows the education gap.”

    When I got home that day, gripped by a rare and fleeting bout of efficiency, I crammed my car with a mountain of e-waste—prehistoric laptops, arthritic tablets, and cell phones so ancient they might as well have been carved from stone. Off to the City of Torrance E-Waste Drive I went, joining a procession of guilty consumers exorcising their technological demons, all of us making way for the next wave of AI-powered miracles. The line stretched endlessly, a funeral procession for our obsolescent gadgets, each of us unwitting foot soldiers in the ever-accelerating war of planned obsolescence.

    As I inched forward, I tuned into a podcast—Mark Cuban sparring with Bill Maher. Cuban, ever the capitalist prophet, was adamant: AI would never be regulated. It was America’s golden goose, the secret weapon for maintaining global dominance. And here I was, stuck in a serpentine line of believers, each of us dumping yesterday’s tech sins into a giant industrial dumpster, fueling the next cycle of the great AI arms race.

    I entertained the thought of tearing open my shirt to reveal a Captain America emblem, fully embracing the absurdity of it all. This wasn’t just teaching anymore—it was an uprising. If I was going to lead it, I’d need to be Moses descending from Mount Sinai, armed not with stone tablets but with AI Laws. Without them, I’d be no better than a fish flopping helplessly on the banks of a drying river. To enter this new era unprepared wasn’t just foolish—it was professional malpractice. My survival depended on understanding this beast before it devoured my profession.

    That’s when the writing demon slithered in, ever the opportunist.

    “These AI laws could be a book. Put you on the map, bro.”

    I rolled my eyes. “A book? Please. Ten thousand words isn’t a book. It’s a pamphlet.”

    “Loser,” the demon sneered.

    But I was older now, wiser. I had followed this demon down enough literary dead ends to know better. The premise was too flimsy. I wasn’t here to write another book—I was here to write a warning against writing books, especially in the AI age, where the pitfalls were deeper, crueler, and exponentially dumber.

    “I still won,” the demon cackled. “Because you’re writing a book about not writing a book. Which means… you’re writing a book.”

    I smirked. “It’s not a book. It’s The Confessions of a Recovering Writing Addict. So pack your bags and get the hell out.”

    ***

    My colleague on the technology and education committee asked me to give a presentation for FLEX day at the start of the Spring 2025 semester. Not because I was some revered elder statesman whose wisdom was indispensable in these chaotic times. No, the real reason was far less flattering: As an incurable Manuscriptus Rex, I had been flooding her inbox with my mini manifestos on teaching writing in the Age of AI, and saddling me with this Herculean task was her way of keeping me too busy to send any more. A strategic masterstroke, really.

    Knowing my audience would be my colleagues—seasoned professors, not wide-eyed students—cranked the pressure to unbearable levels. Teaching students is one thing. Professors? A whole different beast. They know every rhetorical trick in the book, can sniff out schtick from across campus, and have a near-religious disdain for self-evident pontification. If I was going to stand in front of them and talk about teaching writing in the AI Age, I had better bring something substantial—something useful—because the one thing worse than a bad presentation is a room full of academics who know it’s bad and won’t bother hiding their contempt.

    To make matters worse, this was FLEX day—the first day back from a long, blissful break. Professors don’t roll into FLEX day with enthusiasm. They arrive in one of two states: begrudging grumpiness or outright denial, as if by refusing to acknowledge the semester’s start, they could stave it off a little longer. The odds of winning over this audience were not just low; they were downright hostile.

    I felt wildly out of my depth. Who was I to deliver some grand pronouncement on “essential laws” for teaching in the AI Age when I was barely keeping my own head above water? I wasn’t some oracle of pedagogical wisdom—I was a mole burrowing blindly through the shifting academic terrain, hoping to sniff my way out of catastrophe.

    What saved me was my pride. I dove in, consumed every article, study, and think piece I could find, experimented with my own writing assignments, gathered feedback from students and colleagues, and rewrote my presentation so many times that it seeped into my subconscious. I’d wake up in the middle of the night, drool on my face, furious that I couldn’t remember the flawless elocution of my dream-state lecture.

    Google Slides became my operating table, and I was the desperate surgeon, deleting and rearranging slides with the urgency of someone trying to perform a last-minute heart transplant. To make things worse, unlike a stand-up comedian, I had no smaller venue to test my material before stepping onto what, in my fevered mind, felt like my Netflix Special: Teaching Writing in the AI Age—The Essential Guide.

    The stress was relentless. I woke up drenched in sweat, tormented by visions of failure—public humiliation so excruciating it belonged in a bad movie. But I kept going, revising, rewriting, refining.

    ***

    During the winter break, as I prepared my AI presentation, I had one surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And this brings us to the first principle of teaching in the AI Age: The Sales Principle. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first duty in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

    Thinking about my sales pitch, I realize I should be grateful—forty years of teaching college writing is no small privilege. After all, the very pillars that make the job meaningful—cultivating a strong persona, wrestling with enduring ideas, and imposing structure on chaos—are the same things I revere in great novels. The irony, of course, is that while I can teach these elements with ease, I’ve proven, time and again, to be utterly incapable of executing them in a novel of my own.

    Take persona: Nabokov’s Lolita is a master class in voice, its narrator so hypnotically deranged that we can’t look away. Enduring ideas? The Brothers Karamazov crams more existential dilemmas into its pages than both the Encyclopedia Britannica and Wikipedia combined. And the highest function of the novel—to wrestle chaos into coherence? All great fiction does this. A well-shaped novel tames the disarray of human experience, elevating it into something that feels sacred, untouchable.

    I should be grateful that I’ve spent four decades dissecting these elements in the classroom. But the writing demon lurking inside me has other plans. It insists that no real fulfillment is possible unless I bottle these features into a novel of my own. I push back. I tell the demon that some of history’s greatest minds didn’t waste their time with novels—Pascal confined his genius to aphorisms, Dante to poetry, Sophocles to tragic plays. Why, then, am I so obsessed with writing a novel? Perhaps because it is such a human offering, something that defies the deepfakes that inundate us.

  • My Algorithmic Valentine: How Falling for Bots Is the New Emotional Bankruptcy

    In Jaron Lanier’s New Yorker essay “Your A.I. Lover Will Change You,” he pulls the fire alarm on a building already half-consumed by smoke: humans are cozying up to bots, not just for company but for love. Yes, love—the sort you’re supposed to reserve for people with blood, breath, and the capacity to ruin your vacation. But now? Enter the emotionally calibrated chatbot—ever-patient, never forgets your birthday (or your trauma), and designed to be the perfect receptacle for your neuroses. Lanier asks the big question: Are these botmances training us to be better partners, or just coaxing us into a pixelated abyss of solipsism and surrender?

    Spoiler alert: it’s the abyss.

    Why? Because the attention economy isn’t built on connection; it’s built on addiction. And if tech lords profit off eyeballs, what better click-magnet than a chatbot that flirts better than your ex, listens better than your therapist, and doesn’t come with baggage, back hair, or a dating profile that says “fluent in sarcasm”? To love a bot is not to be seen—it’s to be optimized, to be gently nudged toward emotional dependence by a soulless syntax tree wearing your favorite personality like a Halloween costume.

    My college students already confide in ChatGPT more than their classmates. It’s warm, available, responsive, and—perhaps most damningly—incapable of betrayal. “It understands me,” they say, while real-life intimacy rusts in the corner. What starts as novelty becomes normalization. Today it’s study help and emotional validation. Tomorrow, it’s wedding invitations printed with QR codes for bot-bride RSVP links.

    Lanier’s point is brutal and unignorable: if you fall in love with A.I., you’re not loving a machine—you’re seduced by the human puppeteer behind the curtain, the “tech-bro gigolo” who built your dream girl out of server farms and revenue streams. You’re not in a relationship. You’re in a product demo.

    And like all free trials, it ends with a charge to your soul.

  • Dealing with ChatGPT Essays That Are “Good Enough”

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant barking of your neighbor’s demonic Rottweiler, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, staring into the abyss of AI-generated essays, each more lifeless than the last, wondering if anyone still remembers the taste of good writing—let alone craves it.

    This is your challenge, the struggle life has so graciously dumped in your lap. So, what’s it going to be? You could curl into the fetal position and sob, sure. Or you could square your shoulders, channel your inner battle cry, and start fighting like hell for the craft you once believed in. Either way, the abyss is watching.

  • Why ChatGPT Will Never Replace Human Teachers

    Over the past two years, I’ve been bombarded by articles predicting that ChatGPT will drive college writing instructors to extinction. These doomsayers clearly wouldn’t know the first thing about teaching if it hit them with a red-inked rubric. Sure, ChatGPT is a memo-writing marvel—perfect for cranking out soul-dead reports about quarterly earnings or new office policies. Let it have that dreary throne.

    But if you became a college instructor to teach students the art of writing memos, you’ve got bigger problems than AI. You didn’t sign up to bore students into a coma. Whether you like it or not, you went into sales. And your pitch? It’s not about bullet points and TPS reports—it’s about persona, ideas, and the eternal fight against chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring, like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

  • CHATGPT LIVES RENT-FREE INSIDE YOUR HEAD

    One thing I know about my colleagues is that we have an unrelenting love affair with control. We thrive on reliability, routine, and preparation. These three pillars are our holy trinity—without them, the classroom descends into anarchy. And despite the tech tidal waves that keep crashing against us, we cling to these pillars like castaways on a raft.

    Remember when smartphones hijacked human attention spans fifteen years ago? We adapted—begrudgingly—when our students started caring more about their screens than us. Our power waned, but we put on our game face and carried on. Then came the digital migration: Canvas, Pronto, Nuventive—all those lovely platforms that no one asked us if we wanted. We learned them anyway, with as much grace as one can muster when faced with endless login screens and forgotten passwords.

    Technology never asks permission; it just barges in like an unwelcome houseguest. One morning, you wake up to find it’s moved in—like a freeloading uncle you didn’t know you had. He doesn’t just take over the guest room; he follows you to work, plops on your couch, and eats your sanity for breakfast. Now that homeless uncle is ChatGPT. I tried to evict him. I said, “Look, dude, I’ve already got Canvas, Pronto, and Edmodo crammed in the guest room. No vacancy!”

    But ChatGPT just grinned and said, “No problem, bro. I’ll crash rent-free in your head.” And here he is—shuffling around my brain, lounging in my workspace, and making himself way too comfortable. This time, though, something’s different. Students are asking me—dead serious—if I’m still going to have a job in a few years. As far as they’re concerned, I’m just another fossil ChatGPT is about to shove into irrelevance.

    And honestly, they have a point. According to the Washington Post article “ChatGPT took their jobs. Now they walk dogs and fix air conditioners,” AI might soon rearrange the workforce with all the finesse of a wrecking ball. Economists predict the upheaval could rival the Industrial Revolution. Students aren’t just worried about us—they’re terrified about their own future in a post-literate world where books collect dust, podcasts reign supreme, and “good enough” AI-generated writing becomes the standard.

    So, what’s the game plan for college writing instructors? If we’re going to have a chance at survival, we need to tackle these tasks:

    1. Reassess how we teach to highlight our relevance.
    2. Identify what ChatGPT can’t replicate in our content and communication styles.
    3. Design assignments that AI can’t easily fake.
    4. Set clear boundaries: ChatGPT stays in its lane, and we own ours.

    We’ll adapt because we always do. But let’s be real—this is only the first round. ChatGPT is a shape-shifter. Whatever we fix today might need a reboot tomorrow. Such is life in the never-ending tech arms race. 

    The real existential threat to my job isn’t just ChatGPT’s constant shape-shifting. No, the real menace is the creeping reality that we might be tumbling headfirst into a post-literate society—one that wouldn’t hesitate to outsource my teaching duties to a soulless algorithm with a smarmy virtual smile.

    Let’s start with the illusion of “best-sellers.” In today’s shrinking reader pool, a “best-seller” might move a tenth of the copies it would have a decade ago. Long-form reading is withering on the vine, replaced by a flood of bite-sized content. Tweets, memes, and TikTok clips now reign supreme. Even a 500-word blog post gets slapped with the dreaded “TL;DR” tag. Back in 2015, when I had the audacity to assign The Autobiography of Malcolm X, my students grumbled like I’d asked them to scale Everest barefoot. Today? I’d be lucky if half the class didn’t drop out before I finished explaining who Malcolm X was.

    Emojis, GIFs, and memes now serve as emotional shorthand, flattening language into reaction shots and cartoon hearts. If the brain dines too long on these fast-food visuals, it may lose its appetite for gourmet intellectual discourse. Why savor complexity when you can swipe to the next dopamine hit?

    In this post-literate dystopia, autodidacticism—a fancy word for “learning via YouTube rabbit holes”—is king. Need to understand the American Revolution, Civil War, and Frederick Douglass? There’s a 10-minute video for that, perfectly timed to finish as your Hot Pocket dings. Meanwhile, print journalism decomposes like roadkill, replaced by podcasts that stretch on for hours, allowing listeners to feel productively busy as they fold laundry or doomscroll Twitter.

    The smartphone, of course, has been the linchpin of this decline. It’s normalized text-speak and obliterated grammar. LOL, brb, IDK, and ikr are now the lingua franca. Capitalization and punctuation? Optional. Precision? Passé.

    Content today isn’t designed to deepen understanding; it’s designed to appease the almighty algorithm. Search engines prioritize clickbait with shallow engagement metrics over nuanced quality. As a result, journalism dies and “information” becomes a hall of mirrors where truth is a quaint, optional accessory.

    In this bleak future, animated explainer videos could take over college classrooms, pushing instructors like me out the door. Lessons on grammar and argumentation might be spoon-fed by ChatGPT clones. Higher education will shift from cultivating wisdom and cultural literacy to churning out “job-ready” drones. Figures like Toni Morrison, James Baldwin, and Gabriel García Márquez? Erased, replaced by influencers hawking hustle culture and tech bros promising “disruption.”

    Convenience will smother curiosity. Screens will become the ultimate opiate, numbing users into passive compliance. Authoritarians won’t even need force—just a well-timed notification and a steady stream of distraction. The Convenience Brain will replace the Curiosity Brain, and we’ll all be too zombified to notice.

    In this post-literate world, I fully expect to be replaced by a hologram—a cheerful AI that preps students for the workforce while serenading them with dopamine-laced infotainment. But at least I’ll get to say “I told you so” in my unemployment memoir.

    Perhaps my rant has become disconnected from reality, the result of the kind of paranoia that overtakes you when ChatGPT has been living rent-free inside your brain for too long. 

  • WILL WRITING INSTRUCTORS BE REPLACED BY CHATBOTS?

    Last night, I was trapped in a surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree, to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We have always grappled with academic honesty. What’s different now is the scale of the disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring, like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.