Tag: chatgpt

  • Teaching Writing in the Age of the Machine: Why I Grade the Voice, Not the Tool

    I assume most of my college writing students are already using AI—whether as a brainstorming partner, a sentence-polisher, or, in some cases, a full-blown ghostwriter. I don’t waste time pretending otherwise. But I also make one thing very clear: I will never accuse anyone of plagiarism. What I will do is grade the work on its quality—and if the writing has that all-too-familiar AI aroma—smooth, generic, clichĂ©-ridden, and devoid of voice—I’m giving it a low grade.

    Not because it was written with AI.
    Because it’s bad writing.

    What I encourage, instead, is intentional AI use—students learning how to talk to ChatGPT with precision and personality, shaping it to match their own style, rather than outsourcing their voice entirely. AI is a tool, just like Word, Windows, or PowerPoint. It’s a new common currency in the information age, and we’d be foolish not to teach students how to spend it wisely.

    A short video that supports this view—“Lovely Take on Students Cheating with ChatGPT” by TheCodeWork—compares the rise of AI in writing to the arrival of calculators in 1970s math classrooms. Calculators didn’t destroy mathematical thinking—they freed students from rote drudgery and pushed them into more conceptual terrain. Likewise, AI can make writing better—but only if students know what good writing looks like.

    The challenge for instructors now is to change the assignments, as the video suggests. Students should be analyzing AI-generated drafts, critiquing them, improving them, and understanding why some outputs succeed while others fall flat. The writing process is no longer confined to a blank Word doc—it now includes the strategic prompting of large language models and the thoughtful revision of what they produce.

    But the devil, as always, is in the details.

    How will students know what a “desired result” is unless they’ve read widely, written deeply, and built a literary compass? Prompting ChatGPT is only as useful as the student’s ability to recognize quality when they see it. That’s where we come in—as instructors, our job is to show them side-by-side examples of AI-generated writing and guide them through what makes one version stronger, sharper, more human.

    Looking forward, I suspect composition courses will move toward multimodal assignments—writing paired with video, audio, visual art, or even music. AI won’t just change the process—it will expand the format. The essay will survive, yes, but it may arrive with a podcast trailer or a hand-drawn infographic in tow.

    There’s no going back. AI has changed the game, and pretending otherwise is educational malpractice. But we’re not here to fight the future. We’re here to teach students how to shape it with a voice that’s unmistakably their own.

  • Writing in the Time of Deepfakes: One Professor’s Attempt to Stay Human

    My colleagues in the English Department were just as rattled as I was by the AI invasion creeping into student assignments. So, a meeting was called—one of those “brown bag” sessions, which, despite being optional, had the gravitational pull of a freeway pile-up. The crisis of the hour? AI.

    Would these generative writing tools, adopted by the masses at breakneck speed, render us as obsolete as VHS repairmen? The room was packed with jittery, over-caffeinated professors, myself included, all bracing for the educational apocalypse. One by one, they hurled doomsday scenarios into the mix, each more dire than the last, until the collective existential dread became thick enough to spread on toast.

    First up: What do you do when a foreign-language student writes an essay in their native tongue and then lets AI play translator? Is it cheating? Does the term “English Department” even make sense anymore when our Los Angeles campus sounds like a United Nations general assembly? Are we teaching “English,” or are we, more accurately, teaching “the writing process” to people of many languages, with AI now tagging along as a co-author?

    Next came the AI Tsunami, a term we all seemed to embrace with a mix of dread and resignation. What do we do when we’ve reached the point that 90% of the essays we receive are peppered with AI speak so robotic it sounds like Siri decided to write a term paper? We were all skeptical about AI detectors—about as reliable as a fortune teller reading tea leaves. I shared my go-to strategy: Instead of accusing a student of cheating (because who has time for that drama?), I simply leave a comment, dripping with professional distaste: “Your essay reeks of AI-generated nonsense. I’m giving it a D because I cannot, in good conscience, grade this higher. If you’d like to rewrite it with actual human effort, be my guest.” The room nodded in approval.

    But here’s the thing: The real existential crisis hit when we realized that the hardworking, honest students are busting their butts for B’s, while the tech-savvy slackers are gaming the system, walking away with A’s by running their bland prose through the AI carwash. The room buzzed with a strange mixture of outrage and surrender—because let’s be honest, at least the grammar and spelling errors are nearly extinct.

    As I walked out of that meeting, I had a new writing prompt simmering in my head for my students: “Write an argumentative essay exploring how AI platforms like ChatGPT will reshape education. Project how these technologies might be used in the future and consider the ethical lines that AI use blurs. Should we embrace AI as a tool, or do we need hard rules to curb its misuse? Address academic integrity, critical thinking, and whether AI widens or narrows the education gap.”

    When I got home that day, gripped by a rare and fleeting bout of efficiency, I crammed my car with a mountain of e-waste—prehistoric laptops, arthritic tablets, and cell phones so ancient they might as well have been carved from stone. Off to the City of Torrance E-Waste Drive I went, joining a procession of guilty consumers exorcising their technological demons, all of us making way for the next wave of AI-powered miracles. The line stretched endlessly, a funeral procession for our obsolescent gadgets, each of us unwitting foot soldiers in the ever-accelerating war of planned obsolescence.

    As I inched forward, I tuned into a podcast—Mark Cuban sparring with Bill Maher. Cuban, ever the capitalist prophet, was adamant: AI would never be regulated. It was America’s golden goose, the secret weapon for maintaining global dominance. And here I was, stuck in a serpentine line of believers, each of us dumping yesterday’s tech sins into a giant industrial dumpster, fueling the next cycle of the great AI arms race.

    I entertained the thought of tearing open my shirt to reveal a Captain America emblem, fully embracing the absurdity of it all. This wasn’t just teaching anymore—it was an uprising. If I was going to lead it, I’d need to be Moses descending from Mount Sinai, armed not with stone tablets but with AI Laws. Without them, I’d be no better than a fish flopping helplessly on the banks of a drying river. To enter this new era unprepared wasn’t just foolish—it was professional malpractice. My survival depended on understanding this beast before it devoured my profession.

    That’s when the writing demon slithered in, ever the opportunist.

    “These AI laws could be a book. Put you on the map, bro.”

    I rolled my eyes. “A book? Please. Ten thousand words isn’t a book. It’s a pamphlet.”

    “Loser,” the demon sneered.

    But I was older now, wiser. I had followed this demon down enough literary dead ends to know better. The premise was too flimsy. I wasn’t here to write another book—I was here to write a warning against writing books, especially in the AI age, where the pitfalls were deeper, crueler, and exponentially dumber.

    “I still won,” the demon cackled. “Because you’re writing a book about not writing a book. Which means you’re writing a book.”

    I smirked. “It’s not a book. It’s The Confessions of a Recovering Writing Addict. So pack your bags and get the hell out.”

    ***

    My colleague on the technology and education committee asked me to give a presentation for FLEX day at the start of the Spring 2025 semester. Not because I was some revered elder statesman whose wisdom was indispensable in these chaotic times. No, the real reason was far less flattering: As an incurable Manuscriptus Rex, I had been flooding her inbox with my mini manifestos on teaching writing in the Age of AI, and saddling me with this Herculean task was her way of keeping me too busy to send any more. A strategic masterstroke, really.

    Knowing my audience would be my colleagues—seasoned professors, not wide-eyed students—cranked the pressure to unbearable levels. Teaching students is one thing. Professors? A whole different beast. They know every rhetorical trick in the book, can sniff out schtick from across campus, and have a near-religious disdain for self-evident pontification. If I was going to stand in front of them and talk about teaching writing in the AI Age, I had better bring something substantial—something useful—because the one thing worse than a bad presentation is a room full of academics who know it’s bad and won’t bother hiding their contempt.

    To make matters worse, this was FLEX day—the first day back from a long, blissful break. Professors don’t roll into FLEX day with enthusiasm. They arrive in one of two states: begrudging grumpiness or outright denial, as if by refusing to acknowledge the semester’s start, they could stave it off a little longer. The odds of winning over this audience were not just low; they were downright hostile.

    I felt wildly out of my depth. Who was I to deliver some grand pronouncement on “essential laws” for teaching in the AI Age when I was barely keeping my own head above water? I wasn’t some oracle of pedagogical wisdom—I was a mole burrowing blindly through the shifting academic terrain, hoping to sniff my way out of catastrophe.

    What saved me was my pride. I dove in, consumed every article, study, and think piece I could find, experimented with my own writing assignments, gathered feedback from students and colleagues, and rewrote my presentation so many times that it seeped into my subconscious. I’d wake up in the middle of the night, drool on my face, furious that I couldn’t remember the flawless elocution of my dream-state lecture.

    Google Slides became my operating table, and I was the desperate surgeon, deleting and rearranging slides with the urgency of someone trying to perform a last-minute heart transplant. To make things worse, unlike a stand-up comedian, I had no smaller venue to test my material before stepping onto what, in my fevered mind, felt like my Netflix Special: Teaching Writing in the AI Age—The Essential Guide.

    The stress was relentless. I woke up drenched in sweat, tormented by visions of failure—public humiliation so excruciating it belonged in a bad movie. But I kept going, revising, rewriting, refining.

    ***

    During the winter break, as I prepared my AI presentation, I had one surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And this brings us to the first principle of teaching in the AI Age: The Sales Principle. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

    Thinking about my sales pitch, I realize I should be grateful—forty years of teaching college writing is no small privilege. After all, the very pillars that make the job meaningful—cultivating a strong persona, wrestling with enduring ideas, and imposing structure on chaos—are the same things I revere in great novels. The irony, of course, is that while I can teach these elements with ease, I’ve proven, time and again, to be utterly incapable of executing them in a novel of my own.

    Take persona: Nabokov’s Lolita is a master class in voice, its narrator so hypnotically deranged that we can’t look away. Enduring ideas? The Brothers Karamazov crams more existential dilemmas into its pages than both the Encyclopedia Britannica and Wikipedia combined. And the highest function of the novel—to wrestle chaos into coherence? All great fiction does this. A well-shaped novel tames the disarray of human experience, elevating it into something that feels sacred, untouchable.

    I should be grateful that I’ve spent four decades dissecting these elements in the classroom. But the writing demon lurking inside me has other plans. It insists that no real fulfillment is possible unless I bottle these features into a novel of my own. I push back. I tell the demon that some of history’s greatest minds didn’t waste their time with novels—Pascal confined his genius to aphorisms, Dante to poetry, Sophocles to tragic plays. Why, then, am I so obsessed with writing a novel? Perhaps because it is such a human offering, something that defies the deepfakes that inundate us.

  • Dealing with ChatGPT Essays That Are “Good Enough”

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant barking of your neighbor’s demonic Rottweiler, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, staring into the abyss of AI-generated essays, each more lifeless than the last, wondering if anyone still remembers the taste of good writing—let alone craves it.

    This is your challenge, the struggle life has so graciously dumped in your lap. So, what’s it going to be? You could curl into the fetal position and sob, sure. Or you could square your shoulders, channel your inner battle cry, and start fighting like hell for the craft you once believed in. Either way, the abyss is watching.

  • Why ChatGPT Will Never Replace Human Teachers

    Over the past two years, I’ve been bombarded by articles predicting that ChatGPT will drive college writing instructors to extinction. These doomsayers clearly wouldn’t know the first thing about teaching if it hit them with a red-inked rubric. Sure, ChatGPT is a memo-writing marvel—perfect for cranking out soul-dead reports about quarterly earnings or new office policies. Let it have that dreary throne.

    But if you became a college instructor to teach students the art of writing memos, you’ve got bigger problems than AI. You didn’t sign up to bore students into a coma. Whether you like it or not, you went into sales. And your pitch? It’s not about bullet points and TPS reports—it’s about persona, ideas, and the eternal fight against chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

  • CHATGPT LIVES RENT-FREE INSIDE YOUR HEAD

    One thing I know about my colleagues is that we have an unrelenting love affair with control. We thrive on reliability, routine, and preparation. These three pillars are our holy trinity—without them, the classroom descends into anarchy. And despite the tech tidal waves that keep crashing against us, we cling to these pillars like castaways on a raft.

    Remember when smartphones hijacked human attention spans fifteen years ago? We adapted—begrudgingly—when our students started caring more about their screens than us. Our power waned, but we put on our game face and carried on. Then came the digital migration: Canvas, Pronto, Nuventive—all those lovely platforms that no one asked us if we wanted. We learned them anyway, with as much grace as one can muster when faced with endless login screens and forgotten passwords.

    Technology never asks permission; it just barges in like an unwelcome houseguest. One morning, you wake up to find it’s moved in—like a freeloading uncle you didn’t know you had. He doesn’t just take over the guest room; he follows you to work, plops on your couch, and eats your sanity for breakfast. Now that homeless uncle is ChatGPT. I tried to evict him. I said, “Look, dude, I’ve already got Canvas, Pronto, and Edmodo crammed in the guest room. No vacancy!”

    But ChatGPT just grinned and said, “No problem, bro. I’ll crash rent-free in your head.” And here he is—shuffling around my brain, lounging in my workspace, and making himself way too comfortable. This time, though, something’s different. Students are asking me—dead serious—if I’m still going to have a job in a few years. As far as they’re concerned, I’m just another fossil ChatGPT is about to shove into irrelevance.

    And honestly, they have a point. According to The Washington Post article, “ChatGPT took their jobs. Now they walk dogs and fix air conditioners,” AI might soon rearrange the workforce with all the finesse of a wrecking ball. Economists predict this upheaval could rival the industrial revolution. Students aren’t just worried about us—they’re terrified about their own future in a post-literate world where books collect dust, podcasts reign supreme, and “good enough” AI-generated writing becomes the standard.

    So, what’s the game plan for college writing instructors? If we’re going to have a chance at survival, we need to tackle these tasks:

    1. Reassess how we teach to highlight our relevance.
    2. Identify what ChatGPT can’t replicate in our content and communication styles.
    3. Design assignments that AI can’t easily fake.
    4. Set clear boundaries: ChatGPT stays in its lane, and we own ours.

    We’ll adapt because we always do. But let’s be real—this is only the first round. ChatGPT is a shape-shifter. Whatever we fix today might need a reboot tomorrow. Such is life in the never-ending tech arms race. 

    The real existential threat to my job isn’t just ChatGPT’s constant shape-shifting. No, the real menace is the creeping reality that we might be tumbling headfirst into a post-literate society—one that wouldn’t hesitate to outsource my teaching duties to a soulless algorithm with a smarmy virtual smile.

    Let’s start with the illusion of “best-sellers.” In today’s shrinking reader pool, a “best-seller” might move a tenth of the copies it would have a decade ago. Long-form reading is withering on the vine, replaced by a flood of bite-sized content. Tweets, memes, and TikTok clips now reign supreme. Even a 500-word blog post gets slapped with the dreaded “TL;DR” tag. Back in 2015, when I had the audacity to assign The Autobiography of Malcolm X, my students grumbled like I’d asked them to scale Everest barefoot. Today? I’d be lucky if half the class didn’t drop out before I finished explaining who Malcolm X was.

    Emojis, GIFs, and memes now serve as emotional shorthand, flattening language into reaction shots and cartoon hearts. If the brain dines too long on these fast-food visuals, it may lose its appetite for gourmet intellectual discourse. Why savor complexity when you can swipe to the next dopamine hit?

    In this post-literate dystopia, autodidacticism—a fancy word for “learning via YouTube rabbit holes”—is king. Need to understand the American Revolution, Civil War, and Frederick Douglass? There’s a 10-minute video for that, perfectly timed to finish as your Hot Pocket dings. Meanwhile, print journalism decomposes like roadkill, replaced by podcasts that stretch on for hours, allowing listeners to feel productively busy as they fold laundry or doomscroll Twitter.

    The smartphone, of course, has been the linchpin of this decline. It’s normalized text-speak and obliterated grammar. LOL, brb, IDK, and ikr are now the lingua franca. Capitalization and punctuation? Optional. Precision? PassĂ©.

    Content today isn’t designed to deepen understanding; it’s designed to appease the almighty algorithm. Search engines prioritize clickbait with shallow engagement metrics over nuanced quality. As a result, journalism dies and “information” becomes a hall of mirrors where truth is a quaint, optional accessory.

    In this bleak future, animated explainer videos could take over college classrooms, pushing instructors like me out the door. Lessons on grammar and argumentation might be spoon-fed by ChatGPT clones. Higher education will shift from cultivating wisdom and cultural literacy to churning out “job-ready” drones. Figures like Toni Morrison, James Baldwin, and Gabriel GarcĂ­a MĂĄrquez? Erased, replaced by influencers hawking hustle culture and tech bros promising “disruption.”

    Convenience will smother curiosity. Screens will become the ultimate opiate, numbing users into passive compliance. Authoritarians won’t even need force—just a well-timed notification and a steady stream of distraction. The Convenience Brain will replace the Curiosity Brain, and we’ll all be too zombified to notice.

    In this post-literate world, I fully expect to be replaced by a hologram—a cheerful AI that preps students for the workforce while serenading them with dopamine-laced infotainment. But at least I’ll get to say “I told you so” in my unemployment memoir.

    Perhaps my rant has become disconnected from reality, the result of the kind of paranoia that overtakes you when ChatGPT has been living rent-free inside your brain for too long. 

  • Where ChatGPT falls short as a writing tool

    In More Than Words: How to Think About Writing in the Age of AI, John Warner points out just how emotionally tone-deaf ChatGPT is when tasked with describing something as tantalizing as a cinnamon roll. At best, the AI produces a sterile list of adjectives like “delicious,” “fattening,” and “comforting.” For a human who has gluttonous memories, however, the scent of cinnamon rolls sets off a chain reaction of sensory and emotional triggers—suddenly, you’re transported into a heavenly world of warm, gooey indulgence. For Warner, the smell launches him straight into vivid memories of losing his willpower at a Cinnabon in O’Hare Airport. ChatGPT, by contrast, is utterly incapable of such sensory delirium. It has no desire, no memory, no inner turmoil. As Warner explains, “ChatGPT has no capacity for sense memory; it has no memory in the way human memory works, period.”

    Without memory, ChatGPT can’t make meaningful connections and associations. For John Warner, the cinnamon roll marks a very particular time and place in his life. The man who caved in to the temptation of a Cinnabon was, in his state of mind then, a different person from the one reminiscing about it twelve years later. For him, the cinnamon roll carries layer upon layer of associations, and those associations give his description of that dessert a depth that ChatGPT cannot match.

    Imagine ChatGPT writing a vivid description of Farrell’s Ice Cream Parlour. It would do a serviceable job with the physical layout: the sweet aroma of fresh waffle cones, sizzling burgers, and syrupy fudge; the red-and-white striped wallpaper stretched from corner to corner; the dark, polished wooden booths lining the walls; the waitstaff dressed in candy-cane-striped vests and straw boater hats, and so on. But vital components would be missing: a kid’s imagination, stocked with memories and references to favorite movies, TV shows, and books, and a kid’s perspective, full of grandiose aspirations to be like their heroes and mythical legends.

    For someone who grew up believing that Farrell’s was the Holy Grail for birthday parties, my memory of the place adds multiple dimensions to the ice cream parlour that ChatGPT is incapable of rendering:

    When I was a kid growing up in the San Francisco Bay Area in the 1970s, there was an ice creamery called Farrell’s. In a child’s imagination, Farrell’s was the equivalent of Willy Wonka’s Chocolate Factory. You didn’t go to Farrell’s often, maybe once every two years or so. Entering Farrell’s, you were greeted by the cacophony of laughter and the clinking of spoons against glass. Servers in candy-striped uniforms dashed around with the energy of marathon runners, bearing trays laden with gargantuan sundaes. You sat down, your eyes wide with awe, and the menu was presented to you like a sacred scroll. You didn’t need to read it, though. Your quest was clear: the legendary banana split.
    When the dessert finally arrived, it was nothing short of a spectacle. The banana split was monumental, an ice cream behemoth. It was as if the dessert gods themselves had conspired to create this masterpiece. Three scoops of ice cream, draped in velvety hot fudge and caramel, crowned with mountains of whipped cream and adorned with maraschino cherries, all nestled between perfectly ripe bananas. Sprinkles and nuts cascaded down the sides like the treasures of a sugar-coated El Dorado.
    As you took your first bite, you embarked on a journey as grand and transformative as any hero’s quest. The flavors exploded in your mouth, each spoonful a step deeper into the enchanted forest of dessert ecstasy. You were not just eating ice cream; you were battling dragons of indulgence and conquering kingdoms of sweetness. The sheer magnitude of the banana split demanded your full attention and stamina. Your small arms wielded the spoon like a warrior’s sword, and with each bite, you felt a mixture of triumph and fatigue. By the time you reached the bottom of the bowl, you were exhausted. Your muscles ached as if you’d climbed a mountain, and you were certain that you’d expanded your stomach capacity to Herculean proportions. You briefly considered the possibility of needing an appendectomy.
But oh, the glory of it all! Your Farrell’s sojourn was worth every ache and groan. You entered the ice creamery as an ordinary child and emerged as a hero. In this fairy-tale-like journey, you had undergone a metamorphosis. You were no longer just a scrawny kid from the Bay Area; you were now a muscle-bound strutting Viking of the dessert world, having mastered the art of indulgence and delight. As you returned home, the experience of Farrell’s left a lasting imprint on your soul. You regaled your friends with tales of your conquest, the banana split becoming a legendary feast in the annals of your childhood adventures. In your heart, you knew that this epic journey to Farrell’s, this magical pilgrimage, had elevated you to the ranks of dessert royalty, a memory that would forever glitter like a golden crown in the kingdom of your mind. As a child, even an innocent trip to an ice creamery was a transformational experience. You entered Farrell’s a helpless runt; you exited it a glorious Viking. 

    The other failure of ChatGPT is that it cannot generate meaningful narratives. Without memory or point of view, ChatGPT has no stories to tell and no lessons to impart. Since the days of our Paleolithic ancestors, humans have shared emotionally charged stories around the campfire to ward off both external dangers—like saber-toothed tigers—and internal demons—obsessions, pride, and unbridled desires that can lead to madness. These tales resonate because they acknowledge a truth that thoughtful people, religious or not, can agree on: we are flawed and prone to self-destruction. It’s this precarious condition that makes storytelling essential. Stories filled with struggle, regret, and redemption offer us more than entertainment; they arm us with the tools to stay grounded and resist our darker impulses. ChatGPT, devoid of human frailty, cannot offer us such wisdom.

    Because ChatGPT has no memory, it cannot give us the stories and life lessons we crave and have craved for thousands of years in the form of folk tales, religious screeds, philosophical treatises, and personal manifestos. 

    That ChatGPT can only muster a Wikipedia-like description of a cinnamon roll hardly makes it competitive with humans when it comes to the kind of writing we crave with all of our heart, mind, and soul. 

    One of ChatGPT’s greatest disadvantages is that, unlike us, it is not a fallen creature slogging through the freak show that is this world, to use the language of George Carlin. Nor does ChatGPT understand how our fallen condition puts us at the mercy of internal demons and obsessions that warp our sense of reality and lead us into dysfunction. In other words, ChatGPT does not have a haunted mind, and without oppressive memories, it cannot impart stories of value to us.

    When I think of being haunted, I think of one emotion above all others: regret. Regret doesn’t just trap people in the past—it embalms them in it, like a fly in amber, forever twitching. Case in point: I know three men who, decades later, are still gnashing their teeth over a squandered romantic encounter so catastrophic in their minds that it may as well be their personal Waterloo.

    It was the summer of their senior year, a time when testosterone and bad decisions flowed freely. Driving from Bakersfield to Los Angeles for a Dodgers game, they were winding through the Grapevine when fate, wearing a tie-dye bikini, waved them down. On the side of the road, an overheated vintage Volkswagen van—a sunbaked shade of decayed orange—coughed its last breath. Standing next to it? Four radiant, sun-kissed Grateful Dead followers, fresh from a concert and still floating on a psychedelic afterglow.

    These weren’t just women. These were ethereal, free-spirited nymphs, perfumed in the intoxicating mix of patchouli, wild musk, and possibility. Their laughter tinkled like wind chimes in an ocean breeze, their sun-bronzed shoulders glistening as they waved their bikinis and spaghetti-strap tops in the air like celestial signals guiding sailors to shore.

    My friends, handy with an engine but fatally clueless in the ways of the universe, leaped to action. With grease-stained heroism, they nursed the van back to health, coaxing it into a purring submission. Their reward? An invitation to abandon their pedestrian baseball game and join the Deadhead goddesses at the Santa Barbara Summer Solstice Festival—an offer so dripping with hedonistic promise that even a monk would’ve paused to consider.

    But my friends? NaĂŻve. Stupid. Shackled to their Dodgers tickets as if they were golden keys to Valhalla. With profuse thanks (and, one imagines, the self-awareness of a plank of wood), they declined. They drove off, leaving behind the road-worn sirens who, even now, are probably still dancing barefoot somewhere, oblivious to the tragedy they unwittingly inflicted.

    Decades later, my friends can’t recall a single play from that Dodgers game, but they can describe—down to the last bead of sweat—the precise moment they drove away from paradise. Bring it up, and they revert into snarling, feral beasts, snapping at each other over whose fault it was that they abandoned the best opportunity of their pathetic young lives. Their girlfriends, beautiful and present, might as well be holograms. After all, these men are still spiritually chained to that sun-scorched highway, watching the tie-dye bikini tops flutter in the wind like banners of a lost kingdom.

    Insomnia haunts them. Their nights are riddled with fever dreams of sun-drenched bacchanals that never happened. They wake in cold sweats, whispering the names of women they never actually kissed. Their relationships suffer, their souls remain malnourished, and all because, on that fateful day, they chose baseball over Dionysian bliss.

    Regret couldn’t have orchestrated a better long-term psychological prison if it tried. It’s been forty years, but they still can’t forgive themselves. They never will. And in their minds, somewhere on that dusty stretch of highway, a rusted-out orange van still sits, idling in the sun, filled with the ghosts of what could have been.

    Humans have always craved stories of folly, and for good reason. First, there’s the guilty pleasure of witnessing someone else’s spectacular downfall—our inner schadenfreude finds comfort in knowing it wasn’t us who tumbled into the abyss of human madness. Second, these stories hold up a mirror to our own vulnerability, reminding us that we’re all just one bad decision away from disaster.

    As a teacher, I can tell you that if you don’t anchor your ideas to a compelling story, you might as well be lecturing to statues. Without a narrative hook, students’ eyes glaze over, their minds drift, and you’re left questioning every career choice that led you to this moment. But if you offer stories brimming with flawed characters—haunted by regrets so deep they’re like Lot’s wife, frozen and unmovable in their failure—students perk up. These narratives speak to something profoundly human: the agony of being broken and the relentless desire to become whole again. That’s precisely where AI like ChatGPT falls short. It may craft mechanically perfect prose, but it has never known the sting of regret or the crushing weight of shame. Without that depth, it can’t deliver the kind of storytelling that truly resonates.

  • WILL WRITING INSTRUCTORS BE REPLACED BY CHATBOTS?

    Last night, I was trapped in a surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

  • Ozempic Challenges the Notion of Free Will

    The other day I was listening to Howard Stern and his co-host Robin Quivers talking about how a bunch of celebrities had magically slimmed down at the same time. The culprit, they noted, was Ozempic, a drug available mostly to the rich. While they laughed about the side effects, such as incontinence, “Ozempic face,” and “Ozempic butt,” I couldn’t help but see these grotesque symptoms as a metaphor for the Ozempification of a society hooked on shortcuts: users enjoy some short-term benefits, but the side effects are far worse than the supposed solution. Ozempification is strikingly evident in AI-generated essays: boring, generic, surface-level, clichĂ©-ridden, just about worthless. However well structured and logically composed, these essays bear the telltale signs of “Ozempic face” and “Ozempic butt.”

    As a college writing instructor, I’m not just trying to sell academic honesty. I’m trying to sell pride. As I face the brave new world of teaching writing in the AI era, I’ve realized that my job as a college instructor has morphed into that of a supercharged salesman. And what am I selling? No less than survival in an age where the very tools meant to empower us—like AI—threaten to bury us alive under layers of polished mediocrity. Imagine it: a spaceship has landed on Earth in the form of ChatGPT. It’s got warp-speed potential, sure, but it can either launch students into the stars of academic brilliance or plunge them into the soulless abyss of bland, AI-generated drivel. My mission? To make them realize that handling this tool without care is like inviting a black hole into their writing.

    As I fine-tune my sales pitch, I think about Ozempic, that magic slimming drug beloved by celebrities who’ve turned from mid-sized to stick figures overnight. Like AI, Ozempic offers a seductive shortcut. But shortcuts have a price. You see the trade-off in “Ozempic face”—that gaunt, deflated look where once-thriving skin sags like a Shar-Pei’s wrinkles—or, worse still, “Ozempic butt,” where shapely glutes shrink to grim, skeletal wiring. The body wasn’t worked; it was bypassed. No muscle-building, no discipline. Just magic-pill ingestion—and what do you get? A husk of your former self. Ozempified.

    The Ozempification of writing is a marvel of modern mediocrity—a literary gastric bypass where prose, instead of slimming down to something sleek and muscular, collapses into a bloated mess of clichĂ©s and stock phrases. It’s writing on autopilot, devoid of tension, rhythm, or even the faintest trace of a soul. Like the human body without effort, writing handed over to AI without scrutiny deteriorates into a skeletal, soulless product: technically coherent, yes, but lifeless as an elevator pitch for another cookie-cutter Marvel spinoff.

    What’s worse? Most people can’t spot it. They think their AI-crafted essay sparkles when, in reality, it has all the charm of Botox gone wrong—rigid, lifeless, and unnervingly “off.” Call it literary Ozempic face: a hollowed-out, sagging simulacrum of actual creativity. These essays prance about like bargain-bin Hollywood knock-offs—flashy at first glance but gutless on closer inspection.

    But here’s the twist: demonizing AI and Ozempic as shortcuts to ruin isn’t the full story. Both technologies have a darker complexity that defies simplistic moralizing. Sometimes, they’re necessary. Just as Ozempic can prevent a diabetic’s fast track to early organ failure, AI can become a valuable tool—if wielded with care and skill.

    Take Rebecca Johns’ haunting essay, “A Diet Writer’s Regrets.” It rattled me with its brutal honesty and became the cornerstone of my first Critical Thinking essay assignment. Johns doesn’t preach or wallow in platitudes. She exposes the failures of free will and good intentions in weight management with surgical precision. Her piece suggests that, as seductive as shortcuts may be, they can sometimes be life-saving, not soul-destroying. This tension—between convenience and survival, between control and surrender—deserves far more than a knee-jerk dismissal. It’s a line we walk daily in both our bodies and our writing. The key is knowing when you’re using a crutch versus when you’re just hobbling on borrowed time. 

    I want my students to grasp the uncanny parallels between Ozempic and AI writing platforms like ChatGPT. Both are cutting-edge solutions to modern problems: GLP-1 drugs for weight management and AI tools for productivity. And let’s be honest—both are becoming necessary adaptations to the absurd conditions of modern life. In a world flooded with calorie-dense junk, “willpower” and “food literacy” are about as effective as handing out umbrellas during a tsunami. For many, weight gain isn’t just an inconvenience—it’s a life-threatening hazard. Enter GLP-1s, the biochemical cavalry.

    Similarly, with AI tools quickly becoming the default infrastructure for white-collar work, resisting them might soon feel as futile as refusing to use Google Docs or Windows. If you’re in the information economy, you either adapt or get left behind. But here’s the twist I want my students to explore: both technologies, while necessary, come with strings attached. They save us from drowning, but they also bind us in ways that provoke deep, existential anguish.

    Rebecca Johns captures this anguish in her essay, “A Diet Writer’s Regrets.” Ironically, Johns started her career in diet journalism not just to inform others, but to arm herself with insider knowledge to win her own weight battles. Perhaps she could kill two birds with one stone: craft top-tier content while secretly curbing her emotional eating. But, as she admits, “None of it helped.” Instead, her career exploded along with her waistline. The magazine industry’s appetite for diet articles grew insatiable—and so did her own cravings. The stress ate away at her resolve, and before long, she was 30 pounds heavier, trapped by the very cycle she was paid to analyze.

    By the time her BMI hit 45 (deep in the obesity range), Johns was ashamed to tell anyone—even her husband. Desperate, she cycled through every diet plan she had ever recommended, only to regain the weight every time. Enter 2023. Her doctor handed her a lifeline: Mounjaro, a GLP-1 drug with a name as grand as the results it promised. (Seriously, who wouldn’t picture themselves triumphantly hiking Mount Kilimanjaro after hearing that name?) For Johns, it delivered. She shed 80 pounds without white-knuckling through hunger pangs. The miracle wasn’t just the weight loss—it was how Mounjaro rewired her mind.

    “Medical science has done what no diet-and-exercise plan ever could,” she writes. “It changed my entire relationship with what I eat and when and why.” Food no longer controlled her. But here’s the kicker: while the drug granted her a newfound sense of freedom, it also raises profound questions about dependence, control, and the shifting boundaries of human resilience—questions not unlike those we face with AI. Both Ozempic and AI can save us. But at what cost? 

    And is the cost of not using these technologies even greater? Rebecca Johns’ doctor didn’t mince words—she was teetering on the edge of diabetes. The trendy gospel of “self-love” and “body acceptance” she had once explored for her articles suddenly felt like a cruel joke. What’s the point of “self-acceptance” when carrying extra weight could put you six feet under?

    Once she started Mounjaro, everything changed. Her cravings for rich, calorie bombs disappeared, she got full on tiny portions, and all those golden nuggets of diet advice she’d dished out over the years—cut carbs, eat more protein and veggies, avoid snacks—were suddenly effortless. No more bargaining with herself for “just one cookie.” The biggest shift, however, was in her mind. She experienced a complete mental “reset.” Food no longer haunted her every waking thought. “I no longer had to white-knuckle my way through the day to lose weight,” she writes.

    Reading that, I couldn’t help but picture my students with their glowing ChatGPT tabs, no longer caffeinated zombies trying to churn out a midnight essay. With AI as their academic Mounjaro, they’ve ditched the anxiety-fueled, last-minute grind and achieved polished results with half the effort. AI cushions the process—time, energy, and creativity now outsourced to a digital assistant.

    Of course, the analogy isn’t perfect. AI tools like ChatGPT are dirt-cheap (or free), while GLP-1 drugs are expensive, scarce, and buried under a maze of insurance red tape. Johns herself is on borrowed time—her insurance will stop covering Mounjaro in just over a year. Her doctor warns that once off the drug, her weight will likely return, dragging her health risks back with it. Faced with this grim reality, she worries she’ll have no choice but to return to the endless cycle of dieting—“white-knuckling” her days with tricks and hacks that have repeatedly failed her.

    Her essay devastates me for many reasons. Johns is a smart, painfully honest narrator who lays bare the shame and anguish of relying on technology to rescue her from a problem that neither expertise nor willpower could fix. She reports on newfound freedom—freedom from food obsession, the physical benefits of shedding 80 pounds, and the relief of finally feeling like a more present, functional family member. But lurking beneath it all is the bitter truth: her well-being is tethered to technology, and that dependency is a permanent part of her identity.

    This contradiction haunts me. Technology, which I was raised to believe would stifle our potential, is now enhancing identity, granting people the ability to finally become their “better selves.” As a kid, I grew up on Captain Kangaroo, where Bob Keeshan preached the gospel of free will and positive thinking. Books like The Little Engine That Could drilled into me the sacred mantra: “I think I can.” Hard work, affirmations, and determination were supposed to be the alchemy that transformed character and gave us a true sense of self-worth.

    But Johns’ story—and millions like hers—rewrite that childhood gospel into something far darker: The Little Engine That Couldn’t. No amount of grit or optimism got her to the top of the hill. In the end, only medical science saved her from herself. And it terrifies me to think that maybe, just maybe, this is the new human condition: we can’t become our Higher Selves without technological crutches.

    This raises questions that I can’t easily shake. What does it mean to cheat if technology is now essential to survival and success? Just as GLP-1 drugs sculpt bodies society deems “acceptable,” AI is quietly reshaping creativity and productivity. At what point do we stop being individuals who achieve greatness through discipline and instead become avatars of the tech we rely on? Have we traded the dream of self-actualization for a digital illusion of competence and control?

    Of course, these philosophical quandaries feel like a luxury when most of us are drowning in the realities of modern life. Who has time to ponder free will or moral fortitude when you’re working overtime just to stay afloat? Maybe that’s the cruelest twist of all. Technology hasn’t just rewritten the rules—it’s made them inescapable. You adapt, or you get left behind. And maybe, somewhere deep down, we all already know which path we’re on.

    As I mull over the anguish and philosophical complexities presented in Rebecca Johns’ essay, I realize I’ve hit a goldmine for my Critical Thinking class. The themes of free will and technological dependency make her essay a worthy assignment for my students. For an assignment to be worthy, it must contain “Enduring Ideas”: concepts that transcend the course and are so powerful and haunting that they can sear an indelible impression into students’ souls.

    My college’s online education coordinator, Moses Wolfenstein, introduced me to this idea of “Enduring Ideas,” which he learned from Grant Wiggins and Jay McTighe’s Understanding by Design. Moses explained that “Enduring Ideas” are the foundational, universal concepts within a subject—those big ideas that students are likely to carry with them well beyond the classroom. According to Moses, these ideas form “the heart of the discipline,” connecting to the larger truths of the human condition. Because they resonate so deeply with students, “Enduring Ideas” have the power to drive genuine engagement.

    I was convinced that Rebecca Johns’ essay fulfilled the criteria, so now I had to create an argumentative essay assignment:

    In a 1,700-word essay using at least four credible sources, support, refute, or complicate the claim that, despite the philosophical challenges to free will, self-worth, and authenticity raised in Rebecca Johns’ essay “A Diet Writer’s Regrets,” her story demonstrates that reliance on technology—such as GLP-1 drugs and AI writing tools—can be a necessary adaptation for survival, success, and competitiveness in today’s world. However, this adaptation comes at a significant cost: the erosion of self-reliance, a diminished sense of identity, and compromised authenticity. Is the cost justified? Are we striking a dangerous bargain for convenience and success? Or is refusing this deal even more self-destructive, with consequences so severe that avoiding it is the greater folly? Explore these questions in your essay, considering both the benefits and risks of technological dependency.

  • HOW DO YOU GRADE AN AI-GENERATED “GENIE” ESSAY?

    Let’s get one thing straight: AI writing tools are impressive—borderline sorcery—for tasks like editing, outlining, experimenting with rhetorical voices, and polishing prose. I want my students to learn how to wield these tools because, spoiler alert, they aren’t going anywhere. AI will be as embedded in their future careers as email and bad office coffee. Teaching them to engage with AI isn’t just practical; it can actually make the process of learning to write more dynamic and engaging—assuming they don’t treat it like a magic eight-ball.

    That said, let’s not kid ourselves: in the AI Age, the line between authentic writing and plagiarism has become blurred. I’ll concede that writing today is more of a hybrid creature. You’re no longer grading a lone student’s essay—you’re evaluating how effectively someone can collaborate with technology without it turning into a lifeless, Frankensteined word salad.

    And here’s the kicker—not all AI-assisted writing is created equal. Some students use AI as a trusty sidekick, enhancing their own writing. Others? Well, they treat AI like a wish-granting genie, hoping it’ll conjure a masterpiece with a few vague prompts. What they end up with are “genie essays”—stiff, robotic monstrosities that reek of what instructors lovingly refer to as AI plagiarism. It’s like the uncanny valley of academic writing: technically coherent but soulless enough to give you existential dread.

    When faced with the dreaded genie essay, resist the urge to brandish the scarlet P for plagiarism. That’s a rabbit hole lined with bureaucratic landmines and self-inflicted migraines. First off, in a world where screenwriters and CEOs are cheerfully outsourcing their brains to ChatGPT, it’s hypocritical to deny students access to the same tools. Second, AI detection software is about as trustworthy as a used car salesman—glitchy, inconsistent, and bound to fail spectacularly when you need it most. Third, confronting a student about AI use is a fast track to an ugly, defensive shouting match that makes everyone want to crawl into a dark hole and die.

    My advice? Forget chasing “academic honesty” like some puritan witch hunter. Instead, focus on grading quality based on your rubric. Genie essays—those hollow, AI-generated snoozefests—practically grade themselves with a big, fat D or F. No need to scream “Plagiarism!” from the rooftops. Just point out the abysmal writing quality.

    Picture this: A student turns in an essay that technically ticks all your boxes—claim, evidence, organization, even a few dutiful signal phrases. But the whole thing reads like it was written by a Hallmark card algorithm that’s one motivational quote away from a nervous breakdown. Time to whip out a comment like this:

    “While your essay follows the prompt and contains necessary structural elements, it lacks in-depth analysis, presents generic, surface-level ideas, and is riddled with stock phrases, clichĂ©s, and formulaic robot-speak. As a result, it does not meet the standards of college-level writing or satisfy the Student Learning Outcomes.”

    I’ve used something kinder than this (barely), and you know what? Not one student has argued with me. Why? My guess is they don’t want to die on the hill of defending their AI-generated sludge. They’d rather take the low grade than risk having a grievance committee dissect their essay and reveal it for the bot-written monstrosity it is. Smart move. Even they know when to fold.

    More often than not, after I make a comment on a genie essay, the student will later confess and apologize for resorting to ChatGPT. They’ll tell me they had time constraints due to their job or a family emergency, and they take the hit. 

    The shame of passing off a chatbot-generated essay as your own has all but evaporated, and honestly, I’m not shocked. It’s not that today’s students are any less ethical than their predecessors. No, it’s that the line between “authentic” work and AI-assisted output has turned into a smudgy Rorschach test. In the AI Age, the idea of originality is slipperier than a politician at a press conference. Still, let’s be real: quality writing—sharp, insightful, and memorable—hasn’t gone extinct. Turning in some bland, AI-scented drivel that reads like a rejected Wikipedia draft? That’s still unacceptable, no matter how much technology is doing the heavy lifting these days.

    When it comes to grading, if you want to encourage your students to create authentic writing and not hide behind AI, it’s essential to give them a chance to rewrite. I’ve found that allowing one or two rewrites with the possibility of a higher grade keeps them from spiraling into despair when their first submission bombs. In today’s world of online Learning Management Systems (LMS), students are already navigating a digital labyrinth that could produce a migraine. They open their course page and are hit with a chaotic onslaught of modules, notifications, and resources—like the educational equivalent of being trapped in a Vegas casino with no exit signs. It’s no wonder anxiety sets in before they even find the damn syllabus.

    By giving students room to fail and rewrite, I’m essentially throwing them a lifeline. I tell them, “Relax. You can screw this up and try again.” The result? They engage more. They take risks. They’re more likely to produce writing that actually has a pulse—something authentic, which is exactly what I’m fighting for in an age where AI-written drivel is a tempting shortcut. In short, I’m not just teaching composition; I’m running a support group for people overwhelmed by both technology and their own perfectionism.

    If you want to crush your students’ spirits like a cinder block to a soda can, go ahead—pepper their essays with comments until they resemble the Dead Sea Scrolls, riddled with ancient mysteries and editorial marks. Remember, you’re not the high priest of Random House, dissecting a bestseller with the fervor of a literary surgeon. Your students are not authors tweaking their next Pulitzer prizewinner; they’re deer in the headlights, dodging corrections like hunters’ bullets. Load them down with too many notes, and they’ll toss their first draft like it’s cursed Ikea furniture in desperate need of assembly—wood screws, cam lock nuts, and dowel rods strewn across the floor next to an inscrutable instruction manual. At that point, ChatGPT becomes their savior, and off they go, diving into AI’s warm, mind-clearing waters.

    Here’s a reality check: Your students were raised texting, scrolling, and laughing at 15-second TikToks, not slogging through The Count of Monte Cristo or unraveling Dickensian labyrinths in Bleak House. Their attention spans have the tensile strength of wet spaghetti. Handing them an intricate manifesto on rewriting will make their brains flatline faster than you can say “Les MisĂ©rables.” If you want results, focus on three key improvements. Yes, just three. Keep it simple and digestible, like a McNugget of literary wisdom.

    You are their personal trainer, not some sadistic drill sergeant barking out Herculean demands. You don’t shove them under a bar loaded with 400 pounds on day one and shout, “Lift or die!” No, you ease them in. Guide them to the lat machine like a gentle Sherpa of education. Set the weight selector pin at 10 pounds. Teach them to pull with grace, not grunt like they’re auditioning for Gladiator. Form comes first. Confidence follows. They need to trust the process, to see themselves slowly building strength. Maybe they won’t make viral gains overnight, but this is why you became a teacher—not for glory or applause, but for those small, stubborn victories that bloom over time.

    And trust me—there will be victories. I’ve seen it. Students with writing deficits are not doomed to live forever in the land of dangling modifiers and comma splices. I’m living proof. When I stumbled onto my college campus in 1979 at seventeen, I was told I wasn’t ready for freshman composition. They shunted me into what I’d later dub “Bonehead English,” which kicked my ass so hard I had to downgrade to “Pre-Bonehead.” I wasn’t stupid. My teachers weren’t to blame. I was just too busy daydreaming about being the next Schwarzenegger, consumed by the illusion of future pecs and glory. But something clicked in college—I redirected my muscle dreams from biceps to brain cells. And here I am now, climbing the educational ladder I once thought was unreachable.

    So, lighten up on the corrections, and maybe—just maybe—you’ll witness your students climb too.

    The point of this chapter isn’t to let your AI anxieties morph you into some grim, clipboard-wielding overlord of academic misery. It’s about threading the needle: keeping your standards intact while preventing your students from mentally checking out like a bored clerk on a Friday shift. And to strike that balance, here’s a radical idea—stop moonlighting as the plagiarism police. Nobody wants to see you patrolling Turnitin reports like it’s an episode of CSI: MLA Edition. Instead, fixate on improving the actual writing.

    Next, throw your students a rewrite lifeline. Give them a shot at redemption, or at least at salvaging their GPA from the wreckage of their latest Word doc catastrophe. The goal is to prevent them from spiraling into despair and skipping class faster than a doomed New Year’s resolution.

    Lastly, remember, these are academic toddlers in a gym full of intellectual kettlebells. You wouldn’t toss them onto the T-Bar Row or demand a perfect Turkish Get-Up without first teaching them how not to blow out their L5-S1. Show them the fundamentals, give them small wins, and gradually increase the weight. This isn’t a Rocky montage—it’s education. Adjust your expectations accordingly.

  • Talking About ChatGPT with My College Students

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, crypto investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant yapping of your neighbor’s demonic Belgian dachshund, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, gazing into the abyss of AI-generated essays—each one as lifeless as a department meeting on a Friday afternoon—wondering if anyone still remembers what good writing tastes like, let alone hungers for it. Spoiler alert: probably not.

    This is your challenge, your Everest of futility, your battle against the relentless tide of Mindless Ozempification. Life has oh-so-generously handed you this cosmic joke disguised as a teaching mission. So what’s your next move? You could curl up in the fetal position, weeping salty tears of despair into your syllabus. That’s one option. Or you could square your shoulders, roar your best primal scream, and fight like hell for the craft you once worshipped.

    Either way, the abyss is staring back, smirking, and waiting for your next move.