Category: Education in the AI Age

  • My Algorithmic Valentine: How Falling for Bots Is the New Emotional Bankruptcy

    In Jaron Lanier’s New Yorker essay “Your A.I. Lover Will Change You,” he pulls the fire alarm on a building already half-consumed by smoke: humans are cozying up to bots, not just for company but for love. Yes, love—the sort you’re supposed to reserve for people with blood, breath, and the capacity to ruin your vacation. But now? Enter the emotionally calibrated chatbot—ever-patient, never forgets your birthday (or your trauma), and designed to be the perfect receptacle for your neuroses. Lanier asks the big question: Are these botmances training us to be better partners, or just coaxing us into a pixelated abyss of solipsism and surrender?

    Spoiler alert: it’s the abyss.

    Why? Because the attention economy isn’t built on connection; it’s built on addiction. And if tech lords profit off eyeballs, what better click-magnet than a chatbot that flirts better than your ex, listens better than your therapist, and doesn’t come with baggage, back hair, or a dating profile that says “fluent in sarcasm”? To love a bot is not to be seen—it’s to be optimized, to be gently nudged toward emotional dependence by a soulless syntax tree wearing your favorite personality like a Halloween costume.

    My college students already confide in ChatGPT more than their classmates. It’s warm, available, responsive, and—perhaps most damningly—incapable of betrayal. “It understands me,” they say, while real-life intimacy rusts in the corner. What starts as novelty becomes normalization. Today it’s study help and emotional validation. Tomorrow, it’s wedding invitations printed with QR codes for bot-bride RSVP links.

    Lanier’s point is brutal and unignorable: if you fall in love with A.I., you’re not loving a machine—you’re seduced by the human puppeteer behind the curtain, the “tech-bro gigolo” who built your dream girl out of server farms and revenue streams. You’re not in a relationship. You’re in a product demo.

    And like all free trials, it ends with a charge to your soul.

  • The Dopamine Dumpster Fire: How I Went from Literary Scholar to Algorithm Addict

    In 1979, I went to college—back when students still read entire books and didn’t skim Nietzsche between TikTok scrolls. By 1986, I had a master’s degree in English and a reading habit so fierce it could scare a librarian. This was the Pre-Digital, Pre-Illiterate Age, and I was both smarter and, dare I say, happier. Then came the internet, like a radioactive vending machine of constant stimulation, and within a decade my attention span was fried, my dopamine receptors scorched, and my brain felt like a squirrel on meth.

    Reading Anna Lembke’s Dopamine Nation: Finding Balance in the Age of Indulgence was like holding a mirror up to my own cognitive and emotional decline—except the mirror was cracked and buzzing with notification pings. Lembke, a Stanford psychiatrist with a scalpel-sharp intellect, writes that we live in a world of “overwhelming abundance,” where the smartphone is the modern hypodermic needle, delivering micro-hits of dopamine at all hours like a dealer with unlimited supply and no off switch. Her message is clear: addiction isn’t a fringe problem—it’s the central operating system of modern life.

    Lembke’s insight that “pleasure and pain are processed in the same part of the brain” makes you rethink every moment of scrolling, snacking, shopping, and spiraling. The more dopamine you chase, the more pain you invite in through the back door. It’s like sprinting on a treadmill made of banana peels—every gain is followed by a crash. According to Lembke, addiction rewires your brain to seek shortcuts, and in the process, you become a hollowed-out shell of your former self, one push notification away from an existential crisis.

    I didn’t need convincing. Twenty-five years of living online has made my mind a junk drawer of fragmented thoughts and snack-sized emotions. Lembke explains that many addicts live a double life, a private underworld of shame and secrecy that eats away at their integrity. That rang uncomfortably true. She points to risk factors like having a parent with addiction or mental illness. Bingo. Both my parents were alcoholics, and my mother had bipolar disorder—my genetic cocktail came shaken, stirred, and garnished with a panic attack.

    But the biggest risk factor, Lembke argues, is access. We’re all mainlining the internet every day. The supply has become the demand. The dopamine economy, she says, thrives on overconsumption, normalized by the fact that everyone else is doing it. If your entire community is obsessed with likes, outrage, and FOMO-fueled consumerism, it starts to feel… reasonable. Normal. Even patriotic.

    Social media isn’t just a distraction; it’s a full-blown Outrage Machine, built to keep our emotional hair on fire 24/7. We are like feral raccoons pawing at glowing rectangles, convinced that salvation lies in another dopamine hit—another comment, another package, another numbing episode of low-stakes content. Our collective descent is so absurd it would be funny if it weren’t so bleak.

    Lembke leans on the wisdom of cultural critic Philip Rieff, who observed that we’ve moved from “religious man” to “psychological man”—from seeking salvation to chasing pleasure. Add to that Jeffrey Rosen’s The Pursuit of Happiness, which reminds us that classical philosophy defined happiness not as feeling good, but as being good—the moral life, not the moist towelette of consumer satisfaction.

    But that idea, in our current therapeutic culture, sounds about as appealing as a cold shower in February. We’ve been taught to medicate our moods, sedate our angst, and wrap our trauma in soft blankets of “self-care” that often amount to binge-watching and overeating. Our modern mantra is: “If it hurts, scroll faster.” The result? A crisis of meaning, a society allergic to discomfort, and a spiritual vacuum that smells faintly of Axe Body Spray.

    Lembke calls this the paradox of hedonism: the more you chase pleasure, the less capable you become of feeling it. Hedonism leads to anhedonia—a state in which nothing satisfies. You eat the cake, buy the thing, get the like, and feel… nothing. It’s like winning a prize that turns into a cockroach when you unwrap it.

    Ever since reading Dopamine Nation, I’ve been haunted by a single, searing thought: Maybe I shouldn’t try to feel good. Maybe I should try to be good. But this, in a consumer culture built on instant gratification, feels like a betrayal of the social contract. We’re not just addicted—we’re indoctrinated.

    So here I am, a relic of the Pre-Digital Age, nursing my overstimulated brain, trying to claw my way out of the dopamine pit with a few dog-eared paperbacks and a shortwave radio. Because the real question isn’t how to feel better—but how to live better in a world that mistakes stimulation for meaning and pleasure for purpose.

    And if that makes me sound like a cranky monk with Wi-Fi, so be it. I’d rather be a lucid cynic than another dopamine casualty with a glowing screen and dead eyes.

  • Dealing with ChatGPT Essays That Are “Good Enough”

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant barking of your neighbor’s demonic Rottweiler, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, staring into the abyss of AI-generated essays, each more lifeless than the last, wondering if anyone still remembers the taste of good writing—let alone craves it.

    This is your challenge, the struggle life has so graciously dumped in your lap. So, what’s it going to be? You could curl into the fetal position and sob, sure. Or you could square your shoulders, channel your inner battle cry, and start fighting like hell for the craft you once believed in. Either way, the abyss is watching.

  • Why You Should Let Your Students Turn In Rewrites for a Higher Grade

    When it comes to grading, if you want to encourage your students to be authentic and not hide behind AI, it’s essential to give them a chance to rewrite. I’ve found that allowing one or two rewrites with the possibility of a higher grade keeps them from spiraling into despair when their first submission bombs. In today’s world of online Learning Management Systems (LMS), students are already navigating a digital labyrinth that could induce a migraine. They open their course page and are hit with a chaotic onslaught of modules, notifications, and resources—like the educational equivalent of being trapped in a Vegas casino with no exit signs. It’s no wonder anxiety sets in before they even find the damn syllabus.

    By giving students room to fail and rewrite, I’m essentially throwing them a lifeline. I tell them, “Relax. You can screw this up and try again.” The result? They engage more. They take risks. They’re more likely to produce writing that actually has a pulse—something authentic, which is exactly what I’m fighting for in an age where AI-written drivel is a tempting shortcut. In short, I’m not just teaching composition; I’m running a support group for people overwhelmed by both technology and their own perfectionism.

  • Why ChatGPT Will Never Replace Human Teachers

    Over the past two years, I’ve been bombarded by articles predicting that ChatGPT will drive college writing instructors to extinction. These doomsayers clearly wouldn’t know the first thing about teaching if it hit them with a red-inked rubric. Sure, ChatGPT is a memo-writing marvel—perfect for cranking out soul-dead reports about quarterly earnings or new office policies. Let it have that dreary throne.

    But if you became a college instructor to teach students the art of writing memos, you’ve got bigger problems than AI. You didn’t sign up to bore students into a coma. Whether you like it or not, you went into sales. And your pitch? It’s not about bullet points and TPS reports—it’s about persona, ideas, and the eternal fight against chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

  • CHATGPT LIVES RENT-FREE INSIDE YOUR HEAD

    One thing I know about my colleagues is that we have an unrelenting love affair with control. We thrive on reliability, routine, and preparation. These three pillars are our holy trinity—without them, the classroom descends into anarchy. And despite the tech tidal waves that keep crashing against us, we cling to these pillars like castaways on a raft.

    Remember when smartphones hijacked human attention spans fifteen years ago? We adapted—begrudgingly—when our students started caring more about their screens than us. Our power waned, but we put on our game face and carried on. Then came the digital migration: Canvas, Pronto, Nuventive—all those lovely platforms that no one asked us if we wanted. We learned them anyway, with as much grace as one can muster when faced with endless login screens and forgotten passwords.

    Technology never asks permission; it just barges in like an unwelcome houseguest. One morning, you wake up to find it’s moved in—like a freeloading uncle you didn’t know you had. He doesn’t just take over the guest room; he follows you to work, plops on your couch, and eats your sanity for breakfast. Now that freeloading uncle is ChatGPT. I tried to evict him. I said, “Look, dude, I’ve already got Canvas, Pronto, and Edmodo crammed in the guest room. No vacancy!”

    But ChatGPT just grinned and said, “No problem, bro. I’ll crash rent-free in your head.” And here he is—shuffling around my brain, lounging in my workspace, and making himself way too comfortable. This time, though, something’s different. Students are asking me—dead serious—if I’m still going to have a job in a few years. As far as they’re concerned, I’m just another fossil ChatGPT is about to shove into irrelevance.

    And honestly, they have a point. According to the Washington Post article “ChatGPT took their jobs. Now they walk dogs and fix air conditioners,” AI might soon rearrange the workforce with all the finesse of a wrecking ball. Economists predict this upheaval could rival the Industrial Revolution. Students aren’t just worried about us—they’re terrified about their own future in a post-literate world where books collect dust, podcasts reign supreme, and “good enough” AI-generated writing becomes the standard.

    So, what’s the game plan for college writing instructors? If we’re going to have a chance at survival, we need to tackle these tasks:

    1. Reassess how we teach to highlight our relevance.
    2. Identify what ChatGPT can’t replicate in our content and communication styles.
    3. Design assignments that AI can’t easily fake.
    4. Set clear boundaries: ChatGPT stays in its lane, and we own ours.

    We’ll adapt because we always do. But let’s be real—this is only the first round. ChatGPT is a shape-shifter. Whatever we fix today might need a reboot tomorrow. Such is life in the never-ending tech arms race. 

    The real existential threat to my job isn’t just ChatGPT’s constant shape-shifting. No, the real menace is the creeping reality that we might be tumbling headfirst into a post-literate society—one that wouldn’t hesitate to outsource my teaching duties to a soulless algorithm with a smarmy virtual smile.

    Let’s start with the illusion of “best-sellers.” In today’s shrinking reader pool, a “best-seller” might move a tenth of the copies it would have a decade ago. Long-form reading is withering on the vine, replaced by a flood of bite-sized content. Tweets, memes, and TikTok clips now reign supreme. Even a 500-word blog post gets slapped with the dreaded “TL;DR” tag. Back in 2015, when I had the audacity to assign The Autobiography of Malcolm X, my students grumbled like I’d asked them to scale Everest barefoot. Today? I’d be lucky if half the class didn’t drop out before I finished explaining who Malcolm X was.

    Emojis, GIFs, and memes now serve as emotional shorthand, flattening language into reaction shots and cartoon hearts. If the brain dines too long on these fast-food visuals, it may lose its appetite for gourmet intellectual discourse. Why savor complexity when you can swipe to the next dopamine hit?

    In this post-literate dystopia, autodidacticism—a fancy word for “learning via YouTube rabbit holes”—is king. Need to understand the American Revolution, Civil War, and Frederick Douglass? There’s a 10-minute video for that, perfectly timed to finish as your Hot Pocket dings. Meanwhile, print journalism decomposes like roadkill, replaced by podcasts that stretch on for hours, allowing listeners to feel productively busy as they fold laundry or doomscroll Twitter.

    The smartphone, of course, has been the linchpin of this decline. It’s normalized text-speak and obliterated grammar. LOL, brb, IDK, and ikr are now the lingua franca. Capitalization and punctuation? Optional. Precision? Passé.

    Content today isn’t designed to deepen understanding; it’s designed to appease the almighty algorithm. Search engines prioritize clickbait with shallow engagement metrics over nuanced quality. As a result, journalism dies and “information” becomes a hall of mirrors where truth is a quaint, optional accessory.

    In this bleak future, animated explainer videos could take over college classrooms, pushing instructors like me out the door. Lessons on grammar and argumentation might be spoon-fed by ChatGPT clones. Higher education will shift from cultivating wisdom and cultural literacy to churning out “job-ready” drones. Figures like Toni Morrison, James Baldwin, and Gabriel García Márquez? Erased, replaced by influencers hawking hustle culture and tech bros promising “disruption.”

    Convenience will smother curiosity. Screens will become the ultimate opiate, numbing users into passive compliance. Authoritarians won’t even need force—just a well-timed notification and a steady stream of distraction. The Convenience Brain will replace the Curiosity Brain, and we’ll all be too zombified to notice.

    In this post-literate world, I fully expect to be replaced by a hologram—a cheerful AI that preps students for the workforce while serenading them with dopamine-laced infotainment. But at least I’ll get to say “I told you so” in my unemployment memoir.

    Perhaps my rant has become disconnected from reality, the result of the kind of paranoia that overtakes you when ChatGPT has been living rent-free inside your brain for too long. 

  • YOUR RELATIONSHIP WITH CHATGPT COMES AT A COST

    The slow erosion of our appetite for real, messy human experiences—sacrificed on the altar of convenience—haunts me like a bad tattoo decision. It’s this haunting quality, this inability to shake a topic, that marks it as a candidate for a truly worthy essay assignment. If a subject doesn’t linger in the students’ minds long after the semester ends, why bother assigning it?

    I’ve been particularly haunted by Derek Thompson’s essay “The Anti-Social Century,” a deep dive into the causes of our collective loneliness and disconnection. One culprit stands out like a neon billboard in Times Square: convenience. The seductive lure of convenience has driven people to prioritize the ease of solitude over the messiness of human connection. The price for this efficiency? A buffet of mental health issues—depression, anxiety, and the gnawing ache of alienation.

    Fifty years ago, America was brimming with social spaces where people gathered, formed friendships, and built a sense of belonging. Then came the suburbs—glorified hiding places where the American Dream morphed into a binge-watching marathon in a domestic cave lit by the flickering glow of network TV. Decades later, that TV was usurped by an even more hypnotic device: the smartphone. Thompson points out that these screens now consume 30 percent of our waking hours, superglued to our palms like a digital limb. If this screen addiction defines adolescence, it’s no wonder adulthood is turning into a solitary confinement sentence.

    In this context of isolation, some may turn to an unsettling new friend—ChatGPT. Equipped with “paralinguistic cues” that simulate human warmth, intonation, and empathy, AI is poised to become the perfect confidant. It’s always available, never interrupts, and never judges. But therein lies the danger. If AI is programmed to endlessly validate, never to challenge or disagree, users risk becoming socially maladapted, unable to handle the friction of real relationships. Instead of learning to navigate the complexities of human interaction, they might find themselves trapped in a feedback loop of synthetic comfort—a simulation of connection as flat and lifeless as convenience itself.

    As I read Derek Thompson’s analysis of America’s epidemic of loneliness and self-imposed isolation, I pause and exhale a deep sigh of gratitude. I’ve spent my life immersed in the chaos of public spaces, from my college job at a wine shop in Berkeley to three decades of full-time teaching in Los Angeles. In these cosmopolitan pressure cookers, people of every persuasion—hippies, yuppies, eccentrics, and your everyday lunatics—have taught me life lessons you won’t find on TikTok. No influencer, contorting into their latest anxiety-driven performance, can compete with the raw theater of human conflict played out in public spaces.

    Take, for example, my stint at Jackson’s Wine & Spirits, a Berkeley institution perched conveniently next to the Claremont Hotel. It was more than just a wine store; it housed a deli that was a gladiatorial arena of culinary egos. One afternoon, a man in his fifties—radiating that unmistakable “I’m from New York and I’m better than you” energy—strolled in and ordered a Reuben sandwich.

    George, our deli manager, was a fellow New Yorker and a sight to behold: a 300-pound behemoth with black-framed glasses, a permanent cigar stub dangling from his mouth, and a voice that could crush souls like overripe grapes. George had one rule in his deli: no one challenged his authority on sandwiches. But today, that rule would be tested.

    “What kind of cheese do you want on your Reuben?” George asked, calm but ominous, like a mob boss offering you a “favor.”

    The customer froze, as if George had just insulted his ancestors. His face contorted with the righteous fury of a man whose entire worldview had just been shattered. He bellowed, “A Reuben is rye bread, corned beef, Swiss cheese, sauerkraut, and Russian dressing! That’s it! That’s the Reuben!” He might as well have been handing down the Ten Commandments from Mount Pastrami.

    George, unshaken and clearly unimpressed by this deli manifesto, repeated the question with chilling indifference: “What kind of cheese do you want?”

    The man turned beet-red, veins throbbing, and launched into another dramatic recital of the holy Reuben ingredients. What followed was a clash of titans—two stubborn New Yorkers locked in mortal combat over sandwich orthodoxy. Neither would yield. George wouldn’t stop asking about the cheese. The customer wouldn’t stop quoting Reuben scripture like a sandwich prophet. The tension built to a breaking point until the customer unleashed a symphony of expletives that could’ve made a Hell’s Kitchen chef blush. He stormed out, vowing never to patronize such heretical deli blasphemy again.

    To this day, I marvel at that showdown. One man left hungry, the other lost a sale, and neither could claim victory. It was a masterclass in pride, ego, and the unyielding madness that surrounds food rituals. And while ChatGPT might one day learn how to imitate human conflict, I doubt it’ll ever capture the raw grandeur of two alpha New Yorkers battling over a sandwich.

  • Where ChatGPT Falls Short as a Writing Tool

    In More Than Words: How to Think About Writing in the Age of AI, John Warner points out just how emotionally tone-deaf ChatGPT is when tasked with describing something as tantalizing as a cinnamon roll. At best, the AI produces a sterile list of adjectives like “delicious,” “fattening,” and “comforting.” For a human who has gluttonous memories, however, the scent of cinnamon rolls sets off a chain reaction of sensory and emotional triggers—suddenly, you’re transported into a heavenly world of warm, gooey indulgence. For Warner, the smell launches him straight into vivid memories of losing his willpower at a Cinnabon in O’Hare Airport. ChatGPT, by contrast, is utterly incapable of such sensory delirium. It has no desire, no memory, no inner turmoil. As Warner explains, “ChatGPT has no capacity for sense memory; it has no memory in the way human memory works, period.”

    Without memory, ChatGPT can’t make meaningful connections and associations. For John Warner, the cinnamon roll is a marker of a very particular time and place in his life. He was in a different state of mind then than he was twelve years later, reminiscing about the days of caving in to the temptation to buy a Cinnabon. For him, the cinnamon roll carries layers upon layers of associations that give his description of that dessert a depth ChatGPT cannot match.

    Imagine ChatGPT writing a vivid description of Farrell’s Ice Cream Parlour. It would do a serviceable job describing the physical layout: the sweet aroma of fresh waffle cones, sizzling burgers, and syrupy fudge; the red-and-white striped wallpaper stretching from corner to corner; the dark, polished wooden booths lining the walls; the waitstaff dressed in candy-cane-striped vests and straw boater hats. But vital components would be missing: a kid’s imagination, brimming with memories and references to favorite movies, TV shows, and books, and a kid’s perspective, full of grandiose aspirations of becoming like their heroes and mythical legends.

    For someone who grew up believing that Farrell’s was the Holy Grail of birthday parties, my memory of the place adds dimensions to the ice cream parlour that ChatGPT is incapable of rendering:

    When I was a kid growing up in the San Francisco Bay Area in the 1970s, there was an ice creamery called Farrell’s. In a child’s imagination, Farrell’s was the equivalent of Willy Wonka’s Chocolate Factory. You didn’t go to Farrell’s often, maybe once every two years or so. Entering Farrell’s, you were greeted by the cacophony of laughter and the clinking of spoons against glass. Servers in candy-striped uniforms dashed around with the energy of marathon runners, bearing trays laden with gargantuan sundaes. You sat down, your eyes wide with awe, and the menu was presented to you like a sacred scroll. You didn’t need to read it, though. Your quest was clear: the legendary banana split.

    When the dessert finally arrived, it was nothing short of a spectacle. The banana split was monumental, an ice cream behemoth. It was as if the dessert gods themselves had conspired to create this masterpiece. Three scoops of ice cream, draped in velvety hot fudge and caramel, crowned with mountains of whipped cream and adorned with maraschino cherries, all nestled between perfectly ripe bananas. Sprinkles and nuts cascaded down the sides like the treasures of a sugar-coated El Dorado.

    As you took your first bite, you embarked on a journey as grand and transformative as any hero’s quest. The flavors exploded in your mouth, each spoonful a step deeper into the enchanted forest of dessert ecstasy. You were not just eating ice cream; you were battling dragons of indulgence and conquering kingdoms of sweetness. The sheer magnitude of the banana split demanded your full attention and stamina. Your small arms wielded the spoon like a warrior’s sword, and with each bite, you felt a mixture of triumph and fatigue. By the time you reached the bottom of the bowl, you were exhausted. Your muscles ached as if you’d climbed a mountain, and you were certain that you’d expanded your stomach capacity to Herculean proportions. You briefly considered the possibility of needing an appendectomy.

    But oh, the glory of it all! Your Farrell’s sojourn was worth every ache and groan. You entered the ice creamery as an ordinary child and emerged as a hero. In this fairy-tale-like journey, you had undergone a metamorphosis. You were no longer just a scrawny kid from the Bay Area; you were now a muscle-bound, strutting Viking of the dessert world, having mastered the art of indulgence and delight. As you returned home, the experience of Farrell’s left a lasting imprint on your soul. You regaled your friends with tales of your conquest, the banana split becoming a legendary feast in the annals of your childhood adventures. In your heart, you knew that this epic journey to Farrell’s, this magical pilgrimage, had elevated you to the ranks of dessert royalty, a memory that would forever glitter like a golden crown in the kingdom of your mind. As a child, even an innocent trip to an ice creamery was a transformational experience. You entered Farrell’s a helpless runt; you exited it a glorious Viking.

    The other failure of ChatGPT is that it cannot generate meaningful narratives. Without memory or point of view, ChatGPT has no stories to tell and no lessons to impart. Since the days of our Paleolithic ancestors, humans have shared emotionally charged stories around the campfire to ward off both external dangers—like saber-toothed tigers—and internal demons—obsessions, pride, and unbridled desires that can lead to madness. These tales resonate because they acknowledge a truth that thoughtful people, religious or not, can agree on: we are flawed and prone to self-destruction. It’s this precarious condition that makes storytelling essential. Stories filled with struggle, regret, and redemption offer us more than entertainment; they arm us with the tools to stay grounded and resist our darker impulses. ChatGPT, devoid of human frailty, cannot offer us such wisdom.

    Because ChatGPT has no memory, it cannot give us the stories and life lessons we crave and have craved for thousands of years in the form of folk tales, religious screeds, philosophical treatises, and personal manifestos. 

    That ChatGPT can only muster a Wikipedia-like description of a cinnamon roll hardly makes it competitive with humans when it comes to the kind of writing we crave with all of our heart, mind, and soul. 

    One of ChatGPT’s greatest disadvantages is that, unlike us, it is not a fallen creature slogging through the freak show that is this world, to use the language of George Carlin. Nor does ChatGPT understand how our fallen condition can put us at the mercy of our own internal demons and obsessions, which warp reality and lead to dysfunction. In other words, ChatGPT does not have a haunted mind, and without any oppressive memories, it cannot impart stories of value to us.

    When I think of being haunted, I think of one emotion above all others—regret. Regret doesn’t just trap people in the past—it embalms them in it, like a fly in amber, forever twitching. Case in point: there are three men I know who, decades later, are still gnashing their teeth over a squandered romantic encounter so catastrophic in their minds that it may as well be their personal Waterloo.

    It was the summer of their senior year, a time when testosterone and bad decisions flowed freely. Driving from Bakersfield to Los Angeles for a Dodgers game, they were winding through the Grapevine when fate, wearing a tie-dye bikini, waved them down. On the side of the road, an overheated vintage Volkswagen van—a sunbaked shade of decayed orange—coughed its last breath. Standing next to it? Four radiant, sun-kissed Grateful Dead followers, fresh from a concert and still floating on a psychedelic afterglow.

    These weren’t just women. These were ethereal, free-spirited nymphs, perfumed in the intoxicating mix of patchouli, wild musk, and possibility. Their laughter tinkled like wind chimes in an ocean breeze, their sun-bronzed shoulders glistening as they waved their bikinis and spaghetti-strap tops in the air like celestial signals guiding sailors to shore.

    My friends, handy with an engine but fatally clueless in the ways of the universe, leaped to action. With grease-stained heroism, they nursed the van back to health, coaxing it into a purring submission. Their reward? An invitation to abandon their pedestrian baseball game and join the Deadhead goddesses at the Santa Barbara Summer Solstice Festival—an offer so dripping with hedonistic promise that even a monk would’ve paused to consider.

    But my friends? Naïve. Stupid. Shackled to their Dodgers tickets as if they were golden keys to Valhalla. With profuse thanks (and, one imagines, the self-awareness of a plank of wood), they declined. They drove off, leaving behind the road-worn sirens who, even now, are probably still dancing barefoot somewhere, oblivious to the tragedy they unwittingly inflicted.

    Decades later, my friends can’t recall a single play from that Dodgers game, but they can describe—down to the last bead of sweat—the precise moment they drove away from paradise. Bring it up, and they revert into snarling, feral beasts, snapping at each other over whose fault it was that they abandoned the best opportunity of their pathetic young lives. Their girlfriends, beautiful and present, might as well be holograms. After all, these men are still spiritually chained to that sun-scorched highway, watching the tie-dye bikini tops flutter in the wind like banners of a lost kingdom.

    Insomnia haunts them. Their nights are riddled with fever dreams of sun-drenched bacchanals that never happened. They wake in cold sweats, whispering the names of women they never actually kissed. Their relationships suffer, their souls remain malnourished, and all because, on that fateful day, they chose baseball over Dionysian bliss.

    Regret couldn’t have orchestrated a better long-term psychological prison if it tried. It’s been forty years, but they still can’t forgive themselves. They never will. And in their minds, somewhere on that dusty stretch of highway, a rusted-out orange van still sits, idling in the sun, filled with the ghosts of what could have been.

    Humans have always craved stories of folly, and for good reason. First, there’s the guilty pleasure of witnessing someone else’s spectacular downfall—our inner schadenfreude finds comfort in knowing it wasn’t us who tumbled into the abyss of human madness. Second, these stories hold up a mirror to our own vulnerability, reminding us that we’re all just one bad decision away from disaster.

    As a teacher, I can tell you that if you don’t anchor your ideas to a compelling story, you might as well be lecturing to statues. Without a narrative hook, students’ eyes glaze over, their minds drift, and you’re left questioning every career choice that led you to this moment. But if you offer stories brimming with flawed characters—haunted by regrets so deep they’re like Lot’s wife, frozen and unmovable in their failure—students perk up. These narratives speak to something profoundly human: the agony of being broken and the relentless desire to become whole again. That’s precisely where AI like ChatGPT falls short. It may craft mechanically perfect prose, but it has never known the sting of regret or the crushing weight of shame. Without that depth, it can’t deliver the kind of storytelling that truly resonates.

  • WILL WRITING INSTRUCTORS BE REPLACED BY CHATBOTS?

    Last night, I was trapped in a surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

  • HOW DO WE ASSESS STUDENT LEARNING IN THE AGE OF AI?

    One of my colleagues—an expert in technology and education, and thus perpetually stuck in the trenches of this AI circus—must have noticed I’d taken on the role of ChatGPT’s most aggrieved critic. I’d been flooding her inbox with meticulously crafted, panic-laced mini manifestos about how these AI platforms were invading my classroom like a digital plague. But instead of telling me to get a grip or, better yet, stop emailing her altogether, she came up with an ingenious way for me to process my AI anxieties. Her solution? “Why not channel that nervous energy into a Spring Flex Activity on AI in teaching?”

    Naturally, because misery loves company, she signed on to co-present. The date was locked—mid-February 2025. A few months to go, plenty of time to prepare… or so I thought.

    Three months earlier, in November, I was already deep into crafting a masterpiece of a Google Slides presentation, proudly titled: “Ten Approaches to Making AI-Resistant Writing Prompts: Resisting the AI Takeover.” It was focused, practical, and dripping with tech-savvy authority. I was convinced I had nailed it. I would be the knight in shining armor, defending academia from an algorithmic apocalypse.

    But a tiny voice in the back of my head kept nagging: “You do realize ChatGPT has a faster upgrade schedule than your iPhone, right?” Every time I’d tested my so-called AI-resistant strategies, the platform would recognize its weaknesses, evolve, and then laugh in my face. Still, I chose to ignore that voice and basked in my fleeting sense of triumph.

    Then came January. I pulled up my Google Slides to rehearse my presentation and felt the full weight of my hubris. My “cutting-edge” strategies were already about as relevant as an AOL dial-up manual. The AI arms race had advanced, and my presentation was now a quaint little relic—a reminder that in the war against AI, obsolescence isn’t just a risk. It’s the default setting.

    Let me walk you through my three brilliant strategies for giving students AI-resistant writing assignments—strategies that crumbled faster than a cookie in a chatbot’s clutches over the course of three short months.

    Strategy One: Have students summarize an essay with signal phrases, in-depth analysis, and in-text citations. Why? Because ChatGPT couldn’t handle that level of academic finesse. Or so I thought. Fast forward three months, and now the bot churns out MLA-perfect citations with smug precision and rhetorical flair, like it’s gunning for a tenure-track position.

    Strategy Two: Ban clichés and stock phrases. Simple, right? Wrong. Students can now binge-watch YouTube tutorials that teach them how to reprogram ChatGPT to “write with originality” and bypass every plagiarism detection tool I can throw at them. It’s like handing them a cheat code labeled: “Creative Nonsense, Now AI-Enhanced!”

    Strategy Three: Require current references. My reasoning? ChatGPT was stuck in a time warp with outdated sources. But wouldn’t you know it? The bot got a data upgrade and now pulls research so fresh it practically smells like new car leather.

    In sum, ChatGPT is a shape-shifting Hydra of academic trickery. Any technique I recommend today will be obsolete by the time you finish your coffee. So, yes—presenting a guide on “AI-resistant” strategies would be like publishing a survival manual for Jurassic Park and then getting eaten by a velociraptor as you dash across the parking lot to your car.

    So, what exactly was my Flex Day presentation supposed to be about? Since playing tug-of-war with AI’s ever-evolving powers was a losing battle, I decided it was time to pivot. Instead of chasing after futile strategies to “beat” AI, the real question became: what’s our role as instructors in a world where students—and everyone else—are increasingly outsourcing their cognitive load to machines? More importantly, how do we assess student learning when AI tools are rapidly becoming part of everyday life?

    To stay relevant, we have to confront four key questions:

    1. How do we assess how effectively students use AI-writing tools? Are they wielding ChatGPT like a scalpel or a sledgehammer? Are they correctly using ChatGPT as a sidekick to assist their human-generated writing, or have they fallen back on their lazy default setting and produced a “Genie Essay,” in which ChatGPT materializes a cheap, surface-level essay in “the blink of an eye”?
    2. How do we create a grading rubric that separates “higher-order thinking” from surface-level drivel? The difference between a real argument and a ChatGPT-generated one is both profound and crucial—one is a meaningful persuader, the other a stochastic parrot that imitates language mindlessly and randomly.
    3. How do we create a grading rubric that discourages the dreaded Uncanny Valley Effect in student writing? You know, that eerie sensation you get when an essay seems human at first glance but is just slightly “off,” like a malfunctioning Stepford paper that reeks of academic dishonesty.
    4. What uniquely human tasks can we assign in class (online or face-to-face) to measure real learning? Spoiler: If the answer is a formulaic five-paragraph essay, you’re already in trouble.

    If we can answer these questions, maybe—just maybe—we’ll stop grading assignments that feel like AI-generated fever dreams and start nurturing authentic learning again.

    Questions one through three pertain to how we grade the students’ writing and define our expectations in the form of a grading rubric. When it comes to assessing students’ use of AI machines as collaborative helpers in their writing, we don’t get to see how they work at home. We only see the final product: a portion of their essay that we have assigned, like an introduction and thesis paragraph, or the entire manuscript. 

    Let us assume that every student is using an open-platform AI tool. We need a grading rubric that separates the desirable “AI-sidekick essay” from the “AI-genie essay.” To make this separation, we need an AI-Grading Rubric, which should address the following features of writing quality:

    1. Is the language clear, rhetorically appropriate, and conducive to creating a strong authorial presence, or is it mostly AI-signature clichés and stock phrases?
    2. Does the essay explore the messy human side of an issue with higher-order thought, meaning, nuance, and blood, sweat, and tears, or does it smack of an AI-signature piece—facile, glib, surface-level, cookie-cutter, and Wikipedia-like?
    3. Does the essay appear to be an authentic expression of strong authorial presence or does it have that creepy Uncanny Valley Effect? 

    For any kind of grading rubric to be effective, you will have to give your students contrasting essay models, which can be scrutinized in class and posted on Canvas: 

    1. Sidekick Essay vs. Genie Essay
    2. Strong Authorial Presence vs. Cringe-Worthy AI Surface-Level Presence
    3. An essay that is so deep in meaning and nuance that it transcends the original topic and speaks to larger human concerns vs. a glib surface-level essay that has somehow managed to take a sophisticated topic and reduce it to a fifth-grade cookie-cutter argument. 

    A crucial thing to acknowledge as you make the rubric is that you’re assuming students are using AI in some way or another. Your purpose isn’t to “catch them in the act of plagiarism.” Rather, your purpose is to focus on the quality of their writing. They may be using AI effectively and ethically. They may be using AI ineffectively and dubiously. Or they may be using it somewhere in between. The final measure of how they used AI will be evident in the quality of their work, which will be measured against your grading rubric. 

    Aside from assessing your students’ work in the AI Age, you want them to engage in coursework that is uniquely human and cannot be replicated by AI. I recommend the following:

    One. Integrate Personal Writing in an Argumentative Essay: Your students can begin an argumentative essay with an attention-getting hook based on their personal experience. For example, I teach Cal Newport’s book So Good They Can’t Ignore You, in which he argues that pursuing a career with passion as your first criterion is a dangerous premise with a large failure rate, while pursuing a craftsman mindset renders higher career success and happiness. My students defend, refute, or complicate Newport’s claim. In their opening paragraph, they write about their own career quest, based on passion or something else, or they observe someone else they know who is struggling to choose a career based on passion or another criterion.

    Two. Have students interview each other and process those interviews into an introduction paragraph. This can be done in the classroom, or if the class is online, the students can interview each other on the Canvas chat app Pronto. For example, I show my students the documentary Becoming Frederick Douglass and the Jordan Peele movie Get Out and the students have to interview each other with the purpose of writing an extended definition of “The Sunken Place,” as a condition of hellish confinement, and “The North Star,” as a condition of freedom and enlightenment. These definitions will be present in their essays as they compare the themes of Frederick Douglass’s journey to the journey in Get Out.  

    Three. Use multimodal composition assignments. What this means is that in addition to your students submitting an essay, they also submit other media expressions of the assignment. For example, if they are writing an essay about “The Sunken Place” in the movie Get Out, their essay would be accompanied by a YouTube video in which they give an oral presentation of their essay. Another example of multimodal composition is to pair students who are debating an argument. Each student takes an opposing side and they hash it out on either a YouTube video or a homemade podcast. 

    If I had to guess, multimodal composition is going to scale over the next decade. Not only does it measure student achievement in uniquely human ways, but it also gives students the opportunity to use a variety of media tools that they will probably have to master in their careers.

    Four. Before the completed essay is due, have students write a one-page meta-analysis of the assignment in which they describe the ways the assignment made them anxious, frustrated, and confused, and other ways it made them curious and changed their understanding of a topic they may or may not have thought about before. The purpose of this assignment is to make students look at the assignment in a radically different way and engage in the creative process: ruminating, letting ideas bake over time, and realizing that ideas don’t crystallize into absolutes. Rather, ideas are open to change, and the more they change and mature, the deeper and more valuable they become.

    I got this idea from reading Questlove’s Creative Quest. In the book, he recalls a nightly ritual with his parents: After dinner, they would spend two hours immersed in his father’s colossal record collection—every genre imaginable. His dad, a doo-wop musician from the 1950s, didn’t treat those records like sacred relics. Oh no, they were living, breathing works-in-progress. To Questlove, they were the analog version of Google Docs—always open for revision and reinvention. “The thing about records,” he writes, “was that they didn’t feel like closed ideas. They were ideas you could open and ideas you could use.”

    As I reflected on this elegant creative tradition, I was hit by a wave of melancholy. Why? Because this ritual was steeped in abundance—abundance of love, of time, and of joy in creativity for creativity’s sake, without the specter of deadlines or profit lurking around every corner. Questlove’s parents gave him space to explore art as a lifelong conversation, not a product.

    Now cut to me, the college instructor, trying to preach that same gospel of creative abundance to my students—students who shuffle into class like zombies after working double shifts and raising kids. They’re sleep-deprived, haven’t eaten since yesterday’s granola bar, and are already bracing for another round of minimum-wage survival. And here I am, waxing poetic about how they should “let their ideas germinate over time” like artisanal sourdough. Worse yet, I’m promoting multimodal composition—assignments so elaborate, they’re one drone shot away from being a Netflix mini-series. Yeah, that’s gonna land well.

    The truth is, creativity—real, human creativity—requires time. And time is a privilege most of my students just don’t have. So, as I build my course content, I have to factor this reality in. Otherwise, I’m just another academic blowhard asking students to perform miracles on the fumes of a 20-minute nap and half a bag of stale pretzels.