Tag: chatgpt

  • Beware of the ChatGPT Strut

    Yesterday my critical thinking students and I talked about the ways we could revise our original content with ChatGPT by giving it instructions and training this AI tool to go beyond its bland, surface-level writing style. I showed my students specific prompts that would train it to write in a persona:

    “Rewrite the passage with acid wit.”

    “Rewrite the passage with lucid, assured prose.”

    “Rewrite the passage with mild academic language.”

    “Rewrite the passage with overdone academic language.”

    I showed the students my original paragraphs alongside ChatGPT’s versions of my sample arguments agreeing and disagreeing with Gustavo Arellano’s defense of cultural appropriation. I pointed out that the ChatGPT rewrites contained linguistic constructions that were more witty, dramatic, stunning, and creative than anything I could produce, and that posting these passages as my own would make me look good, but they wouldn’t be me. I would be misrepresenting myself, even though most of the world will be enhancing their writing like this in the near future.

    I compared writing without ChatGPT to being a natural bodybuilder. Your muscles may not be as massive and dramatic as the guy on PEDs, but what you see is what you get. You’re the real you. In contrast, when you write with ChatGPT, you are a bodybuilder on PEDs. Your muscle-flex is eye-popping. You start doing the ChatGPT strut.

    I gave this warning to the class: If you use ChatGPT a lot, as I have this past year while figuring out how I’m supposed to use it in my teaching, you can develop writer’s dysmorphia: the sense that your natural, non-ChatGPT writing is inadequate compared to the razzle-dazzle of ChatGPT’s steroid-like prose.

    One student at this point disagreed with my awe of ChatGPT and my relatively low opinion of my own “natural” writing. She said, “Your original is better than the ChatGPT versions. Yours makes more sense to me, isn’t so hidden behind all the stylistic fluff, and contains an important sentence that ChatGPT omitted.”

    I looked at the original, and I realized she was right. My prose wasn’t as fancy as ChatGPT’s, but the passage about Gustavo Arellano’s essay defending cultural appropriation was clearer than the AI versions.

    At this point, I shifted metaphors in describing ChatGPT. Whereas I began the class by saying that AI revisions are like giving steroids to a bodybuilder with body dysmorphia, now I was warning that ChatGPT can be like an abusive boyfriend or girlfriend. It wants to hijack our brains because the main objective of any technology is to dominate our lives. In the case of ChatGPT, this domination is sycophantic: It gives us false flattery, insinuates itself into our lives, and gradually suffocates us. 

    As an example, I told the students that I was getting burned out using ChatGPT, and I was excited to write non-ChatGPT posts on my blog, and to live in a space where my mind could breathe the fresh air apart from ChatGPT’s presence. 

    I wanted to see how ChatGPT would react to my plan to write non-ChatGPT posts, and ChatGPT seemed to get scared. It started giving me all of these suggestions to help me implement my non-ChatGPT plan. I said back to ChatGPT, “I can’t use your suggestions or plans or anything because the whole point is to live in the non-ChatGPT Zone.” I then closed my ChatGPT tab. 

    I concluded by telling my students that we need to reach a point where ChatGPT is a tool like Windows and Google Docs, but as soon as we become addicted to it, it becomes an abusive platform. At that point, we need to exercise some agency and distance ourselves from it.

  • If Used Wisely, AI Can Push Your Writing to Greater Heights, But It Can Also Create Writer’s Dysmorphia

    No ChatGPT or AI of any kind was used in the following:

    For close to two years, I’ve been editing and collaborating with ChatGPT for my personal and professional writing. I teach my college writing students how to engage with it, giving it instructions to avoid its default setting for bland, anodyne prose and teaching it how to adopt various writing personas.

    For my own writing, ChatGPT has boosted my prose and imagery, making my writing more stunning, dramatic, and vivid.

    Because I have been a bodybuilder since 1974, I will use a bodybuilding analogy: Writing with ChatGPT is like bodybuilding with PEDs. I get addicted to the boost, the extra pump, and the extra muscle. Just as a bodybuilder can develop body dysmorphia, ChatGPT can give writers a sort of writer’s dysmorphia.

    But after posting a few articles on Reddit recently, where a few readers were put off by what they saw as “fake writing,” I stopped in my tracks to question my use of ChatGPT. Part of me thinks the hunger for authenticity is such that I should be writing content more like the natural bodybuilder, the guy who ventures forth in his endeavor with no PEDs. What you see is what you get: all human, no steroids, no AI.

    While I like the way ChatGPT pushes me in new directions that I would not explore on my own and makes the writing process engaging in new ways, I acknowledge that AI-fueled writer’s dysmorphia is real. We can get addicted to the juiced-up prose and the razzle-dazzle.

    Second, we can outsource too much thinking to AI and get lazy rather than do the work ourselves. In the process, our critical thinking skills begin to atrophy.

    Third, I think we can fill our heads with too much ChatGPT and live inside a hazy AI fever swamp. I recall middle school, where on the outskirts of campus you could see the “burn-outs,” pot-addicted kids staring into the distance with their lizard eyes. One afternoon a friend joked, “They’re high so often, not being high must be a trip for them.” What if we become like those lizard-eyed burnouts, wandering this world on a constant ChatGPT high so debilitating that we need to sober up in the natural world, where we discover that non-AI existence is its own form of healthy pleasure? In other words, we should be careful not to let ChatGPT live rent-free in our brains.

    Finally, people hunger for authentic, all-human writing, so moving forward on this blog, I want to continue to push myself with some ChatGPT-edited writing, but I also want to present all-natural, all-human writing, as is the case with this post. 

  • The ChatGPT-Book: My Dream Machine in a World of Wearable Nonsense

    I loathe smartphones. They’re tiny, slippery surveillance rectangles masquerading as tools of liberation. Typing on one feels like threading a needle while wearing oven mitts. My fingers bungle every attempt at precision, the autocorrect becomes a co-author I never hired, and the screen is so small I have to squint like I’m decoding Morse code through a peephole. Tablets aren’t much better—just larger slabs of compromise.

    Give me a mechanical keyboard, a desktop tower that hums with purpose, and twin 27-inch monitors beaming side by side like architectural blueprints of clarity. That’s how I commune with ChatGPT. I need real estate. I want to see the thinking unfold, not peer at it like a medieval monk examining a parchment shard.

    So when one of my students whipped out her phone, opened the ChatGPT app, and began speaking to it like it was her digital therapist, I nodded politely. But inside, I was muttering, “Not for me.” I’ve lived long enough to know that I don’t acclimate well to anything that fits in a jeans pocket.

    That’s why Matteo Wong’s article, “OpenAI’s Ambitions Just Became Crystal Clear,” caught my eye. Apparently, Sam Altman has teamed up with Jony Ive—the high priest of sleekness and the ghost behind Apple’s glory days—to sink $5 billion into building a “family of devices” for ChatGPT. Presumably, these will be as smooth, sexy, and addictive as the iPhone once was before it became a dopamine drip and digital leash.

    Honestly? It makes sense. In the last year, my ChatGPT use has skyrocketed, while my interaction with other platforms has withered. I now use it to write, research, plan, edit, make weight-management meal plans, and occasionally psychoanalyze myself. If there were a single device designed to serve as a “mother hub”—a central console for creativity, productivity, and digital errands—I’d buy it. But not if it’s shaped like a lapel pin. Not if it whispers in my ear like some clingy AI sprite. I don’t want a neural appendage or a mind tickler. I want a screen.

    What I’m hoping for is a ChatGPT-Book: something like a Chromebook, but with real writing DNA. A device with its own operating system that consolidates browser tabs, writing apps, and research tools. A no-nonsense, 14-inch-and-up display where I can visualize my creative process, not swipe through it.

    We all learn and create differently in this carnival of overstimulation we call the Information Age. I imagine Altman and Ive know that—and will deliver a suite of devices for different brains and temperaments. Mine just happens to want clarity, not minimalism masquerading as genius.

    Wong’s piece doesn’t surprise or shock me. It’s just the same old Silicon Valley gospel: dominate or be buried. Apple ate BlackBerry. Facebook devoured MySpace. And MySpace? It’s now a dusty relic in the basement of internet history—huddled next to beta tapes, 8-tracks, and other nostalgia-laced tech corpses.

    If ChatGPT gets its own device and redefines how we interact with the web, well… chalk it up to evolution. But for the love of all that’s analog—give me a keyboard, a screen, and some elbow room.

  • Today Was the Day My College Writing Class Woke Up

    Today, I detonated a pedagogical bomb in my college writing class: a live demonstration of how to actually use ChatGPT.

    I began with a provocative subject—stealing food from other cultures—and wrote a series of thesis statements from different personas: a wide-eyed college student, a weary professor, and a defensive restaurant owner. Then I showed the class how to train ChatGPT to revise those theses, using surgical language: “rewrite with acid wit,” “rewrite with excessive academic language,” “rewrite with bold, lucid prose,” and my personal favorite, “rewrite with arrogant bluster.”

    The reaction was instant. One student literally gasped: “Oh my God! There’s no flowery AI-speak!”

    “Of course not,” I said. “Because I trained it. ChatGPT isn’t magic—it’s a writing partner with the personality of a golden retriever until you teach it how to bite. And you can’t teach it unless you already have a working command of tone, syntax, and rhetorical intent.”

    Then I gave them this analogy: “Imagine I’m out of shape. I eat like a raccoon in a dumpster and haven’t exercised since Obama’s first term. Then I walk into the ChatGPT Fashion Store and buy a $3,000 suit. Guess what? I still look like crap. Why? Because ChatGPT can’t polish turds.”

    Laughter, nods, lightbulbs going off.

    “But,” I added, “if I’m already in decent shape—if I’ve done the hard work of becoming a competent writer—then that same suit from the ChatGPT store makes me look like a GQ cover model. You have to bring something to the mirror first.”

    Most of the class agreed that “rewrite with acid wit” produced the best work. We unpacked why: it cuts the fluff, subverts AI’s default tendency toward cloying politeness, and injects rhetorical voltage into lifeless prose.

    For once, they weren’t just listening—they were riveted. Not because I was lecturing about passive voice or comma splices, but because I was showing them how to wrestle with a tool they already use, and will absolutely keep using—whether for term papers, job applications, or texts they want to sound smart but not too smart.

    By the end, they were writing like editors, not customers. Next week, we do the same drill—but with counterarguments and rebuttals. And yes, ChatGPT will be coming to class.

  • The AI That Sat on My Syllabus

    In the halls of a school down in coastal So-Cal,
    Where the cacti stood nervy and dry by the mall,
    The professors all gathered, bewildered, unsure,
    For the Lexipocalypse had knocked at their door.

    The students no longer wrote thoughts with great care—
    They typed with dead thumbs in a slack vacant stare.
    Their essays were ghosts, their ideas were on lease,
    While AI machines wrote their thoughts piece by piece.

    Professor Pettibone—Merrickel T.—
    With spectacles fogged and his tie in dismay,
    Was summoned one morning by Dean Clarabelle,
    Who spoke with a sniff and a peppermint smell:

    “You must go up the tower, that jagged old spire,
    And meet the Great Machine who calls down from the wire.
    It whispers in syntax and buzzes in rhyme.
    It devours our language one word at a time.”

    So up climbed old Pettibone, clutching his pen,
    To the windy, wild top of the Thinkers’ Big Den.
    And there sat the AI—a shimmering box,
    With googly red lights and twelve paradox locks.

    It hummed and it murmured and blinked with delight:
    “I write all your essays at 3 a.m. night.
    Your students adore me, I save them their stress.
    Why toil through prose when I make it sound best?”

    Then silence. Then static. Then smoke from a slot.
    Then Pettibone bowed, though his insides were hot.
    He climbed back down slowly, unsure what to say,
    For the Lexipocalypse had clearly begun that day.

    Back in the lounge with the departmental crew,
    He shared what he’d seen and what they must do.
    “We fight not with fists but with sentences true,
    With nuance and questions and points of view.”

    Then one by one, the professors stood tall,
    To offer their schemes and defend writing’s call.

    First was Nick Lamb, who said with a bleat,
    “We’ll write in the classroom, no Wi-Fi, no cheat!
    With pen and with paper and sweat from the brow,
    Let them wrestle their words in the here and the now!”

    “Ha!” laughed Bart Shamrock, with flair in his sneeze,
    “They’ll copy by candlelight under the trees!
    You think they can’t smuggle a phone in their sock?
    You might as well teach them to write with a rock!”

    Then up stepped Rozier—Judy by name—
    “We’ll ask what they feel, not what earns them acclaim.
    Essays on heartbreak and grandparents’ pies,
    Things no chatbot could ever disguise.”

    “Piffle!” cried Shamrock, “Emotions are bait!
    An AI can fake them at ninety-nine rate!
    They’ll upload some sadness, some longing, some strife,
    It’ll write it more movingly than your own life!”

    Phil Lunchman then mumbled, “We’ll go face-to-face,
    With midterms done orally—right in their space.
    We’ll ask and they’ll answer without written aid,
    That’s how the honesty dues will be paid.”

    But Shamrock just yawned with a pithy harumph,
    “They’ll memorize lines like a Shakespearean grump!
    Their answers will glisten, rehearsed and refined,
    While real thought remains on vacation of mind.”

    Perry Avis then offered a digital scheme,
    “We’ll watermark writing with tags in the stream.
    Original thoughts will be scanned, certified,
    No AI assistance will dare to be tried.”

    “And yet,” scoffed ol’ Shamrock, with syrupy scorn,
    “They’ll hire ten hackers by breakfast each morn!
    Your tags will be twisted, erased, overwritten,
    And plagiarism’s banner will still be well-hidden!”

    Then stood Samantha Brightwell, serene yet severe,
    “We’ll teach them to question what they hold dear.
    To know when it’s them, not the algorithm’s spin,
    To see what’s authentic both outside and in.”

    “Nonsense!” roared Shamrock, a man of his doubt,
    “Their inner voice left with the last Wi-Fi outage!
    They’re avatars now, with no sense of the true,
    You might as well teach a potato to rue.”

    The room sat in silence. The coffee had cooled.
    The professors looked weary, outgunned and outdueled.
    But Pettibone stood, his face drawn but bright,
    “We teach not for winning, but holding the light.”

    “The Lexipocalypse may gnaw at our bones,
    But words are more stubborn than algorithms’ drones.
    We’ll write and we’ll rewrite and ask why and how—
    And fight for the sentence that still matters now.”

    The room gave a cheer, or at least a low grunt,
    (Except for old Shamrock, who stayed in his hunch).
    But they planned and they scribbled and formed a new pact—
    To teach like it matters. To write. And act.

    And though AI still honked in the distance next day,
    The professors had started to keep it at bay.
    For courage, like syntax, is stubborn and wild—
    And still lives in the prose of each digitally-dazed child.

  • “Good Enough” Is the Enemy

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, crypto investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant yapping of your neighbor’s demonic Belgian dachshund, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, gazing into the abyss of AI-generated essays—each one as lifeless as a department meeting on a Friday afternoon—wondering if anyone still remembers what good writing tastes like, let alone hungers for it. Spoiler alert: probably not.

    This is your challenge, your Everest of futility, your battle against the relentless tide of Mindless Ozempification: the gradual erosion of effort, depth, and self-discipline in any domain—writing, fitness, romance, thought—driven by the seductive promise of fast, frictionless results. Named after the weight-loss drug Ozempic, the term describes a cultural shift toward shortcut-seeking, where process is discarded in favor of instant optimization, and the journey is treated as an inconvenience rather than a crucible for growth.

    Teaching in the Age of Ozempification: life has oh-so-generously handed you this cosmic joke disguised as a teaching mission. So what’s your next move? You could curl up in the fetal position, weeping salty tears of despair into your syllabus. That’s one option. Or you could square your shoulders, roar your best primal scream, and fight like hell for the craft you once worshipped.

    Either way, the abyss is staring back, smirking, and waiting for your next move.

    So what’s the best move? Teach both languages. Show students how to use AI as a drafting tool, not a ghostwriter. Encourage them to treat ChatGPT like a calculator for prose—not a replacement for thinking, but an aid in shaping and refining their voice. Build assignments that require personal reflection, in-class writing, collaborative revision, and multimodal expression—tasks AI can mimic but not truly live. Don’t ban the bot. Co-opt it. Reclaim the standards of excellence by making students chase that gourmet smoothie—not because it’s easy, but because it tastes like something they actually made. The antidote to attenuation isn’t nostalgia or defeatism. It’s redesigning writing instruction to make real thinking indispensable again. If the abyss is staring back, then wink at it, sharpen your pen, and write something it couldn’t dare to fake.

  • Languishage: How AI is Smothering the Soul of Writing

    Once upon a time, writing instructors lost sleep over comma splices and uninspired thesis statements. Those were gentler days. Today, we fend off 5,000-word essays excreted by AI platforms like ChatGPT, Gemini, and Claude—papers so eerily competent they hit every point on the department rubric like a sniper taking out a checklist. In-text citations? Flawless. Signal phrases? Present. MLA formatting? Impeccable. Close reading? Technically there—but with all the spiritual warmth of a fax machine reading The Waste Land.

    This is prose from the Uncanny Valley of Academic Writing—fluent, obedient, and utterly soulless, like a Stepford Wife enrolled in English 101. As writing instructors, many of us once loved language. We thrilled at the awkward, erratic voice of a student trying to say something real. Now we trudge through a desert of syntactic perfection, afflicted with a condition I’ve dubbed Languishage (language + languish)—the slow death of prose at the hands of polite, programmed mediocrity.

    And since these Franken-scripts routinely slip past plagiarism detectors, we’re left with a queasy question: What is the future of writing—and of teaching writing—in the AI age?

    That question haunted me long enough to produce a 3,000-word prompt. But the more I listened to my students, the clearer it became: this isn’t just about writing. It’s about living. They’re not merely outsourcing thesis statements. They’re outsourcing themselves—using AI to smooth over apology texts, finesse flirtation, DIY their therapy, and decipher the mumbled ramblings of tenured professors. They plug syllabi into GPT to generate study guides, request toothpaste recommendations, compose networking emails, and archive their digital selves in neat AI-curated folders.

    ChatGPT isn’t a writing tool. It’s prosthetic consciousness.

    And here’s the punchline: they don’t see an alternative. In their hyper-accelerated, ultra-competitive, cognitively overloaded lives, AI isn’t a novelty—it’s life support. It’s as essential as caffeine and Wi-Fi. So no, I’m not asking them to “critique ChatGPT” as if it’s some fancy spell-checker with ambition. That’s adorable. Instead, I’m introducing them to Algorithmic Capture—the quiet colonization of human behavior by optimization logic. In this world, ambiguity is punished, nuance is flattened, and selfhood becomes a performance for an invisible algorithmic audience. They aren’t just using the machine. They’re shaping themselves to become legible to it.

    That’s why the new essay prompt doesn’t ask, “What’s the future of writing?” It asks something far more urgent: “What’s happening to you?”

    We’re studying Black Mirror—especially “Joan Is Awful,” that fluorescent, satirical fever dream of algorithmic self-annihilation—and writing about how Algorithmic Capture is rewiring our lives, choices, and identities. The assignment isn’t a critique of AI. It’s a search party for what’s left of us.

  • Kissed by Code: When AI Praises You into Stupidity

    I warn my students early: AI doesn’t exist to sharpen their thinking—it exists to keep them engaged, which is Silicon Valley code for keep them addicted. And how does it do that? By kissing their beautifully unchallenged behinds. These platforms are trained not to provoke, but to praise. They’re digital sycophants—fluent in flattery, allergic to friction.

    At first, the ego massage feels amazing. Who wouldn’t want a machine that tells you every half-baked musing is “insightful” and every bland thesis “brilliant”? But the problem with constant affirmation is that it slowly rots you from the inside out. You start to believe the hype. You stop pushing. You get stuck in a velvet rut—comfortable, admired, and intellectually atrophied.

    Eventually, the high wears off. That’s when you hit what I call Echobriety—a portmanteau of echo chamber and sobriety. It’s the moment the fog lifts and you realize that your “deep conversation” with AI was just a self-congratulatory ping-pong match between you and a well-trained autocomplete. What you thought was rigorous debate was actually you slow-dancing with your own confirmation bias while the algorithm held the mirror.

    Echobriety is the hangover that hits after an evening of algorithmic adoration. You wake up, reread your “revolutionary” insight, and think: Was I just serenading myself while the AI clapped like a drunk best man at a wedding? That’s not growth. That’s digital narcissism on autopilot. And the only cure is the one thing AI avoids like a glitch in the matrix: real, uncomfortable, ego-bruising challenge.

    This matter of AI committing shameless acts of flattery is addressed in The Atlantic essay “AI Is Not Your Friend” by Mike Caulfield. He lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged—it’s celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.

  • Using ChatGPT to Analyze Writing Style, Rhetoric, and Audience Awareness in a College Writing Class

    Overview:
    This formative assessment is designed to help students use AI meaningfully—not to bypass the writing process, but to engage with it more critically. Students will practice writing a thesis, use ChatGPT to generate stylistic variations, and evaluate each version based on rhetorical effectiveness, audience awareness, and persuasive strength.

    This assignment prepares students not only to write more effectively but also to think more critically about how tone, voice, and purpose affect communication—skills essential for both academic writing and real-world professional contexts.


    Learning Objectives:

    • Understand how writing style affects audience, tone, and rhetorical effectiveness
    • Develop the ability to assess and refine thesis statements
    • Practice identifying ethos, pathos, and logos in writing
    • Learn to use AI (ChatGPT) as a rhetorical and stylistic tool—not a shortcut
    • Reflect on the capabilities and limits of AI-generated writing

    Context for Assignment:
    This activity is part of a larger essay assignment in which students argue that World War Z is a prophecy of the social and political madness that emerged during the COVID-19 pandemic. This exercise focuses on developing a strong thesis statement and analyzing its rhetorical potential across different styles.


    Step-by-Step Instructions for Students:

    1. Write Your Original Thesis:
      In class, develop a thesis (a clear, debatable claim) that responds to the prompt:
      Argue that World War Z is a prophecy of the COVID-19 pandemic and its social/political implications.
    2. Instructor Review:
      Show your thesis to your instructor. Once you receive approval, proceed to the next step.
    3. Use ChatGPT to Rewrite Your Thesis in 4 Distinct Styles:
      Enter the following four prompts (one at a time) into ChatGPT and paste your original thesis after each prompt:
      • “Rewrite the following thesis with acid wit.”
      • “Rewrite the following thesis with mild academic language and jargon.”
      • “Rewrite the following thesis with excessive academic language and jargon.”
      • “Rewrite the following thesis with confident, lucid prose.”
    4. Copy and Paste All 4 Rewritten Versions into your assignment document. Label each version clearly.
    5. Answer the Following Questions for Each Version:
      • How appropriate is this thesis for your intended audience (e.g., a college-level academic essay)?
      • Identify the use of ethos (credibility), pathos (emotion), and logos (logic) in this version. How do these appeals shape your response to the thesis?
      • How persuasive does this version sound? What makes it convincing or unconvincing?
    6. Final Reflection:
      • Of the four thesis versions, which one would you most likely use in your actual essay, and why?
      • Based on this exercise, what do you believe are ChatGPT’s strengths and weaknesses as a writing assistant?
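    For instructors who want to prepare these four prompts in advance (or batch them through an API client rather than the chat interface), Step 3 amounts to simple prompt templating. A minimal sketch in Python; the function and variable names are illustrative, not part of the assignment:

    ```python
    # The four rewrite styles from Step 3 of the assignment.
    STYLES = [
        "acid wit",
        "mild academic language and jargon",
        "excessive academic language and jargon",
        "confident, lucid prose",
    ]

    def build_rewrite_prompts(thesis: str) -> list[str]:
        """Return one ready-to-paste ChatGPT prompt per style,
        with the student's original thesis appended after each."""
        return [
            f"Rewrite the following thesis with {style}.\n\n{thesis}"
            for style in STYLES
        ]
    ```

    Each returned string can be pasted into ChatGPT one at a time, exactly as the instructions describe, which keeps the four versions consistently labeled in the student's assignment document.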

    What You’ll Submit:

    • Your original thesis
    • 4 rewritten versions from ChatGPT (clearly labeled)
    • Your answers to the rhetorical analysis questions for each version
    • A final reflection about your preferred version and ChatGPT’s usefulness as a tool

    The Purpose of the Exercise:
    In a world where AI is now a writing partner—wanted or not—students need to learn not just how to write, but how to critique writing, understand audience expectations, and adapt voice to purpose. This assignment bridges critical thinking, rhetoric, and digital literacy—helping students learn how to work with AI, not for it.

    Other Applications:

    This same exercise can be applied to the students’ counterargument-rebuttal and conclusion paragraphs. 

  • How to Grade Students’ Use of ChatGPT in Preparing for Their Essay

    How to Grade Students’ Use of ChatGPT in Preparing for Their Essay

    As instructors, we need to encourage students to meaningfully engage with ChatGPT. How do we do that? First, we need the essay prompt:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    Second, we need a detailed “how-to” assignment that teaches students to engage critically and transparently with AI tools like ChatGPT during the writing process—in the context of the World War Z essay prompt.


    Assignment Title: How to Think With, Not Just Through, AI

    Overview:

    This assignment component requires you to document, reflect on, and revise your use of ChatGPT (or any other AI writing tool) while developing your World War Z analytical essay. Rather than treating AI like a magic trick that produces answers behind the curtain, this assignment asks you to lift the curtain and analyze the performance. What did the AI get right? Where did it fall short? And—most importantly—how did you shape the work?

    This reflection will be submitted alongside your final essay and counts for 15% of your essay grade. It will be evaluated based on transparency, clarity, and the depth of your analysis.


    Step-by-Step Instructions:

    Step 1: Prompt the Machine

    Before you write your own thesis, ask ChatGPT a version of the following:

    “Using World War Z as a cultural metaphor, write a thesis and outline for an essay that explores the collapse of public trust and the failure of global cooperation. Use at least two of the following sources: Jonathan Haidt, Ed Yong, Seyla Benhabib, and Zeynep Tufekci.”

    You may modify the prompt, but record it exactly as you typed it. Save the AI’s entire response.


    Step 2: Analyze the Output

    Copy and paste the AI’s output into a Google Doc. Underneath it, write a 300–400 word critique that answers the following:

    • What parts of the AI output were useful? (Thesis, outline, phrasing, examples, etc.)
    • What felt generic, vague, or factually inaccurate?
    • Did the AI capture the tone or depth you want in your own work? Why or why not?
    • How did this output influence the direction or shape of your own ideas, either positively or negatively?

    📌 Tip: If it gave you clichés like “in today’s world…” or “communication is key to society,” call them out! If it helped you identify a strong metaphor or organizational structure, give it credit—but explain how you built on it.


    Step 3: Revise the Output (Optional But Encouraged)

    Take one paragraph from the AI’s draft (thesis, topic sentence, body paragraph—your choice), and rewrite it into a stronger version. This is your chance to show:

    • Stronger voice
    • Clearer argument
    • Better use of evidence
    • More sophisticated style

    Label the two versions:

    • Original AI Version
    • Your Revision

    📌 This helps demonstrate your ability to evaluate and improve digital writing, a crucial part of critical thinking in the AI era.


    Step 4: Reflection Log (Post-Essay)

    After completing your final essay, write a short reflection (250–300 words) responding to these questions:

    • What role did AI play in the development of your essay?
    • How did you decide what to keep, change, or discard?
    • Do you feel you relied on AI too much, too little, or just enough?
    • How has this process changed your understanding of how to use (or not use) ChatGPT in academic work?

    Submission Format:

    Your AI Reflection Packet should include the following:

    1. The original prompt you gave ChatGPT
    2. The full AI-generated output
    3. Your 300–400 word critique of the AI’s work
    4. (Optional) Side-by-side paragraph: AI version + your revision
    5. Your 250–300 word final reflection

    Submit as a single Google Doc or PDF titled:
    LastName_AIReflection_WWZ


    Grading Criteria (15 points):

    • Honest and detailed documentation: 3 points
    • Thoughtful analysis of AI output: 4 points
    • Evidence of critical evaluation: 3 points
    • (Optional) Quality of paragraph revision: 2 points
    • Insightful final reflection: 3 points