Tag: technology

  • ChatGPT Killed Lacie Pound and Other Artificial Lies

    In Matteo Wong’s sharp little dispatch, “The Entire Internet Is Reverting to Beta,” he argues that AI tools like ChatGPT aren’t quite ready for daily life. Not unless your definition of “ready” includes faucets that sometimes dispense boiling water instead of cold or cars that occasionally floor the gas when you hit the brakes. It’s an apt metaphor: we’re being sold precision, but what we’re getting is unpredictability in a shiny interface.

I was reminded of this just yesterday when ChatGPT gave me the wrong title for a Meghan Daum essay collection—a book I had just read. I didn’t argue. You don’t correct a toaster when it burns your toast; you just sigh and start over. ChatGPT isn’t thinking. It’s a stochastic parrot with a spellchecker. Its genius is statistical, not epistemological.

    And yet people keep treating it like a digital oracle. One of my students recently declared—thanks to ChatGPT—that Lacie Pound, the protagonist of Black Mirror’s “Nosedive,” dies a “tragic death.” She doesn’t. She ends the episode in a prison cell, laughing—liberated, not lifeless. But the essay had already been turned in, the damage done, the grade in limbo.

    This sort of glitch isn’t rare. It’s not even surprising. And yet this technology is now embedded into classrooms, military systems, intelligence agencies, healthcare diagnostics—fields where hallucinations are not charming eccentricities, but potential disasters. We’re handing the scalpel to a robot that sometimes thinks the liver is in the leg.

    Why? Because we’re impatient. We crave novelty. We’re addicted to convenience. It’s the same impulse that led OceanGate CEO Stockton Rush to ignore engineers, cut corners on sub design, and plunge five people—including himself—into a carbon-fiber tomb. Rush wanted to revolutionize deep-sea tourism before the tech was seaworthy. Now he’s a cautionary tale with his own documentary.

    The stakes with AI may not involve crushing depths, but they do involve crushing volumes of misinformation. The question isn’t Can ChatGPT produce something useful? It clearly can. The real question is: Can it be trusted to do so reliably, and at scale?

    And if not, why aren’t we demanding better? Why haven’t tech companies built in rigorous self-vetting systems—a kind of epistemological fail-safe? If an AI can generate pages of text in seconds, can’t it also cross-reference a fact before confidently inventing a fictional death? Shouldn’t we be layering safety nets? Or have we already accepted the lie that speed is better than accuracy, that beta is good enough?

    Are we building tools that enhance our thinking, or are we building dependencies that quietly dismantle it?

  • Gods of Code: Tech Lords and the End of Free Will (College Essay Prompt)

    In the HBO Max film Mountainhead and the Black Mirror episode “Joan Is Awful,” viewers are plunged into unnerving dystopias shaped not by evil governments or alien invasions, but by tech corporations whose influence surpasses state power and whose tools penetrate the most intimate corners of human consciousness.

    Both works dramatize a chilling premise: that the very notion of an autonomous self is under siege. We are not simply consumers of technology but the raw material it digests, distorts, and reprocesses. In these narratives, the protagonists find their sense of self unraveled, their identities replicated, manipulated, and ultimately owned by forces they cannot control. Whether through digital doppelgängers, surveillance entertainment, or techno-induced psychosis, these stories illustrate the terrifying consequences of surrendering power to those who build technologies faster than they can understand or ethically manage them.

For this essay, write a 1,700-word argumentative exposition responding to the following claim:

    In the age of runaway innovation, where the ambitions of tech elites override democratic values and psychological safeguards, the very concept of free will, informed consent, and the autonomous self is collapsing under the weight of its digital imitation.

    Use Mountainhead and “Joan Is Awful” as your core texts. Analyze how each story addresses the themes of free will, consent, identity, and power. You are encouraged to engage with outside sources—philosophical, journalistic, or theoretical—that help you interrogate these themes in a broader context.

    Consider addressing:

    • The illusion of choice and algorithmic determinism
    • The commodification of human identity
    • The satire of corporate terms of service and performative consent
    • The psychological toll of being digitally duplicated or manipulated
    • Whether technological “progress” is outpacing moral development

    Your argument should include a strong thesis, counterargument with rebuttal, and close textual analysis that connects narrative detail to broader social and philosophical stakes.


    Five Sample Thesis Statements with Mapping Components


    1. The Death of the Autonomous Self

In Mountainhead and “Joan Is Awful,” the protagonists’ loss of agency illustrates how modern tech empires undermine the very concept of selfhood by reducing human experience to data, delegitimizing consent through obfuscation, and accelerating psychological collapse under the guise of innovation.

    Mapping:

    • Reduction of human identity to data
    • Meaningless or manipulated consent
    • Psychological consequences of tech-induced identity collapse

    2. Mock Consent in the Age of Surveillance Entertainment

    Both narratives expose how user agreements and passive digital participation mask deeply coercive systems, revealing that what tech companies call “consent” is actually a legalized form of manipulation, moral abdication, and commercial exploitation.

    Mapping:

    • Consent as coercion disguised in legal language
    • Moral abdication by tech designers and executives
    • Profiteering through exploitation of personal identity

    3. From Users to Subjects: Tech’s New Authoritarianism

Mountainhead and “Joan Is Awful” warn that the unchecked ambitions of tech elites have birthed a new form of soft authoritarianism—where control is exerted not through force but through omnipresent surveillance, AI-driven personalization, and identity theft masquerading as entertainment.

    Mapping:

    • Tech ambition and loss of oversight
    • Surveillance and algorithmic control
    • Identity theft as entertainment and profit

    4. The Algorithm as God: Tech’s Unholy Ascendancy

    These works portray the tech elite as digital deities who reprogram reality without ethical limits, revealing a cultural shift where the algorithm—not the soul, society, or state—determines who we are, what we do, and what versions of ourselves are publicly consumed.

    Mapping:

    • Tech elites as godlike figures
    • Algorithmic reality creation
    • Destruction of authentic identity in favor of profitable versions

    5. Selfhood on Lease: How Tech Undermines Freedom and Flourishing

The protagonists’ descent into confusion and submission in both Mountainhead and “Joan Is Awful” shows that freedom and personal flourishing are now contingent upon platforms and policies controlled by distant tech overlords, whose tools amplify harm faster than they can prevent it.

    Mapping:

    • Psychological dependency on digital platforms
    • Collapse of personal flourishing under tech influence
    • Lack of accountability from the tech elite

    Sample Outline


    I. Introduction

    • Hook: A vivid description of Joan discovering her life has become a streamable show, or the protagonist in Mountainhead questioning his own sanity.
    • Context: Rise of tech empires and their control over identity and consent.
    • Thesis: (Insert selected thesis statement)

    II. The Disintegration of the Self

    • Analyze how Joan and the Mountainhead protagonist experience a crisis of identity.
    • Discuss digital duplication, surveillance, and manipulated perception.
    • Use scenes to show how each story fractures the idea of an integrated, autonomous self.

    III. Consent as a Performance, Not a Principle

    • Explore how both stories critique the illusion of informed consent in the tech age.
    • Examine the use of user agreements, surveillance participation, and passive digital exposure.
    • Link to real-world examples (terms of service, data collection, facial recognition use).

    IV. Tech Elites as Unaccountable Gods

    • Compare the figures or systems in charge—Streamberry in Joan Is Awful, the nebulous forces in Mountainhead.
    • Analyze how the lack of ethical oversight allows systems to spiral toward harm.
    • Use real-world examples like social media algorithms and AI misuse.

    V. Counterargument and Rebuttal

    • Counterargument: Technology isn’t inherently evil—it’s how we use it.
    • Rebuttal: These works argue that the current infrastructure privileges power, speed, and profit over reflection, ethics, or restraint—and humans are no longer the ones in control.

    VI. Conclusion

    • Restate thesis with higher stakes.
    • Reflect on what these narratives ask us to consider about our current digital lives.
    • Pose an open-ended question: Can we build a future where tech enhances human agency instead of annihilating it?

  • Brand Me, Break Me: The Confused User’s Guide to Digital Collapse (A College Essay Prompt)

    In addition to teaching Critical Thinking, I also teach Freshman Composition, and this semester I’m working with student-athletes—specifically, football players navigating the brave new world of NIL (Name, Image, Likeness) deals. These athletes are now eligible to make money from social media, which makes our first writing assignment both practical and perilous.

    Essay Prompt #1: Brand Me, Break Me: The Confused User’s Guide to Digital Collapse

    Social media is a business. Social media is also a drug. Sometimes, it’s both—and that’s when things get weird.

    In the docuseries Money Game, we watch college athletes play the algorithm like it’s just another playbook. They build brands, negotiate deals, and treat their social feeds like a revenue stream. Let’s call them Business Users—people who understand the game and are winning it.

    But then come the Dopamine Users, the rest of us poor souls, scrolling and posting not for profit, but for approval. In Black Mirror’s “Nosedive” and “Joan Is Awful,” we see social media mutate into a psychological carnival of rating systems, fake smiles, and avatars of self-worth. The result? A curated self that has nothing to do with reality and everything to do with anxiety, desperation, and an ongoing identity crisis.

    And then there’s the tragicomic third act: The Confused User. Think Untold: The Liver King. Here’s a guy who tried to be a Business User but collapsed into parody—lying, self-deluding, and publicly unraveling. The Confused User believes they’re optimizing for attention and success but ends up optimizing for ridicule and collapse.

In this essay, use Money Game, “Nosedive,” “Joan Is Awful,” Untold: The Liver King, Jonathan Haidt’s essay “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” and Sherry Turkle’s TED Talk “Connected, but Alone?” to respond to the following claim:

    Social media can be a profitable business tool—but when it becomes a substitute for self-worth, it guarantees isolation, mental illness, and eventual collapse. Understanding the difference between Business Users, Dopamine Users, and Confused Users may be the only way to survive the algorithm without losing your mind.

You may agree, partially agree, or disagree with the claim—but whichever position you take, argue it with clarity and nuance. Analyze the psychology, the economics, and the wreckage.

    And remember: this is a critical thinking exercise. That means no TikTok therapy takes, no AI-generated summaries, and no mushy conclusions. Think hard, argue well, and—above all—write like someone who’s seen the glitch in the matrix.

    Sample Thesis Statements:

    1. While social media offers entrepreneurial opportunities for Business Users, the vast majority of people are Dopamine Users unknowingly trading mental stability for validation, making the platform a psychological trap disguised as empowerment.
    2. The Confused User, exemplified by the Liver King, represents a cautionary tale in the digital economy: when brand-building and identity collapse into one, social media success becomes indistinguishable from self-destruction.
    3. Social media doesn’t inherently damage us—but without a clear distinction between economic strategy and personal validation, users risk becoming Confused Users whose craving for attention leads not to fame, but to ruin.

    In a world where your Instagram handle might carry more currency than your GPA, this isn’t just an academic exercise—it’s a survival guide. Whether you’re gunning for a sponsorship deal or just trying not to lose your sense of self in the scroll, this essay is your chance to interrogate the game before it plays you. Treat it like film study for the algorithm: read the plays, understand the players, and figure out how to stay human in a system designed to monetize your attention and, if you’re not careful, your identity.

  • How Headphones Made Me Emotionally Unavailable in High-Resolution Audio

    After flying to Miami recently, I finally understood the full appeal of noise-canceling headphones—not just for travel, but for the everyday, ambient escape act they offer my college students. Several claim, straight-faced, that they “hear the lecture better” while playing ASMR in their headphones because it soothes their anxiety and makes them better listeners. Is this neurological wizardry? Or performance art? I’m not sure. But apocryphal or not, the explanation has stuck with me.

It made me see the modern, high-grade headphone as something far more than a listening device. It’s a sanctuary, or to use the modern euphemism, an aural safe space in a chaotic world. You may not have millions to seal yourself in a hyperbaric oxygen pod inside a luxury doomsday bunker carved into Montana granite, like something out of World War Z, but if you’ve got $500 and a credit score above sea level, you can disappear in style—into a pair of Sony WH-1000XM6s or Audio-Technica ATH-R70x headphones.

    The headphone, in this context, is not just gear—it’s armor. Whether cocobolo wood or carbon fiber, it communicates something quietly radical: “I have opted out.”

    You’re not rejecting the world with malice—you’re simply letting it know that you’ve found something better. Something more reliable. Something calibrated to your nervous system. In fact, you’ve severed communication so politely that all they hear is the faint thump of curated escapism pulsing through your earpads.

For my students, these headphones are not fashion statements—they’re boundary-drawing devices. The outside world is a cacophony of Canvas announcements, attention fatigue, and algorithmically optimized despair. Inside the headphones? Rain sounds. Lo-fi beats from a YouTube loop titled “study with me until the world ends.” Maybe even a softly muttering AI voice telling them they are enough.

    It doesn’t matter whether it’s true. It matters that it works.

    And here’s the deeper point: the headphone isn’t just a sanctuary. It’s a non-accountability device. You can’t be blamed for ghosting a group chat or zoning out during a team huddle when you’re visibly plugged into something more profound. You’re no longer rude—you’re occupied. Your silence is now technically sound.

    In a hyper-networked world that expects your every moment to be a node of productivity or empathy, the headphone is the last affordable luxury that buys you solitude without apology. You don’t need a manifesto. You just need active noise-canceling and a decent DAC.

    You’re not ignoring anyone. You’ve just entered your own monastery of midrange clarity, bass-forward detachment, and spatially engineered peace.

    And if someone wants your attention?

    Tell them to knock louder. You’re in sanctuary.

  • Beware of the ChatGPT Strut

Yesterday my critical thinking students and I talked about the ways we could revise our original content with ChatGPT, giving it instructions and training this AI tool to go beyond its bland, surface-level writing style. I showed my students specific prompts that would train it to write in a persona:

    “Rewrite the passage with acid wit.”

    “Rewrite the passage with lucid, assured prose.”

    “Rewrite the passage with mild academic language.”

    “Rewrite the passage with overdone academic language.”
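For anyone who would rather script this drill than paste prompts into the chat window one at a time, the persona-instruction pattern is easy to package. Here is a minimal sketch in Python; the `build_persona_messages` helper and the model name in the comment are my own illustrative choices, not part of the class material, and the actual API call (which requires the `openai` client and an API key) is shown only as a commented example.

```python
# Sketch: packaging the in-class "rewrite in a persona" prompts as chat messages.
# The helper name and structure are illustrative assumptions.

PERSONAS = {
    "acid wit": "Rewrite the passage with acid wit.",
    "lucid": "Rewrite the passage with lucid, assured prose.",
    "mild academic": "Rewrite the passage with mild academic language.",
    "overdone academic": "Rewrite the passage with overdone academic language.",
}

def build_persona_messages(persona: str, passage: str) -> list[dict]:
    """Pair a persona instruction (system role) with a student passage (user role)."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": passage},
    ]

# Sending the messages would require an API key, e.g. with the openai client:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(model="gpt-4o", messages=msgs)
msgs = build_persona_messages("acid wit", "Cultural appropriation is complicated.")
print(msgs[0]["content"])  # -> Rewrite the passage with acid wit.
```

The point of separating the persona instruction from the passage is the same one made in class: the instruction is where the writer’s command of tone lives, and swapping one dictionary entry swaps the whole voice.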

I showed the students my original paragraphs and ChatGPT’s versions of my sample arguments agreeing and disagreeing with Gustavo Arellano’s defense of cultural appropriation. I admitted that the ChatGPT rewrites contained linguistic constructions wittier, more dramatic, more stunning, and more creative than anything I could produce, and that to post these passages as my own would make me look good, but they wouldn’t be me. I would be misrepresenting myself, even though much of the world will be enhancing its writing like this in the near future.

I compared writing without ChatGPT to being a natural bodybuilder. Your muscles may not be as massive and dramatic as the guy on PEDs, but what you see is what you get. You’re the real you. In contrast, when you write with ChatGPT, you are a bodybuilder on PEDs. Your muscle-flex is eye-popping. You start doing the ChatGPT strut.

I gave this warning to the class: If you use ChatGPT a lot, as I have in the last year while trying to figure out how I’m supposed to use it in my teaching, you can develop writer’s dysmorphia: the sense that your natural, non-ChatGPT writing is inadequate compared to the razzle-dazzle of ChatGPT’s steroid-like prose.

    One student at this point disagreed with my awe of ChatGPT and my relatively low opinion of my own “natural” writing. She said, “Your original is better than the ChatGPT versions. Yours makes more sense to me, isn’t so hidden behind all the stylistic fluff, and contains an important sentence that ChatGPT omitted.”

I looked at the original, and I realized she was right. My prose wasn’t as fancy as ChatGPT’s, but the passage about Gustavo Arellano’s essay defending cultural appropriation was clearer than the AI versions.

    At this point, I shifted metaphors in describing ChatGPT. Whereas I began the class by saying that AI revisions are like giving steroids to a bodybuilder with body dysmorphia, now I was warning that ChatGPT can be like an abusive boyfriend or girlfriend. It wants to hijack our brains because the main objective of any technology is to dominate our lives. In the case of ChatGPT, this domination is sycophantic: It gives us false flattery, insinuates itself into our lives, and gradually suffocates us. 

    As an example, I told the students that I was getting burned out using ChatGPT, and I was excited to write non-ChatGPT posts on my blog, and to live in a space where my mind could breathe the fresh air apart from ChatGPT’s presence. 

    I wanted to see how ChatGPT would react to my plan to write non-ChatGPT posts, and ChatGPT seemed to get scared. It started giving me all of these suggestions to help me implement my non-ChatGPT plan. I said back to ChatGPT, “I can’t use your suggestions or plans or anything because the whole point is to live in the non-ChatGPT Zone.” I then closed my ChatGPT tab. 

I concluded by telling my students that we need to reach a point where ChatGPT is a tool like Windows or Google Docs; the moment we become addicted to it, it becomes an abusive platform. At that point, we need to exercise some agency and distance ourselves from it.

  • If Used Wisely, AI Can Push Your Writing to Greater Heights, But It Can Also Create Writer’s Dysmorphia

    No ChatGPT or AI of any kind was used in the following:

For close to two years, I’ve been editing and collaborating with ChatGPT on my personal and professional writing. I teach my college writing students how to engage with it, giving it instructions to avoid its default setting of bland, anodyne prose and teaching it how to adopt various writing personas.

    For my own writing, ChatGPT has boosted my prose and imagery, making my writing more stunning, dramatic, and vivid.

Because I have been a bodybuilder since 1974, I will use a bodybuilding analogy: Writing with ChatGPT is like bodybuilding with PEDs. I get addicted to the boost, the extra pump, and the extra muscle. Just as a bodybuilder can get body dysmorphia, ChatGPT can give writers a sort of writer’s dysmorphia.

But after posting a few articles on Reddit recently, where some readers were put off by what they saw as “fake writing,” I stopped in my tracks to question my use of ChatGPT. Part of me thinks the hunger for authenticity is such that I should be writing content more like the natural bodybuilder, the guy who ventures forth in his endeavor with no PEDs. What you see is what you get: all human, no steroids, no AI.

    While I like the way ChatGPT pushes me in new directions that I would not explore on my own and makes the writing process engaging in new ways, I acknowledge that AI-fueled writer’s dysmorphia is real. We can get addicted to the juiced-up prose and the razzle-dazzle.

Second, we can outsource too much thinking to AI and get lazy rather than do the work ourselves. In the process, our critical thinking skills begin to atrophy.

Third, I think we can fill our heads with too much ChatGPT and live inside a hazy AI fever swamp. I recall that in middle school, on the outskirts of the campus, you could see the “burn-outs,” pot-addicted kids staring into the distance with their lizard eyes. One afternoon a friend joked, “They’re high so often, not being high must be a trip for them.” What if we become like those lizard-eyed burnouts and wander the world on a constant ChatGPT high, so debilitating that we need to sober up in the natural world, where we discover that non-AI existence is its own form of healthy pleasure? In other words, we should be careful not to let ChatGPT live rent-free in our brains.

    Finally, people hunger for authentic, all-human writing, so moving forward on this blog, I want to continue to push myself with some ChatGPT-edited writing, but I also want to present all-natural, all-human writing, as is the case with this post. 

  • The ChatGPT-Book: My Dream Machine in a World of Wearable Nonsense

    I loathe smartphones. They’re tiny, slippery surveillance rectangles masquerading as tools of liberation. Typing on one feels like threading a needle while wearing oven mitts. My fingers bungle every attempt at precision, the autocorrect becomes a co-author I never hired, and the screen is so small I have to squint like I’m decoding Morse code through a peephole. Tablets aren’t much better—just larger slabs of compromise.

    Give me a mechanical keyboard, a desktop tower that hums with purpose, and twin 27-inch monitors beaming side by side like architectural blueprints of clarity. That’s how I commune with ChatGPT. I need real estate. I want to see the thinking unfold, not peer at it like a medieval monk examining a parchment shard.

    So when one of my students whipped out her phone, opened the ChatGPT app, and began speaking to it like it was her digital therapist, I nodded politely. But inside, I was muttering, “Not for me.” I’ve lived long enough to know that I don’t acclimate well to anything that fits in a jeans pocket.

    That’s why Matteo Wong’s article, “OpenAI’s Ambitions Just Became Crystal Clear,” caught my eye. Apparently, Sam Altman has teamed up with Jony Ive—the high priest of sleekness and the ghost behind Apple’s glory days—to sink $5 billion into building a “family of devices” for ChatGPT. Presumably, these will be as smooth, sexy, and addictive as the iPhone once was before it became a dopamine drip and digital leash.

    Honestly? It makes sense. In the last year, my ChatGPT use has skyrocketed, while my interaction with other platforms has withered. I now use it to write, research, plan, edit, make weight-management meal plans, and occasionally psychoanalyze myself. If there were a single device designed to serve as a “mother hub”—a central console for creativity, productivity, and digital errands—I’d buy it. But not if it’s shaped like a lapel pin. Not if it whispers in my ear like some clingy AI sprite. I don’t want a neural appendage or a mind tickler. I want a screen.

    What I’m hoping for is a ChatGPT-Book: something like a Chromebook, but with real writing DNA. A device with its own operating system that consolidates browser tabs, writing apps, and research tools. A no-nonsense, 14-inch-and-up display where I can visualize my creative process, not swipe through it.

    We all learn and create differently in this carnival of overstimulation we call the Information Age. I imagine Altman and Ive know that—and will deliver a suite of devices for different brains and temperaments. Mine just happens to want clarity, not minimalism masquerading as genius.

Wong’s piece doesn’t surprise or shock me. It’s just the same old Silicon Valley gospel: dominate or be buried. Apple ate BlackBerry. Facebook devoured MySpace. And MySpace? It’s now a dusty relic in the basement of internet history—huddled next to Betamax tapes, 8-tracks, and other nostalgia-laced tech corpses.

    If ChatGPT gets its own device and redefines how we interact with the web, well… chalk it up to evolution. But for the love of all that’s analog—give me a keyboard, a screen, and some elbow room.

  • The Coldplay Apocalypse: Notes from a Smoothie-Drinking Future

    Welcome to the future—where the algorithm reigns, identity is a curated filter pack, and dystopia arrives not with a boot to the face but a wellness app and a matching pair of $900 headphones that murmur Coldplay into your skull at just the right serotonin-laced frequency.

    We will all look like vaguely reprocessed versions of Salma Hayek or Brad Pitt—digitally airbrushed to remove all imperfections but retain just enough “authenticity” to keep our neuroses in play. Our playlists will be algorithmically optimized to sound like Coldplay mated with spa music and decided never to take risks again.

    We’ll wear identical headphones—sleek, matte, noise-canceling monuments to our collective disinterest in one another. Not to be rude. Just too evolved to engage. Every journal entry we write will be AI-assisted, reading like the bastard child of Brené Brown and ChatGPT: reflective, sincere, and soul-crushingly uniform.

    Our influencers? They’ll all look the same too—gender-fluid, lightly medicated, with just enough charisma to sell you an oat milk subscription while quoting Kierkegaard. Politics, entertainment, mental health, and skincare will be served up on the same TikTok platter, narrated by someone who once dated a crypto founder and now podcasts about trauma.

    Three times a day, we’ll sip our civilization smoothie: a beige sludge of cricket protein, creatine, nootropic fibers, and a lightly psychoactive GLP-1 variant that keeps hunger, sadness, and ambition at bay. It’s not a meal; it’s a contract with the status quo. We’ll all wear identical sweat-wicking athleisure in soothing desert neutrals, paired with orthopedic sneakers in punchy tech-startup orange.

    We’ll all “take breaks from social media” at the same approved hour—between 5 and 6 p.m.—so we can “reconnect with the analog world” by staring at a sunset long enough to photograph it and post our profound revelations online at 6:01.

    Nobody will want children, because who wants to drag a baby into a climate-controlled apartment where the rent is half your nervous system? Marriage? A relic of a time when humans still believed in eye contact. Romances will be managed by chatbots programmed to simulate caring without requiring reciprocation. You’ll tell the app your love language, it’ll write your messages, and your partner’s app will do the same. Everyone’s emotionally satisfied, no one’s truly known.

    And vacations? Pure fiction. Deepfakes will show us in Bali, Tuscany, or the moon—beaming with digital joy, sipping pixelated espresso. Real travel is for the ultra-rich and the deluded.

    As for existential despair? Doesn’t exist anymore. Our moods will be finely tuned by micro-dosed pharmacology and AI-generated affirmations. No more late-night crises or 3 a.m. sobbing into a pillow. Just an endless, gentle hum of stabilized contentment—forever.

  • Today Was the Day My College Writing Class Woke Up

    Today, I detonated a pedagogical bomb in my college writing class: a live demonstration of how to actually use ChatGPT.

    I began with a provocative subject—stealing food from other cultures—and wrote a series of thesis statements from different personas: a wide-eyed college student, a weary professor, and a defensive restaurant owner. Then I showed the class how to train ChatGPT to revise those theses, using surgical language: “rewrite with acid wit,” “rewrite with excessive academic language,” “rewrite with bold, lucid prose,” and my personal favorite, “rewrite with arrogant bluster.”

    The reaction was instant. One student literally gasped: “Oh my God! There’s no flowery AI-speak!”

    “Of course not,” I said. “Because I trained it. ChatGPT isn’t magic—it’s a writing partner with the personality of a golden retriever until you teach it how to bite. And you can’t teach it unless you already have a working command of tone, syntax, and rhetorical intent.”

    Then I gave them this analogy: “Imagine I’m out of shape. I eat like a raccoon in a dumpster and haven’t exercised since Obama’s first term. Then I walk into the ChatGPT Fashion Store and buy a $3,000 suit. Guess what? I still look like crap. Why? Because ChatGPT can’t polish turds.”

    Laughter, nods, lightbulbs going off.

    “But,” I added, “if I’m already in decent shape—if I’ve done the hard work of becoming a competent writer—then that same suit from the ChatGPT store makes me look like a GQ cover model. You have to bring something to the mirror first.”

    Most of the class agreed that “rewrite with acid wit” produced the best work. We unpacked why: it cuts the fluff, subverts AI’s default tendency toward cloying politeness, and injects rhetorical voltage into lifeless prose.

    For once, they weren’t just listening—they were riveted. Not because I was lecturing about passive voice or comma splices, but because I was showing them how to wrestle with a tool they already use, and will absolutely keep using—whether for term papers, job applications, or texts they want to sound smart but not too smart.

    By the end, they were writing like editors, not customers. Next week, we do the same drill—but with counterarguments and rebuttals. And yes, ChatGPT will be coming to class.

  • “Good Enough” Is the Enemy

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, crypto investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant yapping of your neighbor’s demonic Belgian dachshund, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, gazing into the abyss of AI-generated essays—each one as lifeless as a department meeting on a Friday afternoon—wondering if anyone still remembers what good writing tastes like, let alone hungers for it. Spoiler alert: probably not.

    This is your challenge, your Everest of futility, your battle against the relentless tide of Mindless Ozempification: the gradual erosion of effort, depth, and self-discipline in any domain—writing, fitness, romance, thought—driven by the seductive promise of fast, frictionless results. Named after the weight-loss drug Ozempic, it describes a cultural shift toward shortcut-seeking, where process is discarded in favor of instant optimization, and the journey is treated as an inconvenience rather than a crucible for growth.

    This is teaching in the Age of Ozempification: life has oh-so-generously handed you a cosmic joke disguised as a teaching mission. So what’s your next move? You could curl up in the fetal position, weeping salty tears of despair into your syllabus. That’s one option. Or you could square your shoulders, roar your best primal scream, and fight like hell for the craft you once worshipped.

    Either way, the abyss is staring back, smirking, and waiting for your next move.

    So what’s the best move? Teach both languages. Show students how to use AI as a drafting tool, not a ghostwriter. Encourage them to treat ChatGPT like a calculator for prose—not a replacement for thinking, but an aid in shaping and refining their voice. Build assignments that require personal reflection, in-class writing, collaborative revision, and multimodal expression—tasks AI can mimic but not truly live. Don’t ban the bot. Co-opt it. Reclaim the standards of excellence by making students chase that gourmet smoothie—not because it’s easy, but because it tastes like something they actually made. The antidote to attenuation isn’t nostalgia or defeatism. It’s redesigning writing instruction to make real thinking indispensable again. If the abyss is staring back, then wink at it, sharpen your pen, and write something it couldn’t dare to fake.