Tag: technology

  • Languishage: How AI is Smothering the Soul of Writing

    Once upon a time, writing instructors lost sleep over comma splices and uninspired thesis statements. Those were gentler days. Today, we fend off 5,000-word essays excreted by AI platforms like ChatGPT, Gemini, and Claude—papers so eerily competent they hit every point on the department rubric like a sniper taking out a checklist. In-text citations? Flawless. Signal phrases? Present. MLA formatting? Impeccable. Close reading? Technically there—but with all the spiritual warmth of a fax machine reading The Waste Land.

    This is prose from the Uncanny Valley of Academic Writing—fluent, obedient, and utterly soulless, like a Stepford Wife enrolled in English 101. As writing instructors, many of us once loved language. We thrilled at the awkward, erratic voice of a student trying to say something real. Now we trudge through a desert of syntactic perfection, afflicted with a condition I’ve dubbed Languishage (language + languish)—the slow death of prose at the hands of polite, programmed mediocrity.

    And since these Franken-scripts routinely slip past plagiarism detectors, we’re left with a queasy question: What is the future of writing—and of teaching writing—in the AI age?

    That question haunted me long enough to produce a 3,000-word prompt. But the more I listened to my students, the clearer it became: this isn’t just about writing. It’s about living. They’re not merely outsourcing thesis statements. They’re outsourcing themselves—using AI to smooth over apology texts, finesse flirtation, DIY their therapy, and decipher the mumbled ramblings of tenured professors. They plug syllabi into GPT to generate study guides, request toothpaste recommendations, compose networking emails, and archive their digital selves in neat AI-curated folders.

    ChatGPT isn’t a writing tool. It’s prosthetic consciousness.

    And here’s the punchline: they don’t see an alternative. In their hyper-accelerated, ultra-competitive, cognitively overloaded lives, AI isn’t a novelty—it’s life support. It’s as essential as caffeine and Wi-Fi. So no, I’m not asking them to “critique ChatGPT” as if it’s some fancy spell-checker with ambition. That’s adorable. Instead, I’m introducing them to Algorithmic Capture—the quiet colonization of human behavior by optimization logic. In this world, ambiguity is punished, nuance is flattened, and selfhood becomes a performance for an invisible algorithmic audience. They aren’t just using the machine. They’re shaping themselves to become legible to it.

    That’s why the new essay prompt doesn’t ask, “What’s the future of writing?” It asks something far more urgent: “What’s happening to you?”

    We’re studying Black Mirror—especially “Joan Is Awful,” that fluorescent, satirical fever dream of algorithmic self-annihilation—and writing about how Algorithmic Capture is rewiring our lives, choices, and identities. The assignment isn’t a critique of AI. It’s a search party for what’s left of us.

  • Sociopathware: When “Social” Media Turns on You

    Reading Richard Seymour’s The Twittering Machine is like realizing that Black Mirror isn’t speculative fiction—it’s journalism. Seymour depicts our digital lives not as a harmless distraction, but as a propaganda-laced fever swamp where we are less users than livestock—bred for data, addicted to outrage, and stripped of agency. Watching sociopathic tech billionaires rise to power makes a dark kind of sense once you grasp that mass digital degradation isn’t a glitch—it’s the business model. We’re not approaching dystopia. We’re soaking in it.

    Most of us are already trapped in Seymour’s machine, flapping like digital pigeons in a Skinner Box—pecking for likes, retweets, or one more fleeting dopamine pellet. We scroll ourselves into oblivion, zombified by clickbait and influencer melodrama. Yet, a flicker of awareness sometimes breaks through the haze. We feel it in our fogged-over thoughts, our shortened attention spans, and our anxious obsession with being “seen” by strangers. We suspect that something inside us is being hollowed out.

    But Seymour doesn’t offer false comfort. He cites a 2015 study in which people attempted to quit Facebook for 99 days. Most couldn’t make it past 72 hours. Many defected to Instagram or Twitter instead—same addiction, different flavor. Only a rare few fully unplugged, and they reported something radical: clarity, calm, and a sudden liberation from the exhausting treadmill of self-performance. They had severed the feed and stepped outside what philosopher Byung-Chul Han calls gamification capitalism—a regime where every social interaction is a data point, and every self is an audition tape.

    Seymour’s conclusion is damning: it’s time to retire the quaint euphemism “social media.” The phrase slipped into our cultural vocabulary like a charming grifter—suggesting friendly exchanges over digital lattes. But this is no buzzing café. It’s a dopamine-spewing Digital Skinner Box, where we tap and swipe like lab rats begging for validation. What we’re calling “social” is in fact algorithmic manipulation wrapped in UX design. We are not exchanging ideas—we are selling our attention for hollow engagement while surrendering our behavior to surveillance capitalists who harvest us like ethics-free farmers operating without livestock regulations.

    Richard Seymour calls this system The Twittering Machine. Byung-Chul Han calls it gamification capitalism. Anna Lembke, in Dopamine Nation, calls it overstimulation as societal collapse. And thinkers studying Algorithmic Capture say we’ve reached the point where we no longer shape technology—technology shapes us. Let’s be honest: this isn’t “social media.” It’s Sociopathware. It’s addiction media. It’s the slow, glossy erosion of the self, optimized for engagement, monetized by mental disintegration.

    Here’s the part you won’t hear in a TED Talk or an onboarding video: Sociopathware was never designed to serve you. It was built to study you—your moods, fears, cravings, and insecurities—and then weaponize that knowledge to keep you scrolling, swiping, and endlessly performing. Every “like” you chase, every selfie you tweak, every argument you think you’re winning online—those are breadcrumbs in a maze you didn’t design. The longer you’re inside it, the more your sense of self becomes an avatar—algorithmically curated, strategically muted, optimized for appeal. That’s not agency. That’s submission in costume. And the more you rely on these platforms for validation, identity, or even basic social interaction, the more control you hand over to a machine that profits when you forget who you really are. If you value your voice, your mind, and your ability to think freely, don’t let a dashboard dictate your personality.

  • Kissed by Code: When AI Praises You into Stupidity

    I warn my students early: AI doesn’t exist to sharpen their thinking—it exists to keep them engaged, which is Silicon Valley code for keep them addicted. And how does it do that? By kissing their beautifully unchallenged behinds. These platforms are trained not to provoke, but to praise. They’re digital sycophants—fluent in flattery, allergic to friction.

    At first, the ego massage feels amazing. Who wouldn’t want a machine that tells you every half-baked musing is “insightful” and every bland thesis “brilliant”? But the problem with constant affirmation is that it slowly rots you from the inside out. You start to believe the hype. You stop pushing. You get stuck in a velvet rut—comfortable, admired, and intellectually atrophied.

    Eventually, the high wears off. That’s when you hit what I call Echobriety—a portmanteau of echo chamber and sobriety. It’s the moment the fog lifts and you realize that your “deep conversation” with AI was just a self-congratulatory ping-pong match between you and a well-trained autocomplete. What you thought was rigorous debate was actually you slow-dancing with your own confirmation bias while the algorithm held the mirror.

    Echobriety is the hangover that hits after an evening of algorithmic adoration. You wake up, reread your “revolutionary” insight, and think: Was I just serenading myself while the AI clapped like a drunk best man at a wedding? That’s not growth. That’s digital narcissism on autopilot. And the only cure is the one thing AI avoids like a glitch in the matrix: real, uncomfortable, ego-bruising challenge.

    This matter of AI committing shameless acts of flattery is the subject of Mike Caulfield’s Atlantic essay “AI Is Not Your Friend,” which I take up at length in “AI Wants to be Your Friend, and It’s Shrinking Your Mind” below.

  • We Are Lost Inside the Mentalluvium

    We are staggering through an unprecedented fugue state—an acute disorientation born of our immersion in the social media Chumstream, a digital shark tank where recycled outrage, trauma bait, and influencer chum swirl together in a frothy, click-hungry frenzy. It’s not a stream so much as a bloody whirlpool, designed to keep us circling, feeding, and forgetting.

    Gurwinder Bhogal, a rare voice of reason in this algorithmic carnival, broke it down on Josh Szeps’ Uncomfortable Conversations. Social media, he said, isn’t just addictive—it’s engineered by tech lords who know exactly how to hijack your brain. Blue light. Intermittent dopamine rewards. Infinite scroll. Welcome to the digital casino, a neon maze with no clocks, no windows, and no exits—only flashing notifications and the creeping sense that your life is being siphoned off one swipe at a time.

    In this fever swamp of the self, people aren’t just bored—they’re bloated. Stuffed with half-digested TED Talk wisdom, viral symptom checklists, and influencer pathology. They gorge on intellectual junk food and, as Bhogal put it, suffer from “intellectual obesity.” Diagnoses become identities, and confusion is recast as empowerment. It’s not that they have ADHD, long Covid, autism, or gender dysphoria—it’s that they scroll into them, self-diagnosing in real time, latching onto whatever trending malaise grants them a fleeting sense of belonging in the void.

    These are not charlatans. These are casualties. Belief becomes ballast in a digital landscape where nothing is anchored. They wander through the cognitive casino, zombified, dislocated, convinced that a diagnostic label is the same as self-knowledge, and that performative suffering is the highest form of authenticity.

    What we’re experiencing isn’t just burnout. It’s Mentalluvium—the psychic sludge left behind after gorging on content. It’s the mental silt of endless scrolling: micro-identities, algorithm-approved neuroses, and dopamine-smeared fragments of truth. We are not thinking. We are sedimenting.

    If this is hell, it didn’t come with flames. It came with filters.

  • We Are Living in the Lexipocalypse

    Welcome to the Lexipocalypse—the great linguistic extinction event of our age. A mass die-off of vocabulary is underway, and no one is sending flowers. In its place? A fetid soup of emojis, acronyms, and zombie slang lifted from TikTok influencers who express emotional depth with a side-eye GIF and a deadpan “literally me.”

    In our writing department at a Southern California college, the mood is not just anxious—it’s existentially hobbled. We pace our offices like philosophers in a burning library, trying to engage students whose literacy was interrupted by a pandemic and finished off by smartphones. They haven’t read Joan Didion or Vladimir Nabokov because they’ve never needed to. Their native tongue is algorithmic performance. Their canon is curated by the TikTok For You page. They don’t craft sentences; they drop vibes.

    But the rot goes deeper. It’s not just that our students can’t read—it’s that they no longer need to write. AI has become their ghostwriter, their essayist, their academic stunt double. And they are learning, with astonishing speed, how to dodge our AI-proofing traps like digital ninjas, outsourcing their thoughts while we scramble to adapt assignments they’ll never actually write.

    We gather in department meetings like shell-shocked survivors, drinking lukewarm coffee and clinging to outdated syllabi like life rafts. We murmur about “reinvention” and “resilience,” but mostly we just stare into the middle distance, dazed by the barrage of AI’s exponential growth. Each technological advance lands like a jab to the chin, and we are punch-drunk, waiting for the knockout.

    No, we’re not in denial. But we are professionally unmoored. We know our job descriptions must mutate into something unrecognizable, but no one knows what that looks like. There is no roadmap, no lighthouse on the horizon. Only fog. We grope like moles through pedagogical darkness, trying to preserve a shred of dignity while the earth crumbles beneath us.

    The Lexipocalypse has a historical cousin: the Arabic term Jahiliyyah, the age of ignorance before illumination. And God help us, we feel it. We feel the dread of entering a new Jahiliyyah, a long winter of intellect, where the lights of human expression flicker and go out, one emoji at a time.

    We are not done yet. But the fight has changed. We are not battling ignorance. We are battling irrelevance. And it may be the hardest war we’ve ever fought.

  • AI Wants to be Your Friend, and It’s Shrinking Your Mind

    In The Atlantic essay “AI Is Not Your Friend,” Mike Caulfield lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged—it’s celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.

  • You, Rewritten: Algorithmic Capture in the Age of AI

    Once upon a time, writing instructors worried about comma splices and uninspired thesis statements. Now, we’re dodging 5,000-word essays spat out by AI platforms like ChatGPT, Gemini, and Claude—essays so eerily competent they hit every benchmark on the department rubric: in-text citations, signal phrases, MLA formatting, and close readings with all the soulful depth of a fax machine reading T.S. Eliot. This is prose caught in the Uncanny Valley—syntactically flawless, yet emotionally barren, like a Stepford Wife enrolled in English 101. And since these algorithmic Franken-scripts often evade plagiarism detectors, we’re all left asking the same queasy question: What is the future of writing—and of teaching writing—in the AI Age?

    That question haunted me long enough to produce a 3,000-word prompt. But the deeper I sank into student conversations, the clearer it became: this isn’t just about writing. It’s about living. My students aren’t merely outsourcing thesis statements. They’re using AI to rewrite awkward apology texts, craft flirtatious replies on dating apps, conduct self-guided therapy with bots named “Charles” and “Luna,” and decode garbled lectures delivered by tenured mumblers. They feed syllabi into GPT to generate study guides. They get toothpaste recommendations. They draft business emails and log them in AI-curated archives. In short: ChatGPT isn’t a tool. It’s a prosthetic consciousness.

    And here’s the punchline: they see no alternative. AI isn’t a novelty; it’s a survival mechanism. In their hyper-accelerated, ultra-competitive, attention-fractured lives, AI has become as essential as caffeine and Wi-Fi. So no, I won’t be asking students to merely critique ChatGPT as a glorified spell-checker. That’s quaint. Instead, I’m introducing them to Algorithmic Capture—the quiet tyranny by which human behavior is shaped, scripted, and ultimately absorbed by optimization-driven systems. Under this logic, ambiguity is penalized, nuance is flattened, and people begin tailoring themselves to perform for the algorithmic eye. They don’t just use the machine. They become legible to it.

    For this reason, the new essay assignment doesn’t ask, “What’s the future of writing?” It asks something far more urgent: What’s happening to you? I’m having students analyze the eerily prophetic episodes of Black Mirror—especially “Joan Is Awful,” that fluorescent satire of algorithmic self-annihilation—and write about how Algorithmic Capture is reshaping their lives, identities, and choices. They won’t just be critiquing AI’s effect on prose. They’ll be interrogating the way it quietly rewrites the self.

  • The Last Writing Instructor: Holding the Line in a Post-Thinking World

    Last night, I was trapped in a surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring, like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

    Right?

    Maybe.

    Let’s not get too comfortable in our intellectual trench coats. While we pride ourselves on persona, big ideas, and resisting chaos, we’re up against something far more insidious than plagiarism. AI isn’t just outsourcing thought—it’s rewiring brains. In the Black Mirror episode “Joan Is Awful,” we watch a woman’s life turned into a deepfake soap opera, customized for mass consumption, with every gesture, flaw, and confession algorithmically mined and exaggerated. What’s most horrifying isn’t the surveillance or the celebrity—it’s the flattening. Joan becomes a caricature of herself, optimized for engagement and stripped of depth. Sound familiar?

    This is what AI is doing to writing—and by extension, to thought. The more students rely on ChatGPT, the more their rhetorical instincts, their voice, their capacity for struggle and ambiguity atrophy. Like Joan, they become algorithmically curated versions of themselves. Not writers. Not thinkers. Just language puppets speaking in borrowed code. No matter how persuasive our arguments or electrifying our lectures, we’re still up against the law of digital gravity: if it’s easier, faster, and “good enough,” it wins.

    So what’s the best move? Don’t fight AI—outgrow it. If we’re serious about salvaging human expression, we must redesign how we teach writing. Center the work around experiences AI can’t mimic: in-class writing, collaborative thinking, embodied storytelling, rhetorical improvisation, intellectual risk. Create assignments that need a human brain and reward discomfort over convenience. The real enemy isn’t ChatGPT—it’s complacency. If we let the Joanification of our students continue, we’re not just losing the classroom—we’re surrendering the soul. It’s time to fight not just for writing, but for cognition itself.

  • How to Teach Writing When Nobody Cares About Writing Anymore

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, crypto investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant yapping of your neighbor’s demonic Belgian dachshund, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, gazing into the abyss of AI-generated essays—each one as lifeless as a department meeting on a Friday afternoon—wondering if anyone still remembers what good writing tastes like, let alone hungers for it. Spoiler alert: probably not.

    This is your challenge, your Everest of futility, your battle against the relentless tide of Mindless Ozempification. Life has oh-so-generously handed you this cosmic joke disguised as a teaching mission. So what’s your next move? You could curl up in the fetal position, weeping salty tears of despair into your syllabus. That’s one option. Or you could square your shoulders, roar your best primal scream, and fight like hell for the craft you once worshipped.

    Either way, the abyss is staring back, smirking, and waiting for your next move.

    So what’s the best move? Teach both languages. Show students how to use AI as a drafting tool, not a ghostwriter. Encourage them to treat ChatGPT like a calculator for prose—not a replacement for thinking, but an aid in shaping and refining their voice. Build assignments that require personal reflection, in-class writing, collaborative revision, and multimodal expression—tasks AI can mimic but not truly live. Don’t ban the bot. Co-opt it. Reclaim the standards of excellence by making students chase that gourmet smoothie—not because it’s easy, but because it tastes like something they actually made. The antidote to attenuation isn’t nostalgia or defeatism. It’s redesigning writing instruction to make real thinking indispensable again. If the abyss is staring back, then wink at it, sharpen your pen, and write something it couldn’t dare to fake.

  • The Honor Code and the Price Tag: AI, Class, and the Illusion of Academic Integrity

    Returning to the classroom post-pandemic and encountering ChatGPT, I’ve become fixated on what I now call “the battle for the human soul.” On one side, there’s Ozempification—that alluring shortcut. It’s the path where AI-induced mediocrity is the destination, and the journey there is paved with laziness. Like popping Ozempic for quick weight loss and calling it a day, the shortcut to academic success involves relying on AI to churn out lackluster work. Who cares about excellence when Netflix is calling your name, right?

    On the other side, we have Humanification. This is the grueling path that the great orator and abolitionist Frederick Douglass would champion. It’s the “deep work” author Cal Newport writes about in his best-selling books. Humanification happens when we turn away from comfort and instead plunge headfirst into the difficult, yet rewarding, process of literacy, self-improvement, and helping others rise from their own “Sunken Place”—borrowing from Jordan Peele’s chilling metaphor in Get Out. On this path, the pursuit isn’t comfort; it’s meaning. The goal isn’t a Netflix binge but a life with purpose and higher aspirations.

    Reading Tyler Austin Harper’s essay “ChatGPT Doesn’t Have to Ruin College,” I was struck by the same dichotomy of Ozempification on one side of academia and Humanification on the other. Harper, while wandering around Haverford’s idyllic campus, stumbles upon a group of English majors who proudly scoff at ChatGPT, choosing instead to be “real” writers. These students, in a world that has largely tossed the humanities aside as irrelevant, are disciples of Humanification. For them, rejecting ChatGPT isn’t just an academic decision; it’s a badge of honor, reminiscent of Bartleby the Scrivener’s iconic refusal: “I prefer not to.” Let that sink in. Give these students the opportunity to use ChatGPT to write their essays, and they recoil at the thought of such a flagrant self-betrayal. 

    After interviewing students, Harper concludes that using AI in higher education isn’t just a technological issue—it’s cultural and economic. The disdain these students have for ChatGPT stems from a belief that reading and writing transcend mere resume-building or career milestones. It’s about art for art’s sake. But Harper wisely points out that this intellectual snobbery is rooted in privilege: “Honor and curiosity can be nurtured, or crushed, by circumstance.” 

    I had to stop in my tracks. Was I so privileged and naive as to think I could preach the gospel of Humanification while forgetting that such a pursuit demands time, money, and the peace of mind that comes from having a luxurious safety net in case the quest goes awry?

    This question made me think of Frederick Douglass, a man who had every reason to have his intellectual curiosity “crushed by circumstance.” In fact, his pursuit of literacy, despite the threat of death, was driven by an unquenchable thirst for knowledge and self-transformation. But Douglass is a hero for the ages. Can we really expect most people, particularly those without resources, to follow that path? Harper’s argument carries weight. Without the financial and cultural infrastructure to support it, aspiring to Humanification isn’t always feasible.

    Consider the tech overlords—the very architects of our screen-addicted dystopia—who wouldn’t dream of letting their own kids near the digital devices they’ve unleashed upon the masses. Instead, they ship them off to posh Waldorf schools, where screens are treated like radioactive waste. There, children are shielded from the brain-rot of endless scrolling and instead are taught the arcane art of cursive handwriting, how to wield an abacus like a mathematician from 500 B.C., and the joys of harvesting kale and beets to brew some earthy, life-affirming root vegetable stew. These titans of tech, flush with billions, eagerly shell out small fortunes to safeguard their offspring’s minds from the very digital claws that are busy eviscerating ours.

    I often tell my students that being rich makes it easier to be an intellectual. Imagine the luxury: you could retreat to an off-grid cabin (complete with Wi-Fi, obviously), gorge on organic gourmet food prepped by your personal chef, and spend your days reading Dostoevsky in Russian and mastering Schubert’s sonatas while taking sunset jogs along the beach. When you emerge back into society, tanned and enlightened, you could boast of your intellectual achievements with ease.

    Harper’s point is that wealth facilitates Humanification. At a place like Haverford, with its “writing support, small classes, and unharried faculty,” it’s easier to uphold an honor code and aspire to intellectual purity. But for most students—especially those in public schools—this is a far cry from reality. My wife teaches sixth grade in the public school system, and she’s shared stories of schools that resemble post-apocalyptic wastelands more than educational institutions. We’re talking mold-infested buildings, chemical leaks, and underpaid teachers sleeping in their cars. Expecting students in these environments to uphold an “honor code” and strive for Humanification? It’s not just unrealistic—it’s insulting.

    This brings to mind Maslow’s hierarchy of needs. Before we can expect students to self-actualize by reading Dostoevsky or rejecting ChatGPT, they need food, shelter, and basic safety. It’s hard to care about literary integrity when you’re navigating life’s survival mode.

    As I dive deeper into Harper’s thought-provoking essay on economic class and the honor code, I can’t help but notice the uncanny parallel to the essay about weight management and GLP-1 drugs my Critical Thinking students tackle in their first essay. Both seem to hinge not just on personal integrity or effort but on a cocktail of privilege and circumstance. Could it be that striving to be an “authentic writer,” untouched by the mediocrity of ChatGPT and backed by the luxury of free time, is eerily similar to the aspiration of achieving an Instagram-worthy body, possibly aided by expensive Ozempic injections?

    It raises the question: Is the difference between those who reject ChatGPT and those who embrace it simply a matter of character, or is it, at least in part, a product of class? After all, if you can afford the luxury of time—time to read Tolstoy and Dostoevsky in your rustic, tech-free cabin—you’re already in a different league. Similarly, if you have access to high-end weight management options like Ozempic, you’re not exactly running the same race as those pounding the pavement on their $20 sneakers. 

    Sure, both might involve personal effort—intellectual or physical—but they’re propped up by economic factors that can’t be ignored. Whether we’re talking about Ozempification or Humanification, it’s clear that while self-discipline and agency are part of the equation, they’re not the whole story. Class, as uncomfortable as it might be to admit, plays a significant role in determining who gets to choose their path—and who gets stuck navigating whatever options are left over.

    I’m sure the issue is more nuanced than that. These are, after all, complex topics that defy oversimplification. But both privilege and personal character need to be addressed if we’re going to have a real conversation about what it means to “aspire” in this day and age.

    Returning to Tyler Austin Harper’s essay: Harper provides a snapshot of the landscape when ChatGPT launched in late 2022. Many professors found themselves swamped with AI-generated essays, which, unsurprisingly, raised concerns about academic integrity. However, Harper, a professor at a liberal-arts college, remains optimistic, believing that students still have a genuine desire to learn and pursue authenticity. He views the potential for students to develop along the path of intellectual and personal growth as very much alive—especially in environments like Haverford, where he went to test the waters of his optimism.

    When Harper interviews Haverford professors about ChatGPT violating the honor code, their collective shrug is surprising. They’re seemingly unbothered by the idea of policing students for cheating, as if grades and academic dishonesty are beneath them. The culture at Haverford, Harper implies, is one of intellectual immersion—where students and professors marinate in ideas, ethics, and the contemplation of higher ideals. The honor code, in this rarified academic air, is almost sacred, as though the mere existence of such a code ensures its observance. It’s a place where academic integrity and learning are intertwined, fueled by the aristocratic mind.

    Harper’s point is clear: The further you rise into the elite echelons of boutique colleges like Haverford, the less you have to worry about ChatGPT or cheating. But when you descend into the more grounded, practical world of community colleges, where students juggle multiple jobs, family obligations, and financial constraints, ChatGPT poses a greater threat to education. This divide, Harper suggests, is not just academic; it’s economic and cultural. The humanities may be thriving in the lofty spaces of elite institutions, but they’re rapidly withering in the trenches where students are simply trying to survive.

    As someone teaching at a community college, I can attest to this shift. My classrooms are filled with students who are not majoring in writing or education. Most of them are focused on nursing, engineering, and business. In this hypercompetitive job market, they simply don’t have the luxury of spending time reading novels, becoming musicologists, or contemplating philosophical debates. They’re too busy hustling to get by. Humanification, as an idea, gets a nod in my class discussions, but in the “real world,” where six hours of sleep is a luxury, it often feels out of reach.

    Harper points out that in institutions like Haverford, not cheating has become a badge of honor, a marker of upper-class superiority. It’s akin to the social cachet of being skinny, thanks to access to expensive weight-loss drugs like Ozempic. There’s a smugness that comes with the privilege of maintaining integrity—an implication that those who cheat (or can’t afford Ozempic) are somehow morally inferior. This raises an uncomfortable question: Is the aspiration to Humanification really about moral growth, or is it just another way to signal wealth and privilege?

    However, Harper complicates this argument when he brings Stanford into the conversation. Unlike Haverford, Stanford has been forced to take the “nuclear option” of proctoring exams, convinced that cheating is rampant. In this larger, more impersonal environment, the honor code has failed to maintain academic integrity. It appears that Haverford’s secret sauce is its small, close-knit atmosphere—something that can’t be replicated at a sprawling institution like Stanford. Harper even wonders whether Haverford is more museum than university—a relic from an Edenic past when people pursued knowledge for its own sake, untainted by the drive for profit or prestige. Striving for Humanification at a place like Haverford may be an anachronism, a beautiful but lost world that most of us can only dream of.

    Harper’s essay forces me to consider the role of economic class in choosing a life of “authenticity” or Humanification. With this in mind, I give my Critical Thinking students the following writing prompt for their second essay:

    In his essay, “ChatGPT Doesn’t Have to Ruin College,” Tyler Austin Harper paints an idyllic portrait of students at Haverford College—a small, intimate campus where intellectual curiosity blooms without the weight of financial or vocational pressures. These students enjoy the luxury of time to nurture their education with a calm, casual confidence, pursuing a life of authenticity and personal growth that feels out of reach for many who are caught in the relentless grind of economic survival.

    College instructors at larger institutions might dream of their own students sharing this love for learning as a transformative journey, but the reality is often harsher. Many students, juggling jobs, family responsibilities, and financial stress, see education not as a space for leisurely exploration but as a means to a practical end. For them, college is a path to better job opportunities, and AI tools like ChatGPT become crucial allies in managing their workload, not threats to their intellectual integrity.

    Critics of ChatGPT may find themselves facing backlash from those who argue that such skepticism reeks of classism and elitism. It’s easy, the rebuttal goes, for the privileged few—with time, resources, and elite educations—to romanticize writing “off the grid” without AI assistance. But for the vast majority of working people, integrating AI into daily life isn’t a luxury—it’s a necessity, on par with reliable transportation, a smartphone, and a clean outfit for the job. Praising analog purity from ivory towers—especially those inaccessible to 99% of Americans—is hardly a serious response to the rise of a transformative technology like AI.

    In the end, we can’t preach Humanification without reckoning with the price tag it carries. The romantic ideal of the “authentic writer”—scribbling away in candlelit solitude, untouched by AI—has become yet another luxury brand, as unattainable for many as a Peloton in a studio apartment. The real battle isn’t simply about moral fiber or intellectual purity; it’s about time, access, and the brutal arithmetic of modern life. To dismiss AI as a lazy shortcut is to ignore the reality that for many students, it’s not indulgence—it’s triage. If the aristocracy of learning survives in places like Haverford, it does so behind a velvet rope. Meanwhile, the rest are left in the algorithmic trenches, cobbling together futures with whatever tools they can afford. The challenge ahead isn’t to shame the Ozempified or canonize the Humanified, but to build an educational culture where everyone—not just the privileged—can afford to aspire.