
  • Today I Gave My Students a Lesson on Real and Fake Engagement


    I teach the student-athletes at my college, and right now we’re exploring a question that cuts to the bone of modern life: Why are we so morally apathetic toward companies that feed our addictions, glamorize eating disorders, and employ CEOs who behave like cult leaders or predators?

    We’ve watched three documentaries to anchor our research: Brandy Hellville and the Cult of Fast Fashion (Max), Trainwreck: The Cult of American Apparel (Max), and White Hot: The Rise and Fall of Abercrombie & Fitch (Netflix). Each one dissects a different brand, but the pathology beneath them is the same—companies built not on fabric, but fantasy. For four weeks, I lectured about this moral rot. My students listened politely. Eyes glazed. Interest flatlined. It was clear: they didn’t care about fashion.

    So today, I tried something different. I told them to forget about the clothes. The essay, I explained, isn’t really about fashion—it’s about selling illusion. These brands were never peddling T-shirts or jeans; they were peddling a fantasy of beauty, popularity, and belonging. And they did it through something I called fake engagement—a kind of fever swamp where people mistake addiction for connection, and attention for meaning.

    Fake engagement is the psychological engine of our times. It’s the dopamine loop of social media and the endless scroll. People feed their insecurities into it and get rewarded with the mirage of significance: likes, follows, attention. It’s an addictive system built on FOMO and self-erasure. Fake engagement is a demon. The more you feed it, the hungrier it gets. You buy things, post things, watch things, all to feel visible—and yet every click deepens the void.

    This pathology is not confined to consumer brands. The entire economy now depends on it. Influencers sell fake authenticity. YouTubers stage “relatable” chaos. Politicians farm outrage to harvest donations. Every sphere of life—from entertainment to governance—has been infected by the logic of the algorithm: engagement above truth, virality above virtue.

    I told my students they weren’t hearing anything new. The technologist Jaron Lanier has been warning us for over a decade that digital platforms are rewiring our brains, turning us into unpaid content providers for an economy of distraction. But then I reminded them that as athletes, they actually hold a kind of immunity to this epidemic.

    Athletes can’t survive on fake engagement. They can’t pretend to win a game. They can’t filter a sprint or Photoshop a jump shot. Their work depends on real engagement—trust, discipline, and honest feedback. Their coaches demand evidence, not illusion. That’s what separates a competitor from a content creator. One lives in the real world; the other edits it out.

    In sports, there’s no algorithm to flatter you. Reality is merciless and fair. You either make the shot or you don’t. You either train or you coast. You either improve or you plateau. The scoreboard has no patience for your self-image.

    I contrasted this grounded reality with the digital circus we’ve come to call culture. Mukbang YouTubers stuff themselves with 10,000 calories on camera for likes. Luxury-watch obsessives blow through their savings chasing dopamine. Influencers curate their “personal brand” as if selfhood were a marketing campaign. They call this engagement. I call it pathology. They’re chasing the same empty high that the fast-fashion industry monetized years ago: the belief that buying something—or becoming something online—can fill the hole of disconnection.

    This is the epistemic crisis of our time: a collective break from reality. People no longer ask whether something is true or good; they ask whether it’s viral. We’re a civilization medicated by attention, high on engagement, and bankrupt in meaning.

    That’s why I tell my athletes their essay isn’t about fashion—it’s about truth. About how human beings become spiritually and mentally sick when they lose their relationship to reality. You can’t heal what you refuse to see. And you can’t see anything clearly when your mind is hijacked by dopamine economics.

    The world doesn’t need more influencers. It needs coaches—people who ground others in trust, expertise, and evidence. Coaches model a form of mentorship that Silicon Valley can’t replicate. They give feedback that isn’t gamified. They remind players that mastery requires patience, and that progress is measured in skill, not clicks.

    When you think about it, the word coach itself has moral weight. It implies guidance, not manipulation. Accountability, not performance. A coach is the opposite of an influencer: they aren’t trying to be adored; they’re trying to make you better. They aren’t feeding your addiction to attention; they’re training your focus on reality.

    I told my students that if society is going to survive this digital delirium, we’ll need millions of new coaches—mentors, teachers, parents, and friends who can anchor us in truth when everything around us is optimized for illusion. The fast-fashion brands we study are just one symptom of a larger disease: the worship of surface.

    But reality, like sport, eventually wins. The body knows when it’s neglected. The mind knows when it’s being used. Truth has a way of breaking through even the loudest feed.

    The good news is that after four weeks of blank stares, something finally broke through. When I reframed the essay—not as a takedown of fashion, but as a diagnosis of fake engagement—the room changed. My athletes sat up straighter. They started nodding. Their eyes lit up. For the first time all semester, they were engaged.

    The irony wasn’t lost on me. The essay about fake engagement had just produced the real kind.

  • AI—Superpower for Learning or NPC Machine?


    Essay Assignment:

    Background

    Students describe AI as a superpower for speed and convenience that leaves them with glossy, identical prose—clean, correct, and strangely vacant. Many say it “talks a lot without saying much,” flattens tone, and numbs them with sameness. The more they rely on it, the less they think; laziness becomes a habit, creativity atrophies, and personal voice is lost, replaced by AI-speak. Summaries replace books, notes replace listening, and polished drafts replace real inquiry. The result feels dehumanizing: work that reads like it was written by everyone and no one.

    Students also report a degradation in their education. Higher learning has become boring. Speed, meeting deadlines, and convenience are its key features, and the AI arms race pushes reluctant students to use AI just to keep pace.

    The temptation to use AI because all the other students are using it rises with fatigue—copy-paste now, promise depth later—and “later” never arrives. Some call this “cognitive debt”: quick wins today, poorer thinking tomorrow. 

    Some students confess that using AI makes them feel like Non-Player Characters in a video game. They’re not growing in their education. They have succumbed to apathy and cynicism about it.

    Other students admit that AI has stolen their attention and afflicted them with cognitive debt—the mental deficit we accumulate when we let technology do too much of our thinking for us. Like financial debt, it begins as a convenience—offloading memory, calculation, navigation, or decision-making to apps and algorithms—but over time it exacts interest in the form of diminished focus, weakened recall, and blunted problem-solving. The term describes the gradual outsourcing of core mental functions—attention, critical reasoning, and creativity—to digital systems that promise efficiency but erode self-reliance. The result is a paradox: as our tools grow “smarter,” we grow more dependent on them to compensate for the very skills they dull. In short, cognitive debt is the quiet cost of convenience: each time we let technology think for us, we lose a little of our capacity to think for ourselves.

    Yet the picture isn’t purely bleak. Several students say AI can be a lifeline: a steady partner for collaborating, brainstorming, organizing, or language support—especially for non-native speakers—when it’s used as a tutor rather than a ghostwriter. 

    When used deliberately rather than passively, AI writing tools can sharpen critical thinking and creativity by acting as intellectual sparring partners. They generate ideas, perspectives, and counterarguments that challenge students to clarify their own reasoning instead of settling for first thoughts. Rather than accepting the AI’s output wholesale, discerning writers critique, refine, and reshape it—an exercise in metacognition that strengthens their analytical muscles. The process becomes less about outsourcing thought and more about editing thought—transforming AI from a shortcut into a mirror that reflects the quality, logic, and originality of one’s own mind.

    When used correctly, AI jump-starts drafts and levels the playing field; leaned on heavily, it erases voice, short-circuits struggle, and replaces learning with mindless convenience.

    Question You Are Addressing in Your Essay

    But can AI be used effectively, or does our interaction with it, like any other product in the attention economy, reveal that it is designed to sink its talons into us and colonize our brains so that we become less like people with self-agency and more like Non-Player Characters whose free will has been taken over by the machines?

    Writing Prompt

    Write a 1,700-word argumentative essay that answers the above question. Be sure to have a counterargument and rebuttal section. Use four credible sources to support your claim. 

    Suggested Outline

    Paragraph 1: Write a 300-word personal reflection about the way you or someone you know uses AI effectively. Show how the person’s engagement with AI resulted in sharper critical thinking and creativity. Give a specific example of the project that revealed this process. 

    Paragraph 2: Write a 300-word personal reflection about the way you or someone you know abuses AI in a way that trades critical thinking for convenience. How has AI changed the person’s brain and approach to education? Write a detailed narrative that dramatizes these changes.

    Paragraph 3: Write a thesis with four mapping components that point to the topics of your supporting paragraphs.

    Paragraphs 4-7: Your supporting paragraphs.

    Paragraphs 8 and 9: Your counterargument and rebuttal paragraphs.

    Paragraph 10: Your conclusion, a dramatic restatement of your thesis or a reiteration of a striking image you created in your introduction. 

    Your final page: MLA Works Cited with a minimum of 4 sources. 

  • 30 Student Responses to the Question “What Effect Does Using AI Have on Us?”


    Prompt:

    Begin your essay with a 400-word personal reflection. This can be about your own experience or that of someone you interview. Focus on how using AI writing tools—like ChatGPT—has changed your habits when it comes to thinking, writing, or getting work done.

    From there, explore the deeper implications of relying on this technology. What happens when AI makes everything faster and easier? Does that convenience come at a cost? Reflect on the way AI can lull us into mental passivity—how it tempts us to hand over the hard work of thinking in exchange for quick results.

    Ask yourself: What kind of writing does AI actually produce? What are its strengths? Where does it fall short? And more importantly, what effect does using AI have on us? Does it sharpen our thinking, or make us more dependent? Do we risk becoming less original, less engaged—more like passive consumers of technology than active creators? As this process continues, are we becoming Non-Player Characters instead of humans with self-agency? Explain. 

    Finally, consider the trade-offs. Are we gaining a tool, or giving something up? Are we turning into characters in a Black Mirror episode—so enamored with convenience that we forget what it means to do the work ourselves? Use concrete examples and honest reflection to dig into the tension between human effort and technological assistance.

    Student Responses to Using AI Tools for Writing and Education:

    1. “I am impressed with the speed and convenience but the final product is overly polished and hollow.”
    2. “I am amazed by the speed of production but all the sentences look the same. Honestly, it’s numbing after a while.”
    3. “The writing is frustrating because it talks a lot without saying much.”
    4. “I don’t have to think as much,” “I save time not having to think,” and “I get used to this laziness.”
    5. “AI writes better than I do, but it doesn’t have my unique voice.”
    6. “AI is like a steady writing partner, always there to help me and bounce off ideas, but lately I realize the more I depend on it, the less I challenge myself to think critically.”
    7. “Thanks to AI, I stopped reading books. Now I just get summaries of books. Now I get the information, but I no longer have a deep understanding.”
    8. “AI helps me take notes and organize ideas but it doesn’t help me truly listen, understand someone’s emotions, show empathy, or deal with uncertainty.”
    9. “AI writing is smooth and structured, but people aren’t. Real thought and emotions are messy. That’s where growth happens.”
    10. “When I’m tired, AI tempts me to just copy and paste it, and the more I use it in this manner, the stronger the temptation becomes.”
    11. “AI makes things really easy for me, but then I ask myself, ‘Am I really learning?’”
    12. “What started out as magical now has become woven into the fabric of daily life. Education has become boring.”
    13. “AI is a production superpower the way it inspires and organizes ideas, but I find over time I become more lazy, less creative, and rely on AI way too much.”
    14. “AI degrades the way we write and think. I can tell when something is written in AI-speak, without real human tone, and the whole experience is dehumanizing.”
    15. “I love AI because it saved me. I am not a native English speaker, so I rely on AI to help with my English. It’s like having a reliable tutor always by my side. But over time, I have become lazy and don’t have the same critical thinking I used to have. I see myself turning into an NPC.”
    16. “I have to use AI because the other students are using it. I should have the same advantages they have. But education has become less creative and more boring. It’s all about ease and convenience now.”
    17. “I used to love AI because it made me confident and motivated me to get my assignments in on time. But over time, I lost my voice. Now everything is written with an AI voice.”
    18. “The more I use AI, the less I think things through on my own. I cut off my own thinking and just ask ChatGPT. It’s fast, but it kills creativity.”
    19. “When faced with a writing assignment and a blank mind, I would start things with ChatGPT, and it got things going. It wasn’t perfect, but I had something to start with, and I found this comforting. But as I got more confident with ChatGPT, I became less and less engaged with the education process. My default became convenience and nothing more.”
    20. “AI writing is so common, we don’t even ask if the writing is real anymore. No one cares. AI has made us all apathetic. We are NPCs in a video game.”
    21. “AI is a great tool because it helps everyone regardless of how much money we have, but it kills creativity and individuality. We’ve lost the pleasure of education. AI has become a mirror of our own superficial existence.”
    22. “When I first discovered AI to do my writing, I felt I had hit the jackpot, but then after taking so many shortcuts, I lost the love for what I was doing.”
    23. “It’s stressful to see a cursor blinking on a blank page, but thanks to AI, I can get something off and running quickly. The problem is that the words are clean and correct, but also generic. There is no depth to human emotion.”
    24. “I’ve been using AI since high school. A lot of its writing is useless. It doesn’t make sense. It’s inaccurate. It’s poorly written. It’s dehumanizing.”
    25. “AI is basically Google on steroids. I used to dread writing, but AI has pushed me to get my work done. The writing is polished but too perfect to be human writing. The biggest danger is that humans become too reliant on it.”
    26. “I barely use AI. It makes school trivial. It’s just another social media disease like TikTok, these streaming platforms that kill our attention spans and turn us into zombies.”
    27. “AI first felt like having the cheat code to get through school. But then I realized it puts us into a cognitive debt. We lose our tenacity and critical thinking.”
    28. “I am a mother and an avid reader, and I absolutely refuse to use AI for any purpose. AI can’t replace real writing, reading, or journaling. AI is a desecration of education and personal growth.”
    29. “At first, I used AI to get ideas, but over time I realized I was no longer thinking. I wasn’t struggling to come up with what I really thought or what I really wanted to argue about. AI silenced who I really was.”
    30. “Using AI to do the heavy lifting doesn’t sit right with me, so I programmed my AI to tutor and guide me through studying, rather than using it as a crutch, by providing prompts and tools to help me understand assignments. While my experience with AI has shown me its full capabilities, I’ve also learned that too much of it can ruin the entire experience, in this case, the learning experience.”
  • The Eight Ages of –ification


    From Conformification to Enshittification: how every decade found a fresh way to ruin itself.


    The Age of Decline, Accelerated

    In Enshittification, Cory Doctorow argues that our decline isn’t gradual—it’s accelerating. Everything is turning to crap simultaneously, like civilization performing a synchronized swan dive into the sewer.

    The book’s subtitle, Why Everything Suddenly Got Worse and What to Do About It, suggests that degradation is now both universal and, somehow, fixable.

    Doctorow isn’t the first prophet to glimpse the digital abyss. Jaron Lanier, Jonathan Haidt, and other cultural Cassandras have long warned about the stupidification that comes from living inside the algorithmic aquarium. We swim in the same recycled sludge of dopamine and outrage, growing ever duller while congratulating ourselves on being “connected.”

    This numbness—the ethical anesthesia of the online age—makes us tolerate more crappiness from our corporate overlords. As the platforms enshittify, we invent our own little coping rituals. Some of us chant words with –ion suffixes as if they were incantations, linguistic ASMR to soothe our digital despair.

    When I saw Ozempic and ChatGPT promising frictionless perfection—weight loss without effort, prose without struggle—I coined Ozempification: the blissful surrender of self-agency to the cult of convenience.

    Now there’s an entire liturgy of –ifications, each describing a new layer of rot:


    • Enshittification — Doctorow’s coinage for the systematic decay of platforms that once worked.
    • Crapification / Encrappification — The transformation of quality into garbage in the name of efficiency.
    • Gamification — Turning life into a perpetual contest of meaningless points and dopamine rewards.
    • Attentionification — Reducing every act of expression to a plea for clicks.
    • Misinformationfication — When truth becomes a casualty of virality.
    • Ozempification — Replacing effort with optimization until we resemble our own avatars.
    • Stupidification — The great numbing: scrolling ourselves into idiocy while our neurons beg for mercy.

    But the crown jewel of this lexicon remains Enshittification—Doctorow’s diagnosis so precise that the American Dialect Society crowned it Word of the Year for 2023.

    Still, I’d like to push back on Doctorow’s suggestion that our current malaise is unique. Yes, technology accelerates decay, but each era has had its own pathology—its signature form of cultural rot. We’ve been creatively self-destructing for decades.

    So, let’s place Enshittification in historical context. Behold The Eight Ages of –ification: a timeline of civilization’s greatest hits in decline.


    1950s — Conformification

    The age of white fences and beige minds. America sold sameness as safety. Individuality was ironed flat, and television became the nation’s priest. Conformification is the fantasy that security comes from imitation—a tranquilized suburbia of identical dreams in identical ranch homes.


    1960s — Psychedelification

    When rebellion became transcendence through chemistry. Psychedelification was the belief that consciousness expansion could topple empires, if only the colors were bright enough. The result: self-absorption in tie-dye and the illusion that enlightenment could be mass-produced.


    1970s — Lustification

    A Freudian carnival of polyester and pelvic thrusts. From Deep Throat to Studio 54, desire was liberation and the body was both altar and marketplace. Lustification crowned pleasure as the last remaining ideology.


    1980s — Greedification

    When morality was replaced by market share. The decade baptized ambition in champagne and cocaine. Greedification is the conviction that money cleanses sin and that a Rolex can double as a rosary.


    1990s — Ironification

    The decade of smirks. Sincerity was cringe; irony was armor. Ironification made detachment the new intelligence: nothing believed, everything quoted, and feelings outsourced to sarcasm.


    2000s — Digitification

    Humanity uploaded itself. Digitification was the mass migration to the screen—the decade of Facebook envy, email anxiety, and dopamine disguised as connection. We stopped remembering and started refreshing.


    2010s — Influencification

    When everyone became a brand. Influencification turned authenticity into a business model and experience into content. The self became a product to be optimized for engagement.


    2020s — Enshittification

    Doctorow’s masterstroke: the final form of digital decay. Enshittification is what happens when every system optimizes for extraction—when user, worker, and platform all drown in the same algorithmic tar pit. It’s the exhaustion of meaning itself, disguised as progress.


    Epilogue: The 2030s — Reification

    If trends continue, we’ll soon enter Reification: a desperate attempt to make the unreal feel real again. After decades of filters, feeds, and frictionless fakery, we’ll long for something tangible—until, inevitably, we commodify that too.

    History repeats itself—only this time with better Wi-Fi.

  • Paul Bunyan Meets the Chainsaw in Freshman Comp


    During the Fall Semester of 2024, the English Department had one of those “brown bag” sessions—an optional gathering where instructors actually show up because the topic is like a flashing red light on the education highway. This particular crisis-in-the-making? AI. Would writing tools that millions were embracing at exponential speed render our job obsolete? The room was packed with nervous, coffee-chugging professors, myself included, all bracing for a Pandora’s box of AI-fueled dilemmas. They tossed scenario after scenario at us, and the existential angst was palpable.

    First up: What do you do when a foreign-language student drafts an essay in their native tongue and then lets AI play translator? Is it cheating? Does the term “English Department” even make sense anymore when our Los Angeles campus sounds like a United Nations General Assembly? Are we teaching “English,” or are we, more accurately, teaching “the writing process” to people of many languages with AI now tagging along as a co-author?

    Next came the AI Tsunami, a term we all seemed to embrace with a mix of dread and resignation. What do we do when we’ve reached the point that 90% of the essays we receive are peppered with AI-speak so robotic it sounds like Siri decided to write a term paper? We were all skeptical about AI detectors—about as reliable as a fortune teller reading tea leaves. I shared my go-to strategy: Instead of accusing a student of cheating (because who has time for that drama?), I simply leave a comment, dripping with professional distaste: “Your essay reeks of AI-generated pablum. I’m giving it a D because I cannot, in good conscience, grade this higher. If you’d like to rewrite it with actual human effort, be my guest.” The room nodded in approval.

    But here’s the thing: The real existential crisis hit when we realized that the hardworking, honest students are busting their butts for B’s, while the tech-savvy slackers are gaming the system, walking away with A’s by running their bland prose through the AI carwash. The room buzzed with a strange mixture of outrage and surrender—because let’s be honest, at least the grammar and spelling errors are nearly extinct.

    Our dean, ever the Zen master in a room full of jittery academics, calmly suggested that maybe—just maybe—we should incorporate personal reflection into our assignments. His idea? By having students spill a bit of their authentic thoughts onto the page, we could then compare those raw musings to their more polished, suspect, possibly ChatGPT-assisted essays. A clever idea. It’s harder to fake authenticity than to parrot a thesis on The Great Gatsby.

    I nodded thoughtfully, though with a rising sense of dread. How exactly was I supposed to integrate “personal reflections” into a syllabus built around the holy trinity of argumentation, counterarguments, and research? I teach composition and critical thinking, not a creative writing seminar for tortured souls. My job isn’t to sift through essays about existential crises or romantic disasters disguised as epiphanies. It’s to teach students how to build a coherent argument and take down a counterpoint without resorting to tired platitudes. Reflection has its place—but preferably somewhere far from my grading pile.

    Still, I had to admit the dean was on to something. If I didn’t get ahead of this, I’d end up buried under an avalanche of soul-searching essays that somehow all lead to a revelation about “balance in life.” I needed time to mull this over, to figure out how personal writing could serve my course objectives without turning it into group therapy on paper.

    But before I could even start strategizing, the Brown Bag session was over. I gathered my notes, bracing myself for the inevitable flood of “personal growth narratives” waiting for me next semester. 

    As I walked out of that meeting, I had a new writing prompt simmering in my head for my students: “Write an argumentative essay exploring how AI platforms like ChatGPT will reshape education. Project how these technologies might be used in the future and consider the ethical lines that AI use blurs. Should we embrace AI as a tool, or do we need hard rules to curb its misuse? Address academic integrity, critical thinking, and whether AI widens or narrows the education gap.”

    When I got home later that day, in a fit of efficiency, I stuffed my car with a mountain of e-waste—ancient laptops, decrepit tablets, and cell phones that could double as paperweights—and headed to the City of Torrance E-Waste Drive. The line of cars stretched for what seemed like miles, all of us dutifully purging our electronic skeletons to make room for the latest AI-compatible toys. As I waited, I tuned into a podcast with Mark Cuban chatting with Bill Maher, and Cuban was adamant: AI will never be regulated because it’s America’s golden goose for global dominance. And there I was, sitting in a snaking line of vehicles, all of us unwitting soldiers in the tech wars, dumping our outdated gadgets like a 21st-century arms race.

    As I edged closer to the dumpster, I imagined ripping open my shirt to reveal a Captain America emblem beneath, fully embracing the ridiculousness of it all. This wasn’t just teaching anymore—it was a revolution. And if I was going to lead it, I’d need to be like Moses descending from Mt. Sinai, armed with the Tablets of AI Laws. Without these laws, I’d be as helpless as a fish flopping on a dry riverbank. To face the coming storm unprepared wasn’t just unwise; it was professional malpractice. My survival depended on it.

    I thought I had outsmarted AI, like some literary Rambo armed with signal phrases, textual analysis, and in-text citations as my guerrilla tactics. ChatGPT couldn’t handle that level of academic sophistication, right? Wrong. One month later, the machine rolled up offering full signal phrase service like some overachieving valet at the Essay Ritz. That defense crumbled faster than a house of cards in a wind tunnel.

    Okay, I thought, I’ll outmaneuver it with source currency. ChatGPT didn’t do recent articles—perfect! I’d make my students cite cutting-edge research. Surely, that would stump the AI. Nope. Faster than you can say “breaking news,” ChatGPT was pulling up the latest articles like a know-it-all librarian with Wi-Fi in their brain.

    Every time I tried to pin it down, the AI just flexed and swelled, like some mutant Hulk fed on electricity and hubris. I was the noble natural bodybuilder, forged by sweat, discipline, and oceans of egg whites. ChatGPT? It was the juiced-up monster, marinated in digital steroids and algorithmic growth hormones. I’d strain to add ten pounds to my academic bench press; ChatGPT would casually slap on 500 and knock out reps while checking its reflection. I was a relic frozen on the dais, oil-slicked and flexing, while the AI steamrolled past me in the race for writing dominance.

    That’s when the obvious landed like a kettlebell on my chest: I wasn’t going to beat ChatGPT. It wasn’t a bug to patch or a fad to outlast—it was an evolutionary leap, a quantum steroid shot to the act of writing itself. So I stopped swinging at it. Instead, I strapped a saddle on the beast and started steering, learning to use its brute force as my tool instead of my rival.

    It reminded me of a childhood cartoon about Paul Bunyan, the original muscle god with an axe the size of a telephone pole. Then came the chainsaw. There was a contest: man versus machine. Paul roared and hacked, but the chainsaw shredded the forest into submission. The crowd went home knowing the age of the axe was dead. Likewise, the sprawling forest of language has a new lumberjack—and I look pathetic trying to keep up, like a guy standing on Hawthorne Boulevard with a toothbrush, vowing to scrub clean every city block from Lawndale to Palos Verdes.

  • Why Ideas Still Matter in a World of Machines


    One of my colleagues, an outstanding writing instructor for more than two decades, has mapped out her exit strategy. She earned a counseling master’s degree, recently completed her life coach certification, and told me she no longer believes in the mission of teaching college writing. Assigning prompts to students who submit AI-generated essays feels meaningless to her—and reading these machine-produced pages makes her physically ill.

    Her words jolted me. I have devoted nearly forty years to this vocation, a career sustained by the assumption that teaching the college essay is an essential skill for young people. We have long agreed that students must learn how to shape chaos into coherence, confront questions that matter to the human condition, write with clarity and force, construct persuasive arguments, examine counterpoints, form informed opinions, master formats, cultivate an authorial voice, and develop critical thinking in a world overflowing with fallacies and propaganda. We also teach students to live with “interiority”—to keep journals, build inner lives, and nurture ideas. These practices have been considered indispensable for personal and professional growth.

    But with AI in the picture, many of my colleagues, including the one planning her departure, now feel bitter and defeated. AI has supplanted us. To our students, AI is more than a tool; it is a counselor, therapist, life coach, tutor, content-generator, and editor that sits in their pockets. They have apps through which they converse with their AI “person.” Increasingly, students bond with these “people” more than with their teachers. They trust AI in ways they do not trust professionals, institutions, or the so-called “laptop class.”

    The sense of displacement is compounded by the quality of student work. Essays are now riddled with AI-speak, clichés, hollow uniformity, facile expressions, superficial analysis, misattributed quotations, hallucinated claims, and fabricated facts. And yet, for the professional world, this output will often suffice. Ninety-five percent of the time, AI’s mediocrity will be “good enough” as workplaces adjust to its speed and efficiency. Thus my colleagues suffer a third wound: irrelevance. If AI can produce serviceable writing quickly, bypassing the fundamentals we teach, then we are the dinosaurs of academia.

    On Monday, when I face my freshman composition students for the first time, I will have to address this reality. I will describe how AI—the merciless stochastic parrot—has unsettled instructors by generating uncanny-valley essays, winning the confidence of students, and leaving teachers uncertain about their place.

    Still, I am not entirely pessimistic about my role. Teaching writing has always required many hats, one of which is the salesman’s. I must sell my ideas, my syllabus, my assignments, and above all, the relevance of writing in students’ lives.

    This semester, I am teaching a class composed entirely of athletes, a measure designed to help with retention. On the first day, I will appeal to what they know best: drills. No athlete mistakes drills for performance. They exist to prepare the body and mind for the real contest. Football players run lateral and backward sprints to build stamina and muscle memory. Pianists practice scales and arpeggios to ready themselves for recitals. Writing drills serve the same purpose: they build the foundation beneath the performance.

    My second pitch will be about the human heart. Education does not begin in the brain; it begins when the heart opens. Just as the athlete “with heart” outperforms the one without it, the student who opens the heart to education learns lessons that endure for life.

    I will tell them about my childhood obsession with baseball. At nine, I devoured every Scholastic book on the subject I could order through Independent Elementary. Many of my heroes were African-American players who endured Jim Crow segregation—forced into separate hotels and restaurants, traveling at great risk. I read about legends like Satchel Paige and Josh Gibson, barred from Major League Baseball because of their race. Through their stories, I learned American history not as dates and facts, but through the eyes of men I revered. My heart opened, and I was educated in a way my schoolteachers never managed.

    I will also tell them about my lost years in college. I enrolled under threat of eviction from my mother and warnings that without higher education, I faced a life of poverty. I loathed classrooms, staring at the clock until I could escape to the gym for squats, deadlifts, and bench presses. Yet in an elective fiction class, I discovered Kafka—how he transmuted his nightmarish inner life into stories that illuminated his world. Then Nabokov, whose audacious style made me long to write with the same confidence, more than I ever longed for a luxury car. If I could capture Nabokov’s authority, I thought, I would be like the Tinman receiving his heart. I would be whole.

    These changes did not come from professors, from institutions, and certainly not from AI. They came from within me, from my heart opening to literature. And yet, a sobering realization remains: the spark for me came through reading, and I see little reading today. I am not dogmatic; perhaps today’s students can find their spark in a documentary on Netflix or an essay on their phones. What matters is the opening of the heart.

    I cannot deny my doubts about remaining relevant in the age of AI, but I believe in the enduring power of ideas. Ideas—true or false—shape lives. They can go viral, ignite movements, and alter history.

    That is why my first assignment will focus on the Liver King, a grifter who peddled “ancestral living” to young men desperate for discipline and belonging. Though he was exposed as a fraud, his message resonated because it spoke to a generation’s hunger for structure and meaning. My students will explore both the desperation of these young men and the manipulations of Bro Culture that preyed upon them.

    Ideas matter. They always have. They always will. My class will succeed or fail on the strength of the ideas I put before my students, and I must present them unapologetically—defended with both my brain and my heart.

  • Richard Brody vs. the Algorithm: A Critic’s Lament in a Post-Print World

    Richard Brody vs. the Algorithm: A Critic’s Lament in a Post-Print World

    In his essay “In Defense of the Traditional Review,” New Yorker critic Richard Brody goes to battle against The New York Times’ editorial decision to shift arts criticism from the long-form written review to short-form videos designed for a digital audience. It’s a cultural downgrade, Brody argues, a move from substance to performance, from sustained reflection to algorithm-choked ephemera. The shift may be pitched as modernization, but Brody sees it for what it is: intellectual compromise dressed up as digital innovation.

    Brody’s stance isn’t anti-technology. He concedes we can chew gum and walk at the same time—that written essays and short videos can coexist. But his core concern is that the center of criticism is the written word. Shift the balance too far toward video, and you risk gutting that center entirely. Worse, video reviews tend to drift toward celebrity interviews and promotional puffery. The fear isn’t hypothetical. When given the choice between a serious review and a clip featuring a celebrity making faces in a car, algorithms will reward the latter. And so criticism is flattened into entertainment, and standards dissolve beneath a rising tide of digital applause.

    Brody’s alarm resonates with me, because I’ve spent the last four decades teaching college writing and watching the same cultural drift. Long books are gone. In many cases, books are gone altogether. We assign short essays because that’s what students can handle. And yet, paradoxically, I’ve never seen such sharp classroom discussions, never written better prompts, never witnessed better argumentation than I do today. The intellectual work isn’t dead—it’s just found new vessels. Brody is right to warn against cultural decay, but the answer isn’t clinging to vanished ideals. It’s adaptation with integrity. If we don’t evolve, we lose our audience. But if we adapt wisely, we might still reach them—and even challenge them—where they are.

  • Death by Convenience: The AI Ads That Want to Rot Your Brain

    Death by Convenience: The AI Ads That Want to Rot Your Brain

    In his essay for The New Yorker, “What Do Commercials About A.I. Really Promise?”, Vinson Cunningham zeroes in on the unspoken premise of today’s AI hype: the dream of total disengagement. He poses the unsettling question: “If human workers don’t have to read, write, or even think, it’s unclear what’s left to do.” It’s a fair point. If ads are any indication, the only thing left for us is to stare blankly into our screens like mollusks waiting to be spoon-fed.

    These ads don’t sell a product; they sell a philosophy—one that flatters your laziness. Fix a leaky faucet? Too much trouble. Write a thank-you note? Are you kidding? Plan a meal, change a diaper, troubleshoot your noise-canceling headphones? Outrageous demands for a species that now views thinking as an optional activity. The machines will do it, and we’ll cheerfully slide into amoebic irrelevance.

    What’s most galling is the heroism layered into the pitch: You’re not shirking your responsibilities, you’re delegating. You’re optimizing your workflow. You’re buying back your precious time. You’re a genius. A disruptor. A life-hacking, boundary-pushing modern-day Prometheus who figured out how to get out of reading bedtime stories to your children.

    But Cunningham has a sharper take. The message behind the AI lovefest isn’t just about convenience—it’s about hollowing us out. As he puts it, “The preferred state, it seems, is a zoned-out semi-presence, the worker accounted for in body but absent in spirit.” That’s what the ads are pushing: a blissful vegetative state, where you’re physically upright but intellectually comatose.

    Why read to your kids when an AI avatar can do it in a soothing British accent? Why help them with their homework when a bot can explain algebra, write essays, correct their errors, and manage their grades—while you binge Breaking Bad for the third time? Why have a conversation with their teacher when your chatbot can send a perfectly passive-aggressive email on your behalf?

    This is not the frictionless future we were promised. It’s a slow lobotomy served on a platter of convenience. The ads imply that the life of the mind is outdated. And critical thinking? That’s for chumps with time to kill. Thinking takes bandwidth—something that would be better spent refining your custom coffee order via voice assistant.

    Cunningham sees the bitter punchline: In our rush to outsource everything, we’ve made ourselves obsolete. And the machines, coldly efficient and utterly indifferent, are more than happy to take it from here.

  • Love in the Time of ChatGPT: On Teaching Writing in the Age of the Algorithm

    Love in the Time of ChatGPT: On Teaching Writing in the Age of the Algorithm

    In his New Yorker piece, “What Happens After A.I. Destroys College Writing?”, Hua Hsu mourns the slow-motion collapse of the take-home essay while grudgingly admitting there may be a chance—however slim—for higher education to reinvent itself before it becomes a museum.

    Hsu interviews two NYU undergrads, Alex and Eugene, who speak with the breezy candor of men who know they’ve already gotten away with it. Alex admits he uses A.I. to edit all his writing, from academic papers to flirty texts. Research? Reasoning? Explanation? No problem. Image generation? Naturally. He uses ChatGPT, Claude, DeepSeek, Gemini—the full polytheistic pantheon of large language models.

    Eugene is no different, and neither are their classmates. A.I. is now the roommate who never pays rent but always does your homework. The justifications come standard: the assignments are boring, the students are overworked, and—let’s face it—they’re more confident with a chatbot whispering sweet logic into their ears.

    Meanwhile, colleges are flailing. A.I. detection software is unreliable, grading is a time bomb, and most instructors don’t have the time, energy, or institutional backing to play academic detective. The truth is, universities were caught flat-footed. The essay, once a personal rite of passage, has become an A.I.-assisted production—sometimes stitched together with all the charm and coherence of a Frankenstein monster assembled in a dorm room at 2 a.m.

    Hsu—who teaches at a small liberal arts college—confesses that he sees the disconnect firsthand. He listens to students in class and then reads essays that sound like they were ghostwritten by Siri with a mild Xanax addiction. And in a twist both sobering and dystopian, students don’t even see this as cheating. To them, using A.I. is simply modern efficiency. “Keeping up with the times.” Not deception—just delegation.

    But A.I. doesn’t stop at homework. It’s styling outfits, dispensing therapy, recommending gadgets. It has insinuated itself into the bloodstream of daily life, quietly managing identity, desire, and emotion. The students aren’t cheating. They’re outsourcing. They’ve handed over the messy bits of being human to an algorithm that never sleeps.

    And so, the question hangs in the air like cigar smoke: Are writing departments quaint relics? Are we the Latin teachers of the 21st century, noble but unnecessary?

    Some professors are adapting. Blue books are making a comeback. Oral exams are back in vogue. Others lean into A.I., treating it like a co-writer instead of a threat. Still others swap out essays for short-form reflections and response journals. But nearly everyone agrees: the era of the generic prompt is over. If your essay question can be answered by ChatGPT, your students already know it—and so does the chatbot.

    Hsu, for his part, doesn’t offer solutions. He leaves us with a shrug.

    But I can’t shrug. I teach college writing. And for me, this isn’t just a job. It’s a love affair. A slow-burning obsession with language, thought, and the human condition. Either you fall in love with reading and writing—or you don’t. And if I can’t help students fall in love with this messy, incandescent process of making sense of the world through words, then maybe I should hang it up, binge-watch Love Is Blind, and polish my résumé.

    Because this isn’t about grammar. This is about soul. And I’m in the love business.

  • My Philosophy of Grading in the Age of ChatGPT and Other AI Writing Platforms (a mini manifesto for my syllabus)

    My Philosophy of Grading in the Age of ChatGPT and Other AI Writing Platforms (a mini manifesto for my syllabus)

    Let’s start with this uncomfortable truth: you’re living through a civilization-level rebrand.

    Your world is being reshaped—not gradually, but violently, by algorithms and digital prosthetics designed to make your life easier, faster, smoother… and emptier. The disruption didn’t knock politely. It kicked the damn door in. And now, whether you realize it or not, you’re standing in the debris, trying to figure out what part of your life still belongs to you.

    Take your education. Once upon a time, college was where minds were forged—through long nights, terrible drafts, humiliating feedback, and the occasional breakthrough that made it all worth it. Today? Let’s be honest. Higher ed is starting to look like an AI-driven Mad Libs exercise.

    Some of you are already doing it: you plug in a prompt, paste the results, and hit submit. What you turn in is technically fine—spelled correctly, structurally intact, coherent enough to pass. And your professors? We’re grading these Franken-essays on caffeine and resignation, knowing full well that originality has been replaced by passable mimicry.

    And it’s not just school. Out in the so-called “real world,” companies are churning out bloated, tone-deaf AI memos—soulless prose that reads like it was written by a robot with performance anxiety. Streaming services are pumping out shows written by predictive text. Whole industries are feeding you content that’s technically correct but spiritually dead.

    You are surrounded by polished mediocrity.

    But wait, we’re not just outsourcing our minds—we’re outsourcing our bodies, too. GLP-1 drugs like Ozempic are reshaping what it means to be “disciplined.” No more calorie counting. No more gym humiliation. You don’t change your habits. You inject your progress.

    So what does that make you? You’re becoming someone new: someone we might call Ozempified. A user, not a builder. A reactor, not a responder. A person who runs on borrowed intelligence and pharmaceutical willpower. And it works. You’ll be thinner. You’ll be productive. You’ll even succeed—on paper.

    But not as a human being.

    If you over-rely on AI, you risk becoming what the gaming world calls a Non-Player Character (NPC)—a background figure, a functionary, a placeholder in your own life. You’ll do your job. You’ll attend your Zoom meetings. You’ll fill out your forms and tap your apps and check your likes. But you won’t have agency. You won’t have fingerprints on anything real.

    You’ll be living on autopilot, inside someone else’s system.

    So here’s the choice—and yes, it is a choice: You can be an NPC. Or you can be an Architect.

    The Architect doesn’t react. The Architect designs. They choose discomfort over sedation. They delay gratification. They don’t look for applause—they build systems that outlast feelings, trends, and cheap dopamine tricks.

    Where others scroll, the Architect shapes.
    Where others echo, they invent.
    Where others obey prompts, they write the code.

    Their values aren’t crowdsourced. Their discipline isn’t random. It’s engineered. They are not ruled by algorithm or panic. Their satisfaction comes not from feedback loops, but from the knowledge that they are building something only they could build.

    So yes, this class will ask more of you than typing a prompt and letting the machine do the rest. It will demand thought, effort, revision, frustration, clarity, and eventually—agency.

    If your writing smacks of AI, the kind of polished mediocrity that leads down the road to life as a functionary, a Non-Player Character, the grade you receive will reflect that sad fact. If, on the other hand, your writing is animated by a strong authorial presence, the evidence of an Architect who strives for a life of excellence, self-agency, and pride, your grade will reflect that fact as well.