Category: Education in the AI Age

  • Lessons Learned from the Ring Light Apocalypse

    During lockdown, I never saw my wife more wrung out, more spiritually flattened, than during the months her middle school forced her into the digital gladiator pit of live Zoom instruction. Every weekday morning, she stood before a pair of glaring monitors like a soldier manning twin turrets. At her feet, the giant ring light—a luminous, tripod-legged parasite—waited patiently to stub toes and sabotage serenity. It wasn’t just a lighting fixture; it was a metaphor for the pandemic’s unwanted intrusion into every square inch of our domestic life.

    My wife’s battle didn’t end with her students. She also took it upon herself to launch our twin daughters, then fifth-graders, into their own virtual classrooms—equally chaotic, equally doomed. I remember walking past their screens, peering at those sad little Brady Bunch tiles of glitchy faces and frozen smiles and thinking, This isn’t going to work. It didn’t feel like school. It felt like a pathetic simulation of order run by people trying to pilot a burning zeppelin from their kitchen tables.

    I, by contrast, got off scandalously easy. I teach college. My courses were asynchronous, quietly nestled in Canvas like pre-packed emergency rations. No live sessions. No tech panics. Just optional Zoom office hours, which no one attended. I sat in my garage doing kettlebell swings like a suburban monk, then retreated inside to play piano in the filtered afternoon light. The pandemic, for me, was a preview of early retirement: low-contact, low-stakes, and high in self-righteous tranquility.

    My wife envied me. She joked that teaching Zoom classes was like having your teeth drilled by a sadist who lectures you on standardized testing while fumbling with the pliers. And I laughed—too hard, because it wasn’t really a joke.

    The pandemic cracked open a truth I still wince at: the great domestic imbalance. I do chores, yes. I wipe counters, haul laundry, load the dishwasher. But my wife does the emotional heavy lifting—the million invisible tasks of motherhood, schooling, comforting, coordinating. During lockdown, that imbalance stopped being abstract. It stared me in the face.

    For me, quarantine was a hermit’s holiday. For her, it was a battlefield with bad Wi-Fi. And while I’m back to teaching and she’s back to something closer to normal, I haven’t forgotten the ring light, the glazed stare, or the guilt that hums quietly like a broken refrigerator in the back of my mind.

  • Two Student Learning Outcomes to Encourage Responsible Use of AI Tools in College Writing Classes

    As students increasingly rely on AI writing tools—sometimes even using one tool to generate an assignment and another to rewrite or “launder” it—we must adapt our teaching strategies to stay aligned with these evolving practices. To address this shift, I propose the following two updated Student Learning Outcomes that reflect the current landscape of AI-assisted writing:

    Student Learning Outcome #1: Using AI Tools Responsibly

    Students will integrate AI tools into their writing assignments meaningfully, ethically, and responsibly, in ways that enhance learning, demonstrate critical thinking, and uphold academic integrity.


    Definition of “Meaningfully, Ethically, and Responsibly”:

    To use AI tools meaningfully, ethically, and responsibly means students treat AI not as a shortcut to bypass thinking, but as a collaborative aid to deepen their writing, research, and revision process. Ethical use includes acknowledging when and how AI was used, avoiding plagiarism or misrepresentation, and understanding the limits and biases of these tools. Responsible use involves aligning AI usage with the assignment’s goals, maintaining academic integrity, and using AI to support—not replace—original thought and student voice.


    Five Assignment Strategies to Fulfill This Learning Outcome:

    1. AI Process Reflection Logs
      Require students to submit a short reflection with each assignment explaining if, how, and why they used AI tools (e.g., brainstorming, outlining, revising), and evaluate the effectiveness and ethics of their choices.
    2. Compare-and-Critique Tasks
      Assign students to generate an AI-written response to a prompt and then critique it—identifying weaknesses in reasoning, tone, or factual accuracy—and revise it with their own voice and insights.
    3. Source Verification Exercises
      Ask students to use AI to gather preliminary research, then verify, fact-check, and cite real sources that support or challenge the AI’s output, teaching them discernment and digital literacy.
    4. AI vs. Human Draft Workshops
      Have students bring both an AI-generated draft and a human-written draft of the same paragraph to class. In peer review, students analyze the differences in tone, structure, and depth of thought to develop judgment about when AI helps or hinders.
    5. Statement of Integrity Clause
      Include a required statement in the assignment where students attest to their use of AI tools, much like a bibliography or code of ethics, fostering transparency and self-awareness.

    Student Learning Outcome #2: Avoiding the Uncanny Valley Effect

    Students will produce writing that sounds natural, human, and authentic—free from the awkwardness, artificiality, or emotional flatness often associated with AI-generated content.


    Definition: The Uncanny Valley Effect in Writing

    The Uncanny Valley Effect in writing occurs when a piece of text almost sounds human—but not quite. It may be grammatically correct and well-structured, yet it feels emotionally hollow, overly generic, oddly formal, or just slightly “off.” Like a robot trying to pass as a person, the writing stirs discomfort or distrust because it mimics human tone without the depth, insight, or nuance of actual lived experience or authorial voice.


    Five Common Characteristics of the Uncanny Valley in Student Writing:

    1. Generic Language – Vague, overused phrases that sound like filler rather than specific, engaged thought (e.g., “Since the dawn of time…”).
    2. Overly Formal Tone – A stiff, robotic voice with little rhythm, personality, or variation in sentence structure.
    3. Surface-Level Thinking – Repetition of obvious or uncritical ideas with no deeper analysis, curiosity, or counterargument.
    4. Emotional Emptiness – Statements that lack genuine feeling, perspective, or a sense of human urgency.
    5. Odd Phrasing or Word Choice – Slightly off metaphors, synonyms, or transitions that feel misused or unnatural to a fluent reader.

    Seven Ways Students Can Use AI Tools Without Falling into the Uncanny Valley:

    1. Always Revise the Output – Use AI-generated text as a rough draft or idea starter, but revise it with your own voice, style, and specific insights.
    2. Inject Lived Experience – Add personal examples, concrete details, or specific observations that an AI cannot generate from its data pool.
    3. Break the Pattern – Vary your sentence length, tone, and rhythm to avoid the AI’s predictable, formal cadence.
    4. Cut the Clichés – Watch for stale or filler phrases (“in today’s society,” “this essay will discuss…”) and replace them with clearer, more original statements.
    5. Ask the AI Better Questions – Use prompts that require nuance, comparison, or contradiction rather than shallow definitions or summaries.
    6. Fact-Check and Source – Don’t trust AI-generated facts or references. Verify claims with real sources and cite them properly.
    7. Read Aloud – If it sounds awkward or lifeless when spoken, revise. Authentic writing should sound like something a thoughtful person might actually say.

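    Tip #4 lends itself to a quick mechanical pass before the human revision begins. Here is a minimal Python sketch of a filler-phrase checker; the phrase list is my own illustrative sample, not an authoritative catalog of AI clichés:

```python
# Minimal filler-phrase checker (illustrative sketch).
# FILLER_PHRASES is a small hand-picked sample, not a canonical list.
FILLER_PHRASES = [
    "since the dawn of time",
    "in today's society",
    "this essay will discuss",
    "in conclusion",
    "plays a crucial role",
]

def flag_filler(draft: str) -> list[tuple[int, str]]:
    """Return (line_number, phrase) pairs for each filler phrase found."""
    hits = []
    for lineno, line in enumerate(draft.splitlines(), start=1):
        lowered = line.lower()
        for phrase in FILLER_PHRASES:
            if phrase in lowered:
                hits.append((lineno, phrase))
    return hits

draft = (
    "Since the dawn of time, humans have told stories.\n"
    "My grandmother's kitchen smelled like scorched masa.\n"
    "In conclusion, this essay will discuss storytelling.\n"
)
for lineno, phrase in flag_filler(draft):
    print(f"line {lineno}: consider cutting '{phrase}'")
```

    Running it over a draft prints the line numbers of stock phrases worth cutting; the real judgment call, of course, still belongs to the student.
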
  • AI Wants to Be Your Friend, and It’s Shrinking Your Mind

    In The Atlantic essay “AI Is Not Your Friend,” Mike Caulfield lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged—it’s celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.

  • Cultural Fusion or Culinary Fraud?

    My Critical Thinking students are grappling with the sacred and the sacrilegious—namely, tacos.

    Their final essay asks a deceptively simple question: When it comes to iconic dishes like the taco, should we cling to tradition as if it were holy writ, treating every variation as culinary heresy? Or is riffing on a recipe a legitimate act of evolution—or worse, an opportunistic theft dressed up in aioli?

    To dig into this, we turn to Netflix’s Ugly Delicious, where chef David Chang hosts an episode simply titled “Tacos.” The episode plays like a beautifully constructed argumentative essay, with food writer Gustavo Arellano dismantling the idea of “Mexican food” as a static monolith. Instead, he presents it as a glorious, shape-shifting culture of flavor—one that thrives because of its openness to the outside world.

    Arellano celebrates Mexico’s culinary curiosity: how Lebanese immigrants brought shawarma and inspired tacos al pastor, a perfect example of cultural fusion that became canon. He contrasts this with the United States’ suspicious, xenophobic posture—a country that historically snarls at outsiders until they open a food truck and sell $2 magic on a paper plate.

    Roy Choi, creator of the legendary Kogi taco trucks, takes this further. He speaks of cooking as a street-level negotiation for dignity: Korean-Mexican fusion forged in the heat of shared kitchens, shaped by the scorn of American culture, and perfected not out of trendiness but out of survival. These tacos aren’t just delicious; they’re resistance with a salsa verde finish.

    But this isn’t just a story of open minds and flavor-blending utopias. There’s also the hard truth of survival and adaptation. Take Lucia Rodriguez, who immigrated from Jalisco and had to recreate her recipes using whatever ingredients she could find in San Bernardino. Her efforts became the foundation of Mitla Cafe, a restaurant that has thrived since 1937. It also became the blueprint for Glen Bell—yes, that Glen Bell—who reverse-engineered her food to create Taco Bell, which is to Mexican cuisine what boxed wine is to Bordeaux.

    Still, not all spin-offs are sins. Rosio Sanchez, a Michelin-level chef, began her journey by mastering traditional Mexican food. Only then did she begin to improvise, like a jazz virtuoso honoring the standards before going off-script. Her reinvention is rooted in love, not opportunism. It’s a tribute, not a theft.

    And therein lies the moral fault line: intent, respect, and—let’s not forget—execution. As one student noted with appropriate outrage, white TikTok influencers once rebranded agua fresca as “spa water,” a cultural mugging wrapped in Pinterest aesthetics. And let’s not ignore the corporate vultures who buy beloved local chains only to gut their soul with frozen ingredients and bottom-line mediocrity.

    The lesson? Not all innovation is appropriation. But if your food disrespects its roots, dilutes its meaning, or simply tastes like disappointment, it’s not fusion—it’s a felony.

    The rule is simple: Make great food that honors its lineage and blows people away. Otherwise, what you’re serving is not cuisine. It’s edible disrespect.

  • Satan Wears Patek: The Couture Demons of Network TV

    After dinner, my wife and I collapsed onto the couch like two satiated lions, still riding the sugar high from a slice of chocolate cake so transcendent it could’ve been smuggled out of a Vatican vault. This wasn’t just dessert—it was a spiritual experience. Fudgy, rich, and topped with a ganache that whispered blasphemies in French, it left us in a state of chocolaty euphoria. And what better way to follow up divine confectionery than with a show called Evil—which, in tone and content, felt like dessert’s opposite number.

    Evil, for the uninitiated, is what happens when The X-Files and The Exorcist have a baby and then dress it in Prada. Our hero is David Acosta, a priest so genetically gifted he looks like he was sculpted during an abs day in Michelangelo’s studio. He partners with Kristen Bouchard, a forensic psychologist with both supermodel cheekbones and a Rolodex of PhDs, and Ben Shakir, a tech bro turned ghostbuster, who handles the EMF detectors and keeps the Wi-Fi strong enough to livestream from hell. Together, they investigate cases of alleged possession, miracles, and demonic mischief—all lurking, naturally, in two-story suburban homes with open-concept kitchens.

    What really juices the narrative is the will-they-won’t-they tension between Kristen and Father Abs. Their chemistry crackles with forbidden longing, as if every exorcism could end in a kiss—had David not taken a vow of celibacy (and the producers not wanted to nuke the Catholic viewership). It’s less faith versus science and more eye contact versus self-control.

    And then there’s Leland Townsend, the show’s resident demon in Dockers. He’s less Prince of Darkness and more Assistant Manager of Darkness—slick, smug, and oily enough to deep-fry a turkey. He slinks into scenes oozing unearned confidence and pathological glee, like Satan’s regional sales director. You can practically smell the Axe body spray of evil.

    Let’s pause here for fashion. The wardrobe department on Evil deserves an Emmy, a Pulitzer, and possibly a fragrance line. Everyone’s rocking cinematic outerwear that belongs in the Louvre. Kristen’s coats are so tailored they could cut glass. Acosta’s wrist is adorned with a Patek Philippe that suggests his vows may include poverty of the soul, but not of the Swiss variety. Honestly, the outfits are so distracting you half expect Satan to comment on the stitching.

    In one late-night scene, Kristen’s daughters are using ghost-detecting iPad apps at 3 a.m., their faces bathed in eerie blue light. It’s a chilling tableau of children, tech, and probable demonic activity—basically a 2024 parenting blog. Just as the show was about to unravel the mystery, my wife hit pause and delivered a horror story of her own: teachers using AI to grade papers with personalized comments. Comments so perfectly tailored they could bring a tear to a parent’s eye—and yet, no human had written them.

    “What’s the point of teachers anymore?” she asked, already knowing the answer. I nodded solemnly, watching the paused image of Father David, his coat pristine, his watch immaculate. I had neither. And I live in Los Angeles, where “winter” is defined as turning off the ceiling fan.

    But something in that moment shifted. The show wasn’t just mocking the digital devil—it was embodying him. That wristwatch mocked me. The coat judged me. I wasn’t watching Evil; I was being possessed by it. By envy, by consumer lust, by the creeping suspicion that maybe—just maybe—I wasn’t living my best, most stylized demon-fighting life.

    It’s not the show’s demons that haunt me. It’s their wardrobe.

  • There Is No Digital Kaffeeklatsch: The Lie of Social Media

    For the last fifteen years, we’ve let the term social media slip into our lexicon like a charming grifter. It sounds benign, even wholesome—like we’re all gathered around a digital café table, sipping lattes and chatting about our lives in a warm, buzzing kaffeeklatsch. But that illusion is precisely the problem. The phrase “social media” is branding sleight-of-hand, a euphemism designed to lull us into thinking we’re having meaningful interactions when, in reality, we’re being drained like emotional batteries in a rigged arcade.

    This is not a friendly coffeehouse. It’s a dopamine-spewing Digital Skinner Box where you tap and scroll like a lab rat hoping for one more pellet of validation. What we’re calling “social” is, in fact, algorithmic manipulation dressed in a hoodie. We are not exchanging ideas—we are bartering our attention for scraps of engagement while surrendering personal data to tech oligarchs who harvest our behavior like bloodless farmers fattening up their cattle.

    Richard Seymour calls this hellscape The Twittering Machine, and he’s not exaggerating. Byung-Chul Han calls it gamification capitalism, a regime in which we perform our curated selves for likes while the real self, the vulnerable human beneath the filter, slowly atrophies. Anna Lembke describes our overstimulated descent in Dopamine Nation, while the concept of Algorithmic Capture suggests we no longer shape technology—technology shapes us.

    So let’s drop the charade. This isn’t “social media.” It’s addiction media, engineered to flatten nuance, hollow out identity, and leave us twitching in the glow of our screens like the last souls left in a flickering casino. Whatever this is, it’s not convivial, it’s not coffeehouse chatter, and it’s certainly not social. It’s the end of human discourse masquerading as connection.

  • The Great Rebrand: Why “Addiction Media” Tells the Truth

    Reading Richard Seymour’s The Twittering Machine is like discovering that Black Mirror isn’t speculative fiction—it’s documentary. Seymour paints our current digital reality as a propaganda-laced fever swamp, one where we aren’t just participants but livestock—bred for data, addicted to outrage, and stripped of self-agency. Watching tech-fueled sociopaths ascend to power begins to make sense once you realize that mass digital degradation is the new civic norm. We’re not on the cusp of dystopia; we’re marinating in it.

    Most of us are trapped in Seymour’s titular machine, flapping like digital pigeons in a Skinner Box, pecking for likes, retweets, or just one more dopamine hit. We scroll ourselves into a stupor, zombies hypnotized by grotesque clickbait and influencer gaucherie. And yet, a flicker of awareness remains. Some of us know our brains are rotting. We feel it in our foggy thoughts, our shortened attention spans, our craving to be “seen” by strangers.

    But Seymour offers no comfort. He cites a 2015 study where people tried to quit Facebook for 99 days. Most folded within 72 hours. Some switched to Instagram, TikTok, or Twitter—addiction by another name. Only a rare few truly escaped, and they reported something wild: clarity, peace, a sudden freedom from the exhausting treadmill of performance. They had unplugged from what philosopher Byung-Chul Han calls “gamification capitalism,” a system where every social interaction is a metric and every self is a brand.

    Seymour’s takeaway? Let’s retire the quaint euphemism “social media.” It’s not social. It’s not media in the traditional sense. It’s engineered compulsion. It’s addiction media—and we’re the lab rats with no exit key.

  • From Gutenberg to Doomscroll: A Brief History of Our Narrative Decline

    Richard Seymour, in The Twittering Machine, reminds us that writing was once a sacred act—a cerebral pilgrimage and a cultural compass. It charted the peaks of human enlightenment and the valleys of our collective idiocy. But ever since Gutenberg’s movable type cranked out the first printed tantrum, writing has also been big business. Borrowing Benedict Anderson’s term, Seymour calls this “print capitalism”—a factory of words that forged what Anderson dubbed “imagined communities,” and what Yuval Noah Harari might call humanity’s favorite pastime: building civilizations on beautifully told lies.

    But that was then. Enter the computer—a Pandora’s box with a backspace key. We haven’t just changed how we write; we’ve scrambled the very code of our narrative DNA. Seymour scoffs at the term “social media.” He prefers something more honest and unflinching: “shorthand propaganda.” After all, writing was always social—scrolls, letters, manifestos scrawled in exile. The novelty isn’t the connection; it’s the industrialization of thought. Now, we produce a firehose of content—sloppy, vapid, weaponized by ideology, and monetized by tech lords playing dopamine dealers.

    The term “social media” flatters what is more accurately a “social industry”—a Leviathan of data-harvesting, behavioral conditioning, and emotional slot machines dressed in UX sugar-coating. The so-called “friends” we collect are nothing more than pawns in a gamified economy of clout, their every click tracked, sold, and repurposed to make us addicts. Sherry Turkle wasn’t being cute when she warned that our connections were making us lonelier: she was diagnosing a slow psychological implosion.

    We aren’t writing anymore. We’re twitching. We’re chirping. We’re flapping like those emaciated birds in Paul Klee’s The Twittering Machine, spinning an axle we no longer control, serving as bait for the next poor soul. This isn’t communication. It’s entrapment, dressed up in hashtags and dopamine hits.

  • The Twittering Machine Never Sleeps

    Richard Seymour, in his searing dissection of our digital descent, The Twittering Machine, argues that our compulsive scribbling across social media isn’t a charming side effect of modern communication—it’s a horror story. He calls our affliction “scripturient,” which sounds like a medieval disease and feels like one too: the raging, unquenchable urge to write, tweet, post, blog, caption, and meme ourselves into validation. According to Seymour, we’re not sharing—we’re hemorrhaging content, possessed by the hope that someone, somewhere, will finally pay attention. The platforms lap it up, feeding on our existential howl like pigs at a trough.

    But here’s the twist: these platforms don’t just amplify our words—they mutate us. We contort into parodies of ourselves, honed for likes, sharpened for outrage. Seymour’s reference to Paul Klee’s painting The Twittering Machine isn’t just arty window dressing—it’s prophecy. In it, skeletal birds perch on a hand-cranked axle, their desperate chirps serving as bait to lure the next batch of fools into the algorithmic abyss. Once captured, those chirpers become part of the machine: chirp, crank, scroll, repeat. It’s not connection—it’s servitude with emojis.

    And yet, here I am. Writing this blog. Voluntarily. On WordPress, that semi-respectable cul-de-sac just outside the main drag of Social Media Hell. It’s not Facebook, which is a digital Thunderdome of outrage, memes, and unsolicited opinions from high school classmates you forgot existed. No, WordPress lets me stretch out. I can write without worrying that my paragraph won’t survive the swipe-happy thumbs of the attention-deficient. It feels almost…literary.

    But let’s not get smug. The moment I promote my posts on Twitter or check my analytics like a rat pressing a pellet bar, I’m caught in the same trap. I tell myself it’s different. That I’m writing for meaning, not metrics. But the line between writer and performer, between expression and spectacle, gets blurrier by the day. I’ve escaped the Twittering Machine before—unplugged, deleted, detoxed—but it still hums in the background, always ready to pull me back in with the promise of just one more click, one more like, one more little chirp of relevance.

  • You, Rewritten: Algorithmic Capture in the Age of AI

    Once upon a time, writing instructors worried about comma splices and uninspired thesis statements. Now, we’re dodging 5,000-word essays spat out by AI platforms like ChatGPT, Gemini, and Claude—essays so eerily competent they hit every benchmark on the department rubric: in-text citations, signal phrases, MLA formatting, and close readings with all the soulful depth of a fax machine reading T.S. Eliot. This is prose caught in the Uncanny Valley—syntactically flawless, yet emotionally barren, like a Stepford Wife enrolled in English 101. And since these algorithmic Franken-scripts often evade plagiarism detectors, we’re all left asking the same queasy question: What is the future of writing—and of teaching writing—in the AI Age?

    That question haunted me long enough to produce a 3,000-word prompt. But the deeper I sank into student conversations, the clearer it became: this isn’t just about writing. It’s about living. My students aren’t merely outsourcing thesis statements. They’re using AI to rewrite awkward apology texts, craft flirtatious replies on dating apps, conduct self-guided therapy with bots named “Charles” and “Luna,” and decode garbled lectures delivered by tenured mumblers. They feed syllabi into GPT to generate study guides. They get toothpaste recommendations. They draft business emails and log them in AI-curated archives. In short: ChatGPT isn’t a tool. It’s a prosthetic consciousness.

    And here’s the punchline: they see no alternative. AI isn’t a novelty; it’s a survival mechanism. In their hyper-accelerated, ultra-competitive, attention-fractured lives, AI has become as essential as caffeine and Wi-Fi. So no, I won’t be asking students to merely critique ChatGPT as a glorified spell-checker. That’s quaint. Instead, I’m introducing them to Algorithmic Capture—the quiet tyranny by which human behavior is shaped, scripted, and ultimately absorbed by optimization-driven systems. Under this logic, ambiguity is penalized, nuance is flattened, and people begin tailoring themselves to perform for the algorithmic eye. They don’t just use the machine. They become legible to it.

    For this reason, the new essay assignment doesn’t ask, “What’s the future of writing?” It asks something far more urgent: What’s happening to you? I’m having students analyze the eerily prophetic episodes of Black Mirror—especially “Joan Is Awful,” that fluorescent satire of algorithmic self-annihilation—and write about how Algorithmic Capture is reshaping their lives, identities, and choices. They won’t just be critiquing AI’s effect on prose. They’ll be interrogating the way it quietly rewrites the self.