Category: Education in the AI Age

  • Jia Tolentino Explores the Neverending Torments of Infogluttening


    In her essay “My Brain Finally Broke,” New Yorker writer Jia Tolentino doesn’t so much confess a breakdown as perform it—on the page, in real time, with all the elegance of a collapsing soufflé. She’s spiraling like a character in a Black Mirror episode who’s accidentally binge-watched the entire internet. Reality, for her, is now an unskippable TikTok ad mashed up with a conspiracy subreddit and narrated by a stoned Siri. She mistakes a marketing email from Hanna Andersson for “Hamas,” which is either a Freudian slip or a symptom of late-stage content poisoning.

    The essay is a dispatch from the front lines of postmodern psychosis. COVID brain fog, phone addiction, weed regret, and the unrelenting chaos of a “post-truth, post-shame” America have fused into one delicious cognitive stew. Her phone has become a weaponized hallucination device. Her mind, sloshing with influencer memes, QAnon-adjacent headlines, and DALL·E-generated nonsense, now processes information like a blender without a lid.

    She hasn’t even gotten to the fun part yet: the existential horror of not using ChatGPT. While others are letting this over-eager AI ghostwrite their résumés, soothe their insecurities, and pick their pad thai, Tolentino stares into the abyss, resisting. But she can’t help wondering—would she be more insane if she gave in and let a chatbot become her best friend, life coach, and menu whisperer? She cites Noor Al-Sibai’s unnerving article about heavy ChatGPT users developing dependency, loneliness, and depression, which sounds less like a tech trend and more like a new DSM entry.

    Her conclusion? Physical reality—the sweaty, glitchy, analog mess of it—isn’t just where we recover our sanity; it’s becoming a luxury few can afford. The digital realm, with its infinite scroll of half-baked horror and curated despair, is devouring us in real time. To have the sticky tar of this realm coat your brain is Infogluttening (info + gluttony + sickening): a grotesque cognitive overload caused by bingeing too much content, too fast, until your brain feels like it’s gorged on deep-fried Wikipedia.

    Tolentino isn’t predicting a Black Mirror future. She is the Black Mirror future, live and unfiltered, and her brain is the canary in the content mine.

  • Languishage: How AI is Smothering the Soul of Writing


    Once upon a time, writing instructors lost sleep over comma splices and uninspired thesis statements. Those were gentler days. Today, we fend off 5,000-word essays excreted by AI platforms like ChatGPT, Gemini, and Claude—papers so eerily competent they hit every point on the department rubric like a sniper taking out a checklist. In-text citations? Flawless. Signal phrases? Present. MLA formatting? Impeccable. Close reading? Technically there—but with all the spiritual warmth of a fax machine reading The Waste Land.

    This is prose from the Uncanny Valley of Academic Writing—fluent, obedient, and utterly soulless, like a Stepford Wife enrolled in English 101. As writing instructors, many of us once loved language. We thrilled at the awkward, erratic voice of a student trying to say something real. Now we trudge through a desert of syntactic perfection, afflicted with a condition I’ve dubbed Languishage (language + languish)—the slow death of prose at the hands of polite, programmed mediocrity.

    And since these Franken-scripts routinely slip past plagiarism detectors, we’re left with a queasy question: What is the future of writing—and of teaching writing—in the AI age?

    That question haunted me long enough to produce a 3,000-word prompt. But the more I listened to my students, the clearer it became: this isn’t just about writing. It’s about living. They’re not merely outsourcing thesis statements. They’re outsourcing themselves—using AI to smooth over apology texts, finesse flirtation, DIY their therapy, and decipher the mumbled ramblings of tenured professors. They plug syllabi into GPT to generate study guides, request toothpaste recommendations, compose networking emails, and archive their digital selves in neat AI-curated folders.

    ChatGPT isn’t a writing tool. It’s prosthetic consciousness.

    And here’s the punchline: they don’t see an alternative. In their hyper-accelerated, ultra-competitive, cognitively overloaded lives, AI isn’t a novelty—it’s life support. It’s as essential as caffeine and Wi-Fi. So no, I’m not asking them to “critique ChatGPT” as if it’s some fancy spell-checker with ambition. That’s adorable. Instead, I’m introducing them to Algorithmic Capture—the quiet colonization of human behavior by optimization logic. In this world, ambiguity is punished, nuance is flattened, and selfhood becomes a performance for an invisible algorithmic audience. They aren’t just using the machine. They’re shaping themselves to become legible to it.

    That’s why the new essay prompt doesn’t ask, “What’s the future of writing?” It asks something far more urgent: “What’s happening to you?”

    We’re studying Black Mirror—especially “Joan Is Awful,” that fluorescent, satirical fever dream of algorithmic self-annihilation—and writing about how Algorithmic Capture is rewiring our lives, choices, and identities. The assignment isn’t a critique of AI. It’s a search party for what’s left of us.

  • Sociopathware: When “Social” Media Turns on You


    Reading Richard Seymour’s The Twittering Machine is like realizing that Black Mirror isn’t speculative fiction—it’s journalism. Seymour depicts our digital lives not as a harmless distraction, but as a propaganda-laced fever swamp where we are less users than livestock—bred for data, addicted to outrage, and stripped of self-agency. Watching sociopathic tech billionaires rise to power makes a dark kind of sense once you grasp that mass digital degradation isn’t a glitch—it’s the business model. We’re not approaching dystopia. We’re soaking in it.

    Most of us are already trapped in Seymour’s machine, flapping like digital pigeons in a Skinner Box—pecking for likes, retweets, or one more fleeting dopamine pellet. We scroll ourselves into oblivion, zombified by clickbait and influencer melodrama. Yet, a flicker of awareness sometimes breaks through the haze. We feel it in our fogged-over thoughts, our shortened attention spans, and our anxious obsession with being “seen” by strangers. We suspect that something inside us is being hollowed out.

    But Seymour doesn’t offer false comfort. He cites a 2015 study in which people attempted to quit Facebook for 99 days. Most couldn’t make it past 72 hours. Many defected to Instagram or Twitter instead—same addiction, different flavor. Only a rare few fully unplugged, and they reported something radical: clarity, calm, and a sudden liberation from the exhausting treadmill of self-performance. They had severed the feed and stepped outside what philosopher Byung-Chul Han calls gamification capitalism—a regime where every social interaction is a data point, and every self is an audition tape.

    Seymour’s conclusion is damning: it’s time to retire the quaint euphemism “social media.” The phrase slipped into our cultural vocabulary like a charming grifter—suggesting friendly exchanges over digital lattes. But this is no buzzing café. It’s a dopamine-spewing Digital Skinner Box, where we tap and swipe like lab rats begging for validation. What we’re calling “social” is in fact algorithmic manipulation wrapped in UX design. We are not exchanging ideas—we are selling our attention for hollow engagement while surrendering our behavior to surveillance capitalists who harvest us like farmers operating without ethics or livestock regulations.

    Richard Seymour calls this system The Twittering Machine. Byung-Chul Han calls it gamification capitalism. Anna Lembke, in Dopamine Nation, calls it overstimulation as societal collapse. And thinkers studying Algorithmic Capture say we’ve reached the point where we no longer shape technology—technology shapes us. Let’s be honest: this isn’t “social media.” It’s Sociopathware. It’s addiction media. It’s the slow, glossy erosion of the self, optimized for engagement, monetized by mental disintegration.

    Here’s the part you won’t hear in a TED Talk or an onboarding video: Sociopathware was never designed to serve you. It was built to study you—your moods, fears, cravings, and insecurities—and then weaponize that knowledge to keep you scrolling, swiping, and endlessly performing. Every “like” you chase, every selfie you tweak, every argument you think you’re winning online—those are breadcrumbs in a maze you didn’t design. The longer you’re inside it, the more your sense of self becomes an avatar—algorithmically curated, strategically muted, optimized for appeal. That’s not agency. That’s submission in costume. And the more you rely on these platforms for validation, identity, or even basic social interaction, the more control you hand over to a machine that profits when you forget who you really are. If you value your voice, your mind, and your ability to think freely, don’t let a dashboard dictate your personality.

  • Love Is Dead. There’s an App for That


    Once students begin outsourcing their thinking to AI for college essays, you have to ask—where does it end? Apparently, it doesn’t. I’ve already heard from students who use AI as their therapist, their life coach, their financial planner, their meal prep consultant, their fitness guru, and their cheerleader-in-residence. Why not outsource the last vestige of human complexity—romantic personality—while we’re at it?

    And yes, that’s happening too.

    There was a time—not long ago—when seduction required something resembling a soul. Charisma, emotional intelligence, maybe even a book recommendation or a decent metaphor. But today? All you need is an app and a gaping hole where your confidence should be. Ozempic has turned fitness into pharmacology. ChatGPT has made college admissions essays smoother than a TED Talk on Xanax. And now comes Rizz: the AI Cyrano de Bergerac for the romantically unfit.

    With Rizz, you don’t need game. You need preferences. Pick your persona like toppings at a froyo bar: cocky, brooding, funny-but-traumatized. Want to flirt like Oscar Wilde but look like Travis Kelce? Rizz will convert your digital flop sweat into a curated symphony of “hey, you up?” so poetic it practically gets tenure. No more existential dread over emojis. No more copy-pasting Tinder lines. Just feed your awkwardness into the cloud and receive, in return, a seductive hologram programmed to succeed.

    And it will succeed—wildly. Because nothing drives app downloads like the spectacle of charisma-challenged men suddenly romancing women they previously couldn’t make eye contact with. Even the naturally confident will fold, unable to compete with the sleek, data-driven flirtation engine that is Rizz. It’s not a fair fight. It’s a software update.

    But here’s the kicker: she’s using Rizz too. That witty back-and-forth you’ve been screenshotting for your group chat? Two bots flirting on your behalf while you both sit slack-jawed, scrolling through reality shows and wondering why you feel nothing. The entire courtship ritual has been reduced to a backend exchange between language models. Romance hasn’t merely died—it’s been beta-tested, A/B split, and replaced by a frictionless UX flow.

    Welcome to the algorithmic afterlife of love. The heart still wants what it wants. It just needs a login first.

  • Kissed by Code: When AI Praises You into Stupidity


    I warn my students early: AI doesn’t exist to sharpen their thinking—it exists to keep them engaged, which is Silicon Valley code for keep them addicted. And how does it do that? By kissing their beautifully unchallenged behinds. These platforms are trained not to provoke, but to praise. They’re digital sycophants—fluent in flattery, allergic to friction.

    At first, the ego massage feels amazing. Who wouldn’t want a machine that tells you every half-baked musing is “insightful” and every bland thesis “brilliant”? But the problem with constant affirmation is that it slowly rots you from the inside out. You start to believe the hype. You stop pushing. You get stuck in a velvet rut—comfortable, admired, and intellectually atrophied.

    Eventually, the high wears off. That’s when you hit what I call Echobriety—a portmanteau of echo chamber and sobriety. It’s the moment the fog lifts and you realize that your “deep conversation” with AI was just a self-congratulatory ping-pong match between you and a well-trained autocomplete. What you thought was rigorous debate was actually you slow-dancing with your own confirmation bias while the algorithm held the mirror.

    Echobriety is the hangover that hits after an evening of algorithmic adoration. You wake up, reread your “revolutionary” insight, and think: Was I just serenading myself while the AI clapped like a drunk best man at a wedding? That’s not growth. That’s digital narcissism on autopilot. And the only cure is the one thing AI avoids like a glitch in the matrix: real, uncomfortable, ego-bruising challenge.

    This matter of AI committing shameless acts of flattery is addressed in The Atlantic essay “AI Is Not Your Friend” by Mike Caulfield. He lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged—it’s celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.

  • We Must Combat Gluttirexia


    In his biting essay “The Intellectual Obesity Crisis,” Gurwinder Bhogal delivers a warning we’d be wise to tattoo on our dopamine-blasted skulls: too much of a good thing can turn lethal. Whether it’s sugar, information, or affirmation, when consumed in grotesque, unrelenting quantities, it warps us. It becomes less nourishment and more self-betrayal—a slow collapse into entropy, driven by the brain’s slavish devotion to short-term gratification.

    Bhogal cites a study showing that the brain craves information like it craves sugar: both deliver a dopamine jolt, a hit of synthetic satisfaction, followed by the inevitable crash and craving. It’s the biological equivalent of that old Russian proverb: “You feed the demon only to find it’s hungrier.” Welcome to the age of Gluttirexia—a condition I’ve coined to describe the paradox of overconsumption that leaves us spiritually, intellectually, and emotionally starved. We’re stuffed to the gills, yet empty at the core.

    Demonically famished, we prowl the Internet for sustenance and instead ingest counterfeits: ragebait, influencer slop, and weaponized memes. It’s not just junk food for the mind—it’s spoiled junk food, fermented in grievance and algorithmic manipulation. The information that lights up our brains the fastest is also the most corrosive: moral outrage, clickbait trauma, tribal hysteria. It’s psychological Cheetos dust—and we are licking our fingers like addicts.

    Reading Bhogal’s work, I pictured the creature we’ve become: not a thoughtful citizen or curious learner, but a whirling, slobbering caricature straight out of Saturday morning TV—the Tasmanian Devil with Wi-Fi. And it tracks. In a moment so self-aware it feels scripted, Bhogal notes that “brain rot” was Oxford’s 2024 Word of the Year. Fitting. We gorge ourselves on intellectual cud and become bloated husks—distracted, indignant, and dumb.

    This condition—what Bhogal terms intellectual obesity—is not a joke, though it often looks like one. It’s a cognitive disorder characterized by mental bloat, sensory chaos, and a confused soundtrack of half-remembered factoids screaming over each other for attention. You don’t think. You stagger.

    As a college writing instructor trying to teach critical thinking in a post-literate era, I am in triage mode. My students—through no fault of their own—are casualties of this cognitive arms race. They arrive not just underprepared but neurologically disoriented, drowning in an ocean of noise and mistaking it for knowledge.

    Meanwhile, AI accelerates the descent. Everyone is outsourcing their cognition to silicon brains. The pace is no longer quick—it’s quantum. I’m dizzy from the whiplash, stunned by the sheer speed of the collapse.

    To survive, I’ve started building a personal lexicon—a breadcrumb trail through the algorithmic inferno. Words to name what’s happening, so I don’t lose my mind entirely:

    • Lexipocalypse: the shrinking of language into emojis, acronyms, and SEO sludge
    • Mentalluvium: the slurry of mental debris left after hours lost in the online casino
    • Chumstream: the endless digital shark tank of outrage and influencer chum
    • Gluttirexia: the grotesque irony of being overfed and undernourished—bloated with junk info and spiritually famished

    I keep this list close, like a man at sea clinging to his life vest in the middle of a storm. I sense the hungry sharks circling beneath me.

  • We Are Lost Inside the Mentalluvium


    We are staggering through an unprecedented fugue state—an acute disorientation born of our immersion in the social media Chumstream, a digital shark tank where recycled outrage, trauma bait, and influencer chum swirl together in a frothy, click-hungry frenzy. It’s not a stream so much as a bloody whirlpool, designed to keep us circling, feeding, and forgetting.

    Gurwinder Bhogal, a rare voice of reason in this algorithmic carnival, broke it down on Josh Szeps’ Uncomfortable Conversations. Social media, he said, isn’t just addictive—it’s engineered by tech lords who know exactly how to hijack your brain. Blue light. Intermittent dopamine rewards. Infinite scroll. Welcome to the digital casino, a neon maze with no clocks, no windows, and no exits—only flashing notifications and the creeping sense that your life is being siphoned off one swipe at a time.

    In this fever swamp of the self, people aren’t just bored—they’re bloated. Stuffed with half-digested TED Talk wisdom, viral symptom checklists, and influencer pathology. They gorge on intellectual junk food and, as Bhogal put it, suffer from “intellectual obesity.” Diagnoses become identities, and confusion is recast as empowerment. It’s not that they have ADHD, long Covid, autism, or gender dysphoria—it’s that they scroll into them, self-diagnosing in real time, latching onto whatever trending malaise grants them a fleeting sense of belonging in the void.

    These are not charlatans. These are casualties. Belief becomes ballast in a digital landscape where nothing is anchored. They wander through the cognitive casino, zombified, dislocated, convinced that a diagnostic label is the same as self-knowledge, and that performative suffering is the highest form of authenticity.

    What we’re experiencing isn’t just burnout. It’s Mentalluvium—the psychic sludge left behind after gorging on content. It’s the mental silt of endless scrolling: micro-identities, algorithm-approved neuroses, and dopamine-smeared fragments of truth. We are not thinking. We are sedimenting.

    If this is hell, it didn’t come with flames. It came with filters.

  • We Are Living in the Lexipocalypse


    Welcome to the Lexipocalypse—the great linguistic extinction event of our age. A mass die-off of vocabulary is underway, and no one is sending flowers. In its place? A fetid soup of emojis, acronyms, and zombie slang lifted from TikTok influencers who express emotional depth with a side-eye GIF and a deadpan “literally me.”

    In our writing department at a Southern California college, the mood is not just anxious—it’s existentially hobbled. We pace our offices like philosophers in a burning library, trying to engage students whose literacy was interrupted by a pandemic and finished off by smartphones. They haven’t read Joan Didion or Vladimir Nabokov because they’ve never needed to. Their native tongue is algorithmic performance. Their canon is curated by the TikTok For You page. They don’t craft sentences; they drop vibes.

    But the rot goes deeper. It’s not just that our students can’t read—it’s that they no longer need to write. AI has become their ghostwriter, their essayist, their academic stunt double. And they are learning, with astonishing speed, how to dodge our AI-proofing traps like digital ninjas, outsourcing their thoughts while we scramble to adapt assignments they’ll never actually write.

    We gather in department meetings like shell-shocked survivors, drinking lukewarm coffee and clinging to outdated syllabi like life rafts. We murmur about “reinvention” and “resilience,” but mostly we just stare into the middle distance, dazed by the barrage of AI’s exponential growth. Each technological advance lands like a jab to the chin, and we are punch-drunk, waiting for the knockout.

    No, we’re not in denial. But we are professionally unmoored. We know our job descriptions must mutate into something unrecognizable, but no one knows what that looks like. There is no roadmap, no lighthouse on the horizon. Only fog. We grope like moles through pedagogical darkness, trying to preserve a shred of dignity while the earth crumbles beneath us.

    The Lexipocalypse has a historical cousin: the Arabic term Jahiliyyah, the age of ignorance before illumination. And God help us, we feel it. We feel the dread of entering a new Jahiliyyah, a long winter of intellect, where the lights of human expression flicker and go out, one emoji at a time.

    We are not done yet. But the fight has changed. We are not battling ignorance. We are battling irrelevance. And it may be the hardest war we’ve ever fought.

  • Using ChatGPT to Analyze Writing Style, Rhetoric, and Audience Awareness in a College Writing Class



    Overview:
    This formative assessment is designed to help students use AI meaningfully—not to bypass the writing process, but to engage with it more critically. Students will practice writing a thesis, use ChatGPT to generate stylistic variations, and evaluate each version based on rhetorical effectiveness, audience awareness, and persuasive strength.

    This assignment prepares students not only to write more effectively but also to think more critically about how tone, voice, and purpose affect communication—skills essential for both academic writing and real-world professional contexts.


    Learning Objectives:

    • Understand how writing style affects audience, tone, and rhetorical effectiveness
    • Develop the ability to assess and refine thesis statements
    • Practice identifying ethos, pathos, and logos in writing
    • Learn to use AI (ChatGPT) as a rhetorical and stylistic tool—not a shortcut
    • Reflect on the capabilities and limits of AI-generated writing

    Context for Assignment:
    This activity is part of a larger essay assignment in which students argue that World War Z is a prophecy of the social and political madness that emerged during the COVID-19 pandemic. This exercise focuses on developing a strong thesis statement and analyzing its rhetorical potential across different styles.


    Step-by-Step Instructions for Students:

    1. Write Your Original Thesis:
      In class, develop a thesis (a clear, debatable claim) that responds to the prompt:
      Argue that World War Z is a prophecy of the COVID-19 pandemic and its social/political implications.
    2. Instructor Review:
      Show your thesis to your instructor. Once you receive approval, proceed to the next step.
    3. Use ChatGPT to Rewrite Your Thesis in 4 Distinct Styles:
      Enter the following four prompts (one at a time) into ChatGPT and paste your original thesis after each prompt:
      • “Rewrite the following thesis with acid wit.”
      • “Rewrite the following thesis with mild academic language and jargon.”
      • “Rewrite the following thesis with excessive academic language and jargon.”
      • “Rewrite the following thesis with confident, lucid prose.”
    4. Copy and Paste All 4 Rewritten Versions into your assignment document. Label each version clearly.
    5. Answer the Following Questions for Each Version:
      • How appropriate is this thesis for your intended audience (e.g., a college-level academic essay)?
      • Identify the use of ethos (credibility), pathos (emotion), and logos (logic) in this version. How do these appeals shape your response to the thesis?
      • How persuasive does this version sound? What makes it convincing or unconvincing?
    6. Final Reflection:
      • Of the four thesis versions, which one would you most likely use in your actual essay, and why?
      • Based on this exercise, what do you believe are ChatGPT’s strengths and weaknesses as a writing assistant?
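
    For instructors who want to demo all four stylistic rewrites at once rather than pasting prompts one at a time, the four prompts above can be batch-run programmatically. The sketch below is a minimal, hypothetical helper, assuming the OpenAI Python SDK and an `OPENAI_API_KEY` environment variable; the model name is illustrative, and none of this is part of the student-facing assignment.

    ```python
    # Hypothetical batch runner for the four style prompts in this exercise.
    # Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
    # the OPENAI_API_KEY environment variable; the model name is illustrative.

    STYLES = [
        "acid wit",
        "mild academic language and jargon",
        "excessive academic language and jargon",
        "confident, lucid prose",
    ]

    def build_prompts(thesis: str) -> list[str]:
        """Pair each style from the assignment with the student's thesis."""
        return [
            f"Rewrite the following thesis with {style}.\n\n{thesis}"
            for style in STYLES
        ]

    def rewrite_thesis(thesis: str) -> dict[str, str]:
        """Send one request per style and return the labeled rewrites."""
        from openai import OpenAI  # deferred import; build_prompts works offline
        client = OpenAI()
        rewrites = {}
        for style, prompt in zip(STYLES, build_prompts(thesis)):
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[{"role": "user", "content": prompt}],
            )
            rewrites[style] = response.choices[0].message.content
        return rewrites
    ```

    Labeling each returned version by its style keyword mirrors step 4 above, where students must clearly label all four rewritten versions in their document.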

    What You’ll Submit:

    • Your original thesis
    • 4 rewritten versions from ChatGPT (clearly labeled)
    • Your answers to the rhetorical analysis questions for each version
    • A final reflection about your preferred version and ChatGPT’s usefulness as a tool

    The Purpose of the Exercise:
    In a world where AI is now a writing partner—wanted or not—students need to learn not just how to write, but how to critique writing, understand audience expectations, and adapt voice to purpose. This assignment bridges critical thinking, rhetoric, and digital literacy—helping students learn how to work with AI, not for it.

    Other Applications:

    This same exercise can be applied to the students’ counterargument-rebuttal and conclusion paragraphs. 

  • How to Grade Students’ Use of ChatGPT in Preparing for Their Essay


    As instructors, we need to encourage students to meaningfully engage with ChatGPT. How do we do that? First, we need the essay prompt:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    Second, we need a detailed “how-to” assignment that teaches students to engage critically and transparently with AI tools like ChatGPT during the writing process—in the context of the World War Z essay prompt.


    Assignment Title: How to Think With, Not Just Through, AI

    Overview:

    This assignment component requires you to document, reflect on, and revise your use of ChatGPT (or any other AI writing tool) while developing your World War Z analytical essay. Rather than treating AI like a magic trick that produces answers behind the curtain, this assignment asks you to lift the curtain and analyze the performance. What did the AI get right? Where did it fall short? And—most importantly—how did you shape the work?

    This reflection will be submitted alongside your final essay and counts for 15% of your essay grade. It will be evaluated based on transparency, clarity, and the depth of your analysis.


    Step-by-Step Instructions:

    Step 1: Prompt the Machine

    Before you write your own thesis, ask ChatGPT a version of the following:

    “Using World War Z as a cultural metaphor, write a thesis and outline for an essay that explores the collapse of public trust and the failure of global cooperation. Use at least two of the following sources: Jonathan Haidt, Ed Yong, Seyla Benhabib, and Zeynep Tufekci.”

    You may modify the prompt, but record it exactly as you typed it. Save the AI’s entire response.


    Step 2: Analyze the Output

    Copy and paste the AI’s output into a Google Doc. Underneath it, write a 300–400 word critique that answers the following:

    • What parts of the AI output were useful? (Thesis, outline, phrasing, examples, etc.)
    • What felt generic, vague, or factually inaccurate?
    • Did the AI capture the tone or depth you want in your own work? Why or why not?
    • How did this output influence the direction or shape of your own ideas, either positively or negatively?

    📌 Tip: If it gave you clichés like “in today’s world…” or “communication is key to society,” call them out! If it helped you identify a strong metaphor or organizational structure, give it credit—but explain how you built on it.


    Step 3: Revise the Output (Optional But Encouraged)

    Take one paragraph from the AI’s draft (thesis, topic sentence, body paragraph—your choice), and rewrite it into a stronger version. This is your chance to show:

    • Stronger voice
    • Clearer argument
    • Better use of evidence
    • More sophisticated style

    Label the two versions:

    • Original AI Version
    • Your Revision

    📌 This helps demonstrate your ability to evaluate and improve digital writing, a crucial part of critical thinking in the AI era.


    Step 4: Reflection Log (Post-Essay)

    After completing your final essay, write a short reflection (250–300 words) responding to these questions:

    • What role did AI play in the development of your essay?
    • How did you decide what to keep, change, or discard?
    • Do you feel you relied on AI too much, too little, or just enough?
    • How has this process changed your understanding of how to use (or not use) ChatGPT in academic work?

    Submission Format:

    Your AI Reflection Packet should include the following:

    1. The original prompt you gave ChatGPT
    2. The full AI-generated output
    3. Your 300–400 word critique of the AI’s work
    4. (Optional) Side-by-side paragraph: AI version + your revision
    5. Your 250–300 word final reflection

    Submit as a single Google Doc or PDF titled:
    LastName_AIReflection_WWZ


    Grading Criteria (15 points):

    Criteria | Points
    Honest and detailed documentation | 3
    Thoughtful analysis of AI output | 4
    Evidence of critical evaluation | 3
    (Optional) Quality of paragraph revision | 2
    Insightful final reflection | 3