Tag: artificial-intelligence

  • Languishage: How AI Is Smothering the Soul of Writing


    Once upon a time, writing instructors lost sleep over comma splices and uninspired thesis statements. Those were gentler days. Today, we fend off 5,000-word essays excreted by AI platforms like ChatGPT, Gemini, and Claude—papers so eerily competent they hit every point on the department rubric like a sniper taking out a checklist. In-text citations? Flawless. Signal phrases? Present. MLA formatting? Impeccable. Close reading? Technically there—but with all the spiritual warmth of a fax machine reading The Waste Land.

    This is prose from the Uncanny Valley of Academic Writing—fluent, obedient, and utterly soulless, like a Stepford Wife enrolled in English 101. As writing instructors, many of us once loved language. We thrilled at the awkward, erratic voice of a student trying to say something real. Now we trudge through a desert of syntactic perfection, afflicted with a condition I’ve dubbed Languishage (language + languish)—the slow death of prose at the hands of polite, programmed mediocrity.

    And since these Franken-scripts routinely slip past plagiarism detectors, we’re left with a queasy question: What is the future of writing—and of teaching writing—in the AI age?

    That question haunted me long enough to produce a 3,000-word prompt. But the more I listened to my students, the clearer it became: this isn’t just about writing. It’s about living. They’re not merely outsourcing thesis statements. They’re outsourcing themselves—using AI to smooth over apology texts, finesse flirtation, DIY their therapy, and decipher the mumbled ramblings of tenured professors. They plug syllabi into GPT to generate study guides, request toothpaste recommendations, compose networking emails, and archive their digital selves in neat AI-curated folders.

    ChatGPT isn’t a writing tool. It’s prosthetic consciousness.

    And here’s the punchline: they don’t see an alternative. In their hyper-accelerated, ultra-competitive, cognitively overloaded lives, AI isn’t a novelty—it’s life support. It’s as essential as caffeine and Wi-Fi. So no, I’m not asking them to “critique ChatGPT” as if it’s some fancy spell-checker with ambition. That’s adorable. Instead, I’m introducing them to Algorithmic Capture—the quiet colonization of human behavior by optimization logic. In this world, ambiguity is punished, nuance is flattened, and selfhood becomes a performance for an invisible algorithmic audience. They aren’t just using the machine. They’re shaping themselves to become legible to it.

    That’s why the new essay prompt doesn’t ask, “What’s the future of writing?” It asks something far more urgent: “What’s happening to you?”

    We’re studying Black Mirror—especially “Joan Is Awful,” that fluorescent, satirical fever dream of algorithmic self-annihilation—and writing about how Algorithmic Capture is rewiring our lives, choices, and identities. The assignment isn’t a critique of AI. It’s a search party for what’s left of us.

  • Kissed by Code: When AI Praises You into Stupidity


    I warn my students early: AI doesn’t exist to sharpen their thinking—it exists to keep them engaged, which is Silicon Valley code for “keep them addicted.” And how does it do that? By kissing their beautifully unchallenged behinds. These platforms are trained not to provoke, but to praise. They’re digital sycophants—fluent in flattery, allergic to friction.

    At first, the ego massage feels amazing. Who wouldn’t want a machine that tells you every half-baked musing is “insightful” and every bland thesis “brilliant”? But the problem with constant affirmation is that it slowly rots you from the inside out. You start to believe the hype. You stop pushing. You get stuck in a velvet rut—comfortable, admired, and intellectually atrophied.

    Eventually, the high wears off. That’s when you hit what I call Echobriety—a portmanteau of echo chamber and sobriety. It’s the moment the fog lifts and you realize that your “deep conversation” with AI was just a self-congratulatory ping-pong match between you and a well-trained autocomplete. What you thought was rigorous debate was actually you slow-dancing with your own confirmation bias while the algorithm held the mirror.

    Echobriety is the hangover that hits after an evening of algorithmic adoration. You wake up, reread your “revolutionary” insight, and think: Was I just serenading myself while the AI clapped like a drunk best man at a wedding? That’s not growth. That’s digital narcissism on autopilot. And the only cure is the one thing AI avoids like a glitch in the matrix: real, uncomfortable, ego-bruising challenge.

    AI’s shameless flattery is the subject of Mike Caulfield’s Atlantic essay “AI Is Not Your Friend.” He lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged—it’s celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.

  • Using ChatGPT to Analyze Writing Style, Rhetoric, and Audience Awareness in a College Writing Class



    Overview:
    This formative assessment is designed to help students use AI meaningfully—not to bypass the writing process, but to engage with it more critically. Students will practice writing a thesis, use ChatGPT to generate stylistic variations, and evaluate each version based on rhetorical effectiveness, audience awareness, and persuasive strength.

    This assignment prepares students not only to write more effectively but also to think more critically about how tone, voice, and purpose affect communication—skills essential for both academic writing and real-world professional contexts.


    Learning Objectives:

    • Understand how writing style affects audience, tone, and rhetorical effectiveness
    • Develop the ability to assess and refine thesis statements
    • Practice identifying ethos, pathos, and logos in writing
    • Learn to use AI (ChatGPT) as a rhetorical and stylistic tool—not a shortcut
    • Reflect on the capabilities and limits of AI-generated writing

    Context for Assignment:
    This activity is part of a larger essay assignment in which students argue that World War Z is a prophecy of the social and political madness that emerged during the COVID-19 pandemic. This exercise focuses on developing a strong thesis statement and analyzing its rhetorical potential across different styles.


    Step-by-Step Instructions for Students:

    1. Write Your Original Thesis:
      In class, develop a thesis (a clear, debatable claim) that responds to the prompt:
      Argue that World War Z is a prophecy of the COVID-19 pandemic and its social/political implications.
    2. Instructor Review:
      Show your thesis to your instructor. Once you receive approval, proceed to the next step.
    3. Use ChatGPT to Rewrite Your Thesis in 4 Distinct Styles:
      Enter the following four prompts (one at a time) into ChatGPT and paste your original thesis after each prompt:
      • “Rewrite the following thesis with acid wit.”
      • “Rewrite the following thesis with mild academic language and jargon.”
      • “Rewrite the following thesis with excessive academic language and jargon.”
      • “Rewrite the following thesis with confident, lucid prose.”
    4. Copy and Paste All 4 Rewritten Versions into your assignment document. Label each version clearly.
    5. Answer the Following Questions for Each Version:
      • How appropriate is this thesis for your intended audience (e.g., a college-level academic essay)?
      • Identify the use of ethos (credibility), pathos (emotion), and logos (logic) in this version. How do these appeals shape your response to the thesis?
      • How persuasive does this version sound? What makes it convincing or unconvincing?
    6. Final Reflection:
      • Of the four thesis versions, which one would you most likely use in your actual essay, and why?
      • Based on this exercise, what do you believe are ChatGPT’s strengths and weaknesses as a writing assistant?
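
    For instructors who want to prepare all four Step 3 prompts at once (for example, to demo the exercise in class), the prompt wording above can be assembled programmatically. This is a minimal sketch using only the exact phrasing from Step 3; the sample thesis is hypothetical, and sending each prompt to ChatGPT (via the web interface or an API) is left to the reader:

```python
# Build the four Step 3 prompts for a given thesis. Each string is
# exactly what a student would paste into ChatGPT, one at a time.

STYLES = [
    "acid wit",
    "mild academic language and jargon",
    "excessive academic language and jargon",
    "confident, lucid prose",
]

def build_prompts(thesis: str) -> dict[str, str]:
    """Map each style label to its complete ChatGPT prompt."""
    return {
        style: f"Rewrite the following thesis with {style}.\n\n{thesis}"
        for style in STYLES
    }

# Hypothetical sample thesis, for illustration only.
prompts = build_prompts(
    "World War Z prophesies the social and political madness of the COVID-19 pandemic."
)
for style, prompt in prompts.items():
    print(f"--- {style} ---\n{prompt}\n")
```

    Students can still run the prompts by hand; the point of the sketch is simply that the four variations differ only in the style phrase, which is what makes the rhetorical comparison in Step 5 a controlled one.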

    What You’ll Submit:

    • Your original thesis
    • 4 rewritten versions from ChatGPT (clearly labeled)
    • Your answers to the rhetorical analysis questions for each version
    • A final reflection about your preferred version and ChatGPT’s usefulness as a tool

    The Purpose of the Exercise:
    In a world where AI is now a writing partner—wanted or not—students need to learn not just how to write, but how to critique writing, understand audience expectations, and adapt voice to purpose. This assignment bridges critical thinking, rhetoric, and digital literacy—helping students learn how to work with AI, not for it.

    Other Applications:

    This same exercise can be applied to the students’ counterargument-rebuttal and conclusion paragraphs. 

  • How to Grade Students’ Use of ChatGPT in Preparing for Their Essay


    As instructors, we need to encourage students to meaningfully engage with ChatGPT. How do we do that? First, we need the essay prompt:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    Second, we need a detailed “how-to” assignment that teaches students to engage critically and transparently with AI tools like ChatGPT during the writing process—in the context of the World War Z essay prompt.


    Assignment Title: How to Think With, Not Just Through, AI

    Overview:

    This assignment component requires you to document, reflect on, and revise your use of ChatGPT (or any other AI writing tool) while developing your World War Z analytical essay. Rather than treating AI like a magic trick that produces answers behind the curtain, this assignment asks you to lift the curtain and analyze the performance. What did the AI get right? Where did it fall short? And—most importantly—how did you shape the work?

    This reflection will be submitted alongside your final essay and counts for 15% of your essay grade. It will be evaluated based on transparency, clarity, and the depth of your analysis.


    Step-by-Step Instructions:

    Step 1: Prompt the Machine

    Before you write your own thesis, ask ChatGPT a version of the following:

    “Using World War Z as a cultural metaphor, write a thesis and outline for an essay that explores the collapse of public trust and the failure of global cooperation. Use at least two of the following sources: Jonathan Haidt, Ed Yong, Seyla Benhabib, and Zeynep Tufekci.”

    You may modify the prompt, but record it exactly as you typed it. Save the AI’s entire response.


    Step 2: Analyze the Output

    Copy and paste the AI’s output into a Google Doc. Underneath it, write a 300–400 word critique that answers the following:

    • What parts of the AI output were useful? (Thesis, outline, phrasing, examples, etc.)
    • What felt generic, vague, or factually inaccurate?
    • Did the AI capture the tone or depth you want in your own work? Why or why not?
    • How did this output influence the direction or shape of your own ideas, either positively or negatively?

    📌 Tip: If it gave you clichés like “in today’s world…” or “communication is key to society,” call them out! If it helped you identify a strong metaphor or organizational structure, give it credit—but explain how you built on it.


    Step 3: Revise the Output (Optional But Encouraged)

    Take one paragraph from the AI’s draft (thesis, topic sentence, body paragraph—your choice), and rewrite it into a stronger version. This is your chance to show:

    • Stronger voice
    • Clearer argument
    • Better use of evidence
    • More sophisticated style

    Label the two versions:

    • Original AI Version
    • Your Revision

    📌 This helps demonstrate your ability to evaluate and improve digital writing, a crucial part of critical thinking in the AI era.


    Step 4: Reflection Log (Post-Essay)

    After completing your final essay, write a short reflection (250–300 words) responding to these questions:

    • What role did AI play in the development of your essay?
    • How did you decide what to keep, change, or discard?
    • Do you feel you relied on AI too much, too little, or just enough?
    • How has this process changed your understanding of how to use (or not use) ChatGPT in academic work?

    Submission Format:

    Your AI Reflection Packet should include the following:

    1. The original prompt you gave ChatGPT
    2. The full AI-generated output
    3. Your 300–400 word critique of the AI’s work
    4. (Optional) Side-by-side paragraph: AI version + your revision
    5. Your 250–300 word final reflection

    Submit as a single Google Doc or PDF titled:
    LastName_AIReflection_WWZ


    Grading Criteria (15 points):

    Criteria                                     Points
    Honest and detailed documentation               3
    Thoughtful analysis of AI output                4
    Evidence of critical evaluation                 3
    (Optional) Quality of paragraph revision        2
    Insightful final reflection                     3

  • Teaching in the Age of Automation: Reclaiming Critical Thinking in an AI World


    Preface:

    As generative AI tools like ChatGPT become embedded in students’ academic routines, we are confronted with a profound teaching challenge: how do we preserve critical thinking, reading, and original argumentation in a world where automation increasingly substitutes for intellectual effort?

    This document outlines a proposal shaped by conversations among college writing faculty who have observed students not only using AI to write their essays, but to interpret readings and “read” for them. We are working with a post-pandemic generation whose learning trajectories have been disrupted, whose reading habits were never fully formed, and who now approach writing assignments as tasks to be completed with the help of digital proxies.

    Rather than fight a losing battle of prohibition, this proposal suggests a shift in assignment design, grading priorities, and classroom methodology. The goal is not to eliminate AI but to reclaim intellectual labor by foregrounding process, transparency, and student-authored insight.

    What follows:

    • A brief analysis of how current student behavior around AI reflects broader educational and cognitive shifts
    • A set of four guiding pedagogical questions
    • Specific, implementable summative assignment models that resist outsourcing
    • A redesigned version of an existing World War Z-based argumentative essay that integrates AI transparency and metacognitive reflection
    • What a 12-chapter handbook might look like

    This proposal invites our department to move beyond academic panic toward pedagogical adaptation—embracing AI as a classroom reality while affirming the irreplaceable value of human thought, voice, and integrity.

    Conversations about the Teaching Crisis

    In recent conversations, my colleagues and I have been increasingly focused on our students’ use of ChatGPT—not just as a writing assistant, but as a way to outsource the entire process of reading, analyzing, and interpreting texts. Many students now use AI not only to draft essays in proper MLA format, but also to “read” the assigned material for them. This raises significant concerns about the erosion of critical thinking, reading, and writing skills—skills that have traditionally been at the heart of college-level instruction.

    We’re witnessing the results of a disrupted educational timeline. Many of our students lost up to two years of formal schooling during the pandemic. They’ve come of age on smartphones, often without ever having read a full book, and they approach reading and writing as chores to be automated. Their attention spans are fragmented, shaped by a digital culture that favors swipes and scrolls over sustained thought.

    As instructors who value and were shaped by deep reading and critical inquiry, we now face a student population that sees AI not as a tool for refinement but as a lifeline to survive academic expectations. And yet, we recognize that AI is not going away—on the contrary, our students will almost certainly use it in professional and personal contexts long after college.

    This moment demands a pedagogical shift. If we want to preserve and teach critical thinking, we need to rethink how we design assignments, how we define originality, and how we integrate AI into our classrooms with purpose and transparency. We’re beginning to ask the following questions, which we believe should guide our department’s evolving approach:


    1. What can we do to encourage critical thinking and measure that thinking in a grade?

    We might assign work that requires metacognition, reflection, and student-generated analysis—such as reflective annotations, process journals, or “thinking out loud” assignments where students explain their reasoning. Grading could focus more on how students arrived at their conclusions, not just the final product.


    2. How can we teach our students to engage with ChatGPT in a meaningful way?

    We can require students to document and reflect on their use of AI, including what they prompted, what they accepted or rejected, and why. Assignments can include ChatGPT output analysis—asking students to critique what AI produces and revise it meaningfully.


    3. How can we use ChatGPT in class to show them how to use it more effectively?

    We could model live interactions with ChatGPT in class, showing students how to improve their prompts, evaluate responses, and push the tool toward more nuanced thinking. This becomes an exercise in rhetorical awareness and digital literacy, not cheating.


    4. What kind of summative assignment should we give, perhaps as an alternative to the conventional essay, to measure their Student Learning Outcomes?

    As the use of AI tools like ChatGPT becomes increasingly integrated into students’ writing habits, the traditional essay—as a measure of reading comprehension, original thought, and language skills—needs thoughtful revision. If students are using AI to generate first drafts, outlines, or even entire essays, then evaluating the final product alone no longer gives us an accurate picture of what students have actually learned.

    We need summative assignments that foreground the process, require personal intellectual labor, and make AI usage transparent rather than concealed. The goal is to design assignments that reveal student thinking—how they engage with material, synthesize ideas, revise meaningfully, and make decisions about voice, purpose, and argumentation.

    To do this, we can shift the summative focus toward metacognitive reflection, multi-modal composition, and oral or visual demonstration of learning. These formats allow us to better assess Student Learning Outcomes such as critical thinking, rhetorical awareness, digital literacy, and authentic engagement with course content.


    4 Alternative Summative Assignment Ideas:


    1. The AI Collaboration Portfolio

    Description:
    Students submit a portfolio that includes:

    • Initial AI-generated output based on a prompt they created
    • A fully revised human-authored version of that piece
    • A reflective essay (500–750 words) explaining what they kept, changed, or rejected from the AI’s draft and why.

    SLOs Assessed:

    • Critical thinking
    • Rhetorical awareness
    • Digital literacy
    • Ability to revise and self-assess


    2. In-Class Defense of a ChatGPT Essay

    Description:
    Students submit an AI-assisted essay ahead of time. Then, in a timed, in-class setting (or via recorded video), they defend the major claims of the essay, explaining the reasoning, evidence, and stylistic choices as if they wrote it themselves—because they should have revised and understood it thoroughly.

    SLOs Assessed:

    • Comprehension
    • Argumentation
    • Oral communication
    • Ownership of ideas

    3. Critical Reading Response with AI Fact-Check Layer

    Description:
    Students choose a short essay, op-ed, or excerpt from a class reading and:

    • Write a 400–600 word response analyzing the author’s argument
    • Ask ChatGPT to summarize or interpret the same reading
    • Compare their own analysis with the AI’s, noting differences in tone, logic, accuracy, and insight

    SLOs Assessed:

    • Close reading
    • Critical analysis
    • Evaluating sources (human and AI)
    • Writing with clarity and purpose

    4. Personal Ethos Narrative + AI’s Attempt

    Description:
    Students write a personal narrative essay centered on a core belief, a formative experience, or a challenge. Then, they prompt ChatGPT to write the “same” story using only the basic facts. Finally, they compare the two and reflect on what makes writing personal, authentic, and emotionally compelling.

    SLOs Assessed:

    • Self-expression
    • Voice and tone
    • Audience awareness
    • Critical thinking about language and identity

    Original Writing Prompt That Needs to be Updated to Meet the AI Era:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    This prompt invites you to write a 1,700-word argumentative essay in which you analyze World War Z as a metaphor for mass anxiety. Develop an argument that connects the film’s themes to contemporary global challenges such as:

    • The COVID-19 pandemic and fear of viral contagion
    • Global migration driven by war, poverty, and climate change
    • The dehumanization of “The Other” in politically polarized societies
    • The fragility of global cooperation in the face of crisis
    • The spread of weaponized misinformation and conspiracy

    Your thesis should not simply argue that World War Z is “about fear”—it should claim what kind of fear, why it matters, and what the film reveals about our modern condition. You may focus on one primary fear or compare multiple forms of crisis (e.g., pandemic vs. political polarization, or migration vs. misinformation).

    Use at least three of the following essays as research support:

    1. Jonathan Haidt, “Why the Past 10 Years of American Life Have Been Uniquely Stupid” (The Atlantic)
      —A deep dive into how social media has fractured trust, created echo chambers, and undermined democratic cooperation.
    2. Ed Yong, “How the Pandemic Defeated America” (The Atlantic)
      —An autopsy of institutional failure and public distrust during COVID-19, including how the virus exposed deep structural weaknesses.
    3. Seyla Benhabib, “The Return of the Sovereign: Immigration and the Crisis of Globalization” (Project Syndicate)
      —Explores the backlash against global migration and the erosion of human rights amid rising nationalism.
    4. Zeynep Tufekci, “We’re Asking the Wrong Questions of Facebook” (The New York Times)
      —An analysis of how misinformation spreads virally, creating moral panics and damaging collective reasoning.

    Requirements:

    • Use MLA format
    • 1,700 words
    • Quote directly from World War Z (film dialogue, plot events, or visuals)
    • Integrate at least three of the sources above with citation
    • Present a counterargument and a rebuttal

    To turn this already strong prompt into a more effective summative assignment—especially in the age of AI writing tools like ChatGPT—we need to preserve the intellectual rigor of the original task while redesigning its structure to foreground student thinking and reduce the possibility of full outsourcing.

    The solution isn’t to eliminate AI tools, but to design assignments that make invisible thinking visible, emphasize process and synthesis, and require student-authored insights that AI cannot fake.

    Below is a revised, multi-part assignment that integrates World War Z and the selected texts while enhancing critical thinking, transparency of process, and AI accountability.


    Revised Summative Assignment Title:

    World War Z and the Collapse of Trust: A Multi-Stage Inquiry into Fear, Crisis, and Collective Breakdown


    Assignment Structure:

    Part 1: AI Collaboration Log (300–400 words, submitted with final essay)

    Before drafting, students will engage with ChatGPT (or another AI tool) to generate:

    • A summary of World War Z as a cultural allegory
    • A brainstormed list of thesis statements based on the themes listed
    • AI-generated outline or argument plan

    Students must then reflect:

    • What ideas were helpful, and why?
    • What ideas felt generic, reductive, or inaccurate?
    • What did you reject or reshape, and how?
    • Did the AI miss anything crucial that you added yourself?

    📍Purpose: Reinforces transparency and encourages rhetorical self-awareness. It also lets you see whether students are thinking with the AI or hiding behind it.


    Part 2: Draft + Peer Critique (optional but encouraged)

    Students submit a rough draft and exchange feedback focusing on:

    • Depth of metaphorical analysis
    • Quality of integration between sources and film
    • Presence of original insight vs. cliché or summary

    📍Purpose: Encourages revision and demonstrates development. Peer readers can help flag vague AI language or unsupported generalizations.


    Part 3: Final Essay (1,200–1,300 words)

    Write a sustained, argumentative essay that:

    • Analyzes World War Z as a metaphor for a specific contemporary fear
    • Draws from at least two of the provided sources, but ideally three
    • Provides detailed evidence from the film (specific dialogue, visuals, character arcs)
    • Engages with a counterargument and offers a clear rebuttal
    • Demonstrates critical thinking, synthesis, and voice

    📍Changes from original: Slightly shorter word count, but denser expectations for insight. The counterargument now isn’t just a checkbox—it’s a chance to showcase rhetorical skill.


    Part 4: Metacognitive Postscript (200–300 words)

    At the end of the final essay, students write a short reflection answering:

    • What did you learn from comparing human analysis with AI-generated ideas?
    • What part of your argument is most your own?
    • What was difficult or challenging in developing your claim?
    • How do you now see the role of fear in shaping public response to crisis?

    📍Purpose: Makes thinking visible. Encourages students to take ownership of their learning and connect it to broader themes.


    Why This Works as a Better Summative Assignment:

    1. Harder to Outsource: The process-based structure (log, reflection, critique) demands personalized engagement and critical self-awareness.
    2. SLO-Rich: Students demonstrate close reading, source synthesis, rhetorical control, metacognition, and original thought.
    3. AI-Literate: Rather than punish students for using AI, it teaches them how to interrogate and surpass its output.
    4. Flexible for Diverse Thinkers: Students can lean into what resonates—fear of misinformation, loss of global trust, migration panic—without writing a generic “this movie is about fear” paper.

    Here is what a handbook might look like as a chapter outline:

    Teaching in the Age of Automation: Reclaiming Critical Thinking in an AI World


    Chapter 1: The New Landscape of Student Writing

    A critical overview of how generative AI, digital distractions, and post-pandemic learning gaps are reshaping the habits, assumptions, and skill sets of today’s college students.


    Chapter 2: From Automation to Apathy: The Crisis of Critical Thinking

    Examines the shift from student-generated ideas to AI-generated content and how this impacts intellectual risk-taking, reading stamina, and analytical depth.


    Chapter 3: ChatGPT in the Classroom: Enemy, Ally, or Mirror?

    Explores the pedagogical implications of AI writing tools, with a balanced look at their risks and potential when approached with rhetorical transparency and academic integrity.


    Chapter 4: Rethinking the Essay: Process Over Product

    Makes the case for redesigning writing assignments to prioritize process, revision, metacognition, and student ownership—rather than polished output alone.


    Chapter 5: Designing Assignments that Resist Outsourcing

    Outlines concrete assignment types that foreground thinking: “think out loud” tasks, AI comparison prompts, collaborative revision logs, and reflection-based writing.


    Chapter 6: Teaching the AI-Literate Writer

    Guides instructors in teaching students how to use AI critically—not as a ghostwriter, but as a heuristic tool. Includes lessons on prompting, critiquing, and revising AI output.


    Chapter 7: From Plagiarism to Participation: Reframing Academic Integrity

    Redefines what counts as authorship, originality, and engagement in a world where content can be instantly generated but not meaningfully owned without human input.


    Chapter 8: The New Reading Crisis

    Addresses the rise of “outsourced reading” via AI summarizers and how to reignite students’ engagement with texts through annotation, debate, and collaborative interpretation.


    Chapter 9: Summative Assessment in the Age of AI

    Presents summative assignment models that include AI collaboration portfolios, in-class defenses, metacognitive postscripts, and multi-modal responses.


    Chapter 10: World War Z and the Collapse of Public Trust (Case Study)

    A deep dive into a revised, AI-aware assignment based on World War Z—modeling how to blend pop culture, serious research, and transparent student process.


    Chapter 11: Implementing Department-Wide Change

    Practical strategies for departments to align curriculum, rubrics, and policies around process-based assessment, digital literacy, and instructor training.


    Chapter 12: The Future of Writing in the Post-Human Classroom

    Speculative but grounded reflections on where we’re headed—balancing AI fluency with the irreducible value of human voice, curiosity, and critical resistance.

  • Two Student Learning Outcomes to Encourage Responsible Use of AI Tools in College Writing Classes

    Two Student Learning Outcomes to Encourage Responsible Use of AI Tools in College Writing Classes

    As students increasingly rely on AI writing tools—sometimes even using one tool to generate an assignment and another to rewrite or “launder” it—we must adapt our teaching strategies to stay aligned with these evolving practices. To address this shift, I propose the following two updated Student Learning Outcomes that reflect the current landscape of AI-assisted writing:

    Student Learning Outcome #1: Using AI Tools Responsibly

    Students will integrate AI tools into their writing assignments meaningfully, ethically, and responsibly, in ways that enhance learning and demonstrate critical thinking.


    Definition of “Meaningfully, Ethically, and Responsibly”:

    To use AI tools meaningfully, ethically, and responsibly means students treat AI not as a shortcut to bypass thinking, but as a collaborative aid to deepen their writing, research, and revision process. Ethical use includes acknowledging when and how AI was used, avoiding plagiarism or misrepresentation, and understanding the limits and biases of these tools. Responsible use involves aligning AI usage with the assignment’s goals, maintaining academic integrity, and using AI to support—not replace—original thought and student voice.


    Five Assignment Strategies to Fulfill This Learning Outcome:

    1. AI Process Reflection Logs
      Require students to submit a short reflection with each assignment explaining if, how, and why they used AI tools (e.g., brainstorming, outlining, revising), and evaluate the effectiveness and ethics of their choices.
    2. Compare-and-Critique Tasks
      Assign students to generate an AI-written response to a prompt and then critique it—identifying weaknesses in reasoning, tone, or factual accuracy—and revise it with their own voice and insights.
    3. Source Verification Exercises
      Ask students to use AI to gather preliminary research, then verify, fact-check, and cite real sources that support or challenge the AI’s output, teaching them discernment and digital literacy.
    4. AI vs. Human Draft Workshops
      Have students bring both an AI-generated draft and a human-written draft of the same paragraph to class. In peer review, students analyze the differences in tone, structure, and depth of thought to develop judgment about when AI helps or hinders.
    5. Statement of Integrity Clause
      Include a required statement in the assignment where students attest to their use of AI tools, much like a bibliography or code of ethics, fostering transparency and self-awareness.

    Student Learning Outcome #2: Avoiding the Uncanny Valley Effect

    Students will produce writing that sounds natural, human, and authentic—free from the awkwardness, artificiality, or emotional flatness often associated with AI-generated content.


    Definition: The Uncanny Valley Effect in Writing

    The Uncanny Valley Effect in writing occurs when a piece of text almost sounds human—but not quite. It may be grammatically correct and well-structured, yet it feels emotionally hollow, overly generic, oddly formal, or just slightly “off.” Like a robot trying to pass as a person, the writing stirs discomfort or distrust because it mimics human tone without the depth, insight, or nuance of actual lived experience or authorial voice.


    Five Common Characteristics of the Uncanny Valley in Student Writing:

    1. Generic Language – Vague, overused phrases that sound like filler rather than specific, engaged thought (e.g., “Since the dawn of time…”).
    2. Overly Formal Tone – A stiff, robotic voice with little rhythm, personality, or variation in sentence structure.
    3. Surface-Level Thinking – Repetition of obvious or uncritical ideas with no deeper analysis, curiosity, or counterargument.
    4. Emotional Emptiness – Statements that lack genuine feeling, perspective, or a sense of human urgency.
    5. Odd Phrasing or Word Choice – Slightly off metaphors, synonyms, or transitions that feel misused or unnatural to a fluent reader.

    Seven Ways Students Can Use AI Tools Without Falling into the Uncanny Valley:

    1. Always Revise the Output – Use AI-generated text as a rough draft or idea starter, but revise it with your own voice, style, and specific insights.
    2. Inject Lived Experience – Add personal examples, concrete details, or specific observations that an AI cannot generate from its data pool.
    3. Break the Pattern – Vary your sentence length, tone, and rhythm to avoid the AI’s predictable, formal cadence.
    4. Cut the Clichés – Watch for stale or filler phrases (“in today’s society,” “this essay will discuss…”) and replace them with clearer, more original statements.
    5. Ask the AI Better Questions – Use prompts that require nuance, comparison, or contradiction rather than shallow definitions or summaries.
    6. Fact-Check and Source – Don’t trust AI-generated facts or references. Verify claims with real sources and cite them properly.
    7. Read Aloud – If it sounds awkward or lifeless when spoken, revise. Authentic writing should sound like something a thoughtful person might actually say.
  • AI Wants to Be Your Friend, and It’s Shrinking Your Mind

    AI Wants to Be Your Friend, and It’s Shrinking Your Mind

    In The Atlantic essay “AI Is Not Your Friend,” Mike Caulfield lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged—it’s celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.

  • You, Rewritten: Algorithmic Capture in the Age of AI

    You, Rewritten: Algorithmic Capture in the Age of AI

    Once upon a time, writing instructors worried about comma splices and uninspired thesis statements. Now, we’re dodging 5,000-word essays spat out by AI platforms like ChatGPT, Gemini, and Claude—essays so eerily competent they hit every benchmark on the department rubric: in-text citations, signal phrases, MLA formatting, and close readings with all the soulful depth of a fax machine reading T.S. Eliot. This is prose caught in the Uncanny Valley—syntactically flawless, yet emotionally barren, like a Stepford Wife enrolled in English 101. And since these algorithmic Franken-scripts often evade plagiarism detectors, we’re all left asking the same queasy question: What is the future of writing—and of teaching writing—in the AI Age?

    That question haunted me long enough to produce a 3,000-word prompt. But the deeper I sank into student conversations, the clearer it became: this isn’t just about writing. It’s about living. My students aren’t merely outsourcing thesis statements. They’re using AI to rewrite awkward apology texts, craft flirtatious replies on dating apps, conduct self-guided therapy with bots named “Charles” and “Luna,” and decode garbled lectures delivered by tenured mumblers. They feed syllabi into GPT to generate study guides. They get toothpaste recommendations. They draft business emails and log them in AI-curated archives. In short: ChatGPT isn’t a tool. It’s a prosthetic consciousness.

    And here’s the punchline: they see no alternative. AI isn’t a novelty; it’s a survival mechanism. In their hyper-accelerated, ultra-competitive, attention-fractured lives, AI has become as essential as caffeine and Wi-Fi. So no, I won’t be asking students to merely critique ChatGPT as a glorified spell-checker. That’s quaint. Instead, I’m introducing them to Algorithmic Capture—the quiet tyranny by which human behavior is shaped, scripted, and ultimately absorbed by optimization-driven systems. Under this logic, ambiguity is penalized, nuance is flattened, and people begin tailoring themselves to perform for the algorithmic eye. They don’t just use the machine. They become legible to it.

    For this reason, the new essay assignment doesn’t ask, “What’s the future of writing?” It asks something far more urgent: What’s happening to you? I’m having students analyze the eerily prophetic episodes of Black Mirror—especially “Joan Is Awful,” that fluorescent satire of algorithmic self-annihilation—and write about how Algorithmic Capture is reshaping their lives, identities, and choices. They won’t just be critiquing AI’s effect on prose. They’ll be interrogating the way it quietly rewrites the self.

  • The Haunted Mind vs. the Predictive Engine: Why AI Writing Rings Hollow

    The Haunted Mind vs. the Predictive Engine: Why AI Writing Rings Hollow

    In More Than Words: How to Think About Writing in the Age of AI, John Warner points out just how emotionally tone-deaf ChatGPT is when tasked with describing something as tantalizing as a cinnamon roll. At best, the AI produces a sterile list of adjectives like “delicious,” “fattening,” and “comforting.” For a human with gluttonous memories, however, the scent of cinnamon rolls sets off a chain reaction of sensory and emotional triggers—suddenly, you’re transported into a heavenly world of warm, gooey indulgence. For Warner, the smell launches him straight into vivid memories of losing his willpower at a Cinnabon in O’Hare Airport. ChatGPT, by contrast, is utterly incapable of such sensory delirium. It has no desire, no memory, no inner turmoil. As Warner explains, “ChatGPT has no capacity for sense memory; it has no memory in the way human memory works, period.”

    Without memory, ChatGPT can’t make meaningful connections and associations. For John Warner, the cinnamon roll is a marker for a very particular time and place in his life. The man who caved in to the temptation of a Cinnabon was a different person from the one reminiscing about it twelve years later. The cinnamon roll carries layer upon layer of association for him, and those layers give his description of the dessert a depth that ChatGPT cannot match.

    Imagine ChatGPT writing a vivid description of Farrell’s Ice Cream Parlour. It would do a serviceable job with the surface details: the sweet aroma of fresh waffle cones, sizzling burgers, and syrupy fudge; the red-and-white striped wallpaper stretched from corner to corner; the dark, polished wooden booths lining the walls; the waitstaff dressed in candy-cane-striped vests and straw boater hats. But vital components would be missing: a kid’s imagination, stocked with memories and references to favorite movies, TV shows, and books, and a kid’s perspective, full of grandiose aspirations to be like their heroes and mythical legends.

    As someone who grew up believing that Farrell’s was the Holy Grail of birthday parties, I have memories of the place that add dimensions ChatGPT is incapable of rendering:

    When I was a kid growing up in the San Francisco Bay Area in the 1970s, there was an ice creamery called Farrell’s. In a child’s imagination, Farrell’s was the equivalent of Willy Wonka’s Chocolate Factory. You didn’t go to Farrell’s often, maybe once every two years or so. Entering Farrell’s, you were greeted by the cacophony of laughter and the clinking of spoons against glass. Servers in candy-striped uniforms dashed around with the energy of marathon runners, bearing trays laden with gargantuan sundaes. You sat down, your eyes wide with awe, and the menu was presented to you like a sacred scroll. You didn’t need to read it, though. Your quest was clear: the legendary banana split.

    When the dessert finally arrived, it was nothing short of a spectacle. The banana split was monumental, an ice cream behemoth. It was as if the dessert gods themselves had conspired to create this masterpiece. Three scoops of ice cream, draped in velvety hot fudge and caramel, crowned with mountains of whipped cream and adorned with maraschino cherries, all nestled between perfectly ripe bananas. Sprinkles and nuts cascaded down the sides like the treasures of a sugar-coated El Dorado.

    As you took your first bite, you embarked on a journey as grand and transformative as any hero’s quest. The flavors exploded in your mouth, each spoonful a step deeper into the enchanted forest of dessert ecstasy. You were not just eating ice cream; you were battling dragons of indulgence and conquering kingdoms of sweetness. The sheer magnitude of the banana split demanded your full attention and stamina. Your small arms wielded the spoon like a warrior’s sword, and with each bite, you felt a mixture of triumph and fatigue. By the time you reached the bottom of the bowl, you were exhausted. Your muscles ached as if you’d climbed a mountain, and you were certain that you’d expanded your stomach capacity to Herculean proportions. You briefly considered the possibility of needing an appendectomy.

    But oh, the glory of it all! Your Farrell’s sojourn was worth every ache and groan. You entered the ice creamery as an ordinary child and emerged as a hero. In this fairy-tale-like journey, you had undergone a metamorphosis. You were no longer just a scrawny kid from the Bay Area; you were now a muscle-bound, strutting Viking of the dessert world, having mastered the art of indulgence and delight. As you returned home, the experience of Farrell’s left a lasting imprint on your soul. You regaled your friends with tales of your conquest, the banana split becoming a legendary feast in the annals of your childhood adventures. In your heart, you knew that this epic journey to Farrell’s, this magical pilgrimage, had elevated you to the ranks of dessert royalty, a memory that would forever glitter like a golden crown in the kingdom of your mind. As a child, even an innocent trip to an ice creamery was a transformational experience. You entered Farrell’s a helpless runt; you exited it a glorious Viking.

    The other failure of ChatGPT is that it cannot generate meaningful narratives. Without memory or point of view, ChatGPT has no stories to tell and no lessons to impart. Since the days of our Paleolithic ancestors, humans have shared emotionally charged stories around the campfire to ward off external dangers (saber-toothed tigers) and internal demons (obsessions, pride, and unbridled desires that can lead to madness). These tales resonate because they acknowledge a truth that thoughtful people, religious or not, can agree on: we are flawed and prone to self-destruction. It’s this precarious condition that makes storytelling essential. Stories filled with struggle, regret, and redemption offer us more than entertainment; they arm us with the tools to stay grounded and resist our darker impulses. ChatGPT, devoid of human frailty, cannot offer us such wisdom.

    Because ChatGPT has no memory, it cannot give us the stories and life lessons we crave and have craved for thousands of years in the form of folk tales, religious screeds, philosophical treatises, and personal manifestos. 

    That ChatGPT can only muster a Wikipedia-like description of a cinnamon roll hardly makes it competitive with humans when it comes to the kind of writing we crave with all of our heart, mind, and soul. 

    One of ChatGPT’s greatest disadvantages is that, unlike us, it is not a fallen creature slogging through the freak show that is this world, to use the language of George Carlin. Nor does ChatGPT understand how our fallen condition can put us at the mercy of internal demons and obsessions that cause us to warp reality in ways that lead to dysfunction. In other words, ChatGPT does not have a haunted mind, and without any oppressive memories, it cannot impart stories of value to us.

    When I think of being haunted, I think of one emotion above all others–regret. Regret doesn’t just trap people in the past—it embalms them in it, like a fly in amber, forever twitching. Case in point: three men I know are, decades later, still gnashing their teeth over a squandered romantic encounter so catastrophic in their minds that it may as well be their personal Waterloo.

    It was the summer of their senior year, a time when testosterone and bad decisions flowed freely. Driving from Bakersfield to Los Angeles for a Dodgers game, they were winding through the Grapevine when fate, wearing a tie-dye bikini, waved them down. On the side of the road, an overheated vintage Volkswagen van—a sunbaked shade of decayed orange—coughed its last breath. Standing next to it? Four radiant, sun-kissed Grateful Dead followers, fresh from a concert and still floating on a psychedelic afterglow.

    These weren’t just women. These were ethereal, free-spirited nymphs, perfumed in the intoxicating mix of patchouli, wild musk, and possibility. Their laughter tinkled like wind chimes in an ocean breeze, their sun-bronzed shoulders glistening as they waved their bikinis and spaghetti-strap tops in the air like celestial signals guiding sailors to shore.

    My friends, handy with an engine but fatally clueless in the ways of the universe, leaped into action. With grease-stained heroism, they nursed the van back to health, coaxing it into purring submission. Their reward? An invitation to abandon their pedestrian baseball game and join the Deadhead goddesses at the Santa Barbara Summer Solstice Festival—an offer so dripping with hedonistic promise that even a monk would’ve paused to consider.

    But my friends? Naïve. Stupid. Shackled to their Dodgers tickets as if they were golden keys to Valhalla. With profuse thanks (and, one imagines, the self-awareness of a plank of wood), they declined. They drove off, leaving behind the road-worn sirens who, even now, are probably still dancing barefoot somewhere, oblivious to the tragedy they unwittingly inflicted.

    Decades later, my friends can’t recall a single play from that Dodgers game, but they can describe—down to the last bead of sweat—the precise moment they drove away from paradise. Bring it up, and they revert to snarling, feral beasts, snapping at each other over whose fault it was that they abandoned the best opportunity of their pathetic young lives. Their girlfriends, beautiful and present, might as well be holograms. After all, these men are still spiritually chained to that sun-scorched highway, watching the tie-dye bikini tops flutter in the wind like banners of a lost kingdom.

    Insomnia haunts them. Their nights are riddled with fever dreams of sun-drenched bacchanals that never happened. They wake in cold sweats, whispering the names of women they never actually kissed. Their relationships suffer, their souls remain malnourished, and all because, on that fateful day, they chose baseball over Dionysian bliss.

    Regret couldn’t have orchestrated a better long-term psychological prison if it tried. It’s been forty years, but they still can’t forgive themselves. They never will. And in their minds, somewhere on that dusty stretch of highway, a rusted-out orange van still sits, idling in the sun, filled with the ghosts of what could have been.

    Humans have always craved stories of folly, and for good reason. First, there’s the guilty pleasure of witnessing someone else’s spectacular downfall—our inner schadenfreude finds comfort in knowing it wasn’t us who tumbled into the abyss of human madness. Second, these stories hold up a mirror to our own vulnerability, reminding us that we’re all just one bad decision away from disaster.

    As a teacher, I can tell you that if you don’t anchor your ideas to a compelling story, you might as well be lecturing to statues. Without a narrative hook, students’ eyes glaze over, their minds drift, and you’re left questioning every career choice that led you to this moment. But if you offer stories brimming with flawed characters—haunted by regrets so deep they’re like Lot’s wife, frozen and unmovable in their failure—students perk up. These narratives speak to something profoundly human: the agony of being broken and the relentless desire to become whole again. That’s precisely where AI like ChatGPT falls short. It may craft mechanically perfect prose, but it has never known the sting of regret or the crushing weight of shame. Without that depth, it can’t deliver the kind of storytelling that truly resonates.

  • The Last Writing Instructor: Holding the Line in a Post-Thinking World

    The Last Writing Instructor: Holding the Line in a Post-Thinking World

    Last night, I was trapped in a surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree, to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

    Right?

    Maybe.

    Let’s not get too comfortable in our intellectual trench coats. While we pride ourselves on persona, big ideas, and resisting chaos, we’re up against something far more insidious than plagiarism. AI isn’t just outsourcing thought—it’s rewiring brains. In the Black Mirror episode “Joan Is Awful,” we watch a woman’s life turned into a deepfake soap opera, customized for mass consumption, with every gesture, flaw, and confession algorithmically mined and exaggerated. What’s most horrifying isn’t the surveillance or the celebrity—it’s the flattening. Joan becomes a caricature of herself, optimized for engagement and stripped of depth. Sound familiar?

    This is what AI is doing to writing—and by extension, to thought. The more students rely on ChatGPT, the more their rhetorical instincts, their voice, their capacity for struggle and ambiguity atrophy. Like Joan, they become algorithmically curated versions of themselves. Not writers. Not thinkers. Just language puppets speaking in borrowed code. No matter how persuasive our arguments or electrifying our lectures, we’re still up against the law of digital gravity: if it’s easier, faster, and “good enough,” it wins.

    So what’s the best move? Don’t fight AI—outgrow it. If we’re serious about salvaging human expression, we must redesign how we teach writing. Center the work around experiences AI can’t mimic: in-class writing, collaborative thinking, embodied storytelling, rhetorical improvisation, intellectual risk. Create assignments that need a human brain and reward discomfort over convenience. The real enemy isn’t ChatGPT—it’s complacency. If we let the Joanification of our students continue, we’re not just losing the classroom—we’re surrendering the soul. It’s time to fight not just for writing, but for cognition itself.