Tag: education

  • How to Grade Students’ Use of ChatGPT in Preparing for Their Essay

    As instructors, we need to encourage students to meaningfully engage with ChatGPT. How do we do that? First, we need the essay prompt:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    Second, we need a detailed “how-to” assignment that teaches students to engage critically and transparently with AI tools like ChatGPT during the writing process—in the context of the World War Z essay prompt.


    Assignment Title: How to Think With, Not Just Through, AI

    Overview:

    This assignment component requires you to document, reflect on, and revise your use of ChatGPT (or any other AI writing tool) while developing your World War Z analytical essay. Rather than treating AI like a magic trick that produces answers behind the curtain, this assignment asks you to lift the curtain and analyze the performance. What did the AI get right? Where did it fall short? And—most importantly—how did you shape the work?

    This reflection will be submitted alongside your final essay and counts for 15% of your essay grade. It will be evaluated based on transparency, clarity, and the depth of your analysis.


    Step-by-Step Instructions:

    Step 1: Prompt the Machine

    Before you write your own thesis, ask ChatGPT a version of the following:

    “Using World War Z as a cultural metaphor, write a thesis and outline for an essay that explores the collapse of public trust and the failure of global cooperation. Use at least two of the following sources: Jonathan Haidt, Ed Yong, Seyla Benhabib, and Zeynep Tufekci.”

    You may modify the prompt, but record it exactly as you typed it. Save the AI’s entire response.


    Step 2: Analyze the Output

    Copy and paste the AI’s output into a Google Doc. Underneath it, write a 300–400 word critique that answers the following:

    • What parts of the AI output were useful? (Thesis, outline, phrasing, examples, etc.)
    • What felt generic, vague, or factually inaccurate?
    • Did the AI capture the tone or depth you want in your own work? Why or why not?
    • How did this output influence the direction or shape of your own ideas, either positively or negatively?

    📌 Tip: If it gave you clichés like “in today’s world…” or “communication is key to society,” call them out! If it helped you identify a strong metaphor or organizational structure, give it credit—but explain how you built on it.


    Step 3: Revise the Output (Optional But Encouraged)

    Take one paragraph from the AI’s draft (thesis, topic sentence, body paragraph—your choice), and rewrite it into a stronger version. This is your chance to show:

    • Stronger voice
    • Clearer argument
    • Better use of evidence
    • More sophisticated style

    Label the two versions:

    • Original AI Version
    • Your Revision

    📌 This helps demonstrate your ability to evaluate and improve digital writing, a crucial part of critical thinking in the AI era.


    Step 4: Reflection Log (Post-Essay)

    After completing your final essay, write a short reflection (250–300 words) responding to these questions:

    • What role did AI play in the development of your essay?
    • How did you decide what to keep, change, or discard?
    • Do you feel you relied on AI too much, too little, or just enough?
    • How has this process changed your understanding of how to use (or not use) ChatGPT in academic work?

    Submission Format:

    Your AI Reflection Packet should include the following:

    1. The original prompt you gave ChatGPT
    2. The full AI-generated output
    3. Your 300–400 word critique of the AI’s work
    4. (Optional) Side-by-side paragraph: AI version + your revision
    5. Your 250–300 word final reflection

    Submit as a single Google Doc or PDF titled:
    LastName_AIReflection_WWZ


    Grading Criteria (15 points):

    • Honest and detailed documentation: 3 points
    • Thoughtful analysis of AI output: 4 points
    • Evidence of critical evaluation: 3 points
    • (Optional) Quality of paragraph revision: 2 points
    • Insightful final reflection: 3 points

  • How to Use a Process Journal to Teach Critical Thinking to Students

    One of the most urgent challenges in today’s writing classroom is not getting students to submit essays—it’s getting them to think while doing it. As generative AI continues to automate grammar, structure, and even “voice,” the real question is this: How do we reward intellectual work in an age when polished prose can be faked?

    One answer is deceptively simple: grade the thinking, not just the product.

    To do that, we must build assignments that expose the messy, iterative, and reflective nature of real analysis. We’re talking about work that requires metacognition, self-assessment, and visible decision-making—tools like reflective annotations, process journals, and “thinking out loud” assignments. These formats ask students not just to present a claim but to show how they arrived at it.

    Let’s take the following essay prompt as a case study:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    To ensure students are doing the cognitive heavy lifting, pair this prompt with a process journal designed to track how students analyze, revise, and reflect. Here’s how that works:


    Assignment Title: Thinking in the Rubble: A Process Journal for the Collapse of Trust Essay

    Overview:
    As students build their World War Z argument, they’ll also keep a process journal—a candid record of how they think, doubt, change direction, and use (or resist) AI tools. Think of it as a behind-the-scenes cut of their essay in the making. The journal is worth 20% of the final grade and will be assessed for clarity, critical insight, and honest engagement with the writing process.


    Journal Requirements:

    1. Reflective Annotations (Pre-Writing)

    Choose one paragraph from each of the three sources you plan to use. For each, write a 4–5 sentence annotation addressing:

    • Why you chose it
    • What it reveals about trust, misinformation, or institutional failure
    • How you might use it in your essay

    📌 Goal: Show how you’re thinking with your sources—not just cherry-picking quotes.


    2. Thesis Evolution Timeline

    Document your thesis at 2–3 stages of development. For each version:

    • State your working thesis (even if it’s a mess)
    • Explain what caused you to change or clarify it
    • Note the moment of insight or struggle that sparked the revision

    📌 Goal: Track the intellectual arc of your argument.


    3. Thinking Out Loud Log

    Choose one option:

    • Audio: Record a 3–5 minute voice memo in which you talk through a draft issue (e.g., integrating a source, clarifying your angle, or refining a counterargument)
    • Written: Compose a 300-word journal entry about a problem spot in your draft and how you’re trying to fix it

    📌 Goal: Reveal the inner dialogue behind your writing decisions.


    4. AI Transparency Statement (If Applicable)

    If you used ChatGPT or any AI tool at any point, briefly document:

    • Your prompt(s)
    • The output you received
    • What you kept, changed, or rejected
    • Why

    📌 Goal: Reflect on AI’s influence—not to punish, but to encourage digital literacy and self-awareness.


    5. Final Reflection (Post-Essay, 300 Words)

    After submitting your essay, write a closing reflection that answers:

    • What new insight did you gain about public trust or misinformation?
    • What was the hardest part of the process—and how did you push through?
    • What part of your final paper are you proudest of, and why?

    📌 Goal: Practice self-assessment and connect the work to broader learning.


    Submission Format:

    Submit as a single Google Doc or PDF titled:
    LastName_ThinkingInTheRubble


    Assessment Criteria (20 Points Total):

    • Depth and honesty of reflection
    • Evidence of critical engagement with readings and ideas
    • Clear documentation of thesis development and revision
    • Intellectual transparency (especially regarding AI use)
    • Clarity, specificity, and personal insight across all entries

    This process journal does more than scaffold an essay—it teaches students how to think. And more importantly, it gives instructors a way to see that thinking, reward it, and design grading practices that can’t be hijacked by a chatbot with decent syntax.

  • Teaching in the Age of Automation: Reclaiming Critical Thinking in an AI World

    Preface:

    As generative AI tools like ChatGPT become embedded in students’ academic routines, we are confronted with a profound teaching challenge: how do we preserve critical thinking, reading, and original argumentation in a world where automation increasingly substitutes for intellectual effort?

    This document outlines a proposal shaped by conversations among college writing faculty who have observed students using AI not only to write their essays but also to interpret readings and “read” for them. We are working with a post-pandemic generation whose learning trajectories have been disrupted, whose reading habits were never fully formed, and who now approach writing assignments as tasks to be completed with the help of digital proxies.

    Rather than fight a losing battle of prohibition, this proposal suggests a shift in assignment design, grading priorities, and classroom methodology. The goal is not to eliminate AI but to reclaim intellectual labor by foregrounding process, transparency, and student-authored insight.

    What follows:

    • A brief analysis of how current student behavior around AI reflects broader educational and cognitive shifts
    • A set of four guiding pedagogical questions
    • Specific, implementable summative assignment models that resist outsourcing
    • A redesigned version of an existing World War Z-based argumentative essay that integrates AI transparency and metacognitive reflection
    • What a 12-chapter handbook might look like

    This proposal invites our department to move beyond academic panic toward pedagogical adaptation—embracing AI as a classroom reality while affirming the irreplaceable value of human thought, voice, and integrity.

    Conversations about the Teaching Crisis

    In recent conversations, my colleagues and I have been increasingly focused on our students’ use of ChatGPT—not just as a writing assistant, but as a way to outsource the entire process of reading, analyzing, and interpreting texts. Many students now use AI not only to draft essays in proper MLA format, but also to “read” the assigned material for them. This raises significant concerns about the erosion of critical thinking, reading, and writing skills—skills that have traditionally been at the heart of college-level instruction.

    We’re witnessing the results of a disrupted educational timeline. Many of our students lost up to two years of formal schooling during the pandemic. They’ve come of age on smartphones, often without ever having read a full book, and they approach reading and writing as chores to be automated. Their attention spans are fragmented, shaped by a digital culture that favors swipes and scrolls over sustained thought.

    As instructors who value and were shaped by deep reading and critical inquiry, we now face a student population that sees AI not as a tool for refinement but as a lifeline to survive academic expectations. And yet, we recognize that AI is not going away—on the contrary, our students will almost certainly use it in professional and personal contexts long after college.

    This moment demands a pedagogical shift. If we want to preserve and teach critical thinking, we need to rethink how we design assignments, how we define originality, and how we integrate AI into our classrooms with purpose and transparency. We’re beginning to ask the following questions, which we believe should guide our department’s evolving approach:


    1. What can we do to encourage critical thinking and measure that thinking in a grade?

    We might assign work that requires metacognition, reflection, and student-generated analysis—such as reflective annotations, process journals, or “thinking out loud” assignments where students explain their reasoning. Grading could focus more on how students arrived at their conclusions, not just the final product.


    2. How can we teach our students to engage with ChatGPT in a meaningful way?

    We can require students to document and reflect on their use of AI, including what they prompted, what they accepted or rejected, and why. Assignments can include ChatGPT output analysis—asking students to critique what AI produces and revise it meaningfully.


    3. How can we use ChatGPT in class to show them how to use it more effectively?

    We could model live interactions with ChatGPT in class, showing students how to improve their prompts, evaluate responses, and push the tool toward more nuanced thinking. This becomes an exercise in rhetorical awareness and digital literacy, not cheating.


    4. What kind of summative assignment should we give, perhaps as an alternative to the conventional essay, to measure their Student Learning Outcomes?

    As the use of AI tools like ChatGPT becomes increasingly integrated into students’ writing habits, the traditional essay—as a measure of reading comprehension, original thought, and language skills—needs thoughtful revision. If students are using AI to generate first drafts, outlines, or even entire essays, then evaluating the final product alone no longer gives us an accurate picture of what students have actually learned.

    We need summative assignments that foreground the process, require personal intellectual labor, and make AI usage transparent rather than concealed. The goal is to design assignments that reveal student thinking—how they engage with material, synthesize ideas, revise meaningfully, and make decisions about voice, purpose, and argumentation.

    To do this, we can shift the summative focus toward metacognitive reflection, multi-modal composition, and oral or visual demonstration of learning. These formats allow us to better assess Student Learning Outcomes such as critical thinking, rhetorical awareness, digital literacy, and authentic engagement with course content.


    Four Alternative Summative Assignment Ideas:


    1. The AI Collaboration Portfolio

    Description:
    Students submit a portfolio that includes:

    • Initial AI-generated output based on a prompt they created
    • A fully revised human-authored version of that piece
    • A reflective essay (500–750 words) explaining what they kept, changed, or rejected from the AI’s draft and why.

    SLOs Assessed:

    • Critical thinking
    • Rhetorical awareness
    • Digital literacy
    • Ability to revise and self-assess


    2. In-Class Defense of a ChatGPT Essay

    Description:
    Students submit an AI-assisted essay ahead of time. Then, in a timed, in-class setting (or via recorded video), they defend the major claims of the essay, explaining the reasoning, evidence, and stylistic choices as if they wrote it themselves—because they should have revised and understood it thoroughly.

    SLOs Assessed:

    • Comprehension
    • Argumentation
    • Oral communication
    • Ownership of ideas

    3. Critical Reading Response with AI Fact-Check Layer

    Description:
    Students choose a short essay, op-ed, or excerpt from a class reading and:

    • Write a 400–600 word response analyzing the author’s argument
    • Ask ChatGPT to summarize or interpret the same reading
    • Compare their own analysis with the AI’s, noting differences in tone, logic, accuracy, and insight

    SLOs Assessed:

    • Close reading
    • Critical analysis
    • Evaluating sources (human and AI)
    • Writing with clarity and purpose

    4. Personal Ethos Narrative + AI’s Attempt

    Description:
    Students write a personal narrative essay centered on a core belief, a formative experience, or a challenge. Then, they prompt ChatGPT to write the “same” story using only the basic facts. Finally, they compare the two and reflect on what makes writing personal, authentic, and emotionally compelling.

    SLOs Assessed:

    • Self-expression
    • Voice and tone
    • Audience awareness
    • Critical thinking about language and identity

    Original Writing Prompt That Needs to Be Updated for the AI Era:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    This prompt invites you to write a 1,700-word argumentative essay in which you analyze World War Z as a metaphor for mass anxiety. Develop an argument that connects the film’s themes to contemporary global challenges such as:

    • The COVID-19 pandemic and fear of viral contagion
    • Global migration driven by war, poverty, and climate change
    • The dehumanization of “The Other” in politically polarized societies
    • The fragility of global cooperation in the face of crisis
    • The spread of weaponized misinformation and conspiracy

    Your thesis should not simply argue that World War Z is “about fear”—it should claim what kind of fear, why it matters, and what the film reveals about our modern condition. You may focus on one primary fear or compare multiple forms of crisis (e.g., pandemic vs. political polarization, or migration vs. misinformation).

    Use at least three of the following essays as research support:

    1. Jonathan Haidt, “Why the Past 10 Years of American Life Have Been Uniquely Stupid” (The Atlantic)
      —A deep dive into how social media has fractured trust, created echo chambers, and undermined democratic cooperation.
    2. Ed Yong, “How the Pandemic Defeated America” (The Atlantic)
      —An autopsy of institutional failure and public distrust during COVID-19, including how the virus exposed deep structural weaknesses.
    3. Seyla Benhabib, “The Return of the Sovereign: Immigration and the Crisis of Globalization” (Project Syndicate)
      —Explores the backlash against global migration and the erosion of human rights amid rising nationalism.
    4. Zeynep Tufekci, “We’re Asking the Wrong Questions of Facebook” (The New York Times)
      —An analysis of how misinformation spreads virally, creating moral panics and damaging collective reasoning.

    Requirements:

    • Use MLA format
    • 1,700 words
    • Quote directly from World War Z (film dialogue, plot events, or visuals)
    • Integrate at least three of the sources above with citations
    • Present a counterargument and a rebuttal

    To turn this already strong prompt into a more effective summative assignment—especially in the age of AI writing tools like ChatGPT—we need to preserve the intellectual rigor of the original task while redesigning its structure to foreground student thinking and reduce the possibility of full outsourcing.

    The solution isn’t to eliminate AI tools, but to design assignments that make invisible thinking visible, emphasize process and synthesis, and require student-authored insights that AI cannot fake.

    Below is a revised, multi-part assignment that integrates World War Z and the selected texts while enhancing critical thinking, transparency of process, and AI accountability.


    Revised Summative Assignment Title:

    World War Z and the Collapse of Trust: A Multi-Stage Inquiry into Fear, Crisis, and Collective Breakdown


    Assignment Structure:

    Part 1: AI Collaboration Log (300–400 words, submitted with final essay)

    Before drafting, students will engage with ChatGPT (or another AI tool) to generate:

    • A summary of World War Z as a cultural allegory
    • A brainstormed list of thesis statements based on the themes listed
    • An AI-generated outline or argument plan

    Students must then reflect:

    • What ideas were helpful, and why?
    • What ideas felt generic, reductive, or inaccurate?
    • What did you reject or reshape, and how?
    • Did the AI miss anything crucial that you added yourself?

    📍Purpose: Reinforces transparency and encourages rhetorical self-awareness. It also lets you see whether students are thinking with the AI or hiding behind it.


    Part 2: Draft + Peer Critique (optional but encouraged)

    Students submit a rough draft and exchange feedback focusing on:

    • Depth of metaphorical analysis
    • Quality of integration between sources and film
    • Presence of original insight vs. cliché or summary

    📍Purpose: Encourages revision and demonstrates development. Peer readers can help flag vague AI language or unsupported generalizations.


    Part 3: Final Essay (1,200–1,300 words)

    Write a sustained, argumentative essay that:

    • Analyzes World War Z as a metaphor for a specific contemporary fear
    • Draws from at least two of the provided sources, but ideally three
    • Provides detailed evidence from the film (specific dialogue, visuals, character arcs)
    • Engages with a counterargument and offers a clear rebuttal
    • Demonstrates critical thinking, synthesis, and voice

    📍Changes from original: Slightly shorter word count, but denser expectations for insight. The counterargument now isn’t just a checkbox—it’s a chance to showcase rhetorical skill.


    Part 4: Metacognitive Postscript (200–300 words)

    At the end of the final essay, students write a short reflection answering:

    • What did you learn from comparing human analysis with AI-generated ideas?
    • What part of your argument is most your own?
    • What was difficult or challenging in developing your claim?
    • How do you now see the role of fear in shaping public response to crisis?

    📍Purpose: Makes thinking visible. Encourages students to take ownership of their learning and connect it to broader themes.


    Why This Works as a Better Summative Assignment:

    1. Harder to Outsource: The process-based structure (log, reflection, critique) demands personalized engagement and critical self-awareness.
    2. SLO-Rich: Students demonstrate close reading, source synthesis, rhetorical control, metacognition, and original thought.
    3. AI-Literate: Rather than punish students for using AI, it teaches them how to interrogate and surpass its output.
    4. Flexible for Diverse Thinkers: Students can lean into what resonates—fear of misinformation, loss of global trust, migration panic—without writing a generic “this movie is about fear” paper.

    Here is what a handbook might look like as a chapter outline:

    Teaching in the Age of Automation: Reclaiming Critical Thinking in an AI World


    Chapter 1: The New Landscape of Student Writing

    A critical overview of how generative AI, digital distractions, and post-pandemic learning gaps are reshaping the habits, assumptions, and skill sets of today’s college students.


    Chapter 2: From Automation to Apathy: The Crisis of Critical Thinking

    Examines the shift from student-generated ideas to AI-generated content and how this impacts intellectual risk-taking, reading stamina, and analytical depth.


    Chapter 3: ChatGPT in the Classroom: Enemy, Ally, or Mirror?

    Explores the pedagogical implications of AI writing tools, with a balanced look at their risks and potential when approached with rhetorical transparency and academic integrity.


    Chapter 4: Rethinking the Essay: Process Over Product

    Makes the case for redesigning writing assignments to prioritize process, revision, metacognition, and student ownership—rather than polished output alone.


    Chapter 5: Designing Assignments that Resist Outsourcing

    Outlines concrete assignment types that foreground thinking: “think out loud” tasks, AI comparison prompts, collaborative revision logs, and reflection-based writing.


    Chapter 6: Teaching the AI-Literate Writer

    Guides instructors in teaching students how to use AI critically—not as a ghostwriter, but as a heuristic tool. Includes lessons on prompting, critiquing, and revising AI output.


    Chapter 7: From Plagiarism to Participation: Reframing Academic Integrity

    Redefines what counts as authorship, originality, and engagement in a world where content can be instantly generated but not meaningfully owned without human input.


    Chapter 8: The New Reading Crisis

    Addresses the rise of “outsourced reading” via AI summarizers and how to reignite students’ engagement with texts through annotation, debate, and collaborative interpretation.


    Chapter 9: Summative Assessment in the Age of AI

    Presents summative assignment models that include AI collaboration portfolios, in-class defenses, metacognitive postscripts, and multi-modal responses.


    Chapter 10: World War Z and the Collapse of Public Trust (Case Study)

    A deep dive into a revised, AI-aware assignment based on World War Z—modeling how to blend pop culture, serious research, and transparent student process.


    Chapter 11: Implementing Department-Wide Change

    Practical strategies for departments to align curriculum, rubrics, and policies around process-based assessment, digital literacy, and instructor training.


    Chapter 12: The Future of Writing in the Post-Human Classroom

    Speculative but grounded reflections on where we’re headed—balancing AI fluency with the irreducible value of human voice, curiosity, and critical resistance.

  • Two Student Learning Outcomes to Encourage Responsible Use of AI Tools in College Writing Classes

    As students increasingly rely on AI writing tools—sometimes even using one tool to generate an assignment and another to rewrite or “launder” it—we must adapt our teaching strategies to stay aligned with these evolving practices. To address this shift, I propose the following two updated Student Learning Outcomes that reflect the current landscape of AI-assisted writing:

    Student Learning Outcome #1: Using AI Tools Responsibly

    Students will integrate AI tools into their writing assignments meaningfully, ethically, and responsibly, in ways that enhance learning, demonstrate critical thinking, and reflect the thoughtful use of technology.


    Definition of “Meaningfully, Ethically, and Responsibly”:

    To use AI tools meaningfully, ethically, and responsibly means students treat AI not as a shortcut to bypass thinking, but as a collaborative aid to deepen their writing, research, and revision process. Ethical use includes acknowledging when and how AI was used, avoiding plagiarism or misrepresentation, and understanding the limits and biases of these tools. Responsible use involves aligning AI usage with the assignment’s goals, maintaining academic integrity, and using AI to support—not replace—original thought and student voice.


    Five Assignment Strategies to Fulfill This Learning Outcome:

    1. AI Process Reflection Logs
      Require students to submit a short reflection with each assignment explaining if, how, and why they used AI tools (e.g., brainstorming, outlining, revising), and evaluate the effectiveness and ethics of their choices.
    2. Compare-and-Critique Tasks
      Assign students to generate an AI-written response to a prompt and then critique it—identifying weaknesses in reasoning, tone, or factual accuracy—and revise it with their own voice and insights.
    3. Source Verification Exercises
      Ask students to use AI to gather preliminary research, then verify, fact-check, and cite real sources that support or challenge the AI’s output, teaching them discernment and digital literacy.
    4. AI vs. Human Draft Workshops
      Have students bring both an AI-generated draft and a human-written draft of the same paragraph to class. In peer review, students analyze the differences in tone, structure, and depth of thought to develop judgment about when AI helps or hinders.
    5. Statement of Integrity Clause
      Include a required statement in the assignment where students attest to their use of AI tools, much like a bibliography or code of ethics, fostering transparency and self-awareness.

    Student Learning Outcome #2: Avoiding the Uncanny Valley Effect

    Students will produce writing that sounds natural, human, and authentic—free from the awkwardness, artificiality, or emotional flatness often associated with AI-generated content.


    Definition: The Uncanny Valley Effect in Writing

    The Uncanny Valley Effect in writing occurs when a piece of text almost sounds human—but not quite. It may be grammatically correct and well-structured, yet it feels emotionally hollow, overly generic, oddly formal, or just slightly “off.” Like a robot trying to pass as a person, the writing stirs discomfort or distrust because it mimics human tone without the depth, insight, or nuance of actual lived experience or authorial voice.


    Five Common Characteristics of the Uncanny Valley in Student Writing:

    1. Generic Language – Vague, overused phrases that sound like filler rather than specific, engaged thought (e.g., “Since the dawn of time…”).
    2. Overly Formal Tone – A stiff, robotic voice with little rhythm, personality, or variation in sentence structure.
    3. Surface-Level Thinking – Repetition of obvious or uncritical ideas with no deeper analysis, curiosity, or counterargument.
    4. Emotional Emptiness – Statements that lack genuine feeling, perspective, or a sense of human urgency.
    5. Odd Phrasing or Word Choice – Slightly off metaphors, synonyms, or transitions that feel misused or unnatural to a fluent reader.

    7 Ways Students Can Use AI Tools Without Falling into the Uncanny Valley:

    1. Always Revise the Output – Use AI-generated text as a rough draft or idea starter, but revise it with your own voice, style, and specific insights.
    2. Inject Lived Experience – Add personal examples, concrete details, or specific observations that an AI cannot generate from its data pool.
    3. Break the Pattern – Vary your sentence length, tone, and rhythm to avoid the AI’s predictable, formal cadence.
    4. Cut the Clichés – Watch for stale or filler phrases (“in today’s society,” “this essay will discuss…”) and replace them with clearer, more original statements.
    5. Ask the AI Better Questions – Use prompts that require nuance, comparison, or contradiction rather than shallow definitions or summaries.
    6. Fact-Check and Source – Don’t trust AI-generated facts or references. Verify claims with real sources and cite them properly.
    7. Read Aloud – If it sounds awkward or lifeless when spoken, revise. Authentic writing should sound like something a thoughtful person might actually say.
  • Teaching College Writing in the Pre-Canvas Days

    Teaching College Writing in the Pre-Canvas Days

    I’m glad academia has gone digital. No more heavy boxes of printed essays to lug home. No more gradebooks with smeared records.

    I remember we used to have to bring our grade and attendance records to campus during the semester break and get our records approved before we were truly free to enjoy our vacation.

    Like a beleaguered instructor sent on a doomed mission, I had to drag myself to the campus, lugging a mountain of paper that looked like it had survived the apocalypse.

    My stack of grades and attendance records—yellowed, dog-eared, and adorned with enough coffee stains and White-Out smudges to pass as a Jackson Pollock reject—was a bureaucratic nightmare in physical form. I found myself in line with a hundred other sleep-deprived, caffeine-fueled professors, each clutching their own messy masterpieces like they were carrying the Dead Sea Scrolls. The line outside the Office of Records was so long it could have served as an endurance test for Navy SEALs. To stave off starvation and existential dread, I had packed a comically oversized sack of protein bars and apples, as if I were preparing for a month-long siege rather than a simple bureaucratic ritual.

    There I was, supposed to be basking in the sweet, sweet nothingness of semester break, but instead, I was condemned to a gauntlet of waiting that made Dante’s Inferno look like a walk in the park. For what felt like hours, I waited for the privilege of sitting at a table and enduring the laser-like glare of humorless bureaucrats who would scrutinize my records as if they were forensic experts analyzing evidence from a high-profile murder case.

    Once I finally managed to wade through the outdoor line, I advanced to the foyer for the second, even more soul-crushing phase of The Great Wait. Inside, rows of desks manned by expressionless drones awaited, each one peering over piles of grading records that seemed to stretch back to the dawn of civilization. Behind the staff of functionaries who examined the professors’ gradebooks were towers of file boxes stacked so precariously that a single sneeze could have transformed them into a cataclysmic eruption of dust and possibly asbestos.

    Eventually, I was summoned to one of the desks where an eagle-eyed Attendance Priestess scrutinized my records with the intensity of a customs officer suspecting I had smuggled contraband. She licked her fingertips with the solemnity of a high priestess preparing for a sacred ritual, only to cast me a look of such disdain you’d think I’d just handed her a wad of toilet paper instead of my gradebook.

    Finally, when the pinch-faced administrator deemed my records sufficiently unblemished and granted me the bureaucratic blessing to leave, it felt like I had just been handed the keys to the Pearly Gates. I didn’t walk to my car. I wind-sprinted, fearing the Attendance Priestess might yet find fault with my records and call me back to start the whole process all over again.

  • The Last Writing Instructor: Holding the Line in a Post-Thinking World

    The Last Writing Instructor: Holding the Line in a Post-Thinking World

    Last night, I was trapped in a surreal nightmare—a bureaucratic limbo masquerading as a college elective. The course had no purpose other than to grant students enough credits to graduate. No curriculum, no topics, no teaching—just endless hours of supervised inertia. My role? Clock in, clock out, and do absolutely nothing.

    The students were oddly cheerful, like campers at some low-budget retreat. They brought packed lunches, sprawled across desks, and killed time with card games and checkers. They socialized, laughed, and blissfully ignored the fact that this whole charade was a colossal waste of time. Meanwhile, I sat there, twitching with existential dread. The urge to teach something—anything—gnawed at my gut. But that was forbidden. I was there to babysit, not educate.

    The shame hung on me like wet clothes. I felt obsolete, like a relic from the days when education had meaning. The minutes dragged by like a DMV line, each one stretching into a slow, agonizing eternity. I wondered if this Kafkaesque hell was a punishment for still believing that teaching is more than glorified daycare.

    This dream echoes a fear many writing instructors share: irrelevance. Daniel Herman explores this anxiety in his essay, “The End of High-School English.” He laments how students have always found shortcuts to learning—CliffsNotes, YouTube summaries—but still had to confront the terror of a blank page. Now, with AI tools like ChatGPT, that gatekeeping moment is gone. Writing is no longer a “metric for intelligence” or a teachable skill, Herman claims.

    I agree to an extent. Yes, AI can generate competent writing faster than a student pulling an all-nighter. But let’s not pretend this is new. Even in pre-ChatGPT days, students outsourced essays to parents, tutors, and paid services. We were always grappling with academic honesty. What’s different now is the scale of disruption.

    Herman’s deeper question—just how necessary are writing instructors in the age of AI—is far more troubling. Can ChatGPT really replace us? Maybe it can teach grammar and structure well enough for mundane tasks. But writing instructors have a higher purpose: teaching students to recognize the difference between surface-level mediocrity and powerful, persuasive writing.

    Herman himself admits that ChatGPT produces essays that are “adequate” but superficial. Sure, it can churn out syntactically flawless drivel, but syntax isn’t everything. Writing that leaves a lasting impression—“Higher Writing”—is built on sharp thought, strong argumentation, and a dynamic authorial voice. Think Baldwin, Didion, or Nabokov. That’s the standard. I’d argue it’s our job to steer students away from lifeless, task-oriented prose and toward writing that resonates.

    Herman’s pessimism about students’ indifference to rhetorical nuance and literary flair is half-baked at best. Sure, dive too deep into the murky waters of Shakespearean arcana or Melville’s endless tangents, and you’ll bore them stiff—faster than an unpaid intern at a three-hour faculty meeting. But let’s get real. You didn’t go into teaching to serve as a human snooze button. You went into sales, whether you like it or not. And what are you selling? Persona, ideas, and the antidote to chaos.

    First up: persona. It’s not just about writing—it’s about becoming. How do you craft an identity, project it with swagger, and use it to navigate life’s messiness? When students read Oscar Wilde, Frederick Douglass, or Octavia Butler, they don’t just see words on a page—they see mastery. A fully-realized persona commands attention with wit, irony, and rhetorical flair. Wilde nailed it when he said, “The first task in life is to assume a pose.” He wasn’t joking. That pose—your persona—grows stronger through mastery of language and argumentation. Once students catch a glimpse of that, they want it. They crave the power to command a room, not just survive it. And let’s be clear—ChatGPT isn’t in the persona business. That’s your turf.

    Next: ideas. You became a teacher because you believe in the transformative power of ideas. Great ideas don’t just fill word counts; they ignite brains and reshape worldviews. Over the years, students have thanked me for introducing them to concepts that stuck with them like intellectual tattoos. Take Bread and Circuses—the idea that a tiny elite has always controlled the masses through cheap food and mindless entertainment. Students eat that up (pun intended). Or nihilism—the grim doctrine that nothing matters and we’re all here just killing time before we die. They’ll argue over that for hours. And Rousseau’s “noble savage” versus the myth of human hubris? They’ll debate whether we’re pure souls corrupted by society or doomed from birth by faulty wiring like it’s the Super Bowl of philosophy.

    ChatGPT doesn’t sell ideas. It regurgitates language like a well-trained parrot, but without the fire of intellectual curiosity. You, on the other hand, are in the idea business. If you’re not selling your students on the thrill of big ideas, you’re failing at your job.

    Finally: chaos. Most people live in a swirling mess of dysfunction and anxiety. You sell your students the tools to push back: discipline, routine, and what Cal Newport calls “deep work.” Writers like Newport, Oliver Burkeman, Phil Stutz, and Angela Duckworth offer blueprints for repelling chaos and replacing it with order. ChatGPT can’t teach students to prioritize, strategize, or persevere. That’s your domain.

    So keep honing your pitch. You’re selling something AI can’t: a powerful persona, the transformative power of ideas, and the tools to carve order from the chaos. ChatGPT can crunch words all it wants, but when it comes to shaping human beings, it’s just another cog. You? You’re the architect.

    Right?

    Maybe.

    Let’s not get too comfortable in our intellectual trench coats. While we pride ourselves on persona, big ideas, and resisting chaos, we’re up against something far more insidious than plagiarism. AI isn’t just outsourcing thought—it’s rewiring brains. In the Black Mirror episode “Joan Is Awful,” we watch a woman’s life turned into a deepfake soap opera, customized for mass consumption, with every gesture, flaw, and confession algorithmically mined and exaggerated. What’s most horrifying isn’t the surveillance or the celebrity—it’s the flattening. Joan becomes a caricature of herself, optimized for engagement and stripped of depth. Sound familiar?

    This is what AI is doing to writing—and by extension, to thought. The more students rely on ChatGPT, the more their rhetorical instincts, their voice, their capacity for struggle and ambiguity atrophy. Like Joan, they become algorithmically curated versions of themselves. Not writers. Not thinkers. Just language puppets speaking in borrowed code. No matter how persuasive our arguments or electrifying our lectures, we’re still up against the law of digital gravity: if it’s easier, faster, and “good enough,” it wins.

    So what’s the best move? Don’t fight AI—outgrow it. If we’re serious about salvaging human expression, we must redesign how we teach writing. Center the work around experiences AI can’t mimic: in-class writing, collaborative thinking, embodied storytelling, rhetorical improvisation, intellectual risk. Create assignments that need a human brain and reward discomfort over convenience. The real enemy isn’t ChatGPT—it’s complacency. If we let the Joanification of our students continue, we’re not just losing the classroom—we’re surrendering the soul. It’s time to fight not just for writing, but for cognition itself.

  • The Honor Code and the Price Tag: AI, Class, and the Illusion of Academic Integrity

    The Honor Code and the Price Tag: AI, Class, and the Illusion of Academic Integrity

    Returning to the classroom post-pandemic and encountering ChatGPT, I’ve become fixated on what I now call “the battle for the human soul.” On one side, there’s Ozempification—that alluring shortcut. It’s the path where AI-induced mediocrity is the destination, and the journey there is paved with laziness. Like popping Ozempic for quick weight loss and calling it a day, the shortcut to academic success involves relying on AI to churn out lackluster work. Who cares about excellence when Netflix is calling your name, right?

    On the other side, we have Humanification. This is the grueling path that the great orator and abolitionist Frederick Douglass would champion. It’s the “deep work” author Cal Newport writes about in his best-selling books. Humanification happens when we turn away from comfort and instead plunge headfirst into the difficult, yet rewarding, process of literacy, self-improvement, and helping others rise from their own “Sunken Place”—borrowing from Jordan Peele’s chilling metaphor in Get Out. On this path, the pursuit isn’t comfort; it’s meaning. The goal isn’t a Netflix binge but a life with purpose and higher aspirations.

    Reading Tyler Austin Harper’s essay “ChatGPT Doesn’t Have to Ruin College,” I was struck by the same dichotomy of Ozempification on one side of academia and Humanification on the other. Harper, while wandering around Haverford’s idyllic campus, stumbles upon a group of English majors who proudly scoff at ChatGPT, choosing instead to be “real” writers. These students, in a world that has largely tossed the humanities aside as irrelevant, are disciples of Humanification. For them, rejecting ChatGPT isn’t just an academic decision; it’s a badge of honor, reminiscent of Bartleby the Scrivener’s iconic refusal: “I prefer not to.” Let that sink in. Give these students the opportunity to use ChatGPT to write their essays, and they recoil at the thought of such a flagrant self-betrayal. 

    After interviewing students, Harper concludes that using AI in higher education isn’t just a technological issue—it’s cultural and economic. The disdain these students have for ChatGPT stems from a belief that reading and writing transcend mere resume-building or career milestones. It’s about art for art’s sake. But Harper wisely points out that this intellectual snobbery is rooted in privilege: “Honor and curiosity can be nurtured, or crushed, by circumstance.” 

    I had to stop in my tracks. Was I so privileged and naive to think I could preach the gospel of Humanification while unaware that such a pursuit costs time, money, and the peace of mind that one has a luxurious safety net in the event the Humanification quest goes awry? 

    This question made me think of Frederick Douglass, a man who had every reason to have his intellectual curiosity “crushed by circumstance.” In fact, his pursuit of literacy, despite the threat of death, was driven by an unquenchable thirst for knowledge and self-transformation. But Douglass is a hero for the ages. Can we really expect most people, particularly those without resources, to follow that path? Harper’s argument carries weight. Without the financial and cultural infrastructure to support it, aspiring to Humanification isn’t always feasible.

    Consider the tech overlords—the very architects of our screen-addicted dystopia—who wouldn’t dream of letting their own kids near the digital devices they’ve unleashed upon the masses. Instead, they ship them off to posh Waldorf schools, where screens are treated like radioactive waste. There, children are shielded from the brain-rot of endless scrolling and instead are taught the arcane art of cursive handwriting, how to wield an abacus like a mathematician from 500 B.C., and the joys of harvesting kale and beets to brew some earthy, life-affirming root vegetable stew. These titans of tech, flush with billions, eagerly shell out small fortunes to safeguard their offspring’s minds from the very digital claws that are busy eviscerating ours.

    I often tell my students that being rich makes it easier to be an intellectual. Imagine the luxury: you could retreat to an off-grid cabin (complete with Wi-Fi, obviously), gorge on organic gourmet food prepped by your personal chef, and spend your days reading Dostoevsky in Russian and mastering Schubert’s sonatas while taking sunset jogs along the beach. When you emerge back into society, tanned and enlightened, you could boast of your intellectual achievements with ease.

    Harper’s point is that wealth facilitates Humanification. At a place like Haverford, with its “writing support, small classes, and unharried faculty,” it’s easier to uphold an honor code and aspire to intellectual purity. But for most students—especially those in public schools—this is a far cry from reality. My wife teaches sixth grade in the public school system, and she’s shared stories of schools that resemble post-apocalyptic wastelands more than educational institutions. We’re talking mold-infested buildings, chemical leaks, and underpaid teachers sleeping in their cars. Expecting students in these environments to uphold an “honor code” and strive for Humanification? It’s not just unrealistic—it’s insulting.

    This brings to mind Maslow’s hierarchy of needs. Before we can expect students to self-actualize by reading Dostoevsky or rejecting ChatGPT, they need food, shelter, and basic safety. It’s hard to care about literary integrity when you’re navigating life’s survival mode.

    As I dive deeper into Harper’s thought-provoking essay on economic class and the honor code, I can’t help but notice the uncanny parallel to the essay about weight management and GLP-1 drugs my Critical Thinking students tackle in their first essay. Both seem to hinge not just on personal integrity or effort but on a cocktail of privilege and circumstance. Could it be that striving to be an “authentic writer,” untouched by the mediocrity of ChatGPT and backed by the luxury of free time, is eerily similar to the aspiration of achieving an Instagram-worthy body, possibly aided by expensive Ozempic injections?

    It raises the question: Is the difference between those who reject ChatGPT and those who embrace it simply a matter of character, or is it, at least in part, a product of class? After all, if you can afford the luxury of time—time to read Tolstoy and Dostoevsky in your rustic, tech-free cabin—you’re already in a different league. Similarly, if you have access to high-end weight management options like Ozempic, you’re not exactly running the same race as those pounding the pavement on their $20 sneakers. 

    Sure, both might involve personal effort—intellectual or physical—but they’re propped up by economic factors that can’t be ignored. Whether we’re talking about Ozempification or Humanification, it’s clear that while self-discipline and agency are part of the equation, they’re not the whole story. Class, as uncomfortable as it might be to admit, plays a significant role in determining who gets to choose their path—and who gets stuck navigating whatever options are left over.

    I’m sure the issue is more nuanced than that. These are, after all, complex topics that defy oversimplification. But both privilege and personal character need to be addressed if we’re going to have a real conversation about what it means to “aspire” in this day and age.

    Returning to Tyler Austin Harper’s essay, Harper provides a snapshot of the landscape when ChatGPT launched in late 2022. Many professors found themselves swamped with AI-generated essays, which, unsurprisingly, raised concerns about academic integrity. However, Harper, a professor at a liberal-arts college, remains optimistic, believing that students still have a genuine desire to learn and pursue authenticity. He sees the potential for intellectual and personal growth as very much alive—especially in environments like Haverford, where he went to test the waters of his optimism.

    When Harper interviews Haverford professors about ChatGPT violating the honor code, their collective shrug is surprising. They’re seemingly unbothered by the idea of policing students for cheating, as if grades and academic dishonesty are beneath them. The culture at Haverford, Harper implies, is one of intellectual immersion—where students and professors marinate in ideas, ethics, and the contemplation of higher ideals. The honor code, in this rarefied academic air, is almost sacred, as though the mere existence of such a code ensures its observance. It’s a place where academic integrity and learning are intertwined, fueled by the aristocratic mind.

    Harper’s point is clear: The further you rise into the elite echelons of boutique colleges like Haverford, the less you have to worry about ChatGPT or cheating. But when you descend into the more grounded, practical world of community colleges, where students juggle multiple jobs, family obligations, and financial constraints, ChatGPT poses a greater threat to education. This divide, Harper suggests, is not just academic; it’s economic and cultural. The humanities may be thriving in the lofty spaces of elite institutions, but they’re rapidly withering in the trenches where students are simply trying to survive.

    As someone teaching at a community college, I can attest to this shift. My classrooms are filled with students who are not majoring in writing or education. Most of them are focused on nursing, engineering, and business. In this hypercompetitive job market, they simply don’t have the luxury to spend time reading novels, becoming musicologists, or contemplating philosophical debates. They’re too busy hustling to get by. Humanification, as an idea, gets a nod in my class discussions, but in the “real world,” where six hours of sleep is a luxury, it often feels out of reach.

    Harper points out that in institutions like Haverford, not cheating has become a badge of honor, a marker of upper-class superiority. It’s akin to the social cachet of being skinny, thanks to access to expensive weight-loss drugs like Ozempic. There’s a smugness that comes with the privilege of maintaining integrity—an implication that those who cheat (or can’t afford Ozempic) are somehow morally inferior. This raises an uncomfortable question: Is the aspiration to Humanification really about moral growth, or is it just another way to signal wealth and privilege?

    However, Harper complicates this argument when he brings Stanford into the conversation. Unlike Haverford, Stanford has been forced to take the “nuclear option” of proctoring exams, convinced that cheating is rampant. In this larger, more impersonal environment, the honor code has failed to maintain academic integrity. It appears that Haverford’s secret sauce is its small, close-knit atmosphere—something that can’t be replicated at a sprawling institution like Stanford. Harper even wonders whether Haverford is more museum than university—a relic from an Edenic past when people pursued knowledge for its own sake, untainted by the drive for profit or prestige. Striving for Humanification at a place like Haverford may be an anachronism, a beautiful but lost world that most of us can only dream of.

    Harper’s essay forces me to consider the role of economic class in choosing a life of “authenticity” or Humanification. With this in mind, I give my Critical Thinking students the following writing prompt for their second essay:

    In his essay, “ChatGPT Doesn’t Have to Ruin College,” Tyler Austin Harper paints an idyllic portrait of students at Haverford College—a small, intimate campus where intellectual curiosity blooms without the weight of financial or vocational pressures. These students enjoy the luxury of time to nurture their education with a calm, casual confidence, pursuing a life of authenticity and personal growth that feels out of reach for many who are caught in the relentless grind of economic survival.

    College instructors at larger institutions might dream of their own students sharing this love for learning as a transformative journey, but the reality is often harsher. Many students, juggling jobs, family responsibilities, and financial stress, see education not as a space for leisurely exploration but as a means to a practical end. For them, college is a path to better job opportunities, and AI tools like ChatGPT become crucial allies in managing their workload, not threats to their intellectual integrity.

    Critics of ChatGPT may find themselves facing backlash from those who argue that such skepticism reeks of classism and elitism. It’s easy, the rebuttal goes, for the privileged few—with time, resources, and elite educations—to romanticize writing “off the grid” without AI assistance. But for the vast majority of working people, integrating AI into daily life isn’t a luxury—it’s a necessity, on par with reliable transportation, a smartphone, and a clean outfit for the job. Praising analog purity from ivory towers—especially those inaccessible to 99% of Americans—is hardly a serious response to the rise of a transformative technology like AI.

    In the end, we can’t preach Humanification without reckoning with the price tag it carries. The romantic ideal of the “authentic writer”—scribbling away in candlelit solitude, untouched by AI—has become yet another luxury brand, as unattainable for many as a Peloton in a studio apartment. The real battle isn’t simply about moral fiber or intellectual purity; it’s about time, access, and the brutal arithmetic of modern life. To dismiss AI as a lazy shortcut is to ignore the reality that for many students, it’s not indulgence—it’s triage. If the aristocracy of learning survives in places like Haverford, it does so behind a velvet rope. Meanwhile, the rest are left in the algorithmic trenches, cobbling together futures with whatever tools they can afford. The challenge ahead isn’t to shame the Ozempified or canonize the Humanified, but to build an educational culture where everyone—not just the privileged—can afford to aspire.

  • The Future of Writing in the Age of A.I.: A College Essay Prompt

    The Future of Writing in the Age of A.I.: A College Essay Prompt

    INTRODUCTION & CONTEXT
    In the not-so-distant past, writing was a slow, solitary act—a process that demanded time, introspection, and labor. But with the rise of generative AI tools like ChatGPT, Sudowrite, and GrammarlyGO, composition now has a button. Language can be mass-produced at scale, tuned to sound pleasant, neutral, polite—and eerily interchangeable. What once felt personal and arduous is now instantaneous and oddly soulless.

    In “The Great Language Flattening,” Victoria Turk argues that A.I. is training us to speak and write in “saccharine, sterile, synthetic” prose. She warns that our desire to optimize communication has come at the expense of voice, friction, and even individuality. Similarly, Cal Newport’s “What Kind of Writer Is ChatGPT?” insists that while A.I. tools may mimic surface-level structure, they lack the “struggle” that gives rise to genuine insight. Their words float, untethered by thought, context, or consequences.

    But are these critiques overblown? In “ChatGPT Doesn’t Have to Ruin College,” Tyler Austin Harper suggests that the real danger isn’t A.I.—it’s a pedagogical failure. Writing assignments that can be done by A.I. were never meaningful to begin with. Harper argues that educators should double down on originality, reflection, and assignments that resist automation. Meanwhile, in “Will the Humanities Survive Artificial Intelligence?,” the author explores the institutional panic: as machine-generated writing becomes the norm, will critical thinking and close reading—the bedrock of the humanities—be considered obsolete?

    Adding complexity to this discussion, Lila Shroff’s “The Gen Z Lifestyle Subsidy” examines how young people increasingly outsource tasks once seen as rites of passage—cooking, cleaning, dating, even thinking. Is using A.I. to write your essay any different from using DoorDash to eat, Bumble to flirt, or TikTok to learn? And in “Why Even Try If You Have A.I.?,” Joshua Rothman diagnoses a deeper ennui: if machines can do everything better, faster, and cheaper—why struggle at all? What, if anything, is the value of effort in an automated world?

    This prompt asks you to grapple with a provocative and unavoidable question: What is the future of human writing in an age when machines can write for us?


    ASSIGNMENT INSTRUCTIONS

    Write a 1,700-word argumentative essay that answers the following question:

    Should the rise of generative A.I. mark the end of traditional writing instruction—or should it inspire us to reinvent writing as a deeply human, irreplaceable act?

    You must take a clear position on this question and argue it persuasively using at least four of the assigned readings. You are also encouraged to draw on personal experience, classroom observations, or examples from digital culture, but your essay must engage with the ideas and arguments presented in the texts.


    STRUCTURE AND EXPECTATIONS

    Your essay should include the following sections:


    I. INTRODUCTION (Approx. 300 words)

    • Hook your reader with a compelling anecdote, statistic, or image from your own experience with A.I. (e.g., using ChatGPT to brainstorm, cheating, rewriting, etc.).
    • Briefly introduce the conversation surrounding A.I. and the act of writing. Frame the debate: Is writing becoming obsolete? Or is it being reborn?
    • End with a sharply focused thesis that takes a clear, defensible position on the prompt.

    Sample thesis:

    While A.I. can generate fluent prose, it cannot replicate the messiness, insight, and moral weight of human writing—therefore, the role of writing instruction should not be reduced, but radically reinvented to prioritize voice, thought, and originality.


    II. BACKGROUND AND DEFINITIONAL FRAMING (Approx. 250 words)

    • Define key terms like “generative A.I.,” “writing instruction,” and “voice.” Be precise.
    • Briefly explain how generative A.I. systems (like ChatGPT) work and how they are currently being used in educational and workplace settings.
    • Set up the stakes: Why does this conversation matter? What do we lose (or gain) if writing becomes largely machine-generated?

    III. ARGUMENT #1 – A.I. Is Flattening Language (Approx. 300 words)

    • Engage deeply with “The Great Language Flattening” by Victoria Turk.
    • Analyze how A.I.-generated language may lead to a homogenization of voice, tone, and personality.
    • Provide examples—either from your own experiments with A.I. or from the essay—that illustrate this flattening.
    • Connect to Newport’s argument: If writing becomes too “safe,” does it also become meaningless?

    IV. ARGUMENT #2 – The Need for Reinvention, Not Abandonment (Approx. 300 words)

    • Use Harper’s “ChatGPT Doesn’t Have to Ruin College” and the humanities-focused essay to argue that A.I. doesn’t spell the death of writing—it exposes the weakness of uninspired assignments.
    • Defend the idea that writing pedagogy should evolve by embracing personal narratives, critical analysis, and rhetorical complexity—tasks that A.I. can’t perform well (yet).
    • Address the counterpoint that some students prefer to use A.I. out of necessity, not laziness (e.g., time constraints, language barriers).

    V. ARGUMENT #3 – A Culture of Outsourcing (Approx. 300 words)

    • Bring in Lila Shroff’s “The Gen Z Lifestyle Subsidy” to examine the cultural shift toward convenience, automation, and outsourcing.
    • Ask the difficult question: If we already outsource our food, our shopping, our dates, and even our emotions (via TikTok), isn’t outsourcing our writing the logical next step?
    • Argue whether this mindset is sustainable—or whether it erodes something essential to human development and self-expression.

    VI. ARGUMENT #4 – Why Write at All? (Approx. 300 words)

    • Engage with Joshua Rothman’s existential meditation on motivation in “Why Even Try If You Have A.I.?”
    • Discuss the psychological toll of competing with A.I.—and whether effort still has value in an age of frictionless automation.
    • Make the case for writing as not just a skill, but a process of becoming: intellectual, emotional, and ethical maturation.

    VII. COUNTERARGUMENT AND REBUTTAL (Approx. 250 words)

    • Consider the argument that A.I. tools democratize writing by making it easier for non-native speakers, neurodiverse students, and time-strapped workers.
    • Acknowledge the appeal and utility of A.I. assistance.
    • Then rebut: Can ease and access coexist with depth and authenticity? Where is the line between tool and crutch? What happens when we no longer need to wrestle with words?

    VIII. CONCLUSION (Approx. 200 words)

    • Revisit your thesis in a way that reflects the journey of your argument.
    • Reflect on your own evolving relationship with writing and A.I.
    • Offer a call to action for educators, institutions, or individuals: What kind of writers—and thinkers—do we want to become in the A.I. age?

    REQUIREMENTS CHECKLIST

    • Word Count: 1,700 words minimum
    • Minimum of four cited sources from the six assigned
    • Direct quotes and/or paraphrases with MLA-style in-text citations
    • Works Cited page using MLA format
    • Clear argumentative thesis
    • At least one counterargument with a rebuttal
    • Original title that reflects your position

    ESSAY EVALUATION RUBRIC (Simplified)

    • Thesis & Argument: Strong, debatable thesis; clear stance maintained throughout
    • Use of Sources: Effective integration of at least four assigned texts; accurate and meaningful engagement with the ideas presented
    • Organization & Flow: Logical structure; strong transitions; each paragraph develops a single, coherent idea
    • Voice & Style: Clear, vivid prose with a balance of analytical and personal voice
    • Depth of Thought: Insightful analysis; complex thinking; engagement with nuance and counterpoints
    • Mechanics & MLA Formatting: Correct grammar, punctuation, and MLA citations; properly formatted Works Cited page
    • Word Count: Meets or exceeds minimum word requirement

    MLA Citations (Works Cited Format):

    Turk, Victoria. “The Great Language Flattening.” Wired, Condé Nast, 21 Apr. 2023, www.wired.com/story/the-great-language-flattening/.

    Harper, Tyler Austin. “ChatGPT Doesn’t Have to Ruin College.” The Atlantic, Atlantic Media Company, 27 Jan. 2023, www.theatlantic.com/technology/archive/2023/01/chatgpt-college-students-ai-writing/672879/.

    Shroff, Lila. “The Gen Z Lifestyle Subsidy.” The Cut, New York Media, 25 Oct. 2023, www.thecut.com/article/gen-z-lifestyle-subsidy-tiktok.html.

    Burnett, D. Graham. “Will the Humanities Survive Artificial Intelligence?” The New York Review of Books, 8 Feb. 2024, www.nybooks.com/articles/2024/02/08/will-the-humanities-survive-artificial-intelligence-burnett/.

    Newport, Cal. “What Kind of Writer Is ChatGPT?” The New Yorker, Condé Nast, 16 Jan. 2023, www.newyorker.com/news/essay/what-kind-of-writer-is-chatgpt.

    Rothman, Joshua. “Why Even Try If You Have A.I.?” The New Yorker, Condé Nast, 10 July 2023, www.newyorker.com/magazine/2023/07/10/why-even-try-if-you-have-ai.


    OPTIONAL DISCUSSION STARTERS FOR CLASSROOM USE

    To help students brainstorm and debate, consider using the following prompts in small groups or class discussions:

    1. Is it “cheating” to use A.I. if the result is better than what you could write on your own?
    2. Have you ever used A.I. to help write something? Were you satisfied—or unsettled?
    3. If everyone uses A.I. to write, will “good writing” become meaningless?
    4. Should English professors teach students how to use A.I. ethically, or ban it outright?
    5. What makes writing feel human?
  • How to Pretend You’re Still Alive at Week Eleven

    How to Pretend You’re Still Alive at Week Eleven

    After ninety minutes of hammering out lesson plans in my academic cave—also known as my college office—I realized my legs had entered that special purgatory between rigor mortis and a blood clot. So I stood up, performed a stretch that felt like a rusty marionette being yanked upright, and took a walk down the hallway.

    Out in our little shared faculty suite, I found my colleague from Foreign Languages hunched behind a desk like a war-weary translator decoding enemy communiqués. She looked up briefly from a pile of student papers, and when I asked how she was holding up, she gave the most honest answer academia ever produces: “Exhausted.” It was 2 p.m., and she still had a five-hour sentence left on her campus shift. I nodded grimly. The semester was two-thirds over, the point in the academic calendar when everything begins to sag—mood, posture, faith in humanity.

    “I get it,” I told her. “The late-semester ennui is baked into the profession.” I’ve been battling it for decades. It seeps into your bones and makes your students shuffle into class like underfed extras from a Civil War hospital drama—late, listless, and visibly haunted by their own poor decisions. Their faces are a collage of sleep deprivation, existential dread, and the dawning realization that the syllabus waits for no one.

    This is when you have to throw them a curveball. You can’t coast on grammar worksheets and MLA citation reviews. The status quo is the problem. I tell them to try yoga, breathing exercises, isometrics. If they’re feeling especially apocalyptic, I might even roll a zombie movie and spin it as a cautionary tale about pandemics and the erosion of civic trust. It’s a reach—but sometimes you need to swing for the fences, even if all you hit is a foul ball.

    Most of these tricks will fail. The semester will end the way all semesters do—in caffeine, chaos, and emotional triage. But at least you went down swinging. At least you reminded yourself, in that bleak final inning, that you’re not just a grading machine—you’re still alive.

  • Cork Dorks and the Road to Nowhere

    Cork Dorks and the Road to Nowhere

    In the mid-1980s, I funded my so-called college education as an English major by slinging bottles at Jackson’s Wine & Spirits in Berkeley, strategically nestled near the ritzy Claremont Hotel on Ashby Avenue. The job itself was an exercise in absurdity, not because of the work, but because of my coworkers—an ensemble of walking encyclopedias who were grossly overqualified to stock shelves and ring up Chardonnay. We’re talking PhDs in linguistics, anthropology, chemistry, physics, philosophy, and musicology—each degree worth less than a tenured spot in a clown college, yet brandished like medals in an intellectual arms race. These were people who read Flaubert in the original French and practically spat on anyone who dared pick up an English translation. The mere thought of working for a corporation or any institution that might impose a dress code or, heaven forbid, expect them to “synergize” was beneath their dignity. Selling fine wines and imported beers became their ironic playground, a place where they could cultivate a sense of elitism thicker than the crust on a neglected wheel of Brie. Their unofficial motto? “Service with a smirk.”

    These intellectual peacocks, not particularly rich or buff, took immense pride in flexing the one muscle they deemed worthy: the brain. Their idea of a power pose wasn’t a bulging bicep but a razor-sharp quip delivered with surgical precision. For them, intellectual one-upmanship was the true path, with the mind as the muscle to be sculpted. Their version of bodybuilding legend Sergio Oliva’s “Myth Pose” was a finely tuned discussion about Adorno’s critique of culture or a multi-hour debate comparing two French Beaujolais, all sprinkled with quotes from Camus. They taught me that flexing didn’t require dumbbells; it just needed the right amount of pretension and a willingness to alienate everyone around you.

    During slow hours, we gathered near the cash registers like a cabal of cynical sages, dissecting the philosophical curiosities of Nietzsche, the overwrought bombast of Wagner, and the labyrinthine despair of Kafka. The job became less of an occupation and more of a sanctuary for delusional self-importance. I found myself believing that I was somehow smarter than most, despite the glaring fact that I was working in a retail wine store with zero career prospects. But who needed money when you could live on the heady fumes of intellectual superiority? The longer I marinated in that environment, the more I realized I was becoming gloriously, irreparably unemployable.

    While shuffling between dead-end teaching gigs at various colleges—where my enthusiasm quickly flatlined—I always found solace in returning to my wine snob cocoon. There, surrounded by these proud misfits who’d traded ambition for esoteric chatter, I could pretend that debating the nuances of Hegel was more fulfilling than climbing any traditional career ladder. Truth be told, I might’ve happily stagnated in that dead-end job forever if fate hadn’t intervened in the form of an administrator at Merritt College who inexplicably liked my teaching style. He pulled me aside one day and whispered that there was a full-time gig open at some desert outpost called Bakersfield. He and his colleagues were prepared to write me “sterling letters of recommendation” to ensure I got the job.

    “What’s Bakersfield like?” I asked, a vague unease bubbling up as memories of my family stopping there to gas up our station wagon drifted into my mind like a bad smell.

    “Don’t worry about that,” he replied, his tone thick with the kind of unearned confidence that only comes from never having to live in a place like Bakersfield. “Just move your butt down there and take things as they come.”

    And so, in the span of a few short months, I traded intellectual elitism for a one-way ticket to the middle of nowhere, chasing a full-time paycheck while my wine store days—and the delusions that came with them—slowly receded into the rearview mirror.