Category: Education in the AI Age

  • How to Grade Students’ Use of ChatGPT in Preparing for Their Essay

    As instructors, we need to encourage students to meaningfully engage with ChatGPT. How do we do that? First, we need the essay prompt:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    Second, we need a detailed “how-to” assignment that teaches students to engage critically and transparently with AI tools like ChatGPT during the writing process—in the context of the World War Z essay prompt.


    Assignment Title: How to Think With, Not Just Through, AI

    Overview:

    This assignment component requires you to document, reflect on, and revise your use of ChatGPT (or any other AI writing tool) while developing your World War Z analytical essay. Rather than treating AI like a magic trick that produces answers behind the curtain, this assignment asks you to lift the curtain and analyze the performance. What did the AI get right? Where did it fall short? And—most importantly—how did you shape the work?

    This reflection will be submitted alongside your final essay and counts for 15% of your essay grade. It will be evaluated based on transparency, clarity, and the depth of your analysis.


    Step-by-Step Instructions:

    Step 1: Prompt the Machine

    Before you write your own thesis, ask ChatGPT a version of the following:

    “Using World War Z as a cultural metaphor, write a thesis and outline for an essay that explores the collapse of public trust and the failure of global cooperation. Use at least two of the following sources: Jonathan Haidt, Ed Yong, Seyla Benhabib, and Zeynep Tufekci.”

    You may modify the prompt, but record it exactly as you typed it. Save the AI’s entire response.


    Step 2: Analyze the Output

    Copy and paste the AI’s output into a Google Doc. Underneath it, write a 300–400 word critique that answers the following:

    • What parts of the AI output were useful? (Thesis, outline, phrasing, examples, etc.)
    • What felt generic, vague, or factually inaccurate?
    • Did the AI capture the tone or depth you want in your own work? Why or why not?
    • How did this output influence the direction or shape of your own ideas, either positively or negatively?

    📌 Tip: If it gave you clichés like “in today’s world…” or “communication is key to society,” call them out! If it helped you identify a strong metaphor or organizational structure, give it credit—but explain how you built on it.


    Step 3: Revise the Output (Optional But Encouraged)

    Take one paragraph from the AI’s draft (thesis, topic sentence, body paragraph—your choice), and rewrite it into a stronger version. This is your chance to show:

    • Stronger voice
    • Clearer argument
    • Better use of evidence
    • More sophisticated style

    Label the two versions:

    • Original AI Version
    • Your Revision

    📌 This helps demonstrate your ability to evaluate and improve digital writing, a crucial part of critical thinking in the AI era.


    Step 4: Reflection Log (Post-Essay)

    After completing your final essay, write a short reflection (250–300 words) responding to these questions:

    • What role did AI play in the development of your essay?
    • How did you decide what to keep, change, or discard?
    • Do you feel you relied on AI too much, too little, or just enough?
    • How has this process changed your understanding of how to use (or not use) ChatGPT in academic work?

    Submission Format:

    Your AI Reflection Packet should include the following:

    1. The original prompt you gave ChatGPT
    2. The full AI-generated output
    3. Your 300–400 word critique of the AI’s work
    4. (Optional) Side-by-side paragraph: AI version + your revision
    5. Your 250–300 word final reflection

    Submit as a single Google Doc or PDF titled:
    LastName_AIReflection_WWZ
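For instructors collecting these packets in bulk, the naming convention above can be checked mechanically. The sketch below is purely illustrative (the regular expression and helper name are assumptions, not part of the assignment); it accepts an optional .pdf extension since Google Doc exports vary.

```python
import re

# Hypothetical checker for the "LastName_AIReflection_WWZ" convention.
# The character class allows apostrophes and hyphens in last names.
PATTERN = re.compile(r"[A-Za-z'\-]+_AIReflection_WWZ(\.pdf)?")

def follows_convention(filename: str) -> bool:
    """Return True if the filename matches the required pattern exactly."""
    return PATTERN.fullmatch(filename) is not None
```

A quick pass over a downloads folder with this helper flags misnamed files before grading begins.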


    Grading Criteria (15 points):

    Criteria                                      Points
    Honest and detailed documentation                  3
    Thoughtful analysis of AI output                   4
    Evidence of critical evaluation                    3
    (Optional) Quality of paragraph revision           2
    Insightful final reflection                        3
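For instructors tallying this rubric in a gradebook script, the arithmetic can be sketched as follows. Only the category names and point caps come from the rubric itself; the helper is a hypothetical convenience, not part of the assignment.

```python
# Rubric categories and their maximum points, as listed in the
# grading criteria (15 points total).
RUBRIC = {
    "Honest and detailed documentation": 3,
    "Thoughtful analysis of AI output": 4,
    "Evidence of critical evaluation": 3,
    "(Optional) Quality of paragraph revision": 2,
    "Insightful final reflection": 3,
}

def total_score(earned: dict) -> int:
    """Clamp each category to its cap and sum the total; the optional
    revision simply contributes 0 if a student skips it."""
    return sum(min(earned.get(cat, 0), cap) for cat, cap in RUBRIC.items())
```

Note that a perfect packet that skips the optional revision tops out at 13 of 15 points, which keeps the revision genuinely optional rather than silently penalized.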

  • How to Use a Process Journal to Teach Critical Thinking to Students

    One of the most urgent challenges in today’s writing classroom is not getting students to submit essays—it’s getting them to think while doing it. As generative AI continues to automate grammar, structure, and even “voice,” the real question is this: How do we reward intellectual work in an age when polished prose can be faked?

    One answer is deceptively simple: grade the thinking, not just the product.

    To do that, we must build assignments that expose the messy, iterative, and reflective nature of real analysis. We’re talking about work that requires metacognition, self-assessment, and visible decision-making—tools like reflective annotations, process journals, and “thinking out loud” assignments. These formats ask students not just to present a claim but to show how they arrived at it.

    Let’s take the following essay prompt as a case study:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    To ensure students are doing the cognitive heavy lifting, pair this prompt with a process journal designed to track how students analyze, revise, and reflect. Here’s how that works:


    Assignment Title: Thinking in the Rubble: A Process Journal for the Collapse of Trust Essay

    Overview:
    As students build their World War Z argument, they’ll also keep a process journal—a candid record of how they think, doubt, change direction, and use (or resist) AI tools. Think of it as a behind-the-scenes cut of their essay in the making. The journal is worth 20% of the final grade and will be assessed for clarity, critical insight, and honest engagement with the writing process.


    Journal Requirements:

    1. Reflective Annotations (Pre-Writing)

    Choose one paragraph from each of the three sources you plan to use. For each, write a 4–5 sentence annotation addressing:

    • Why you chose it
    • What it reveals about trust, misinformation, or institutional failure
    • How you might use it in your essay

    📌 Goal: Show how you’re thinking with your sources—not just cherry-picking quotes.


    2. Thesis Evolution Timeline

    Document your thesis at 2–3 stages of development. For each version:

    • State your working thesis (even if it’s a mess)
    • Explain what caused you to change or clarify it
    • Note the moment of insight or struggle that sparked the revision

    📌 Goal: Track the intellectual arc of your argument.


    3. Thinking Out Loud Log

    Choose one option:

    • Audio: Record a 3–5 minute voice memo in which you talk through a draft issue (e.g., integrating a source, clarifying your angle, or refining a counterargument)
    • Written: Compose a 300-word journal entry about a problem spot in your draft and how you’re trying to fix it

    📌 Goal: Reveal the inner dialogue behind your writing decisions.


    4. AI Transparency Statement (If Applicable)

    If you used ChatGPT or any AI tool at any point, briefly document:

    • Your prompt(s)
    • The output you received
    • What you kept, changed, or rejected
    • Why

    📌 Goal: Reflect on AI’s influence—not to punish, but to encourage digital literacy and self-awareness.


    5. Final Reflection (Post-Essay, 300 Words)

    After submitting your essay, write a closing reflection that answers:

    • What new insight did you gain about public trust or misinformation?
    • What was the hardest part of the process—and how did you push through?
    • What part of your final paper are you proudest of, and why?

    📌 Goal: Practice self-assessment and connect the work to broader learning.


    Submission Format:

    Submit as a single Google Doc or PDF titled:
    LastName_ThinkingInTheRubble


    Assessment Criteria (20 Points Total):

    • Depth and honesty of reflection
    • Evidence of critical engagement with readings and ideas
    • Clear documentation of thesis development and revision
    • Intellectual transparency (especially regarding AI use)
    • Clarity, specificity, and personal insight across all entries

    This process journal does more than scaffold an essay—it teaches students how to think. And more importantly, it gives instructors a way to see that thinking, reward it, and design grading practices that can’t be hijacked by a chatbot with decent syntax.

  • Teaching in the Age of Automation: Reclaiming Critical Thinking in an AI World

    Preface:

    As generative AI tools like ChatGPT become embedded in students’ academic routines, we are confronted with a profound teaching challenge: how do we preserve critical thinking, reading, and original argumentation in a world where automation increasingly substitutes for intellectual effort?

    This document outlines a proposal shaped by conversations among college writing faculty who have observed students using AI not only to write their essays but also to interpret readings and “read” for them. We are working with a post-pandemic generation whose learning trajectories have been disrupted, whose reading habits were never fully formed, and who now approach writing assignments as tasks to be completed with the help of digital proxies.

    Rather than fight a losing battle of prohibition, this proposal suggests a shift in assignment design, grading priorities, and classroom methodology. The goal is not to eliminate AI but to reclaim intellectual labor by foregrounding process, transparency, and student-authored insight.

    What follows:

    • A brief analysis of how current student behavior around AI reflects broader educational and cognitive shifts
    • A set of four guiding pedagogical questions
    • Specific, implementable summative assignment models that resist outsourcing
    • A redesigned version of an existing World War Z-based argumentative essay that integrates AI transparency and metacognitive reflection
    • What a 12-chapter handbook might look like

    This proposal invites our department to move beyond academic panic toward pedagogical adaptation—embracing AI as a classroom reality while affirming the irreplaceable value of human thought, voice, and integrity.

    Conversations about the Teaching Crisis

    In recent conversations, my colleagues and I have been increasingly focused on our students’ use of ChatGPT—not just as a writing assistant, but as a way to outsource the entire process of reading, analyzing, and interpreting texts. Many students now use AI not only to draft essays in proper MLA format, but also to “read” the assigned material for them. This raises significant concerns about the erosion of critical thinking, reading, and writing skills—skills that have traditionally been at the heart of college-level instruction.

    We’re witnessing the results of a disrupted educational timeline. Many of our students lost up to two years of formal schooling during the pandemic. They’ve come of age on smartphones, often without ever having read a full book, and they approach reading and writing as chores to be automated. Their attention spans are fragmented, shaped by a digital culture that favors swipes and scrolls over sustained thought.

    As instructors who value and were shaped by deep reading and critical inquiry, we now face a student population that sees AI not as a tool for refinement but as a lifeline to survive academic expectations. And yet, we recognize that AI is not going away—on the contrary, our students will almost certainly use it in professional and personal contexts long after college.

    This moment demands a pedagogical shift. If we want to preserve and teach critical thinking, we need to rethink how we design assignments, how we define originality, and how we integrate AI into our classrooms with purpose and transparency. We’re beginning to ask the following questions, which we believe should guide our department’s evolving approach:


    1. What can we do to encourage critical thinking and measure that thinking in a grade?

    We might assign work that requires metacognition, reflection, and student-generated analysis—such as reflective annotations, process journals, or “thinking out loud” assignments where students explain their reasoning. Grading could focus more on how students arrived at their conclusions, not just the final product.


    2. How can we teach our students to engage with ChatGPT in a meaningful way?

    We can require students to document and reflect on their use of AI, including what they prompted, what they accepted or rejected, and why. Assignments can include ChatGPT output analysis—asking students to critique what AI produces and revise it meaningfully.


    3. How can we use ChatGPT in class to show them how to use it more effectively?

    We could model live interactions with ChatGPT in class, showing students how to improve their prompts, evaluate responses, and push the tool toward more nuanced thinking. This becomes an exercise in rhetorical awareness and digital literacy, not cheating.


    4. What kind of summative assignment should we give, perhaps as an alternative to the conventional essay, to measure their Student Learning Outcomes?

    As the use of AI tools like ChatGPT becomes increasingly integrated into students’ writing habits, the traditional essay—as a measure of reading comprehension, original thought, and language skills—needs thoughtful revision. If students are using AI to generate first drafts, outlines, or even entire essays, then evaluating the final product alone no longer gives us an accurate picture of what students have actually learned.

    We need summative assignments that foreground the process, require personal intellectual labor, and make AI usage transparent rather than concealed. The goal is to design assignments that reveal student thinking—how they engage with material, synthesize ideas, revise meaningfully, and make decisions about voice, purpose, and argumentation.

    To do this, we can shift the summative focus toward metacognitive reflection, multi-modal composition, and oral or visual demonstration of learning. These formats allow us to better assess Student Learning Outcomes such as critical thinking, rhetorical awareness, digital literacy, and authentic engagement with course content.


    4 Alternative Summative Assignment Ideas:


    1. The AI Collaboration Portfolio

    Description:
    Students submit a portfolio that includes:

    • Initial AI-generated output based on a prompt they created
    • A fully revised human-authored version of that piece
    • A reflective essay (500–750 words) explaining what they kept, changed, or rejected from the AI’s draft and why.

    SLOs Assessed:

    • Critical thinking
    • Rhetorical awareness
    • Digital literacy
    • Ability to revise and self-assess


    2. In-Class Defense of a ChatGPT Essay

    Description:
    Students submit an AI-assisted essay ahead of time. Then, in a timed, in-class setting (or via recorded video), they defend the major claims of the essay, explaining the reasoning, evidence, and stylistic choices as if they wrote it themselves—because they should have revised and understood it thoroughly.

    SLOs Assessed:

    • Comprehension
    • Argumentation
    • Oral communication
    • Ownership of ideas

    3. Critical Reading Response with AI Fact-Check Layer

    Description:
    Students choose a short essay, op-ed, or excerpt from a class reading and:

    • Write a 400–600 word response analyzing the author’s argument
    • Ask ChatGPT to summarize or interpret the same reading
    • Compare their own analysis with the AI’s, noting differences in tone, logic, accuracy, and insight

    SLOs Assessed:

    • Close reading
    • Critical analysis
    • Evaluating sources (human and AI)
    • Writing with clarity and purpose

    4. Personal Ethos Narrative + AI’s Attempt

    Description:
    Students write a personal narrative essay centered on a core belief, a formative experience, or a challenge. Then, they prompt ChatGPT to write the “same” story using only the basic facts. Finally, they compare the two and reflect on what makes writing personal, authentic, and emotionally compelling.

    SLOs Assessed:

    • Self-expression
    • Voice and tone
    • Audience awareness
    • Critical thinking about language and identity

    Original Writing Prompt That Needs to Be Updated for the AI Era:

    In World War Z, a global pandemic rapidly spreads, unleashing chaos, institutional breakdown, and the fragmentation of global cooperation. Though fictional, the film can be read as an allegory for the very real dysfunction and distrust that characterized the COVID-19 pandemic. Using World War Z as a cultural lens, write an essay in which you argue how the film metaphorically captures the collapse of public trust, the dangers of misinformation, and the failure of collective action in a hyper-polarized world. Support your argument with at least three of the following sources: Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Ed Yong’s “How the Pandemic Defeated America,” Seyla Benhabib’s “The Return of the Sovereign,” and Zeynep Tufekci’s “We’re Asking the Wrong Questions of Facebook.”

    This prompt invites you to write a 1,700-word argumentative essay in which you analyze World War Z as a metaphor for mass anxiety. Develop an argument that connects the film’s themes to contemporary global challenges such as:

    • The COVID-19 pandemic and fear of viral contagion
    • Global migration driven by war, poverty, and climate change
    • The dehumanization of “The Other” in politically polarized societies
    • The fragility of global cooperation in the face of crisis
    • The spread of weaponized misinformation and conspiracy

    Your thesis should not simply argue that World War Z is “about fear”—it should claim what kind of fear, why it matters, and what the film reveals about our modern condition. You may focus on one primary fear or compare multiple forms of crisis (e.g., pandemic vs. political polarization, or migration vs. misinformation).

    Use at least three of the following essays as research support:

    1. Jonathan Haidt, “Why the Past 10 Years of American Life Have Been Uniquely Stupid” (The Atlantic)
      —A deep dive into how social media has fractured trust, created echo chambers, and undermined democratic cooperation.
    2. Ed Yong, “How the Pandemic Defeated America” (The Atlantic)
      —An autopsy of institutional failure and public distrust during COVID-19, including how the virus exposed deep structural weaknesses.
    3. Seyla Benhabib, “The Return of the Sovereign: Immigration and the Crisis of Globalization” (Project Syndicate)
      —Explores the backlash against global migration and the erosion of human rights amid rising nationalism.
    4. Zeynep Tufekci, “We’re Asking the Wrong Questions of Facebook” (The New York Times)
      —An analysis of how misinformation spreads virally, creating moral panics and damaging collective reasoning.

    Requirements:

    • Use MLA format
    • 1,700 words
    • Quote directly from World War Z (film dialogue, plot events, or visuals)
    • Integrate at least three of the sources above with citation
    • Present a counterargument and a rebuttal

    To turn this already strong prompt into a more effective summative assignment—especially in the age of AI writing tools like ChatGPT—we need to preserve the intellectual rigor of the original task while redesigning its structure to foreground student thinking and reduce the possibility of full outsourcing.

    The solution isn’t to eliminate AI tools, but to design assignments that make invisible thinking visible, emphasize process and synthesis, and require student-authored insights that AI cannot fake.

    Below is a revised, multi-part assignment that integrates World War Z and the selected texts while enhancing critical thinking, transparency of process, and AI accountability.


    Revised Summative Assignment Title:

    World War Z and the Collapse of Trust: A Multi-Stage Inquiry into Fear, Crisis, and Collective Breakdown


    Assignment Structure:

    Part 1: AI Collaboration Log (300–400 words, submitted with final essay)

    Before drafting, students will engage with ChatGPT (or another AI tool) to generate:

    • A summary of World War Z as a cultural allegory
    • A brainstormed list of thesis statements based on the themes listed
    • An AI-generated outline or argument plan

    Students must then reflect:

    • What ideas were helpful, and why?
    • What ideas felt generic, reductive, or inaccurate?
    • What did you reject or reshape, and how?
    • Did the AI miss anything crucial that you added yourself?

    📍Purpose: Reinforces transparency and encourages rhetorical self-awareness. It also lets you see whether students are thinking with the AI or hiding behind it.


    Part 2: Draft + Peer Critique (optional but encouraged)

    Students submit a rough draft and exchange feedback focusing on:

    • Depth of metaphorical analysis
    • Quality of integration between sources and film
    • Presence of original insight vs. cliché or summary

    📍Purpose: Encourages revision and demonstrates development. Peer readers can help flag vague AI language or unsupported generalizations.


    Part 3: Final Essay (1,200–1,300 words)

    Write a sustained, argumentative essay that:

    • Analyzes World War Z as a metaphor for a specific contemporary fear
    • Draws from at least two of the provided sources, but ideally three
    • Provides detailed evidence from the film (specific dialogue, visuals, character arcs)
    • Engages with a counterargument and offers a clear rebuttal
    • Demonstrates critical thinking, synthesis, and voice

    📍Changes from original: Slightly shorter word count, but denser expectations for insight. The counterargument now isn’t just a checkbox—it’s a chance to showcase rhetorical skill.


    Part 4: Metacognitive Postscript (200–300 words)

    At the end of the final essay, students write a short reflection answering:

    • What did you learn from comparing human analysis with AI-generated ideas?
    • What part of your argument is most your own?
    • What was difficult or challenging in developing your claim?
    • How do you now see the role of fear in shaping public response to crisis?

    📍Purpose: Makes thinking visible. Encourages students to take ownership of their learning and connect it to broader themes.


    Why This Works as a Better Summative Assignment:

    1. Harder to Outsource: The process-based structure (log, reflection, critique) demands personalized engagement and critical self-awareness.
    2. SLO-Rich: Students demonstrate close reading, source synthesis, rhetorical control, metacognition, and original thought.
    3. AI-Literate: Rather than punish students for using AI, it teaches them how to interrogate and surpass its output.
    4. Flexible for Diverse Thinkers: Students can lean into what resonates—fear of misinformation, loss of global trust, migration panic—without writing a generic “this movie is about fear” paper.

    Here is what a handbook might look like as a chapter outline:

    Teaching in the Age of Automation: Reclaiming Critical Thinking in an AI World


    Chapter 1: The New Landscape of Student Writing

    A critical overview of how generative AI, digital distractions, and post-pandemic learning gaps are reshaping the habits, assumptions, and skill sets of today’s college students.


    Chapter 2: From Automation to Apathy: The Crisis of Critical Thinking

    Examines the shift from student-generated ideas to AI-generated content and how this impacts intellectual risk-taking, reading stamina, and analytical depth.


    Chapter 3: ChatGPT in the Classroom: Enemy, Ally, or Mirror?

    Explores the pedagogical implications of AI writing tools, with a balanced look at their risks and potential when approached with rhetorical transparency and academic integrity.


    Chapter 4: Rethinking the Essay: Process Over Product

    Makes the case for redesigning writing assignments to prioritize process, revision, metacognition, and student ownership—rather than polished output alone.


    Chapter 5: Designing Assignments that Resist Outsourcing

    Outlines concrete assignment types that foreground thinking: “think out loud” tasks, AI comparison prompts, collaborative revision logs, and reflection-based writing.


    Chapter 6: Teaching the AI-Literate Writer

    Guides instructors in teaching students how to use AI critically—not as a ghostwriter, but as a heuristic tool. Includes lessons on prompting, critiquing, and revising AI output.


    Chapter 7: From Plagiarism to Participation: Reframing Academic Integrity

    Redefines what counts as authorship, originality, and engagement in a world where content can be instantly generated but not meaningfully owned without human input.


    Chapter 8: The New Reading Crisis

    Addresses the rise of “outsourced reading” via AI summarizers and how to reignite students’ engagement with texts through annotation, debate, and collaborative interpretation.


    Chapter 9: Summative Assessment in the Age of AI

    Presents summative assignment models that include AI collaboration portfolios, in-class defenses, metacognitive postscripts, and multi-modal responses.


    Chapter 10: World War Z and the Collapse of Public Trust (Case Study)

    A deep dive into a revised, AI-aware assignment based on World War Z—modeling how to blend pop culture, serious research, and transparent student process.


    Chapter 11: Implementing Department-Wide Change

    Practical strategies for departments to align curriculum, rubrics, and policies around process-based assessment, digital literacy, and instructor training.


    Chapter 12: The Future of Writing in the Post-Human Classroom

    Speculative but grounded reflections on where we’re headed—balancing AI fluency with the irreducible value of human voice, curiosity, and critical resistance.

  • Lessons Learned from the Ring Light Apocalypse

    During lockdown, I never saw my wife more wrung out, more spiritually flattened, than the months her middle school forced her into the digital gladiator pit of live Zoom instruction. Every weekday morning, she stood before a pair of glaring monitors like a soldier manning twin turrets. At her feet, the giant ring light—a luminous, tripod-legged parasite—waited patiently to stub toes and sabotage serenity. It wasn’t just a lighting fixture; it was a metaphor for the pandemic’s unwanted intrusion into every square inch of our domestic life.

    My wife’s battle didn’t end with her students. She also took it upon herself to launch our twin daughters, then fifth-graders, into their own virtual classrooms—equally chaotic, equally doomed. I remember walking past their screens, peering at those sad little Brady Bunch tiles of glitchy faces and frozen smiles and thinking, This isn’t going to work. It didn’t feel like school. It felt like a pathetic simulation of order run by people trying to pilot a burning zeppelin from their kitchen tables.

    I, by contrast, got off scandalously easy. I teach college. My courses were asynchronous, quietly nestled in Canvas like pre-packed emergency rations. No live sessions. No tech panics. Just optional Zoom office hours, which no one attended. I sat in my garage doing kettlebell swings like a suburban monk, then retreated inside to play piano in the filtered afternoon light. The pandemic, for me, was a preview of early retirement: low-contact, low-stakes, and high in self-righteous tranquility.

    My wife envied me. She joked that teaching Zoom classes was like having your teeth drilled by a sadist who lectures you on standardized testing while fumbling with the pliers. And I laughed—too hard, because it wasn’t really a joke.

    The pandemic cracked open a truth I still wince at: the great domestic imbalance. I do chores, yes. I wipe counters, haul laundry, load the dishwasher. But my wife does the emotional heavy lifting—the million invisible tasks of motherhood, schooling, comforting, coordinating. During lockdown, that imbalance stopped being abstract. It stared me in the face.

    For me, quarantine was a hermit’s holiday. For her, it was a battlefield with bad Wi-Fi. And while I’m back to teaching and she’s back to something closer to normal, I haven’t forgotten the ring light, the glazed stare, or the guilt that hums quietly like a broken refrigerator in the back of my mind.

  • Two Student Learning Outcomes to Encourage Responsible Use of AI Tools in College Writing Classes

    As students increasingly rely on AI writing tools—sometimes even using one tool to generate an assignment and another to rewrite or “launder” it—we must adapt our teaching strategies to stay aligned with these evolving practices. To address this shift, I propose the following two updated Student Learning Outcomes that reflect the current landscape of AI-assisted writing:

    Student Learning Outcome #1: Using AI Tools Responsibly

    Students will integrate AI tools into their writing assignments in ways that enhance learning, demonstrate critical thinking, and reflect ethical and responsible use of technology.


    Definition of “Meaningfully, Ethically, and Responsibly”:

    To use AI tools meaningfully, ethically, and responsibly means students treat AI not as a shortcut to bypass thinking, but as a collaborative aid to deepen their writing, research, and revision process. Ethical use includes acknowledging when and how AI was used, avoiding plagiarism or misrepresentation, and understanding the limits and biases of these tools. Responsible use involves aligning AI usage with the assignment’s goals, maintaining academic integrity, and using AI to support—not replace—original thought and student voice.


    Five Assignment Strategies to Fulfill This Learning Outcome:

    1. AI Process Reflection Logs
      Require students to submit a short reflection with each assignment explaining if, how, and why they used AI tools (e.g., brainstorming, outlining, revising), and evaluate the effectiveness and ethics of their choices.
    2. Compare-and-Critique Tasks
      Assign students to generate an AI-written response to a prompt and then critique it—identifying weaknesses in reasoning, tone, or factual accuracy—and revise it with their own voice and insights.
    3. Source Verification Exercises
      Ask students to use AI to gather preliminary research, then verify, fact-check, and cite real sources that support or challenge the AI’s output, teaching them discernment and digital literacy.
    4. AI vs. Human Draft Workshops
      Have students bring both an AI-generated draft and a human-written draft of the same paragraph to class. In peer review, students analyze the differences in tone, structure, and depth of thought to develop judgment about when AI helps or hinders.
    5. Statement of Integrity Clause
      Include a required statement in the assignment where students attest to their use of AI tools, much like a bibliography or code of ethics, fostering transparency and self-awareness.

    Student Learning Outcome #2: Avoiding the Uncanny Valley Effect

    Students will produce writing that sounds natural, human, and authentic—free from the awkwardness, artificiality, or emotional flatness often associated with AI-generated content.


    Definition: The Uncanny Valley Effect in Writing

    The Uncanny Valley Effect in writing occurs when a piece of text almost sounds human—but not quite. It may be grammatically correct and well-structured, yet it feels emotionally hollow, overly generic, oddly formal, or just slightly “off.” Like a robot trying to pass as a person, the writing stirs discomfort or distrust because it mimics human tone without the depth, insight, or nuance of actual lived experience or authorial voice.


    Five Common Characteristics of the Uncanny Valley in Student Writing:

    1. Generic Language – Vague, overused phrases that sound like filler rather than specific, engaged thought (e.g., “Since the dawn of time…”).
    2. Overly Formal Tone – A stiff, robotic voice with little rhythm, personality, or variation in sentence structure.
    3. Surface-Level Thinking – Repetition of obvious or uncritical ideas with no deeper analysis, curiosity, or counterargument.
    4. Emotional Emptiness – Statements that lack genuine feeling, perspective, or a sense of human urgency.
    5. Odd Phrasing or Word Choice – Slightly off metaphors, synonyms, or transitions that feel misused or unnatural to a fluent reader.

    Seven Ways Students Can Use AI Tools Without Falling into the Uncanny Valley:

    1. Always Revise the Output – Use AI-generated text as a rough draft or idea starter, but revise it with your own voice, style, and specific insights.
    2. Inject Lived Experience – Add personal examples, concrete details, or specific observations that an AI cannot generate from its data pool.
    3. Break the Pattern – Vary your sentence length, tone, and rhythm to avoid the AI’s predictable, formal cadence.
    4. Cut the Clichés – Watch for stale or filler phrases (“in today’s society,” “this essay will discuss…”) and replace them with clearer, more original statements.
    5. Ask the AI Better Questions – Use prompts that require nuance, comparison, or contradiction rather than shallow definitions or summaries.
    6. Fact-Check and Source – Don’t trust AI-generated facts or references. Verify claims with real sources and cite them properly.
    7. Read Aloud – If it sounds awkward or lifeless when spoken, revise. Authentic writing should sound like something a thoughtful person might actually say.

  • AI Wants to Be Your Friend, and It’s Shrinking Your Mind

    AI Wants to Be Your Friend, and It’s Shrinking Your Mind

    In his Atlantic essay “AI Is Not Your Friend,” Mike Caulfield lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged but celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.

  • Cultural Fusion or Culinary Fraud?

    Cultural Fusion or Culinary Fraud?

    My Critical Thinking students are grappling with the sacred and the sacrilegious—namely, tacos.

    Their final essay asks a deceptively simple question: When it comes to iconic dishes like the taco, should we cling to tradition as if it were holy writ, treating every variation as culinary heresy? Or is riffing on a recipe a legitimate act of evolution—or worse, an opportunistic theft dressed up in aioli?

    To dig into this, we turn to Netflix’s Ugly Delicious, where chef David Chang hosts an episode simply titled “Tacos.” The episode unfolds like a beautifully constructed argumentative essay, anchored by food writer Gustavo Arellano, who dismantles the idea of “Mexican food” as a static monolith. Instead, he presents it as a glorious, shape-shifting culture of flavor—one that thrives because of its openness to the outside world.

    Arellano celebrates Mexico’s culinary curiosity: how Lebanese immigrants brought shawarma and inspired tacos al pastor, a perfect example of cultural fusion that became canon. He contrasts this with the United States’ suspicious, xenophobic posture—a country that historically snarls at outsiders until they open a food truck and sell $2 magic on a paper plate.

    Roy Choi, creator of the legendary Kogi taco trucks, takes this further. He speaks of cooking as a street-level negotiation for dignity: Korean-Mexican fusion forged in the heat of shared kitchens, shaped by the scorn of American culture, and perfected not out of trendiness but out of survival. These tacos aren’t just delicious; they’re resistance with a salsa verde finish.

    But this isn’t just a story of open minds and flavor-blending utopias. There’s also the hard truth of survival and adaptation. Take Lucia Rodriguez, who immigrated from Jalisco and had to recreate her recipes using whatever ingredients she could find in San Bernardino. Her efforts became the foundation of Mitla Cafe, a restaurant that has thrived since 1937. Her cooking also became the blueprint for Glen Bell—yes, that Glen Bell—who reverse-engineered it to create Taco Bell, which is to Mexican cuisine what boxed wine is to Bordeaux.

    Still, not all spin-offs are sins. Rosio Sanchez, a Michelin-level chef, began her journey by mastering traditional Mexican food. Only then did she begin to improvise, like a jazz virtuoso honoring the standards before going off-script. Her reinvention is rooted in love, not opportunism. It’s a tribute, not a theft.

    And therein lies the moral fault line: intent, respect, and—let’s not forget—execution. As one student noted with appropriate outrage, white TikTok influencers once rebranded agua fresca as “spa water,” a cultural mugging wrapped in Pinterest aesthetics. And let’s not ignore the corporate vultures who buy beloved local chains only to gut their soul with frozen ingredients and bottom-line mediocrity.

    The lesson? Not all innovation is appropriation. But if your food disrespects its roots, dilutes its meaning, or simply tastes like disappointment, it’s not fusion—it’s a felony.

    The rule is simple: Make great food that honors its lineage and blows people away. Otherwise, what you’re serving is not cuisine. It’s edible disrespect.

  • Satan Wears Patek: The Couture Demons of Network TV

    Satan Wears Patek: The Couture Demons of Network TV

    After dinner, my wife and I collapsed onto the couch like two satiated lions, still riding the sugar high from a slice of chocolate cake so transcendent it could’ve been smuggled out of a Vatican vault. This wasn’t just dessert—it was a spiritual experience. Fudgy, rich, and topped with a ganache that whispered blasphemies in French, it left us in a state of chocolaty euphoria. And what better way to follow up divine confectionery than with a show called Evil—which, in tone and content, felt like dessert’s opposite number.

    Evil, for the uninitiated, is what happens when The X-Files and The Exorcist have a baby and then dress it in Prada. Our hero is David Acosta, a priest so genetically gifted he looks like he was sculpted during an abs day in Michelangelo’s studio. He partners with Kristen Bouchard, a forensic psychologist with both supermodel cheekbones and a Rolodex of PhDs, and Ben Shakir, a tech bro turned ghostbuster, who handles the EMF detectors and keeps the Wi-Fi strong enough to livestream from hell. Together, they investigate cases of alleged possession, miracles, and demonic mischief—all lurking, naturally, in two-story suburban homes with open-concept kitchens.

    What really juices the narrative is the will-they-won’t-they tension between Kristen and Father Abs. Their chemistry crackles with forbidden longing, as if every exorcism could end in a kiss—had David not taken a vow of celibacy (and the producers not wanted to nuke the Catholic viewership). It’s less faith versus science and more eye contact versus self-control.

    And then there’s Leland Townsend, the show’s resident demon in Dockers. He’s less Prince of Darkness and more Assistant Manager of Darkness—slick, smug, and oily enough to deep-fry a turkey. He slinks into scenes oozing unearned confidence and pathological glee, like Satan’s regional sales director. You can practically smell the Axe body spray of evil.

    Let’s pause here for fashion. The wardrobe department on Evil deserves an Emmy, a Pulitzer, and possibly a fragrance line. Everyone’s rocking cinematic outerwear that belongs in the Louvre. Kristen’s coats are so tailored they could cut glass. Acosta’s wrist is adorned with a Patek Philippe that suggests his vows may include poverty of the soul, but not of the Swiss variety. Honestly, the outfits are so distracting you half expect Satan to comment on the stitching.

    In one late-night scene, Kristen’s daughters are using ghost-detecting iPad apps at 3 a.m., their faces bathed in eerie blue light. It’s a chilling tableau of children, tech, and probable demonic activity—basically a 2024 parenting blog. Just as the show was about to unravel the mystery, my wife hit pause and delivered a horror story of her own: teachers using AI to grade papers with personalized comments. Comments so perfectly tailored they could bring a tear to a parent’s eye—and yet, no human had written them.

    “What’s the point of teachers anymore?” she asked, already knowing the answer. I nodded solemnly, watching the paused image of Father David, his coat pristine, his watch immaculate. I had neither. And I live in Los Angeles, where “winter” is defined as turning off the ceiling fan.

    But something in that moment shifted. The show wasn’t just mocking the digital devil—it was embodying him. That wristwatch mocked me. The coat judged me. I wasn’t watching Evil; I was being possessed by it. By envy, by consumer lust, by the creeping suspicion that maybe—just maybe—I wasn’t living my best, most stylized demon-fighting life.

    It’s not the show’s demons that haunt me. It’s their wardrobe.

  • There Is No Digital Kaffeeklatsch: The Lie of Social Media

    There Is No Digital Kaffeeklatsch: The Lie of Social Media

    For the last fifteen years, we’ve let the term social media slip into our lexicon like a charming grifter. It sounds benign, even wholesome—like we’re all gathered around a digital café table, sipping lattes and chatting about our lives in a warm, buzzing kaffeeklatsch. But that illusion is precisely the problem. The phrase “social media” is branding sleight-of-hand, a euphemism designed to lull us into thinking we’re having meaningful interactions when, in reality, we’re being drained like emotional batteries in a rigged arcade.

    This is not a friendly coffeehouse. It’s a dopamine-spewing Digital Skinner Box where you tap and scroll like a lab rat hoping for one more pellet of validation. What we’re calling “social” is, in fact, algorithmic manipulation dressed in a hoodie. We are not exchanging ideas—we are bartering our attention for scraps of engagement while surrendering personal data to tech oligarchs who harvest our behavior like bloodless farmers fattening up their cattle.

    Richard Seymour calls this hellscape The Twittering Machine, and he’s not exaggerating. Byung-Chul Han calls it gamification capitalism, a regime in which we perform our curated selves for likes while the real self, the vulnerable human beneath the filter, slowly atrophies. Anna Lembke describes our overstimulated descent in Dopamine Nation, while the concept of Algorithmic Capture suggests we no longer shape technology—technology shapes us.

    So let’s drop the charade. This isn’t “social media.” It’s addiction media, engineered to flatten nuance, hollow out identity, and leave us twitching in the glow of our screens like the last souls left in a flickering casino. Whatever this is, it’s not convivial, it’s not coffeehouse chatter, and it’s certainly not social. It’s the end of human discourse masquerading as connection.