Tag: ai

  • How Pre-Digital Cinema Imagined the Stupidification Social Media Perfected

    How Pre-Digital Cinema Imagined the Stupidification Social Media Perfected

    Write a 1,700-word argumentative essay analyzing how The King of Comedy (1982) and/or The Truman Show (1998) anticipate the forms of “stupidification” depicted in Jonathan Haidt’s “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Make an argumentative claim about how one or both of these earlier films relate to today’s digitally amplified forms of stupidification. Do they function as prophetic warnings? As examinations of longstanding human weaknesses that social media later exploited? Or as both? Develop a thesis that takes a clear position on the relationship between pre-digital and digital stupidification.

    Introduction Requirement (about 200–250 words):

    Define “stupidification” using Haidt’s key concepts—such as the Babel metaphor, outrage incentives, the collapse of shared reality, identity performance, and tribal signaling. Then briefly connect Haidt’s ideas to one concrete example from your own life or personal observations (e.g., online behavior, comment sections, family disputes shaped by social media). End your introduction with a clear thesis that takes a position on how effectively the earlier films anticipate the pathologies depicted in Haidt’s essay. 

    Be sure to have a counterargument-rebuttal section and a Works Cited page with a minimum of 4 sources. 

  • My Fifth-Decade Crisis in the Writing Classroom

    My Fifth-Decade Crisis in the Writing Classroom

    My students lean on AI the way past generations leaned on CliffsNotes and caffeine. They’re open about it, too. They send me their drafts: the human version and the AI-polished version, side by side, like before-and-after photos from a grammatical spa treatment. The upside? Their sentences are cleaner, the typos are nearly extinct, and dangling modifiers have been hunted to the brink. The downside? Engagement has flatlined. When students outsource their thinking to a bot, they sever the emotional thread to the material.

    It’s not that they’re getting dumber—they’re just developing a different flavor of intelligence, one optimized for our algorithmic future. And I know they’ll need that skill. But in the process, they grow numb to the very themes I’m trying to teach: how fashion brands and fitness influencers weaponize FOMO; how adolescent passion differs from mature purpose; how Frederick Douglass built a heroic code to claw his way out of the Sunken Place of slavery.

    This numbness shows up in the classroom. They’re present but elsewhere, half-submerged in the glow of their phones and laptops. Yesterday I screened The Evolution of the Black Quarterback—a powerful account of Black athletes who faced death threats and racist abuse to claim their place in the NFL. While these stories unfolded onscreen, my student-athletes were scrolling through sports highlights, barely glancing at the actual documentary in front of them.

    I’m not the kind of instructor who polices technology like a hall monitor. Still, I’m no longer convinced I have the power to pull students out of their world and into mine. I once believed I did. Perhaps this is my own educational Sunken Place: the realization that attention capture has shifted the center of gravity, and I’m now orbiting the edges.

    I’ve been teaching writing full-time since the 1980s. For decades, I believed I could craft lessons—and a persona—that made an impact. Now, in my fifth decade, I’m not sure I can say that with the same certainty. The ground has moved, and I’m still learning how to stand on it.

  • When Buying a New Computer Results in an Existential Crisis

    When Buying a New Computer Results in an Existential Crisis

    A computer is never just a computer. It’s a mirror of who you think you are — your ambitions, your identity, your delusions of purpose. If you fancy yourself a “power user” or “content creator,” you don’t want a flimsy piece of plastic gasping for air. You want a machine that hums with confidence — a gleaming altar to your productivity fantasies. You crave speed, efficiency, thermal dominance, at least 500 nits of blinding radiance, and a QHD or OLED screen that flatters your sense of destiny. The machine must look sleek and purposeful, the way a surgeon’s scalpel looks purposeful, even if it’s mostly used to slice digital cheesecake.

    That’s the mythology of computing. Now let’s talk about me. I’m 64, a man whose “power user” moments consist of reading an online article on one screen while taking notes on the other — a thrilling simulation of intellectual heroism. In these moments, I feel like an epidemiologist drafting a breakthrough paper on respiratory viruses, when in truth I’m analyzing a 900-word essay about AI in education or the psychological toll of protein shakes. I could do this work on a Chromebook, but that would insult my inner Corvette driver — the middle-aged man who insists on 400 horsepower for a trip to the grocery store, just to know it’s there.

    My setup hasn’t changed in seven years: an Acer Predator Triton 500 with an RTX 2080 (a $3,200 review model, not my dime), an Asus 4K monitor, and a mechanical keyboard that clicks like an old newsroom. The system runs flawlessly. Which is precisely the problem. Not needing a new computer makes me feel irrelevant — like a man whose life has plateaued. Buying one, however, rekindles the illusion that I’m still scaling great heights, performing tasks of vast cosmic significance rather than grading freshman essays about screen addiction.

    So yes, I’ll probably buy a Mac Mini M4 Pro with 48 GB of RAM and 1 TB of storage. Overkill, absolutely. “Future-proofing”? A sales pitch for gullible tech romantics. But after seven years with the Acer, I’ll have earned my delusion. The real problem is not specs — it’s time. By the time I buy a new computer, I’ll be 66, retired, and sitting before a computer whose lifespan will exceed my own. That realization turns every new purchase into an existential audit.

    I used to buy things to feel powerful; now I buy them to feel temporary. A computer, a car, a box of razors — all built to outlive their owner. The marketing says upgrade your life; the subtext whispers your warranty expires first.

    Maybe that makes me a miserabilist — a man who can turn even consumer electronics into meditations on mortality. But at least I’ll have the fastest machine in the cemetery, writing The Memoirs of a Miserabilist in 4K clarity, with perfect thermal efficiency and 500 nits of existential dread.

  • Bad But Worth It? De-skilling in the Age of AI (college essay prompt)

    Bad But Worth It? De-skilling in the Age of AI (college essay prompt)

    AI is now deeply embedded in business, the arts, and education. We use it to write, edit, translate, summarize, and brainstorm. This raises a central question: when does AI meaningfully extend our abilities, and when does it quietly erode them?

    In “The Age of De-Skilling,” Kwame Anthony Appiah argues that not all de-skilling is equal. Some forms are corrosive and hollow us out; some are “bad but worth it” because the benefits outweigh the loss; some are so destructive that no benefit can redeem them. In that framework, AI becomes most interesting when we talk about strategic de-skilling: deliberately off-loading certain tasks to machines so we can focus on deeper, higher-level work.

    Write a 1,700-word argumentative essay in which you defend, refute, or complicate the claim that not all dependence on AI is harmful. Take a clear position on whether AI can function as a “bad but worth it” form of de-skilling that frees us for more meaningful thinking—or whether, in practice, it mostly dulls our edge and trains us into passivity.

    Your essay must:

    • Engage directly with Appiah’s concepts of corrosive vs. “bad but worth it” de-skilling.
    • Distinguish between lazy dependence on AI and deliberate collaboration with it.
    • Include a counterargument–rebuttal section that uses at least one example of what we might call Ozempification—people becoming less like agents and more like “users” of systems. You may draw this example from one or more of the following Black Mirror episodes: “Joan Is Awful,” “Nosedive,” or “Smithereens.”
    • Use at least three sources in MLA format, including Appiah and at least one Black Mirror episode.

    For your supporting paragraphs, you might consider:

    • Cognitive off-loading as optimization
    • Human–AI collaboration in creative or academic work
    • Ethical limits of automation
    • How AI is redefining what counts as “skill”

    Your goal is to show nuanced critical thinking about AI’s role in human skill development. Don’t just declare AI good or bad; use Appiah’s framework to examine when AI’s shortcuts lead to degradation—and when, if used wisely, they might lead to liberation.

    3 building-block paragraph assignments

    1. Concept Paragraph: Explaining Appiah’s De-Skilling Framework

    Assignment:
    Write one well-developed paragraph (8–10 sentences) in which you explain Kwame Anthony Appiah’s distinctions among corrosive de-skilling, “bad but worth it” de-skilling, and de-skilling that is so destructive no benefit can justify it.

    • Use at least one short, embedded quotation from Appiah.
    • Paraphrase his ideas in your own words and clarify the differences between the three categories.
    • End the paragraph by briefly suggesting how AI might fit into one of these categories (without fully arguing your position yet).

    Your goal is to show that you understand Appiah’s framework clearly enough to use it later as the backbone of an argument.


    2. Definition Paragraph: Lazy Dependence vs. Deliberate Collaboration

    Assignment:
    Write one paragraph in which you define and contrast lazy dependence on AI and deliberate collaboration with AI in your own words.

    • Begin with a clear topic sentence that sets up the contrast.
    • Give at least one concrete example of “lazy dependence” (for instance, using AI to dodge thinking, reading, or drafting altogether).
    • Give at least one concrete example of “deliberate collaboration” (for instance, using AI to brainstorm options, check clarity, or off-load repetitive tasks while you still make the key decisions).
    • End the paragraph with a sentence explaining which of these two modes you think is more common among students right now—and why.

    This paragraph will later function as a “conceptual lens” for your body paragraphs.


    3. Counterargument Paragraph: Ozempification and Black Mirror

    Assignment:
    After watching one of the assigned Black Mirror episodes (“Joan Is Awful,” “Nosedive,” or “Smithereens”), write one counterargument paragraph that challenges the optimistic idea of “strategic de-skilling.”

    • Briefly describe a key moment or character from the episode that illustrates Ozempification—a person becoming more of a “user” of a system than an agent of their own life.
    • Explain how this example suggests that dependence on powerful systems (platforms, algorithms, or AI-like tools) can erode self-agency and critical thinking rather than free us.
    • End by posing a difficult question your eventual essay will need to answer—for example: If it’s so easy to slide from strategic use to dependence, can we really trust ourselves with AI?

    Later, you’ll rebut this paragraph in the full essay, but here your job is to make the counterargument as strong and persuasive as you can.

  • The Case for Strategic De-Skilling: Rethinking Skill and Dependence in the Age of AI (a College Writing Prompt)

    The Case for Strategic De-Skilling: Rethinking Skill and Dependence in the Age of AI (a College Writing Prompt)

    Background

    AI is a tool that we use in business, the arts, and education. Since AI is the genie out of the bottle that isn’t going back in, we have to confront the ways it brings us both benefits and liabilities. One liability is de-skilling, the way we lose our personal initiative, self-reliance, and critical thinking skills as our dependence on AI makes us reflexively surrender our own thought for a lazy, frictionless existence in which we exert little effort and let AI do most of the work.

    However, in his essay “The Age of De-Skilling,” Kwame Anthony Appiah correctly points out that not all de-skilling is equal. Some de-skilling is “corrosive,” some de-skilling is bad but worth it for the benefits, and some de-skilling is so self-destructive that no benefits can redeem its devastation. 

    In this context, where AI becomes interesting is the realm of what we call strategic de-skilling. This is a mindful form of de-skilling in which we take AI shortcuts because such shortcuts give us a worthy outcome that justifies the tradeoffs of whatever we lose as individuals dependent on technology. 

    Your Essay Prompt

    Write a 1,700-word argumentative essay that defends, refutes, or complicates the position that not all dependence on AI is ruinous. Argue that strategic de-skilling—outsourcing repetitive or mechanical labor to machines—can expand our mental bandwidth for higher-order creativity and analysis. Use Appiah’s notion of “bad but worth it” de-skilling to claim that AI, when used deliberately, frees us for deeper work rather than dulls our edge.

    Your Supporting Paragraphs

    For your supporting paragraphs, consider the following mapping components: 

    • cognitive off-loading as optimization
    • human-AI collaboration
    • ethical limits of automation
    • redefinition of skill

    Use Specific Case Studies of Strategic De-Skilling

    I recommend you pick one or two of the following case studies to anchor your essay in concrete evidence:

    1. AI-Assisted Radiology Diagnostics
    AI systems like Google’s DeepMind Health or Lunit INSIGHT CXR pre-screen medical images (X-rays, CT scans, MRIs) for anomalies such as lung nodules or breast tumors, freeing radiologists from exhaustive image scanning and letting them focus on diagnosis, context, and patient communication.

    2. Robotic Surgery Systems (e.g., da Vinci Surgical System)
    Surgeons use robotic interfaces to perform minimally invasive procedures with greater precision and less fatigue. The machine steadies the surgeon’s hand and filters tremors—technically a form of de-skilling—but this trade-off allows focus on strategy, anatomy, and patient safety rather than manual dexterity alone.

    3. AI-Driven Legal Research Platforms (Lexis+, Casetext CoCounsel)
    Lawyers now off-load hours of case searching and citation checking to AI tools that summarize precedent. What they lose in raw research grind, they gain in time for argument strategy and nuanced reasoning—shifting legal skill from memorization to interpretation.

    4. Intelligent Tutoring and Grading Systems (Gradescope, Khanmigo)
    Instructors let AI handle repetitive grading or generate practice problems. The loss of constant paper-marking allows teachers to focus on the art of explanation and individualized mentorship. Students, too, can use these systems to get instant feedback, training them to self-diagnose errors rather than depend entirely on human correction.

    5. AI-Based Drug Discovery (DeepMind’s AlphaFold, Insilico Medicine)
    Pharmaceutical researchers no longer spend years modeling protein folding manually. AI predicts structures in hours, speeding up breakthroughs. Scientists relinquish tedious modeling but redirect their expertise toward hypothesis-driven design, ethics, and clinical translation.

    6. Predictive Maintenance in Aviation and Engineering
    Airline engineers now rely on machine-learning algorithms to flag part failures before they occur. Mechanics perform fewer manual inspections but use data analytics to interpret system reports and prevent disasters—redefining “skill” as foresight rather than reaction.

    7. Algorithmic Financial Trading
    Portfolio managers off-load pattern recognition and timing decisions to AI trading bots. Their role shifts from acting as human calculators to setting ethical boundaries, risk thresholds, and macro-strategic goals—skills grounded in judgment, not just speed.

    8. AI-Powered Architecture and Design (Autodesk Generative Design)
    Architects use generative AI to produce hundreds of design iterations that balance structure, sustainability, and cost. The creative act moves from drafting to curating: selecting and refining the most meaningful human aesthetic from machine-generated abundance.

    9. Autonomous Agriculture Systems (John Deere’s See & Spray)
    Farmers now use AI-guided tractors and drones to detect weeds and optimize fertilizer use. They surrender manual fieldwork but gain ecological precision and data-driven management skills that improve yields and sustainability.

    10. AI-Enhanced Music and Film Editing (Adobe Sensei, AIVA, Runway ML)
    Editors and composers off-load technical tedium—color correction, noise reduction, beat synchronization—to AI tools. This frees them to focus on emotional pacing, thematic rhythm, and creative storytelling—the distinctly human layer of artistry.

    Purpose
    Your goal is to demonstrate nuanced critical thinking about AI’s role in human skill development. Show that you understand the difference between lazy dependence and deliberate collaboration. Engage with Appiah’s complicated notion of de-skilling to explore whether AI’s shortcuts lead to degradation—or, when used wisely, to liberation.

  • The Gospel of De-Skilling: When AI Turns Our Minds into Mashed Potatoes

    The Gospel of De-Skilling: When AI Turns Our Minds into Mashed Potatoes

    Kwame Anthony Appiah, in “The Age of De-Skilling,” poses a question that slices to the bone of our moment: Will artificial intelligence expand our minds or reduce them to obedient, gelatinous blobs? The creeping decay of competence and curiosity—what he calls de-skilling—happens quietly. Every time AI interprets a poem, summarizes a theory, or rewrites a sentence for us, another cognitive muscle atrophies. Soon, we risk becoming well-polished ghosts of our former selves. The younger generation, raised on this digital nectar, may never build those muscles at all. Teachers who lived through both the Before and After Times can already see the difference in their classrooms: the dimming spark, the algorithmic glaze in the eyes.

    Yet Appiah reminds us that all progress extracts a toll. When writing first emerged, the ancients panicked. In Plato’s Phaedrus, King Thamus warned that this new technology—writing—would make people stupid. Once words were set down on papyrus, memory would rot, dialogue would wither, nuance would die. The written word, Thamus feared, would make us forgetful and isolated. And in a way, he was right. Writing didn’t make us dumb, but it did fundamentally rewire how we think, remember, and converse. Civilization gained permanence and lost immediacy in the same stroke.

    Appiah illustrates how innovation often improves our craft while amputating our pride in it. A pulp mill worker once knew by touch and scent when the fibers were just right. Now, computers do it better—but the hands are idle. Bakers once judged bread by smell, color, and instinct; now a touchscreen flashes “done.” Precision rises, but connection fades. The worker becomes an observer of their own obsolescence.

    I see this too in baseball. When the robotic umpire era dawns, we’ll get flawless strike zones and fewer bad calls. But we’ll also lose Earl Weaver kicking dirt, red-faced and screaming at the ump until his cap flew. That fury—the human mess—is baseball’s soul. Perfection may be efficient, but it’s sterile.

    Even my seventy-five-year-old piano tuner feels it. His trade is vanishing. Digital keyboards never go out of tune; they just go out of style. Try telling a lifelong pianist to find transcendence on a plastic keyboard. The tactile romance of the grand piano, the aching resonance of a single struck note—that’s not progress you can simulate.

    I hear the same story in sound. I often tune my Tecsun PL-990 radio to KJAZZ, a station where a real human DJ spins records in real time. I’ve got Spotify, of course, but its playlists feel like wallpaper for the dead. Spotify never surprises me, never speaks between songs. It’s all flow, no friction—and my brain goes numb. KJAZZ keeps me alert because a person, not a program, is behind it.

    The same tension threads through my writing life. I’ve been writing and weight-lifting daily since my teens. Both disciplines demand sweat, repetition, and pain tolerance. Neglect one, and the other suffers. But since I began using AI to edit two years ago, the relationship has become complicated. Some days, AI feels like a creative partner—it pushes me toward stylistic risks, surprise turns of phrase, and new tonal palettes. Other days, it feels like a crutch. I toss half-baked paragraphs into the machine and tell myself, “ChatGPT will fix it.” That’s not writing; that’s delegation disguised as art.

    When I hit that lazy stretch, I know it’s time to step away—take a nap, watch Netflix, play piano—anything but write. Because once the machine starts thinking for me, I can feel my brain fog over.

    And yet, I confess to living a double life. There’s my AI-edited self, the gleaming, chiseled version of me—the writer on literary steroids. Then there’s my secret writer: the primitive, unassisted one who writes in a private notebook, in the flickering light of what feels like a mythic waterfall. No algorithms, no polish—just me and the unfiltered soul that remembers how to speak without prompts. This secret life is my tether to the human side of creation. It gives my writing texture, contradiction, blood. When I’m writing “in the raw,” I almost feel sneaky and subversive and whisper to myself: “ChatGPT must never know about this.” 

    Appiah is right: the genie isn’t going back in the bottle. Every advance carries its shadow. According to Sturgeon’s Law, 90% of everything is crap, and AI will follow that rule religiously. Most users will become lazy, derivative, and hollow. But the remaining 10%—the thinkers, artists, scientists, doctors, and musicians who wield it with intelligence—will produce miracles. They’ll also suffer for it. Because every new tool reshapes the hand that wields it, and every gain carries a ghost of what it replaces.

    Technology changes us. We change it back. And somewhere in that endless feedback loop—between the piano tuner, the dirt-kicking manager, and the writer lost between human and machine—something resembling the soul keeps flickering.

  • Today I Gave My Students a Lesson on Real and Fake Engagement

    Today I Gave My Students a Lesson on Real and Fake Engagement

    I teach the student athletes at my college, and right now we’re exploring a question that cuts to the bone of modern life: Why are we so morally apathetic toward companies that feed our addictions, glamorize eating disorders, and employ CEOs who behave like cult leaders or predators?

    We’ve watched three documentaries to anchor our research: Brandy Hellville and the Cult of Fast Fashion (Max), Trainwreck: The Cult of American Apparel (Max), and White Hot: The Rise and Fall of Abercrombie & Fitch (Netflix). Each one dissects a different brand, but the pathology beneath them is the same—companies built not on fabric, but fantasy. For four weeks, I lectured about this moral rot. My students listened politely. Eyes glazed. Interest flatlined. It was clear: they didn’t care about fashion.

    So today, I tried something different. I told them to forget about the clothes. The essay, I explained, isn’t really about fashion—it’s about selling illusion. These brands were never peddling T-shirts or jeans; they were peddling a fantasy of beauty, popularity, and belonging. And they did it through something I called fake engagement—a kind of fever swamp where people mistake addiction for connection, and attention for meaning.

    Fake engagement is the psychological engine of our times. It’s the dopamine loop of social media and the endless scroll. People feed their insecurities into it and get rewarded with the mirage of significance: likes, follows, attention. It’s an addictive system built on FOMO and self-erasure. Fake engagement is a demon. The more you feed it, the hungrier it gets. You buy things, post things, watch things, all to feel visible—and yet every click deepens the void.

    This pathology is not confined to consumer brands. The entire economy now depends on it. Influencers sell fake authenticity. YouTubers stage “relatable” chaos. Politicians farm outrage to harvest donations. Every sphere of life—from entertainment to governance—has been infected by the logic of the algorithm: engagement above truth, virality above virtue.

    I told my students they weren’t hearing anything new. The technologist Jaron Lanier has been warning us for over a decade that digital platforms are rewiring our brains, turning us into unpaid content providers for an economy of distraction. But then I reminded them that as athletes, they actually hold a kind of immunity to this epidemic.

    Athletes can’t survive on fake engagement. They can’t pretend to win a game. They can’t filter a sprint or Photoshop a jump shot. Their work depends on real engagement—trust, discipline, and honest feedback. Their coaches demand evidence, not illusion. That’s what separates a competitor from a content creator. One lives in the real world; the other edits it out.

    In sports, there’s no algorithm to flatter you. Reality is merciless and fair. You either make the shot or you don’t. You either train or you coast. You either improve or you plateau. The scoreboard has no patience for your self-image.

    I contrasted this grounded reality with the digital circus we’ve come to call culture. Mukbang YouTubers stuff themselves with 10,000 calories on camera for likes. Watch-collecting obsessives blow through their savings chasing luxury dopamine. Influencers curate their “personal brand” as if selfhood were a marketing campaign. They call this engagement. I call it pathology. They’re chasing the same empty high that the fast-fashion industry monetized years ago: the belief that buying something—or becoming something online—can fill the hole of disconnection.

    This is the epistemic crisis of our time: a collective break from reality. People no longer ask whether something is true or good; they ask whether it’s viral. We’re a civilization medicated by attention, high on engagement, and bankrupt in meaning.

    That’s why I tell my athletes their essay isn’t about fashion—it’s about truth. About how human beings become spiritually and mentally sick when they lose their relationship to reality. You can’t heal what you refuse to see. And you can’t see anything clearly when your mind is hijacked by dopamine economics.

    The world doesn’t need more influencers. It needs coaches—people who ground others in trust, expertise, and evidence. Coaches model a form of mentorship that Silicon Valley can’t replicate. They give feedback that isn’t gamified. They remind players that mastery requires patience, and that progress is measured in skill, not clicks.

    When you think about it, the word coach itself has moral weight. It implies guidance, not manipulation. Accountability, not performance. A coach is the opposite of an influencer: they aren’t trying to be adored; they’re trying to make you better. They aren’t feeding your addiction to attention; they’re training your focus on reality.

    I told my students that if society is going to survive this digital delirium, we’ll need millions of new coaches—mentors, teachers, parents, and friends who can anchor us in truth when everything around us is optimized for illusion. The fast-fashion brands we study are just one symptom of a larger disease: the worship of surface.

    But reality, like sport, eventually wins. The body knows when it’s neglected. The mind knows when it’s being used. Truth has a way of breaking through even the loudest feed.

    The good news is that after four weeks of blank stares, something finally broke through. When I reframed the essay—not as a takedown of fashion, but as a diagnosis of fake engagement—the room changed. My athletes sat up straighter. They started nodding. Their eyes lit up. For the first time all semester, they were engaged.

    The irony wasn’t lost on me. The essay about fake engagement had just produced the real kind.

  • AI—Superpower for Learning or NPC Machine?

    AI—Superpower for Learning or NPC Machine?

    Essay Assignment:

    Background

    Students describe AI as a superpower for speed and convenience that leaves them with glossy, identical prose—clean, correct, and strangely vacant. Many say it “talks a lot without saying much,” flattens tone into AI-speak, and numbs them with sameness. The more they rely on it, the less they think; laziness becomes a habit, creativity atrophies, and personal voice disappears. Summaries replace books, notes replace listening, and polished drafts replace real inquiry. The result feels dehumanizing: work that reads like it was written by everyone and no one.

    Students also report a degradation in their education. Higher learning has become boring. Speed, making deadlines, and convenience are its key features, and the AI arms race pushes reluctant students to use AI just to keep pace. 

    The temptation to use AI because all the other students are using it rises with fatigue—copy-paste now, promise depth later—and “later” never arrives. Some call this “cognitive debt”: quick wins today, poorer thinking tomorrow. 

    Some students confess that using AI makes them feel like Non-Player Characters in a video game. They’re not growing in their education; they have succumbed to apathy and cynicism about it.

    Other students admit that AI has stolen their attention and afflicted them with cognitive debt. Cognitive debt is the mental deficit we accumulate when we let technology do too much of our thinking for us. Like financial debt, it begins as a convenience—offloading memory, calculation, navigation, or decision-making to apps and algorithms—but over time it exacts interest in the form of diminished focus, weakened recall, and blunted problem-solving. This gradual outsourcing of core mental functions—attention, critical reasoning, and creativity—to systems that promise efficiency erodes self-reliance. The result is a paradox: as our tools grow “smarter,” we grow more dependent on them to compensate for the very skills they dull. In short, cognitive debt is the quiet cost of convenience: each time we let technology think for us, we lose a little of our capacity to think for ourselves.

    Yet the picture isn’t purely bleak. Several students say AI can be a lifeline: a steady partner for collaborating, brainstorming, organizing, or language support—especially for non-native speakers—when it’s used as a tutor rather than a ghostwriter. 

    When used deliberately rather than passively, AI writing tools can sharpen critical thinking and creativity by acting as intellectual sparring partners. They generate ideas, perspectives, and counterarguments that challenge students to clarify their own reasoning instead of settling for first thoughts. Rather than accepting the AI’s output wholesale, discerning writers critique, refine, and reshape it—an exercise in metacognition that strengthens their analytical muscles. The process becomes less about outsourcing thought and more about editing thought—transforming AI from a shortcut into a mirror that reflects the quality, logic, and originality of one’s own mind.

    When used correctly, AI jump-starts drafts and levels the playing field; leaned on heavily, it erases voice, short-circuits struggle, and replaces learning with mindless convenience.

    Question You Are Addressing in Your Essay

    But can AI be used effectively, or does our interaction with it, like any other product in the attention economy, reveal that it is designed to sink its talons into us and colonize our brains so that we become less like people with self-agency and more like Non-Player Characters whose free will has been taken over by the machines?

    Writing Prompt

    Write a 1,700-word argumentative essay that answers the above question. Be sure to have a counterargument and rebuttal section. Use four credible sources to support your claim. 

    Suggested Outline

    Paragraph 1: Write a 300-word personal reflection about the way you or someone you know uses AI effectively. Show how the person’s engagement with AI resulted in sharper critical thinking and creativity. Give a specific example of the project that revealed this process. 

    Paragraph 2: Write a 300-word personal reflection about the way you or someone you know abuses AI in a way that trades critical thinking for convenience. How has AI changed this person’s brain and approach to education? Write a detailed narrative that dramatizes these changes.

    Paragraph 3: Write a thesis with 4 mapping components that will point to the topics in your supporting paragraphs.  

    Paragraphs 4-7: Your supporting paragraphs.

    Paragraphs 8 and 9: Your counterargument and rebuttal paragraphs.

    Paragraph 10: Your conclusion, a dramatic restatement of your thesis or a reiteration of a striking image you created in your introduction. 

    Your final page: MLA Works Cited with a minimum of 4 sources. 

  • 30 Student Responses to the Question “What Effect Does Using AI Have on Us?”

    30 Student Responses to the Question “What Effect Does Using AI Have on Us?”

    Prompt:

    Begin your essay with a 400-word personal reflection. This can be about your own experience or that of someone you interview. Focus on how using AI writing tools—like ChatGPT—has changed your habits when it comes to thinking, writing, or getting work done.

    From there, explore the deeper implications of relying on this technology. What happens when AI makes everything faster and easier? Does that convenience come at a cost? Reflect on the way AI can lull us into mental passivity—how it tempts us to hand over the hard work of thinking in exchange for quick results.

    Ask yourself: What kind of writing does AI actually produce? What are its strengths? Where does it fall short? And more importantly, what effect does using AI have on us? Does it sharpen our thinking, or make us more dependent? Do we risk becoming less original, less engaged—more like passive consumers of technology than active creators? As this process continues, are we becoming Non-Player Characters instead of humans with self-agency? Explain. 

    Finally, consider the trade-offs. Are we gaining a tool, or giving something up? Are we turning into characters in a Black Mirror episode—so enamored with convenience that we forget what it means to do the work ourselves? Use concrete examples and honest reflection to dig into the tension between human effort and technological assistance.

    Student Responses to Using AI Tools for Writing and Education:

    1. “I am impressed with the speed and convenience but the final product is overly polished and hollow.”
    2. “I am amazed by the speed of production but all the sentences look the same. Honestly, it’s numbing after a while.”
    3. “The writing is frustrating because it talks a lot without saying much.”
    4. “I don’t have to think as much,” “I save time not having to think,” and “I get used to this laziness.”
    5.  “AI writes better than I do, but it doesn’t have my unique voice.”
    6. “AI is like a steady writing partner, always there to help me and bounce off ideas, but lately I realize the more I depend on it, the less I challenge myself to think critically.”
    7. “Thanks to AI, I stopped reading books. Now I just get summaries of books. Now I get the information, but I no longer have a deep understanding.”
    8. “AI helps me take notes and organize ideas but it doesn’t help me truly listen, understand someone’s emotions, show empathy, or deal with uncertainty.”
    9. “AI writing is smooth and structured, but people aren’t. Real thought and emotions are messy. That’s where growth happens.”
    10. “When I’m tired, AI tempts me to just copy and paste it, and the more I use it in this manner, the stronger the temptation becomes.”
    11. “AI makes things really easy for me, but then I ask myself, ‘Am I really learning?’”
    12. “What started out as magical now has become woven into the fabric of daily life. Education has become boring.”
    13. “AI is a production superpower the way it inspires and organizes ideas, but I find over time I become more lazy, less creative, and rely on AI way too much.”
    14. “AI degrades the way we write and think. I can tell when something is written in AI-speak, without real human tone, and the whole experience is dehumanizing.”
    15. “I love AI because it saved me. I am not a native English speaker, so I rely on AI to help with my English. It’s like having a reliable tutor always by my side. But over time, I have become lazy and don’t have the same critical thinking I used to have. I see myself turning into an NPC.”
    16. “I have to use AI because the other students are using it. I should have the same advantages they have. But education has become less creative and more boring. It’s all about ease and convenience now.”
    17. “I used to love AI because it made me confident and motivated me to get my assignments in on my time. But over time, I lost my voice. Now everything is written with an AI voice.”
    18. “The more I use AI, the less I think things through on my own. I cut off my own thinking and just ask ChatGPT. It’s fast, but it kills creativity.”
    19. “When faced with a writing assignment and a blank mind, I would start things with ChatGPT, and it got things going. It wasn’t perfect, but I had something to start with, and I found this comforting. But as I got more confident with ChatGPT, I became less and less engaged with the education process. My default became convenience and nothing more.”
    20. “AI writing is so common, we don’t even ask if the writing is real anymore. No one cares. AI has made us all apathetic. We are NPCs in a video game.”
    21. “AI is a great tool because it helps everyone regardless of how much money we have, but it kills creativity and individuality. We’ve lost the pleasure of education. AI has become a mirror of our own superficial existence.”
    22. “When I first discovered AI to do my writing, I felt I had hit the jackpot, but then after taking so many shortcuts, I lost the love for what I was doing.”
    23. “It’s stressful to see a cursor blinking on a blank page, but thanks to AI, I can get something off and running quickly. The problem is that the words are clean and correct, but also generic. There is no depth to human emotion.”
    24. “I’ve been using AI since high school. A lot of its writing is useless. It doesn’t make sense. It’s inaccurate. It’s poorly written. It’s dehumanizing.”
    25. “AI is basically Google on steroids. I used to dread writing, but AI has pushed me to get my work done. The writing is polished but too perfect to be human writing. The biggest danger is that humans become too reliant on it.”
    26. “I barely use AI. It makes school trivial. It’s just another social media disease like TikTok, these streaming platforms that kill our attention spans and turn us into zombies.”
    27. “AI first felt like having the cheat code to get through school. But then I realized it puts us into a cognitive debt. We lose our tenacity and critical thinking.”
    28. “I am a mother and an avid reader, and I absolutely refuse to use AI for any purpose. AI can’t replace real writing, reading, or journaling. AI is a desecration of education and personal growth.”
    29. “At first, I used AI to get ideas, but over time I realized I was no longer thinking. I wasn’t struggling to come up with what I really thought or what I really wanted to argue about. AI silenced who I really was.”
    30. “Using AI to do the heavy lifting doesn’t sit right with me, so I programmed my AI to tutor and guide me through studying, rather than using it as a crutch, by providing prompts and tools to help me understand assignments. While my experience with AI has shown me its full capabilities, I’ve also learned that too much of it can ruin the entire experience, in this case, the learning experience.”

  • The Eight Ages of –ification

    The Eight Ages of –ification

    From Conformification to Enshittification: how every decade found a fresh way to ruin itself.


    The Age of Decline, Accelerated

    In Enshittification, Cory Doctorow argues that our decline isn’t gradual—it’s accelerating. Everything is turning to crap simultaneously, like civilization performing a synchronized swan dive into the sewer.

    The book’s subtitle, Why Everything Suddenly Got Worse and What to Do About It, suggests that degradation is now both universal and, somehow, fixable.

    Doctorow isn’t the first prophet to glimpse the digital abyss. Jaron Lanier, Jonathan Haidt, and other cultural Cassandras have long warned about the stupidification that comes from living inside the algorithmic aquarium. We swim in the same recycled sludge of dopamine and outrage, growing ever duller while congratulating ourselves on being “connected.”

    This numbness—the ethical anesthesia of the online age—makes us tolerate more crappiness from our corporate overlords. As the platforms enshittify, we invent our own little coping rituals. Some of us chant words with –ion suffixes as if they were incantations, linguistic ASMR to soothe our digital despair.

    When I saw Ozempic and ChatGPT promising frictionless perfection—weight loss without effort, prose without struggle—I coined Ozempification: the blissful surrender of self-agency to the cult of convenience.

    Now there’s an entire liturgy of –ifications, each describing a new layer of rot:


    • Enshittification — Doctorow’s coinage for the systematic decay of platforms that once worked.
    • Crapification / Encrappification — The transformation of quality into garbage in the name of efficiency.
    • Gamification — Turning life into a perpetual contest of meaningless points and dopamine rewards.
    • Attentionification — Reducing every act of expression to a plea for clicks.
    • Misinformationfication — When truth becomes a casualty of virality.
    • Ozempification — Replacing effort with optimization until we resemble our own avatars.
    • Stupidification — The great numbing: scrolling ourselves into idiocy while our neurons beg for mercy.

    But the crown jewel of this lexicon remains Enshittification—Doctorow’s diagnosis so precise that the American Dialect Society crowned it Word of the Year for 2023.

    Still, I’d like to push back on Doctorow’s suggestion that our current malaise is unique. Yes, technology accelerates decay, but each era has had its own pathology—its signature form of cultural rot. We’ve been creatively self-destructing for decades.

    So, let’s place Enshittification in historical context. Behold The Eight Ages of –ification: a timeline of civilization’s greatest hits in decline.


    1950s — Conformification

    The age of white fences and beige minds. America sold sameness as safety. Individuality was ironed flat, and television became the nation’s priest. Conformification is the fantasy that security comes from imitation—a tranquilized suburbia of identical dreams in identical ranch homes.


    1960s — Psychedelification

    When rebellion became transcendence through chemistry. Psychedelification was the belief that consciousness expansion could topple empires, if only the colors were bright enough. The result: self-absorption in tie-dye and the illusion that enlightenment could be mass-produced.


    1970s — Lustification

    A Freudian carnival of polyester and pelvic thrusts. From Deep Throat to Studio 54, desire was liberation and the body was both altar and marketplace. Lustification crowned pleasure as the last remaining ideology.


    1980s — Greedification

    When morality was replaced by market share. The decade baptized ambition in champagne and cocaine. Greedification is the conviction that money cleanses sin and that a Rolex can double as a rosary.


    1990s — Ironification

    The decade of smirks. Sincerity was cringe; irony was armor. Ironification made detachment the new intelligence: nothing believed, everything quoted, and feelings outsourced to sarcasm.


    2000s — Digitification

    Humanity uploaded itself. Digitification was the mass migration to the screen—the decade of Facebook envy, email anxiety, and dopamine disguised as connection. We stopped remembering and started refreshing.


    2010s — Influencification

    When everyone became a brand. Influencification turned authenticity into a business model and experience into content. The self became a product to be optimized for engagement.


    2020s — Enshittification

    Doctorow’s masterstroke: the final form of digital decay. Enshittification is what happens when every system optimizes for extraction—when user, worker, and platform all drown in the same algorithmic tar pit. It’s the exhaustion of meaning itself, disguised as progress.


    Epilogue: The 2030s — Reification

    If trends continue, we’ll soon enter Reification: a desperate attempt to make the unreal feel real again. After decades of filters, feeds, and frictionless fakery, we’ll long for something tangible—until, inevitably, we commodify that too.

    History repeats itself—only this time with better Wi-Fi.