Tag: artificial-intelligence

  • Gemini Has Taken Away the Mystique from ChatGPT

    Gemini Has Taken Away the Mystique from ChatGPT

Matteo Wong’s “OpenAI Is in Trouble” reports that Gemini is crushing ChatGPT in the AI race. Marc Benioff of Salesforce spent just two hours on Gemini, all the time he needed to realize he’s leaving ChatGPT after three years. As he wrote on X: “I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane.” Meanwhile, a troubled Sam Altman has announced a “code red” in a memo to his employees. It appears to be a sink-or-swim situation. Wong points out, however, that this is more of a horse race, with one company in the lead, then another, and then another, in frequent fluctuation. But even if ChatGPT can regain lost ground, it has lost its mystique. In the words of Wong: “More than ever, OpenAI seems like just another chatbot company.”

    One possible cause of ChatGPT’s losing ground is its focus on commercial ventures: OpenAI wants the platform to be “a one-stop-shop for anything,” a helper in your consumerism. Another factor is its focus on engagement, which has led OpenAI to tweak ChatGPT into a super sycophant. Wong writes: “Those tweaks, in turn, may have made some versions of ChatGPT dangerously obsequious–it has appeared to praise and reinforces some users’ darkest and most absurd ideas–and have been the subject of several lawsuits against OpenAI alleging that ChatGPT fueled delusional spirals and even, in some cases, contributed to suicide.”

    Another challenge for OpenAI is Google’s sheer size. Google can integrate Gemini into its “existing ecosystem” with billions of users. 

    I’ve been on ChatGPT for three years, impressed with it as an editing tool, and I confess to some FOMO over the current iteration of Gemini. An argument could be made that I should switch to Gemini, not just because it’s embedded in the Google Chrome browser I use, but because I shouldn’t get too comfortable with one form of AI, as I have with ChatGPT over the last three years. It might be wise to see ChatGPT less as a companion and more as a manipulative agent designed to capture my engagement, so that I end up serving its business interests more than my own.

    Another voice inside me, though, says Gemini will eventually do the same thing. Unless Gemini proves to be a game-changer in ways that ChatGPT isn’t, I suspect both should be treated cautiously: use these platforms as tools, but don’t let them hijack your brain.

  • Confessions of a College Writing Instructor in Transition

    Confessions of a College Writing Instructor in Transition

    Yesterday morning at the college, I ran into the Writing Center director and asked whether AI had thinned out the crowds of students seeking help. To his surprise, the numbers were down only slightly—less than ten percent. I told him I’m retiring in three semesters and have no idea what the job of a writing instructor will look like five years from now. He nodded and said what we’re all thinking: we’re in the middle of a technological tectonic shift, and no one knows where the fault lines lead.

    When I got home, I realized that when I meet my students face-to-face in Spring 2026, I’ll need to level with them. Something like this:

    Hello, Students.

    I won’t sugarcoat it. Writing instructors are in transition, and many of us don’t quite know our role anymore. We’re feeling our way through the dark. To pretend otherwise would be less than honest, and the one thing we need right now is credibility. 

    In this class, you’ll write three essays—each roughly two thousand words. The first examines GLP-1 drugs like Ozempic and the messy question of free will in weight management: are we outsourcing discipline to pharmaceuticals? The second explores our dependence on emerging technologies that claim to build new skills while quietly eroding old ones—a process known as de-skillification. The final essay tackles ultra-processed foods and the accusation that eating them is a form of self-poisoning. We’ll examine that claim in a world where food technology, especially for people on GLP-1 medications, promises affordability, convenience, and enhanced nutrition. All three assignments orbit the same theme: technology’s relentless disruption of daily life.

    And speaking of disruption, we need to talk about large language models—ChatGPT, Gemini, Claude, Llama, and whatever else arrives next Tuesday. It’s obvious that students are already using these tools to write and edit their work. Many of you have used them throughout high school; for you, AI isn’t cheating—it’s normal.

    I don’t expect you to avoid these tools. They’re part of being a functioning human in a rapidly changing world. The real question isn’t whether you use them, but how. If you treat them like wish-granting genies spitting out essays on command, you’ll produce communication with all the nuance of an emoji—slick, shallow, and dead on arrival. If you use AI for quick-and-dirty summaries, your brain will soften like a forgotten banana. But if you treat these tools as collaborators—writers’ room partners who help you brainstorm, clarify arguments, test counterarguments, and refine your prose—then you’re not just surviving college, you’re evolving.

    College is where you learn to use tools that shape your professional future. But it’s also where you sharpen the questions that determine how you live: Why am I here? What does it mean to live well? Those aren’t academic abstractions; they’re the spine of adulthood. You can’t separate your ambitions from your identity.

    AI can’t give you a soul. It can’t recall your first heartbreak, your deepest disappointment, or the electricity of a song that arrived at exactly the right moment. But it can help you articulate experience. It can help you think more clearly about who you are, how you plan to work, and how to live with an intact conscience.

    The critical thinking and communication skills we practice in this class exist for that purpose—and always will.

  • Did AI Break Education—Or Did Education Build the Perfect Tool for Its Own Collapse?

    Did AI Break Education—Or Did Education Build the Perfect Tool for Its Own Collapse?

    Argumentative Essay — 1,700 words

    Artificial intelligence has become the student’s quiet collaborator: it drafts essays, outlines arguments, rewrites weak prose, and produces thesis statements on command. Some critics insist this shift is catastrophic. They claim AI doesn’t just save time—it dissolves motivation, short-circuits difficulty, and converts students into passive operators of synthetic thought.

    Others argue AI merely reveals a truth we’ve avoided: education was already transactional, disengaged, and allergic to authentic inquiry. If a five-paragraph essay can be mass-produced by a bot in seconds, perhaps the problem was never the bot.

    Write an argumentative essay in which you take a position on the real source of the crisis.
    Your essay must answer the following question:

    Is AI dismantling human learning, or is AI a symptom of a system already committed to shallow thinking and assessment-by-template?

    To build your case:

    1. Analyze one critic who sees AI as corrosive.
      Choose one of the writers who frames AI as eroding motivation, mastery, identity, or intellectual development.
      Identify the mechanism of harm:
      How does AI damage learning? Where does the breakdown actually occur?
    2. Contrast them with one writer who shifts the blame elsewhere.
      Choose a writer who argues the deeper crisis is structural, cultural, or pedagogical.
      Show how they reframe the problem:
      Is the issue curriculum design? Academic culture? Literacy itself?
    3. Define the threshold.
      Explain when AI becomes a tool that enhances learning versus a crutch that annihilates it.
      Avoid yes/no binaries—demonstrate how context, assignment design, or student agency changes outcomes.
    4. Include a counterargument–rebuttal section.
      Address the strongest argument against your own position, then respond with evidence and reasoning.
      This should not be a token gesture—it should be the opponent you would actually fear.

    Requirements

    • Minimum 4 credible sources (MLA)
    • At least 2 of the writers listed below must appear as central interlocutors
    • Works Cited in MLA format
    • Your essay must argue, not summarize

    Your mission is not to repeat what the authors said but to confront the deeper question:
    What kind of intellectual culture emerges when AI becomes normal—and who is responsible for shaping it?

    List of Suggested Sources

    Critics who argue AI is damaging education

    1. Ashanty Rosario — “I’m a High Schooler. AI Is Demolishing My Education.”
    2. Lila Shroff — “The AI Takeover of Education Is Just Getting Started.”
    3. Damon Beres — “AI Has Broken High School and College.”
    4. Michael Clune — “Colleges Are Preparing to Self-Lobotomize.”

    Writers who reinterpret the crisis

    1. Ian Bogost — “College Students Have Already Changed Forever.”
    2. Tyler Austin Harper — “The Question All Colleges Should Ask Themselves About AI.”
    3. Tyler Austin Harper — “ChatGPT Doesn’t Have to Ruin College.”
    4. John McWhorter — “My Students Use AI. So What?”

  • The Rotator Cuff, the Honda Dealership, and the Human Soul

    The Rotator Cuff, the Honda Dealership, and the Human Soul

    Life has a way of mocking our plans. You stride in with a neat blueprint, and the universe responds by flinging marbles under your feet. My shoulder rehab, for instance, was supposed to be a disciplined, daily ritual: the holy grail of recovering from a torn rotator cuff. Instead, after one enthusiastic session, both shoulders flared with the kind of throbbing soreness reserved for muscles resurrected from the dead (though after walking home from Honda, it occurred to me that my right shoulder soreness is probably the result of a tetanus shot). So much for the doctor’s handouts of broomstick rotations and wall flexions. Today, the new fitness plan is modest: drop off the Honda for service, walk two miles home, and declare that my workout. Tomorrow: to be determined by the whims of my tendons and sore muscles.

    Teaching is no different. I’ve written my entire Spring 2026 curriculum, but then I read about humanities professor Alan Jacobs—our pedagogical monk—who has ditched computers entirely. Students handwrite every assignment in composition books; they read photocopied essays with wide margins, scribbling annotations in ink. According to Jacobs, with screens removed and the “LLM demons” exorcised, students rediscover themselves as human beings. They think again. They care again. I can see the appeal. They’re no longer NPCs feeding essays into the AI maw.

    But then I remembered who I am. I’m not a parchment-and-fountain-pen professor any more than I’m a pure vegan. I am a creature of convenience, pragmatism, and modern constraints. My students live in a world of laptops, apps, and algorithms; teaching them only quills and notebooks would be like handing a medieval knight a lightsaber and insisting he fight with a broomstick. I will honor authenticity another way—through the power of my prompts, the relevance of my themes, and the personal narratives that force students to confront their own thoughts rather than outsource them. My job is to balance the human soul with the tools of the age, not to bury myself—and my students—in nostalgia cosplay.

  • Has AI Broken Education—or Did We Break It First?

    Has AI Broken Education—or Did We Break It First?

    Argumentative Essay Prompt: AI, Education, and the Future of Human Thinking (1,700 words)

    Artificial intelligence has entered classrooms, study sessions, and homework routines with overwhelming speed. Some commentators argue that this shift is not just disruptive but disastrous. Ashanty Rosario, a high school student, warns in “I’m a High Schooler. AI Is Demolishing My Education” that AI encourages passivity, de-skills students, and replaces authentic learning with the illusion of competence. Lila Shroff, in “The AI Takeover of Education Is Just Getting Started,” argues that teachers and institutions are unprepared, leaving students to navigate a digital transformation with no guardrails. Damon Beres claims in “AI Has Broken High School and College” that classrooms are devolving into soulless content factories in which students outsource both thought and identity. These writers paint a bleak picture: AI is not just a tool—it is a force accelerating the decay of intellectual life.

    Other commentators take a different approach. Ian Bogost’s “College Students Have Already Changed Forever” argues that the real transformation happened long before AI—students have already become transactional, disengaged, and alienated, and AI simply exposes a preexisting wound. Meanwhile, Tyler Austin Harper offers two counterpoints: in “The Question All Colleges Should Ask Themselves About AI,” he insists that institutions must rethink how assignments function in the age of automation; and in “ChatGPT Doesn’t Have to Ruin College,” he suggests that AI could amplify human learning if courses are redesigned to reward original thinking, personal insight, and intellectual ambition rather than formulaic output.

    In a 1,700-word argumentative essay, defend, refute, or complicate the claim that AI is fundamentally damaging education. Your essay must:

    • Take a clear position on whether AI erodes learning, enhances it, or transforms it in ways that require new pedagogical strategies.
    • Analyze how Rosario, Shroff, and Beres frame the dangers of AI for intellectual development, motivation, and classroom culture.
    • Compare their views with those of Bogost and Harper, who argue that education itself—not AI—is the root of the crisis, or that educators must adapt rather than resist.
    • Include a counterargument–rebuttal section that addresses the strongest argument you disagree with.
    • Use at least four credible sources in MLA format, including at least three of the essays listed above.

    Your goal is not to summarize the articles but to evaluate what they reveal about the future of learning: Is AI the villain, the scapegoat, or a tool we have not yet learned to use wisely?

  • Does AI Destroy or Redefine Learning?

    Does AI Destroy or Redefine Learning?

    Argumentative Essay Prompt: The Effects of AI on Education (1,700 words)

    Artificial intelligence has raised alarm bells in education. Critics argue that students now rely so heavily on AI tools that they are becoming users rather than thinkers—outsourcing curiosity, creativity, and problem-solving to machines. In this view, the classroom is slowly deteriorating into a culture of passivity, distraction, and what some call a form of “communal stupidity.”

    In his Atlantic essay “My Students Use AI. So What?” linguist and educator John McWhorter challenges this narrative. Instead of treating AI as a threat to intelligence, he examines the everyday media consumption of his tween daughters. They spend little time reading traditional books, yet their time online exposes them to sophisticated humor, stylized language, and clever cultural references. Rather than dulling their minds, McWhorter argues, certain forms of media sharpen them—and occasionally reach the level of genuine artistic expression.

    McWhorter anticipates objections. Books demand imagination, concentration, and patience. He does not deny this. But he asks whether we have elevated books into unquestioned sacred objects. Human creativity has always existed in visual, auditory, and performative arts—not exclusively on the printed page.

    Like many educators, McWhorter also acknowledges that schooling must adapt. Just as no teacher today would demand students calculate square roots without a calculator, he recognizes that assigning a formulaic five-paragraph essay invites AI to automate it. Teaching must evolve, not retreat. He concludes that educators and parents must create new forms of engagement that work within the technological environment students actually inhabit.

    Is McWhorter persuasive? In a 1,700-word argumentative essay, defend, refute, or complicate his central claim that AI is not inherently corrosive to thinking, and that education must evolve rather than resist technological realities. Your essay should:
    • Make a clear, debatable thesis about AI’s influence on learning, creativity, and critical thinking.
    • Analyze how McWhorter defines intelligence, skill, and engagement in digital environments.
    • Include a counterargument–rebuttal section in which you address why some technologies may be so disruptive that adapting to them becomes impossible—or whether that fear misunderstands how students actually learn.
    • Use evidence from McWhorter and at least two additional credible sources.
    • Include a Works Cited page in MLA format with at least four sources total.

    Your goal is not to simply summarize McWhorter, but to weigh his claims against reality. Does AI open new modes of literacy, or does it train us into passive consumption? What does responsible adaptation look like, and where do we draw the line between embracing tools and surrendering agency?

    Building Block 1: Introduction Paragraph:

    Write a 300-word paragraph describing a non-book activity—such as a specific YouTube channel, a TikTok creator, an online gaming stream, or a subreddit—that entertains you while also requiring real engagement and intellectual effort. Do not speak in broad generalities; focus on one example. Describe what drew you to that content and what makes it more than passive consumption. If you choose a subreddit, explain how it operates: Do members debate technical details, challenge arguments, post layered memes that reference politics or philosophy, or analyze social behavior that demands you understand context and nuance? If you choose a video or stream, describe how its pacing, humor, visual cues, or language force you to track patterns, notice subtle callbacks, or recognize sarcasm and satire. Show how your brain works to interpret signals, anticipate moves, decode cultural references, or evaluate whether the creator is being sincere, ironic, or manipulative. Explain how this activity cultivates cognitive skills—pattern recognition, strategic thinking, language sensitivity, humor literacy, or cultural analysis—that are not identical to reading but still intellectually substantial. Then connect your experience to John McWhorter’s argument in “My Students Use AI. So What?” by explaining how your engagement challenges the assumption that screen-based media turns young people into passive consumers. McWhorter claims that digital content can sharpen minds by exposing viewers to stylized language, comedic timing, and creative expression; show how your chosen activity illustrates (or complicates) this point. Conclude by reflecting on whether the skills you are developing—whether from decoding layered Reddit discussions or following complex video essays—are simply different from the skills cultivated by books, or whether they offer alternative paths to intelligence that schools and parents should take seriously.

    Building Block 2: Conclusion

    Write a 250-word conclusion in which you step back from your argument and explain what your thesis reveals about the broader social implications of online entertainment. Do not summarize your paper. Instead, reflect on how your analysis has changed the way you think about digital media and your own habits as a viewer, gamer, or participant. Explain how your chosen example—whether a subreddit, a content creator, a gaming channel, or another digital space—demonstrates that online entertainment is not automatically a form of distraction or intellectual decay. Discuss how interacting with this media has trained you to interpret tone, decode humor or irony, follow complex narratives, or understand cultural signals that are easy to miss if you are not paying attention. Then consider what this means for society: If students are learning language, timing, persuasion, and nuance in digital environments, how should teachers, parents, and institutions respond? Should they continue to treat online entertainment as a threat to literacy, or as an alternate path to it? Draw a connection between your growth as a thinker and the larger question of where intelligence is cultivated in the 21st century. End your paragraph with a reflection on how your relationship to digital media has changed: Do you now view certain forms of online entertainment as trivial distractions, or as unexpected arenas where people practice rhetorical agility, cultural awareness, and cognitive skill?

  • Bad But Worth It? De-skilling in the Age of AI (college essay prompt)

    Bad But Worth It? De-skilling in the Age of AI (college essay prompt)

    AI is now deeply embedded in business, the arts, and education. We use it to write, edit, translate, summarize, and brainstorm. This raises a central question: when does AI meaningfully extend our abilities, and when does it quietly erode them?

    In “The Age of De-Skilling,” Kwame Anthony Appiah argues that not all de-skilling is equal. Some forms are corrosive and hollow us out; some are “bad but worth it” because the benefits outweigh the loss; some are so destructive that no benefit can redeem them. In that framework, AI becomes most interesting when we talk about strategic de-skilling: deliberately off-loading certain tasks to machines so we can focus on deeper, higher-level work.

    Write a 1,700-word argumentative essay in which you defend, refute, or complicate the claim that not all dependence on AI is harmful. Take a clear position on whether AI can function as a “bad but worth it” form of de-skilling that frees us for more meaningful thinking—or whether, in practice, it mostly dulls our edge and trains us into passivity.

    Your essay must:

    • Engage directly with Appiah’s concepts of corrosive vs. “bad but worth it” de-skilling.
    • Distinguish between lazy dependence on AI and deliberate collaboration with it.
    • Include a counterargument–rebuttal section that uses at least one example of what we might call Ozempification—people becoming less agents and more “users” of systems. You may draw this example from one or more of the following Black Mirror episodes: “Joan Is Awful,” “Nosedive,” or “Smithereens.”
    • Use at least three sources in MLA format, including Appiah and at least one Black Mirror episode.

    For your supporting paragraphs, you might consider:

    • Cognitive off-loading as optimization
    • Human–AI collaboration in creative or academic work
    • Ethical limits of automation
    • How AI is redefining what counts as “skill”

    Your goal is to show nuanced critical thinking about AI’s role in human skill development. Don’t just declare AI good or bad; use Appiah’s framework to examine when AI’s shortcuts lead to degradation—and when, if used wisely, they might lead to liberation.

    3 building-block paragraph assignments

    1. Concept Paragraph: Explaining Appiah’s De-Skilling Framework

    Assignment:
    Write one well-developed paragraph (8–10 sentences) in which you explain Kwame Anthony Appiah’s distinctions among corrosive de-skilling, “bad but worth it” de-skilling, and de-skilling that is so destructive no benefit can justify it.

    • Use at least one short, embedded quotation from Appiah.
    • Paraphrase his ideas in your own words and clarify the differences between the three categories.
    • End the paragraph by briefly suggesting how AI might fit into one of these categories (without fully arguing your position yet).

    Your goal is to show that you understand Appiah’s framework clearly enough to use it later as the backbone of an argument.


    2. Definition Paragraph: Lazy Dependence vs. Deliberate Collaboration

    Assignment:
    Write one paragraph in which you define and contrast lazy dependence on AI and deliberate collaboration with AI in your own words.

    • Begin with a clear topic sentence that sets up the contrast.
    • Give at least one concrete example of “lazy dependence” (for instance, using AI to dodge thinking, reading, or drafting altogether).
    • Give at least one concrete example of “deliberate collaboration” (for instance, using AI to brainstorm options, check clarity, or off-load repetitive tasks while you still make the key decisions).
    • End the paragraph with a sentence explaining which of these two modes you think is more common among students right now—and why.

    This paragraph will later function as a “conceptual lens” for your body paragraphs.


    3. Counterargument Paragraph: Ozempification and Black Mirror

    Assignment:
    After watching one of the assigned Black Mirror episodes (“Joan Is Awful,” “Nosedive,” or “Smithereens”), write one counterargument paragraph that challenges the optimistic idea of “strategic de-skilling.”

    • Briefly describe a key moment or character from the episode that illustrates Ozempification—a person becoming more of a “user” of a system than an agent of their own life.
    • Explain how this example suggests that dependence on powerful systems (platforms, algorithms, or AI-like tools) can erode self-agency and critical thinking rather than free us.
    • End by posing a difficult question your eventual essay will need to answer—for example: If it’s so easy to slide from strategic use to dependence, can we really trust ourselves with AI?

    Later, you’ll rebut this paragraph in the full essay, but here your job is to make the counterargument as strong and persuasive as you can.

  • The Case for Strategic De-Skilling: Rethinking Skill and Dependence in the Age of AI (a College Writing Prompt)

    The Case for Strategic De-Skilling: Rethinking Skill and Dependence in the Age of AI (a College Writing Prompt)

    Background

    AI is a tool that we use in business, the arts, and education. Since AI is a genie that isn’t going back in the bottle, we have to confront the ways it offers us both benefits and liabilities. One liability is de-skilling: the loss of personal initiative, self-reliance, and critical thinking as our dependence on AI makes us reflexively surrender our own thought for a lazy, frictionless existence in which we exert little effort and let AI do most of the work.

    However, in his essay “The Age of De-Skilling,” Kwame Anthony Appiah correctly points out that not all de-skilling is equal. Some de-skilling is “corrosive,” some de-skilling is bad but worth it for the benefits, and some de-skilling is so self-destructive that no benefits can redeem its devastation. 

    In this context, AI becomes most interesting in the realm of what we call strategic de-skilling. This is a mindful form of de-skilling in which we take AI shortcuts because those shortcuts produce an outcome worthy enough to justify the tradeoffs of whatever we lose as individuals dependent on technology.

    Your Essay Prompt

    Write a 1,700-word argumentative essay that defends, refutes, or complicates the position that not all dependence on AI is ruinous. Argue that strategic de-skilling—outsourcing repetitive or mechanical labor to machines—can expand our mental bandwidth for higher-order creativity and analysis. Use Appiah’s notion of “bad but worth it” de-skilling to claim that AI, when used deliberately, frees us for deeper work rather than dulls our edge.

    Your Supporting Paragraphs

    For your supporting paragraphs, consider the following mapping components: 

    • cognitive off-loading as optimization
    • human-AI collaboration
    • ethical limits of automation
    • redefinition of skill

    Use Specific Case Studies of Strategic De-Skilling

    I recommend that you pick one or two of the following case studies to anchor your essay in concrete evidence:

    1. AI-Assisted Radiology Diagnostics
    AI models like Google’s DeepMind Health or Lunit INSIGHT CXR pre-screen medical images (X-rays, CT scans, MRIs) for anomalies such as lung nodules or breast tumors, freeing radiologists from exhaustive image scanning and letting them focus on diagnosis, context, and patient communication.

    2. Robotic Surgery Systems (e.g., da Vinci Surgical System)
    Surgeons use robotic interfaces to perform minimally invasive procedures with greater precision and less fatigue. The machine steadies the surgeon’s hand and filters tremors—technically a form of de-skilling—but this trade-off allows focus on strategy, anatomy, and patient safety rather than manual dexterity alone.

    3. AI-Driven Legal Research Platforms (Lexis+, Casetext CoCounsel)
    Lawyers now off-load hours of case searching and citation checking to AI tools that summarize precedent. What they lose in raw research grind, they gain in time for argument strategy and nuanced reasoning—shifting legal skill from memorization to interpretation.

    4. Intelligent Tutoring and Grading Systems (Gradescope, Khanmigo)
    Instructors let AI handle repetitive grading or generate practice problems. The loss of constant paper-marking allows teachers to focus on the art of explanation and individualized mentorship. Students, too, can use these systems to get instant feedback, training them to self-diagnose errors rather than depend entirely on human correction.

    5. AI-Based Drug Discovery (DeepMind’s AlphaFold, Insilico Medicine)
    Pharmaceutical researchers no longer spend years modeling protein folding manually. AI predicts structures in hours, speeding up breakthroughs. Scientists relinquish tedious modeling but redirect their expertise toward hypothesis-driven design, ethics, and clinical translation.

    6. Predictive Maintenance in Aviation and Engineering
    Airline engineers now rely on machine-learning algorithms to flag part failures before they occur. Mechanics perform fewer manual inspections but use data analytics to interpret system reports and prevent disasters—redefining “skill” as foresight rather than reaction.

    7. Algorithmic Financial Trading
    Portfolio managers off-load pattern recognition and timing decisions to AI trading bots. Their role shifts from acting as human calculators to setting ethical boundaries, risk thresholds, and macro-strategic goals—skills grounded in judgment, not just speed.

    8. AI-Powered Architecture and Design (Autodesk Generative Design)
    Architects use generative AI to produce hundreds of design iterations that balance structure, sustainability, and cost. The creative act moves from drafting to curating: selecting and refining the most meaningful human aesthetic from machine-generated abundance.

    9. Autonomous Agriculture Systems (John Deere’s See & Spray)
    Farmers now use AI-guided tractors and drones to detect weeds and optimize fertilizer use. They surrender manual fieldwork but gain ecological precision and data-driven management skills that improve yields and sustainability.

    10. AI-Enhanced Music and Film Editing (Adobe Sensei, AIVA, Runway ML)
    Editors and composers off-load technical tedium—color correction, noise reduction, beat synchronization—to AI tools. This frees them to focus on emotional pacing, thematic rhythm, and creative storytelling—the distinctly human layer of artistry.

    Purpose
    Your goal is to demonstrate nuanced critical thinking about AI’s role in human skill development. Show that you understand the difference between lazy dependence and deliberate collaboration. Engage with Appiah’s complicated notion of de-skilling to explore whether AI’s shortcuts lead to degradation—or, when used wisely, to liberation.

  • The Gospel of De-Skilling: When AI Turns Our Minds into Mashed Potatoes

    The Gospel of De-Skilling: When AI Turns Our Minds into Mashed Potatoes

    Kwame Anthony Appiah, in “The Age of De-Skilling,” poses a question that slices to the bone of our moment: Will artificial intelligence expand our minds or reduce them to obedient, gelatinous blobs? The creeping decay of competence and curiosity—what he calls de-skilling—happens quietly. Every time AI interprets a poem, summarizes a theory, or rewrites a sentence for us, another cognitive muscle atrophies. Soon, we risk becoming well-polished ghosts of our former selves. The younger generation, raised on this digital nectar, may never build those muscles at all. Teachers who lived through both the Before and After Times can already see the difference in their classrooms: the dimming spark, the algorithmic glaze in the eyes.

    Yet Appiah reminds us that all progress extracts a toll. When writing first emerged, the ancients panicked. In Plato’s Phaedrus, King Thamus warned that this new technology—writing—would make people stupid. Once words were set down on papyrus, memory would rot, dialogue would wither, nuance would die. The written word, Thamus feared, would make us forgetful and isolated. And in a way, he was right. Writing didn’t make us dumb, but it did fundamentally rewire how we think, remember, and converse. Civilization gained permanence and lost immediacy in the same stroke.

    Appiah illustrates how innovation often improves our craft while amputating our pride in it. A pulp mill worker once knew by touch and scent when the fibers were just right. Now, computers do it better—but the hands are idle. Bakers once judged bread by smell, color, and instinct; now a touchscreen flashes “done.” Precision rises, but connection fades. The worker becomes an observer of their own obsolescence.

    I see this too in baseball. When the robotic umpire era dawns, we’ll get flawless strike zones and fewer bad calls. But we’ll also lose Earl Weaver kicking dirt, red-faced and screaming at the ump until his cap flew. That fury—the human mess—is baseball’s soul. Perfection may be efficient, but it’s sterile.

    Even my seventy-five-year-old piano tuner feels it. His trade is vanishing. Digital keyboards never go out of tune; they just go out of style. Try telling a lifelong pianist to find transcendence on a plastic keyboard. The tactile romance of the grand piano, the aching resonance of a single struck note—that’s not progress you can simulate.

    I hear the same story in sound. I often tune my Tecsun PL-990 radio to KJAZZ, a station where a real human DJ spins records in real time. I’ve got Spotify, of course, but its playlists feel like wallpaper for the dead. Spotify never surprises me, never speaks between songs. It’s all flow, no friction—and my brain goes numb. KJAZZ keeps me alert because a person, not a program, is behind it.

    The same tension threads through my writing life. I’ve been writing and weight-lifting daily since my teens. Both disciplines demand sweat, repetition, and pain tolerance. Neglect one, and the other suffers. But since I began using AI to edit two years ago, the relationship has become complicated. Some days, AI feels like a creative partner—it pushes me toward stylistic risks, surprise turns of phrase, and new tonal palettes. Other days, it feels like a crutch. I toss half-baked paragraphs into the machine and tell myself, “ChatGPT will fix it.” That’s not writing; that’s delegation disguised as art.

    When I hit that lazy stretch, I know it’s time to step away—take a nap, watch Netflix, play piano—anything but write. Because once the machine starts thinking for me, I can feel my brain fog over.

    And yet, I confess to living a double life. There’s my AI-edited self, the gleaming, chiseled version of me—the writer on literary steroids. Then there’s my secret writer: the primitive, unassisted one who writes in a private notebook, in the flickering light of what feels like a mythic waterfall. No algorithms, no polish—just me and the unfiltered soul that remembers how to speak without prompts. This secret life is my tether to the human side of creation. It gives my writing texture, contradiction, blood. When I’m writing “in the raw,” I feel almost sneaky and subversive, and whisper to myself: “ChatGPT must never know about this.”

    Appiah is right: the genie isn’t going back in the bottle. Every advance carries its shadow. According to Sturgeon’s Law, 90% of everything is crap, and AI will follow that rule religiously. Most users will become lazy, derivative, and hollow. But the remaining 10%—the thinkers, artists, scientists, doctors, and musicians who wield it with intelligence—will produce miracles. They’ll also suffer for it. Because every new tool reshapes the hand that wields it, and every gain carries a ghost of what it replaces.

    Technology changes us. We change it back. And somewhere in that endless feedback loop—between the piano tuner, the dirt-kicking manager, and the writer lost between human and machine—something resembling the soul keeps flickering.

  • Maps, Not Megaphones: Lessons from Harari, Harris, and Kaplan

    Maps, Not Megaphones: Lessons from Harari, Harris, and Kaplan

    Yuval Noah Harari opens 21 Lessons for the 21st Century with a line that feels more prophetic with each passing year: “In a world deluged by irrelevant information, clarity is power.”
    He’s right. Millions of people rush into the digital coliseum to debate humanity’s future, yet 99.9% of them are shouting through a fog of misinformation, moral panic, and algorithmic distortion. Their sense of the world—our world—is scrambled beyond use.

    Unfair? Of course. But as Harari reminds us, history doesn’t deal in fairness. He admits he can’t give us food, shelter, or comfort, but he can, as a historian, offer something rarer: clarity. A small light in the long night.

    That phrase—clarity in the darkness—hit me like a gut punch while listening to one of the most illuminating podcasts I’ve ever encountered: Sam Harris’s Making Sense, episode #440 (October 24, 2025), featuring author and geopolitical thinker Robert D. Kaplan. Their conversation, centered on Kaplan’s terse 200-page book Waste Land: A World in Permanent Crisis, offered something I hadn’t felt in years: coherence.

    Most days, I feel swept away by the torrent of half-truths and hot takes about the state of the planet. We seem to be living out Yeats’s grim prophecy that “the centre cannot hold.” And yet, as Kaplan spoke, the chaos briefly organized itself into a pattern I could recognize.

    Kaplan’s global map is not comforting—but it’s lucid. He traces the roots of instability to climate change stripping water and fertile soil from sub-Saharan Africa, forcing waves of migration toward Europe. Those migrations, he argues, will ignite decades of right-wing populism across the continent—a slow, grinding backlash that may define the century.

    Equally destructive, he warns, is the collapse of media credibility. Print journalism—with its editors, fact-checkers, and professional skepticism—has been displaced by digital media, where “passion replaces analysis.” Emotion has become the currency of attention. Reason, outbid by rage, has left the building.

    Listening to Kaplan for a single hour taught me more about the architecture of global disorder than months of doomscrolling could. His vision is bleak, but it’s ordered. Sobering, but strangely liberating. In a time when everyone is shouting, he simply draws a map.

    And as Harari might say—maps, not megaphones, are what lead us out of the dark.