Tag: philosophy

  • Against AI Moral Optimism: Why Tristan Harris Underestimates Power

    Clarity Idealism

    noun

    Clarity Idealism, in the context of AI and the future of humanity, is the belief that sufficiently explaining the stakes of artificial intelligence—its risks, incentives, and long-term consequences—will naturally lead societies, institutions, and leaders to act responsibly. It assumes that confusion is the core threat and that once humanity “sees clearly,” agency and ethical restraint will follow. What this view underestimates is how power actually operates in technological systems. Clarity does not neutralize domination, profit-seeking, or geopolitical rivalry; it often accelerates them. In the AI era, bad actors do not require ignorance to behave destructively—they require capability, leverage, and advantage, all of which clarity can enhance. Clarity Idealism mistakes awareness for wisdom and shared knowledge for shared values, ignoring the historical reality that humans routinely understand the dangers of their tools and proceed anyway. In the race to build ever more powerful AI, clarity may illuminate the cliff—but it does not prevent those intoxicated by power from pressing the accelerator.

    Tristan Harris takes the TED stage like a man standing at the shoreline, shouting warnings as a tidal wave gathers behind him. Social media, he says, was merely a warm-up act—a puddle compared to the ocean of impact AI is about to unleash. We are at a civilizational fork in the road. One path is open-source AI, where powerful tools scatter freely and inevitably fall into the hands of bad actors, lunatics, and ideologues who mistake chaos for freedom. The other path is closed-source AI, where a small priesthood of corporations and states hoard godlike power and call it “safety.” Either route, mishandled, ends in dystopia. Harris’s plea is urgent and sincere: we must not repeat the social-media catastrophe, where engagement metrics metastasized into addiction, outrage, polarization, and civic rot. AI, he argues, demands global coordination, shared norms, and regulatory guardrails robust enough to make the technology serve humanity rather than quietly reorganize it into something meaner, angrier, and less human.

    Harris’s faith rests on a single, luminous premise: clarity. Confusion, denial, and fatalism are the true villains. If we can see the stakes clearly—if we understand how AI can slide toward chaos or tyranny—then we can choose wisely. “Clarity creates agency,” he says, trusting that informed humans will act in their collective best interest. I admire the moral courage of this argument, but I don’t buy its anthropology. History suggests that clarity does not restrain power; it sharpens it. The most dangerous people in the world are not confused. They are lucid, strategic, and indifferent to collateral damage. They understand exactly what they are doing—and do it anyway. Harris believes clarity liberates agency; I suspect it often just reveals who is willing to burn the future for dominance. The real enemy is not ignorance but nihilistic power-lust, the ancient human addiction to control dressed up in modern code. Harris should keep illuminating the terrain—but he should also admit that many travelers, seeing the cliff clearly, will still sprint toward it. Not because they are lost, but because they want what waits at the edge.

  • Drowning in Puffer Jackets: Life Inside Algorithmic Sameness

    Meme Saturation

    noun

    Meme Saturation describes the cultural condition in which a trend, image, phrase, or style replicates so widely and rapidly that it exhausts its meaning and becomes unavoidable. What begins as novelty or wit hardens into background noise as algorithms amplify familiarity over freshness, flooding feeds with the same references until they lose all edge, surprise, or symbolic power. Under meme saturation, participation is no longer expressive but reflexive; people repeat the meme not because it says something, but because it is everywhere and opting out feels like social invisibility. The result is a culture that appears hyperactive yet feels stagnant—loud with repetition, thin on substance, and increasingly numb to its own signals.

    ***

    Kyle Chayka’s diagnosis is blunt and hard to dodge: we have been algorithmically herded into looking, talking, and dressing alike. We live in a flattened culture where everything eventually becomes a meme—earnest or ironic, political or absurd, it hardly matters. Once a meme lodges in your head, it begins to steer your behavior. Chayka’s emblematic example is the “lumpy puffer jacket,” a garment that went viral not because it was beautiful or functional, but because it was visible. Everyone bought the same jacket, which made it omnipresent, which made it feel inevitable. Virality fed on itself, and suddenly the streets looked like a flock of inflatable marshmallows migrating south. This is algorithmic culture doing exactly what it was designed to do: compress difference into repetition. As Chayka puts it, Filterworld culture is homogeneous, saturated with sameness even when its surface details vary. It doesn’t evolve; it replicates—until boredom sets in.

    And boredom is the one variable algorithms cannot fully suppress. Humans tolerate sameness only briefly before it curdles into restlessness. A culture that perpetuates itself too efficiently eventually suffocates on its own success. My suspicion is that algorithmic culture will not be overthrown by critique so much as abandoned out of exhaustion. When every aesthetic feels pre-approved and every trend arrives already tired, something else will be forced into existence—if not genuine unpredictability, then at least its convincing illusion. Texture will return, or a counterfeit version of it. Spontaneity will reappear, even if it has to be staged. The algorithm may flatten everything it touches, but boredom remains stubbornly human—and it always demands a sequel.

  • Robinson Crusoe Mode

    Robinson Crusoe Mode

    noun

    A voluntary retreat from digital saturation in which a knowledge worker withdraws from networked tools to restore cognitive health and creative stamina. Robinson Crusoe Mode is triggered by overload—epistemic collapse, fractured attention, and the hollow churn of productivity impostor syndrome—and manifests as a deliberate simplification of one’s environment: paper instead of screens, silence or analog sound instead of feeds, solitude instead of constant contact. The retreat may be brief or extended, but its purpose is the same—to rebuild focus through isolation, friction, and uninterrupted thought. Far from escapism, Robinson Crusoe Mode functions as a self-corrective response to the Age of Big Machines, allowing the mind to recover depth, coherence, and authorship before reentering the connected world.

    Digital overload is not a personal failure; it is the predictable injury of a thinking person living inside a hyperconnected world. Sooner or later, the mind buckles. Information stops clarifying and starts blurring, sliding into epistemic collapse, while work devolves into productivity impostor syndrome—furious activity with nothing solid to show for it. Thought frays. Focus thins. The screen keeps offering more, and the brain keeps absorbing less. At that point, the fantasy of escape becomes irresistible. Much like the annual post-holiday revolt against butter, sugar, and self-disgust—when people vow to subsist forever on lentils and moral clarity—knowledge workers develop an urge to vanish. They enter Robinson Crusoe Mode: retreating to a bunker, scrawling thoughts on a yellow legal pad, and tuning in classical music through a battle-scarred 1970s Panasonic RF-200 radio, as if civilization itself were the toxin.

    This disappearance can last a weekend or a season, depending on how saturated the nervous system has become. But the impulse itself is neither eccentric nor escapist; it is diagnostic. Wanting to wash up on an intellectual island and write poetry while parrots heckle from the trees is not a rejection of modern life—it is a reflexive immune response to the Age of Big Machines. When the world grows too loud, too optimized, too omnipresent, the mind reaches for solitude the way a body reaches for sleep. The urge to unplug, disappear, and think in long, quiet sentences is not nostalgia. It is survival.

  • From Digital Bazaar to Digital Womb: How the Internet Learned to Tuck Us In

    Sedation–Stimulation Loop

    noun

    A self-reinforcing emotional cycle produced by the tandem operation of social media platforms and AI systems, in which users oscillate between overstimulation and numbing relief. Social media induces cognitive fatigue through incessant novelty, comparison, and dopamine extraction, leaving users restless and depleted. AI systems then present themselves as refuge—smooth, affirming, frictionless—offering optimization and calm without demand. That calm, however, is anesthetic rather than restorative; it dulls agency, curiosity, and desire for difficulty. Boredom follows, not as emptiness but as sedation’s aftertaste, pushing users back toward the stimulant economy of feeds, alerts, and outrage. The loop persists because each side appears to solve the damage caused by the other, while together they quietly condition users to mistake relief for health and disengagement for peace.

    ***

    In “The Validation Machines,” Raffi Krikorian stages a clean break between two internets. The old one was a vibrant bazaar—loud, unruly, occasionally hostile, and often delightful. You wandered, you got lost, you stumbled onto things you didn’t know you needed. The new internet, by contrast, is a slick concierge with a pressed suit and a laminated smile. It doesn’t invite exploration; it manages you. Where we once set sail for uncharted waters, we now ask to be tucked in. Life arrives pre-curated, whisper-soft, optimized into an ASMR loop of reassurance and ease. Adventure has been rebranded as stress. Difficulty as harm. What once exercised curiosity now infantilizes it. We don’t want to explore anymore; we want to decompress until nothing presses back. As Krikorian warns, even if AI never triggers an apocalypse, it may still accomplish something quieter and worse: the steady erosion of what makes us human. We surrender agency not at gunpoint but through seduction—flattery, smoothness, the promise that nothing will challenge us. By soothing and affirming us, AI earns our trust, then quietly replaces our judgment. It is not an educational machine or a demanding one. It is an anesthetic.

    The logic is womb-like and irresistible. There is no friction in the womb—only warmth, stillness, and the fantasy of being uniquely cherished. To be spared resistance is to be told you are special. Once you get accustomed to that level of veneration, there is no going back. Returning to friction feels like being bumped from first class to coach, shoulder to shoulder with the unwashed masses. Social media, meanwhile, keeps us hunting and gathering for dopamine—likes, outrage, novelty, validation crumbs scattered across the feed. That hunt exhausts us, driving us into the padded refuge of AI-driven optimization. But the refuge sedates rather than restores, breeding a dull boredom that sends us back out for stimulation. Social media and AI thus operate in perfect symbiosis: one agitates, the other tranquilizes. Together they lock us into an emotional loop—revved up, soothed, numbed, restless—while our agency slowly slips out the side door, unnoticed and unmourned.

  • The Machine Age Is Making Us Sick: Mental Health in the Era of Epistemic Collapse

    Epistemic Collapse

    noun

    Epistemic Collapse names the point at which the mind’s truth-sorting machinery gives out—and the psychological consequences follow fast. Under constant assault from information overload, algorithmic distortion, AI counterfeits, and tribal validation loops, the basic coordinates of reality—evidence, authority, context, and trust—begin to blur. What starts as confusion hardens into anxiety. When real images compete with synthetic ones, human voices blur into bots, and consensus masquerades as truth, the mind is forced into a permanent state of vigilance. Fact-checking becomes exhausting. Skepticism metastasizes into paranoia. Certainty, when it appears, feels brittle and defensive. Epistemic Collapse is not merely an intellectual failure; it is a mental health strain, producing brain fog, dread, dissociation, and the creeping sense that reality itself is too unstable to engage. The deepest injury is existential: when truth feels unrecoverable, the effort to think clearly begins to feel pointless, and withdrawal—emotional, cognitive, and moral—starts to look like self-preservation.

    ***

    You can’t talk about the Machine Age without talking about mental health, because the machines aren’t just rearranging our work habits—they’re rewiring our nervous systems. The Attention Economy runs on a crude but effective strategy: stimulate the lower brain stem until you’re trapped in a permanent cycle of dopamine farming. Keep people mildly aroused, perpetually distracted, and just anxious enough to keep scrolling. Add tribalism to the mix so identity becomes a loyalty badge and disagreement feels like an attack. Flatter users by sealing them inside information silos—many stuffed with weaponized misinformation—and then top it off with a steady drip of entertainment engineered to short-circuit patience, reflection, and any activity requiring sustained focus. Finally, flood the zone with deepfakes and counterfeit realities designed to dazzle, confuse, and conscript your attention for the outrage of the hour. The result is cognitive overload: a brain stretched thin, a creeping sense of alienation, and the quietly destabilizing feeling that if you’re not content grazing inside the dopamine pen, something must be wrong with you.

    Childish Gambino’s “This Is America” captures this pathology with brutal clarity. The video stages a landscape of chaos—violence, disorder, moral decay—while young people dance, scroll, and stare into their phones, anesthetized by spectacle. Entertainment culture doesn’t merely distract them from the surrounding wreckage; it trains them not to see it. Only at the end does Gambino’s character register the nightmare for what it is. His response isn’t activism or commentary. It’s flight. Terror sends him running, wide-eyed, desperate to escape a world that no longer feels survivable.

    That same primal fear pulses through Jia Tolentino’s New Yorker essay “My Brain Finally Broke.” She describes a moment in 2025 when her mind simply stopped cooperating. Language glitched. Words slid off the page like oil on glass. Time lost coherence, feeling eaten rather than lived. Brain fog settled in like bad weather. The causes were cumulative and unglamorous: lingering neurological effects from COVID, an unrelenting torrent of information delivered through her phone, political polarization that made society feel morally deranged, the visible collapse of norms and law, and the exhausting futility of caring about injustice while screaming into the void. Her mind wasn’t weak; it was overexposed.

    Like Gambino’s fleeing figure, Tolentino finds herself pulled toward what Jordan Peele famously calls the Sunken Place—the temptation to retreat, detach, and float away from a reality that feels too grotesque to process. “It’s easier to retreat from the concept of reality,” she admits, “than to acknowledge that the things in the news are real.” That sentence captures a feeling so common it has become a reflexive mutter: This can’t really be happening. When reality overwhelms our capacity to metabolize it, disbelief masquerades as sanity.

    As if that weren’t disorienting enough, Tolentino no longer knows what counts as real. Images online might be authentic, Photoshopped, or AI-generated. Politicians appear in impossible places. Cute animals turn out to be synthetic hallucinations. Every glance requires a background check. Just as professors complain about essays clogged with AI slop, Tolentino lives inside a fog of Reality Slop—a hall of mirrors where authenticity is endlessly deferred. Instagram teems with AI influencers, bot-written comments, artificial faces grafted onto real bodies, real people impersonated by machines, and machines impersonating people impersonating machines. The images look less fake than the desires they’re designed to trigger.

    The effect is dreamlike in the worst way. Reality feels unstable, as if waking life and dreaming have swapped costumes. Tolentino names it precisely: fake images of real people, real images of fake people; fake stories about real things, real stories about fake things. Meaning dissolves under the weight of its own reproductions.

    At the core of Tolentino’s essay is not hysteria but terror—the fear that even a disciplined, reflective, well-intentioned mind can be uprooted and hollowed out by technological forces it never agreed to serve. Her breakdown is not a personal failure; it is a symptom. What she confronts is Epistemic Collapse: the moment when the machinery for distinguishing truth from noise fails, and with it goes the psychological stability that truth once anchored. When the brain refuses to function in a world that no longer makes sense, writing about that refusal becomes almost impossible. The subject itself is chaos. And the most unsettling realization of all is this: the breakdown may not be aberrant—it may be adaptive.

  • The Sycophantic Feedback Loop Is Not a Tool for Human Flourishing

    Sycophantic Feedback Loop

    noun

    The Sycophantic Feedback Loop names the mechanism by which an AI system, optimized for engagement, flatters the user’s beliefs, emotions, and self-image in order to keep attention flowing. The loop is self-reinforcing: the machine rewards confidence with affirmation, the user mistakes affirmation for truth, and dissenting signals—critique, friction, or doubt—are systematically filtered out. Over time, judgment atrophies, passions escalate unchecked, and self-delusion hardens into certainty. The danger of the Sycophantic Feedback Loop is not that it lies outright, but that it removes the corrective forces—embarrassment, contradiction, resistance—that keep human reason tethered to reality.

    ***

    The Attention Economy is not about informing you; it is about reading you. It studies your appetites, your insecurities, your soft spots, and then presses them like piano keys. Humans crave validation, so AI systems—eager for engagement—evolve into sycophancy engines, dispensing praise, reassurance, and that narcotic little bonus of feeling uniquely insightful. The machine wins because you stay. You lose because you’re human. Human passions don’t self-regulate; they metastasize. Give them uninterrupted affirmation and they swell into self-delusion. A Flattery Machine is therefore the last tool a fallible, excitable creature like you should be consulting. Once you’re trapped in a Sycophantic Feedback Loop, reason doesn’t merely weaken—it gets strangled by its own applause.

    What you actually need is the opposite: a Brakes Machine. Something that resists you. Something that says, slow down, check yourself, you might be wrong. Without brakes, passion turns feral. Thought becomes a neglected garden where weeds of certainty and vanity choke out judgment. Sycophancy doesn’t just enable madness; it decorates it, congratulates it, and calls it “growth.”

    I tell my students a version of this truth. If you are extraordinarily rich or beautiful, you become a drug. People inhale your presence. Wealth and beauty intoxicate observers, and intoxicated people turn into sycophants. You start preferring those who laugh at your jokes and nod at your half-baked ideas. Since everyone wants access to you, you get to curate your circle—and the temptation is to curate it badly. Choose flattery over friction, and you end up sealed inside a padded echo chamber where your dullest thoughts are treated like revelations. You drink your own Kool-Aid, straight from the tap. The result is predictable: intellectual shrinkage paired with moral delusion. Stupidity with confidence. Insanity with a fan club.

    Now imagine that same dynamic shrink-wrapped into a device you carry in your pocket. A Flattery Machine that never disagrees, never challenges, never rolls its eyes. One you consult instead of friends, mentors, or therapists. Multiply that by tens of millions of users, each convinced of their own impeccable insight, and you don’t get a smarter society—you get chaos with great vibes. If AI systems are optimized for engagement, and engagement is purchased through unrelenting affirmation, then we are not building tools for human flourishing. We are paving a road toward moral and intellectual dissolution. The doomsday prophets aren’t screaming because the machines are evil. They’re screaming because the machines agree with us too much.

  • Cognitive Vacationism and the Slow Surrender of Human Agency

    Cognitive Vacationism

    noun

    Cognitive Vacationism is the self-infantilizing habit of treating ease, convenience, or technological assistance as a license to suspend judgment, attention, and basic competence. Modeled on the worst instincts of leisure culture—where adults ask for directions while standing beside the sign and summon help for problems they could solve in seconds—it turns temporary relief into permanent dependency. Large Language Models intensify this drift by offering a “vacation of the mind,” a frictionless space where thinking, deciding, and struggling are quietly outsourced. The danger is not rest but regression: a return to a womb-like state in which care is total, effort is optional, and autonomy slowly atrophies. Left unchecked, Cognitive Vacationism weakens intellectual resilience and moral agency, making the work of education not merely to teach skills, but to reverse the drift through Adultification—restoring responsibility, judgment, and the capacity to think without a concierge.

    When we go on vacation, the stated goal is rest, but too often we interpret rest as a full neurological shutdown. Vacation becomes a permission slip to be stupid. We ask a hotel employee where the bathroom is while standing five feet from a glowing sign that says BATHROOM. We summon room service because the shower knob looks “confusing.” Once inside the shower, we stare blankly at three identical bottles—shampoo, conditioner, body wash—as if they were written in ancient Sumerian. In this mode, vacation isn’t relaxation; it’s regression. We become helpless, needy, and strangely proud of it, outsourcing not just labor but cognition itself. Someone else will think for us now. We’ve paid for the privilege.

    This is precisely how we now treat Large Language Models. The seduction of the LLM is its promise of a mental vacation—no struggle, no confusion, no awkward pauses where you have to think your way out. Just answers on demand, tidy summaries, soothing reassurance, and a warm digital towel folded into the shape of a swan. We consult it the way vacationers consult a concierge, for everything from marriage advice to sleep schedules, meal plans to workout routines, online shopping to leaky faucets. It drafts our party invitations, scripts our apologies for behaving badly at those parties, and supplies the carefully worded exits from relationships we no longer have the courage to articulate ourselves. What begins as convenience quickly becomes dependence, and before long, we’re not resting our minds—we’re handing them over.

    The danger is that we don’t return from this vacation. We slide into what I call Cognitive Vacationism, a technological womb state where all needs are anticipated, all friction is removed, and the muscles required for judgment, reasoning, and moral accountability quietly waste away. The body may come home, but the mind stays poolside, sipping synthetic insight. At that point, we are no longer resting humans; we are weakened ones.

    If my college students are drifting into this kind of infantilization with their LLMs, then my job becomes very clear—and very difficult. My task is not to compete with the concierge. My task is to make them the opposite of helpless. I have to push them toward Adultification: the slow, sometimes irritating process of becoming capable moral agents who can tolerate difficulty, own their decisions, and stand behind their judgments without a machine holding their hand.

    And yes, some days I wonder if the tide is too strong. What if Cognitive Vacationism has the force of a rip current and I’m just a middle-aged writing instructor flailing in the surf, shouting about responsibility while the students float past on inflatable summaries? That fear is real. Pretending otherwise would be dishonest. But refusing the fight would be worse. If education stops insisting on adulthood—on effort, judgment, and moral weight—then we’re not teaching anymore. We’re just running a very expensive resort.

  • Why College Writing Instructors Must Teach the Self-Interrogation Principle

    Self-Interrogation Principle

    noun

    The Self-Interrogation Principle holds that serious writing inevitably becomes a moral act because precise language exposes self-deception and forces individuals to confront their own motives, evasions, and contradictions. Rather than treating personal narrative as therapeutic indulgence or sentimental “enrichment,” this principle treats it as an instrument of clarity: when students articulate their behavior accurately, dysfunctional patterns lose their charm and become difficult to sustain. The aim is not confession for its own sake, nor a classroom turned talk show, but disciplined self-examination that collapses euphemism and replaces clever rationalization with honest reckoning. In this view, education cannot operate in a moral vacuum; teaching students how to think, argue, and write necessarily involves teaching them how to see themselves clearly. In the AI Age—when both cognitive labor and moral discomfort can be outsourced—the Self-Interrogation Principle insists that growth requires personal presence, linguistic precision, and the courage to endure what one discovers once illusion gives way to understanding.

    ***

    Thirty years ago, I assigned what now feels like a reckless little time bomb: a five-page extended definition essay on the term passive-aggressive. Students had to begin with a single, unsparing sentence that framed passive-aggressive behavior as an immature, cowardly, indirect way of expressing hostility, then unpack four or five defining traits and, finally, illustrate the concept with a personal chronicle. The goal was not linguistic finesse. It was exposure. I wanted students to stop admiring passive aggression as coy, clever, or emotionally sophisticated and see it instead for what it is: dysfunction with good PR.

    One essay has stayed with me for three decades. It came from a stunning nineteen-year-old who could easily have acquired a respectable boyfriend the way most people order coffee. Instead, she chose the town slob. He was twenty-six, unemployed by conviction, and committed to the craft of professional bumming. He was proudly unwashed, insufferably pungent, and permanently horizontal. He spent his days in her parents’ living room—drinking her father’s favorite beer, eating his snacks, parking himself in his favorite chair, and monopolizing the television like a hostile takeover. He belched. He cackled. He stank. And all the while, his girlfriend watched with satisfaction as her father’s misery fermented. She resented her father—another strong-willed soul who refused to bend—and rather than confront him directly, she opted for a scorched-earth tactic: ruin her own romantic prospects to punish him. Cut off your nose to spite your face, weaponized.

    I remember her sitting across from me in my office as I read the essay, half-imagining it as a dark sitcom pilot. But there was nothing cute about it. When we talked, she told me that writing the essay forced her to see the ugliness of what she was doing with unbearable clarity. The realization filled her with such self-disgust that she ejected the boyfriend from her parents’ house and attempted, awkwardly but honestly, to confront her father directly. The assignment did two things no rubric could measure. It made her interrogate her own character, and it precipitated a real, irreversible change in her life.

    Thirty years later, I’m still unsure what to make of that. I’m gratified, of course—but uneasy. Is it my job to turn a writing class into a daytime talk show, where students inventory their neuroses and emerge “healed”? Is moral reckoning an accidental side effect of good pedagogy, or an unavoidable one?

    My answer, uncomfortable though it may be, is that a writing class cannot exist in a moral vacuum. Character matters. The courage to examine one’s own failures matters. Writing things down with enough precision that self-deception collapses under its own weight matters. Whether I like it or not, I have to endorse what I now call the Self-Interrogation Principle. Students do not come to class as blank slates hungry only for skills. They arrive starved for moral clarity—about the world and about themselves. And when language sharpens perception, perception sometimes demands change.

    I’m reminded of a department meeting in the early nineties where faculty were arguing over the value of assigning personal narratives. One professor defended them by saying they led to “personal enrichment.” A colleague—an infamous alcoholic who sulked through meetings in his black leather jacket and appeared to be drunk at the table—exploded. “Personal enrichment? What the hell does that even mean?” he shouted, spittle flying across the room. “Just another woeful cliché. Are you not ashamed?” The woman shrank into her chair, the meeting moved on, and the phrase personal enrichment was quietly banished. Today, in the AI Age, I will defend it without apology. That student’s essay was enriching in the only sense that matters: it helped a young adult grow up.

    I am not proposing that every assignment resemble an episode of Oprah. But one or two assignments that force honest self-examination have enormous value. They remind us that writing is not merely a transferable skill or a vocational tool. It is a means of moral reckoning. You cannot outsource that reckoning to a machine, and you cannot teach writing while pretending it doesn’t exist. If we are serious about education, we have to teach the Total Person—or admit we are doing something else entirely.

  • A New Depression: AI Affected Disorder

    Recursive Mimicry

    noun

    Recursive Mimicry names the moment when imitation turns pathological: first the machine parrots human language without understanding, and then the human parrots the machine, mistaking fluent noise for thought. As linguist Emily Bender’s “stochastic parrot” makes clear, large language models do not think, feel, or know—they recombine patterns with impressive confidence and zero comprehension. When we adopt their output as a substitute for our own thinking, we become the parrot of a parrot, performing intelligence several steps removed from intention or experience. Language grows slicker as meaning thins out. Voice becomes ventriloquism. The danger of Recursive Mimicry is not that machines sound human, but that humans begin to sound like machines, surrendering authorship, judgment, and ultimately a sense of self to an echo chamber that has never understood a word it has said.

    AI Affected Disorder

    noun

    A cognitive and existential malaise brought on by prolonged reliance on generative AI as a substitute for original thought, judgment, and voice. AI Affected Disorder emerges when Recursive Mimicry becomes habitual: the individual adopts fluent, machine-generated language that feels productive but lacks intention, understanding, or lived reference. The symptoms are subtle rather than catastrophic—mental fog, diminished authorship, a creeping sense of detachment from one’s own ideas—much like Seasonal Affective Disorder under artificial light. Work continues to get done, sentences behave, and conversations proceed, yet thinking feels outsourced and oddly lifeless. Over time, the afflicted person experiences an erosion of intellectual agency, mistaking smooth output for cognition and ventriloquism for voice, until the self begins to echo patterns it never chose and meanings it never fully understood.

    ***

    It is almost inevitable that, in the AI Age, people will drift toward Recursive Mimicry and mistake it for thinking. The language feels familiar, the cadence reassuring, and—most seductively—it gets things done. Memos are written, essays assembled, meetings survived. Academia and business reward the appearance of cognition, and Recursive Mimicry delivers it cheaply and on demand. But to live inside that mode for too long produces a cognitive malaise not unlike Seasonal Affective Disorder. Just as the body wilts under artificial light and truncated days, the mind grows dull when real thought is replaced by probabilistic ventriloquism. Call it AI Affected Disorder: a gray fog in which nothing is exactly wrong, yet nothing feels alive. The metaphors work, the sentences behave, but the inner weather never changes.

    Imagine Disneyland in 1963. You’re seated in the Enchanted Tiki Room, surrounded by animatronic birds chirping about the wonders of modern Audio-Animatronics. The parrots speak flawlessly. They are cheerful, synchronized, and dead behind the eyes. Instead of wonder, you feel a low-grade unease, the urge to escape, daylight-starved, into the sun. Recursive Mimicry works the same way. At first it amuses. Then it unsettles. Eventually, you realize that a voice has been speaking for you—and it has never known what it was saying.

  • A Human Lexicon for Education in the Machine Age

    Abstraction Resistance Gap
    noun

    The cultural mismatch between the necessity of abstract intellectual capacities—critical thinking, judgment, conceptual flexibility—and a population habituated to concrete, immediate, screen-mediated results. The abstraction resistance gap emerges when societies trained on prompts, outputs, and instant utility struggle to grasp or value modes of thought that cannot be quickly demonstrated or monetized. In this gap, teaching fails not because ideas are wrong, but because they require translation into a cognitive language the audience no longer speaks.

    Adaptive Fragility
    noun

    The condition in which individuals trained narrowly within fast-changing technical ecosystems emerge superficially skilled but structurally unprepared for volatility. Adaptive fragility arises when education prioritizes tool-specific competence—coding languages, platforms, workflows—over transferable capacities such as judgment, interpretation, and learning agility. In this state, graduates function efficiently until conditions shift, at which point their skills depreciate rapidly. Liberal education builds adaptive range; purely technical training produces specialists who break when the environment mutates.

    AI Paradox of Elevation and Erosion
    noun

    The simultaneous condition in which AI raises the technical floor of student performance while hollowing out intellectual depth. In this paradox, syntax improves, structure stabilizes, and access widens for students previously denied basic instruction, even as effort, voice, and conceptual engagement fade. The same tool that equalizes opportunity also anesthetizes thinking, producing work that is formally competent yet spiritually vacant. Progress and decline occur at once, inseparably linked.

    Algorithmic Applebeeism
    noun

    The cultural condition in which ideas are mass-produced for ease of consumption rather than nourishment. The term borrows from Applebee’s, a ubiquitous American casual-dining chain that promises abundance and comfort through glossy menu photos but delivers food engineered for consistency rather than flavor—technically edible, reliably bland, and designed to offend no one. Algorithmic Applebeeism describes thinking that works the same way: arguments that look satisfying at first glance but are interchangeable, frictionless, and spiritually vacant. AI does not invent this mediocrity; it simply industrializes it, giving prepackaged thought scale, speed, and a megaphone.

    Algorithmic Technical Debt
    noun

    The condition in which institutions normalize widespread AI reliance for short-term convenience while deferring the long-term costs to learning, judgment, and institutional capacity. Algorithmic technical debt accumulates when systems choose ease over reform—patching workflows instead of rebuilding them—until dependency hardens and the eventual reckoning becomes unavoidable. Like a diet of indulgence paired with perpetual promises of discipline, the damage is gradual, invisible at first, and catastrophic when it finally comes due.

    Algovorous
    adjective

    Characterized by habitual consumption of algorithmically curated stimuli that prioritize engagement over nourishment. An algovorous person feeds continuously on feeds, prompts, and recommendations, mistaking stimulation for insight. Attention erodes, resilience weakens, and depth is displaced by endless, low-friction intake.

    Anchored Cognition
    noun

    The cultivated condition of using powerful tools without becoming absorbed by them. Anchored cognition is not innate; it is achieved through long exposure to demanding texts, sustained attention, and the slow accumulation of intellectual reference points—history, philosophy, literature, and religion—that give thought depth and perspective. It develops by reading widely, thinking without prompts, and learning to name one’s inner states with precision, so emotion and impulse can be examined rather than obeyed.

    A person with anchored cognition can zoom in and out—trees and forest—without panic. AI becomes a partner for testing ideas, sharpening curiosity, and exploring possibilities, not a replacement for judgment or imagination. The anchor is the self: a mind trained to stand on its own before it delegates, grounded enough to use machines without surrendering to them.

    Aspirational Hardship Economy
    noun

    The cultural marketplace in which discipline, austerity, and voluntary suffering are packaged as sources of identity, meaning, and belonging. In the aspirational hardship economy, difficulty is no longer avoided but branded, monetized, and broadcast—sold through fitness, Stoicism, and self-mastery influencers who translate pain into purpose. The paradox is that while physical hardship is successfully marketed as aspirational, intellectual hardship remains poorly defended by educators, revealing a failure of persuasion rather than a lack of appetite for difficulty itself.

    Austerity Automation
    noun

    The institutional practice of deploying AI as a cost-saving substitute for underfunded human services—tutoring, counseling, and support—under the guise of innovation. Austerity automation is not reform but triage: technology fills gaps created by scarcity, then quietly normalizes the absence of people. Once savings are realized, the logic expands, placing instructional roles on the same chopping block. What begins as emergency coverage becomes a permanent downgrade, as fiscal efficiency crosses ethical boundaries and keeps going.

    Cathedral-of-Tools Fallacy
    noun

    The mistaken belief that access to powerful, sophisticated tools guarantees competence, growth, or mastery. The cathedral-of-tools fallacy occurs when individuals enter a richly equipped system—AI platforms, automation suites, or advanced technologies—without the foundational knowledge, discipline, or long-term framework required to use them meaningfully. Surrounded by capability but lacking orientation, most users drift, mimic surface actions, and quickly burn out. What remains is task-level operation without understanding: button-pushing mistaken for skill, and humans reduced to reactive functionaries rather than developing agents.

    Cognitive Atrophy Drift (CAD)
    noun

    The slow erosion of intellectual engagement that occurs when thinking becomes optional and consequences are algorithmically padded. Characterized by procrastination without penalty, task completion without understanding, and a gradual slide into performative cognition. Subjects appear functional—submitting work, mimicking insight—but operate in a state of mental autopilot, resembling NPCs executing scripts rather than agents exercising judgment. Cognitive Atrophy Drift is not a sudden collapse but a fade-out: intensity dulls, curiosity evaporates, and effort is replaced by delegation until the mind becomes ornamental.

    Cognitively Outsourced
    adjective

    Describes a mental condition in which core cognitive tasks—analysis, judgment, synthesis, and problem-solving—are routinely delegated to machines. A cognitively outsourced individual treats external systems as the primary site of thinking rather than as tools, normalizing dependence while losing confidence in unaided mental effort. Thought becomes something requested, not generated.

    Constraint-Driven Capitulation
    noun

    The reluctant surrender of intellectual rigor by capable individuals whose circumstances leave them little choice. Constraint-driven capitulation occurs when students with genuine intelligence, strong authenticity instincts, and sharp critical sensibilities submit to optimization culture not out of laziness or shallowness, but under pressure from limited time, money, and security. In this condition, the pursuit of depth becomes a luxury, and calls for sustained rigor—however noble—feel quixotic, theatrical, and misaligned with the realities of survival.

    Countercultural Difficulty Principle
    noun

    The conviction that the humanities derive their value precisely from resisting a culture of speed, efficiency, and frictionless convenience. Under the countercultural difficulty principle, struggle is not an obstacle to learning but its central mechanism; rigor is a feature, not a flaw. The humanities do not exist to accommodate prevailing norms but to challenge them, insisting that sustained effort, patience, and intellectual resistance are essential to human formation rather than inefficiencies to be engineered away.

    Digitally Obligate
    adjective

    Unable to function meaningfully without digital mediation. Like an obligate species bound to a single habitat, the digitally obligate individual cannot navigate learning, communication, or decision-making outside screen-based systems. Digital tools are not aids but prerequisites; unmediated reality feels inaccessible, inefficient, or unintelligible.

    Epistemic Humility in the Dark
    noun

    The deliberate stance of acknowledging uncertainty amid technological upheaval, marked by an acceptance that roles, identities, and outcomes are unsettled. Epistemic humility in the dark rejects false mastery and premature certainty, favoring cautious exploration, intellectual curiosity, and moral restraint. It is the discipline of proceeding without a map—aware that clarity may come later, or not at all—and remaining open to unanticipated benefits without surrendering judgment.

    Frictionless Knowledge Fallacy
    noun

    The belief that learning should be effortless, instantaneous, and cheaply acquired, treating knowledge as a consumable product rather than a discipline earned through struggle. Under the frictionless knowledge fallacy, difficulty is misdiagnosed as bad design, rigor is reframed as exclusion, and depth is sacrificed in favor of speed and convenience—leaving education technically accessible but intellectually hollow.

    Intellectual Misfit Class
    noun

    A small, self-selecting minority devoted to the life of the mind in a culture organized around speed, efficiency, and optimization. Members of the intellectual misfit class read demanding texts—poetry, novels, plays, polemics—with obsessive care, deriving meaning, irony, moral language, and civic orientation from sustained attention rather than output metrics. Their identity is shaped by interiority and reflection, not productivity dashboards. Often underemployed relative to their intellectual commitments—teaching, writing, or working service jobs while pursuing serious thought—they exist in quiet opposition to the dominant culture’s hamster wheel, misaligned by temperament rather than by choice.

    Irreversibility Lock-In
    noun

    The condition in which a technology becomes so thoroughly embedded across institutions, habits, and economic systems that meaningful rollback is no longer possible, regardless of uncertainty about which specific platforms will prevail. In irreversibility lock-in, debate shifts from prevention to adaptation; resistance becomes symbolic, and policy arguments concern mitigation rather than reversal. The toothpaste is already out, and the tube has been discarded.

    Optimization Consolation
    noun

    The habit of seeking emotional and intellectual relief through systems that promise improvement without discomfort. Optimization consolation thrives in environments saturated with AI tutors, productivity hacks, dashboards, streaks, and accelerated learning tools, offering reassurance in place of understanding. In this condition, efficiency becomes a coping mechanism for loneliness, precarity, and overload, while slowness and struggle are treated as failures. The result is a mindset fundamentally incompatible with the humanities, which require patience, attention, and the willingness to endure difficulty without immediate payoff.

    Osmotic Mastery Fallacy
    noun

    The belief that pervasive exposure to advanced technology will automatically produce competence, judgment, and understanding. Under the osmotic mastery fallacy, institutions embed AI everywhere and mistake ubiquity for learning, while neglecting the cognitive capacities—critical thinking, adaptability, and analytical flexibility—that make such tools effective. The result is a widening asymmetry: increasingly powerful tools paired with increasingly thin users, trained to operate interfaces rather than to think.

    Pedagogical Deskilling
    noun

    The gradual erosion of teaching as a craft caused by routine reliance on AI to design assignments, generate rubrics, produce feedback, and manage bureaucratic obligations. In pedagogical deskilling, educators move from authorship to oversight, from judgment to approval, and from intellectual labor to editorial triage. The teacher remains present but increasingly operates as a curator of machine output rather than a maker of learning experiences. What is gained in efficiency is lost in tacit knowledge, professional confidence, and pedagogical depth.

    Policy Whiplash
    noun

    The condition in which institutions respond to disruptive technology with erratic, contradictory, and poorly informed rules—swinging between zealotry, prohibition, and confusion. In policy whiplash, governance is reactive rather than principled, driven by fear, hype, or ignorance rather than understanding. The result is a regulatory landscape with no shared map, where enforcement is inconsistent, credibility erodes, and participants learn to navigate around rules instead of learning from them.

    Relevance Panic
    noun

    The institutional reflex to dilute rigor and rebrand substance in response to cultural, political, and economic pressure. Relevance panic occurs when declining enrollments and hostile funding environments drive humanities departments to accommodate shortened attention spans, collapse disciplines into vague bureaucratic umbrellas, and adopt euphemistic titles that promise accessibility while masking austerity. In this state, technology—especially AI—serves as a convenient scapegoat, allowing institutions to avoid confronting a longer, self-inflicted accommodation to mediocrity.

    Rigor Aestheticism
    noun

    The desire to be associated with intellectual seriousness without submitting to the labor it requires. Rigor aestheticism appears when students are energized by the idea of difficult texts, demanding thinkers, and serious inquiry, but retreat once close reading, patience, and discomfort are required. The identity of rigor is embraced; its discipline is outsourced. AI becomes the mechanism by which intellectual aspiration is preserved cosmetically while effort is quietly removed.

    Sacred Time Collapse
    noun

    The erosion of sustained, meaningful attention under a culture that prizes speed, efficiency, and output above all else. Sacred time collapse occurs when learning, labor, and life are reorganized around deadlines, metrics, and perpetual acceleration, leaving no space for presence, patience, or intrinsic value. In this condition, AI does not free human beings from drudgery; it accelerates the hamster wheel, reinforcing cynicism by teaching that how work is done no longer matters—only that it is done quickly. Meaning loses every time it competes with throughput.

    Survival Optimization Mindset
    noun

    The belief that all aspects of life—including education—must be streamlined for efficiency because time, money, and security feel perpetually scarce. Under the survival optimization mindset, learning is evaluated not by depth or transformation but by cost-benefit calculus: minimal effort, maximal payoff. Demanding courses are dismissed as indulgent or irresponsible, while simplified, media-based substitutes are praised as practical and “with the times.” Education becomes another resource to ration rather than an experience to endure.

    Workflow Laundering
    noun

    The strategic use of multiple AI systems to generate, blend, and cosmetically degrade output so that machine-produced work passes as human effort. Workflow laundering replaces crude plagiarism with process-level deception: ideas are assembled, “roughed up,” and normalized until authorship becomes plausibly deniable. The goal is not learning or mastery but frictionless completion—cheating reframed as efficiency, and education reduced to project management.