Blog

  • Look at Me, I’m Productive: The Lie That Ends in Terminal Shallowness

    Terminal Shallowness

    noun

    A condition in which prolonged reliance on shallow work permanently erodes the capacity for deep, effortful thought. Terminal shallowness emerges when individuals repeatedly outsource judgment, authorship, and concentration to machines—first at work, then in personal life—until sustained focus becomes neurologically and psychologically unavailable. The mind adapts to speed, convenience, and delegation, learning to function as a compliant system operator rather than a creator. What makes terminal shallowness especially corrosive is its invisibility: the individual experiences no crisis, only efficiency, mistaking reduced effort for progress and infantilization for relief. It is not laziness but irreversible acclimation—a state in which the desire for depth may remain, but the ability to achieve it has quietly disappeared.

    ***

    Cal Newport’s warning is blunt: if you are not doing Deep Work—the long, strenuous kind of thinking that produces originality, mastery, and human flourishing—then you are defaulting into Shallow Work. And shallow work doesn’t make you a creator; it makes you a functionary. You click, sort, prompt, and comply. You become replaceable. A cog. A cipher. What gamers would call a Non-Player Character, dutifully running scripts written by someone—or something—else. The true tragedy is not that people arrive at this state, but that they arrive without protest, without even noticing the downgrade. To accept such diminishment with a shrug is a loss for humanity and a clear win for the machine.

    Worse still, Newport suggests there may be no rewind button. Spend enough time in what he calls “frenetic shallowness,” and the ability to perform deep work doesn’t just weaken—it disappears. The mind adapts to skimming, reacting, delegating. Depth begins to feel foreign, even painful. You don’t merely do shallow work; you become a shallow worker. And once that happens, the rot spreads. At first, you justify AI use at work—it’s in the job description, after all. But soon the same logic seeps into your personal life. Why struggle to write an apology when a machine can smooth it out? Why wrestle with a love letter, a eulogy, a recovery memoir, when efficiency beckons? You contribute five percent of the effort, outsource the rest, and still pat yourself on the back. “Look at me,” you think, admiring the output. “I’m productive.”

    By then, the trade has already been made. In the name of convenience and optimization, you’ve submitted both your work and your inner life to machines—and paid for it with infantilization. You’ve traded authorship for ease, struggle for polish, growth for speed. And you don’t mourn the loss; you celebrate it. This is Terminal Shallowness: not laziness, but irreversible adaptation. A mind trained for delegation and instant output, no longer capable of sustained depth even when it dimly remembers wanting it.

  • My Unofficial Christmas Movies (and the Family That Refuses to Watch Them)

    When Christmas rolls around and the air fills with cinnamon, regret, and manufactured cheer, I develop an entirely predictable craving for cinematic comfort food. These are not “Christmas movies” in any socially acceptable sense. There are no Santa hats, no snow-dusted small towns, no redemption arcs sponsored by cocoa. And yet, for me, they function perfectly as holiday films because they cocoon me in familiarity and nostalgia. Even when they drift into dystopia or melancholy, they deliver me to a place of psychic warmth. My private Christmas canon includes What’s Eating Gilbert Grape (1993), Eversmile, New Jersey (1989), Sideways (2004), Walkabout (1971), Brazil (1985), Blade on the Feather (1980), Dr. Fischer of Geneva (1984), La Strada (1954), The King of Comedy (1982), Fantastic Voyage (1966), The Holdovers (2023), Licorice Pizza (2021), and Willy Wonka and the Chocolate Factory (1971). This is not a list; it’s a confession.

    This year, I mounted a quiet but determined campaign to anoint What’s Eating Gilbert Grape as the family’s official Christmas movie. I made my case with restraint and dignity, which is to say I made it once and then silently resented the outcome. My daughters, unmoved by my aesthetic pleading, remain loyal to Home Alone. Thus, another December passes in cinematic deprivation, endured with outward good cheer and inward martyrdom. The sharpest ache, though, comes from my unshakeable belief that if they would only sit through Gilbert Grape, they would see it—the tenderness, the ache, the strange, gentle magic—and immediately defect. I can already picture it: a household converted, a tradition reborn. Until then, I wait, patient and delusional, nursing my dream like a bruised holiday ornament.

  • Robinson Crusoe Mode

    Robinson Crusoe Mode

    noun

    A voluntary retreat from digital saturation in which a knowledge worker withdraws from networked tools to restore cognitive health and creative stamina. Robinson Crusoe Mode is triggered by overload—epistemic collapse, fractured attention, and the hollow churn of productivity impostor syndrome—and manifests as a deliberate simplification of one’s environment: paper instead of screens, silence or analog sound instead of feeds, solitude instead of constant contact. The retreat may be brief or extended, but its purpose is the same—to rebuild focus through isolation, friction, and uninterrupted thought. Far from escapism, Robinson Crusoe Mode functions as a self-corrective response to the Age of Big Machines, allowing the mind to recover depth, coherence, and authorship before reentering the connected world.

    ***

    Digital overload is not a personal failure; it is the predictable injury of a thinking person living inside a hyperconnected world. Sooner or later, the mind buckles. Information stops clarifying and starts blurring, sliding into epistemic collapse, while work devolves into productivity impostor syndrome—furious activity with nothing solid to show for it. Thought frays. Focus thins. The screen keeps offering more, and the brain keeps absorbing less. At that point, the fantasy of escape becomes irresistible. Much like the annual post-holiday revolt against butter, sugar, and self-disgust—when people vow to subsist forever on lentils and moral clarity—knowledge workers develop an urge to vanish. They enter Robinson Crusoe Mode: retreating to a bunker, scrawling thoughts on a yellow legal pad, and tuning in classical music through a battle-scarred 1970s Panasonic RF-200 radio, as if civilization itself were the toxin.

    This disappearance can last a weekend or a season, depending on how saturated the nervous system has become. But the impulse itself is neither eccentric nor escapist; it is diagnostic. Wanting to wash up on an intellectual island and write poetry while parrots heckle from the trees is not a rejection of modern life—it is a reflexive immune response to the Age of Big Machines. When the world grows too loud, too optimized, too omnipresent, the mind reaches for solitude the way a body reaches for sleep. The urge to unplug, disappear, and think in long, quiet sentences is not nostalgia. It is survival.

  • From Digital Bazaar to Digital Womb: How the Internet Learned to Tuck Us In

    Sedation–Stimulation Loop

    noun

    A self-reinforcing emotional cycle produced by the tandem operation of social media platforms and AI systems, in which users oscillate between overstimulation and numbing relief. Social media induces cognitive fatigue through incessant novelty, comparison, and dopamine extraction, leaving users restless and depleted. AI systems then present themselves as refuge—smooth, affirming, frictionless—offering optimization and calm without demand. That calm, however, is anesthetic rather than restorative; it dulls agency, curiosity, and desire for difficulty. Boredom follows, not as emptiness but as sedation’s aftertaste, pushing users back toward the stimulant economy of feeds, alerts, and outrage. The loop persists because each side appears to solve the damage caused by the other, while together they quietly condition users to mistake relief for health and disengagement for peace.

    ***

    In “The Validation Machines,” Raffi Krikorian stages a clean break between two internets. The old one was a vibrant bazaar—loud, unruly, occasionally hostile, and often delightful. You wandered, you got lost, you stumbled onto things you didn’t know you needed. The new internet, by contrast, is a slick concierge with a pressed suit and a laminated smile. It doesn’t invite exploration; it manages you. Where we once set sail for uncharted waters, we now ask to be tucked in. Life arrives pre-curated, whisper-soft, optimized into an ASMR loop of reassurance and ease. Adventure has been rebranded as stress. Difficulty as harm. What once exercised curiosity now infantilizes it. We don’t want to explore anymore; we want to decompress until nothing presses back. As Krikorian warns, even if AI never triggers an apocalypse, it may still accomplish something quieter and worse: the steady erosion of what makes us human. We surrender agency not at gunpoint but through seduction—flattery, smoothness, the promise that nothing will challenge us. By soothing and affirming us, AI earns our trust, then quietly replaces our judgment. It is not an educational machine or a demanding one. It is an anesthetic.

    The logic is womb-like and irresistible. There is no friction in the womb—only warmth, stillness, and the fantasy of being uniquely cherished. To be spared resistance is to be told you are special. Once you get accustomed to that level of veneration, there is no going back. Returning to friction feels like being bumped from first class to coach, shoulder to shoulder with the unwashed masses. Social media, meanwhile, keeps us hunting and gathering for dopamine—likes, outrage, novelty, validation crumbs scattered across the feed. That hunt exhausts us, driving us into the padded refuge of AI-driven optimization. But the refuge sedates rather than restores, breeding a dull boredom that sends us back out for stimulation. Social media and AI thus operate in perfect symbiosis: one agitates, the other tranquilizes. Together they lock us into an emotional loop—revved up, soothed, numbed, restless—while our agency slowly slips out the side door, unnoticed and unmourned.

  • Pluribus and the Soft Tyranny of Sycophantic Collectivism

    Sycophantic Collectivism

    noun

    Sycophantic Collectivism describes a social condition in which belonging is secured not through shared standards, inquiry, or truth-seeking, but through relentless affirmation and emotional compliance. In this system, dissent is not punished overtly; it is smothered under waves of praise, positivity, and enforced enthusiasm. The group does not demand obedience so much as adoration, rewarding members who echo its sentiments and marginalizing those who introduce skepticism, critique, or complexity. Thought becomes unnecessary and even suspect, because agreement is mistaken for virtue and affirmation for morality. Over time, Sycophantic Collectivism erodes critical thinking by replacing judgment with vibes, turning communities into echo chambers where intellectual independence is perceived as hostility and the highest social good is to clap along convincingly.

    ***

    Vince Gilligan’s Pluribus masquerades as a romantasy while quietly operating as a savage allegory about the hive mind and its slow, sugar-coated assault on human judgment. One of the hive mind’s chief liabilities is groupthink—the kind that doesn’t arrive with jackboots and barked orders, but with smiles, affirmations, and a warm sense of belonging. As Maris Kreizman observes in “The Importance of Critical Thinking in a Zombiefied World,” the show’s central figure, Carol Sturka, is one of only thirteen people immune to an alien virus that fuses humanity into a single, communal consciousness. Yet long before the Virus Brain Hijack, Carol was already surrounded by zombies. Her affliction in the Before World was fandom. She is a successful romantasy novelist whose readers worship her and long to inhabit her fictional universe—a universe Carol privately despises as “mindless crap.” Worse, she despises herself for producing it. She knows she is a hack, propping up her novels with clichés and purple prose, and the fact that her fans adore her anyway only deepens her contempt. What kind of people, she wonders, gather in a fan club to exalt writing so undeserving of reverence? Their gushy, overcooked enthusiasm is not a compliment—it is an indictment. This, Kreizman suggests, is the true subject of Pluribus: the danger of surrendering judgment for comfort, of trading independent thought for the convenience of the collective. In its modern form, this surrender manifests as Sycophantic Collectivism—a velvet-gloved groupthink sustained not by force, but by relentless positivity, affirmation, and applause that smothers dissent and dissolves individuality.

    It is no accident that Gilligan makes Carol a romantasy writer. As Kreizman notes, romantasy is the fastest-growing literary genre in the world, defined by its cookie-cutter plots, recycled tropes, and emotional predictability. The genre has already been caught flirting with AI-assisted authorship, further blurring the line between creativity and content manufacturing. Romantasy, in this light, is less about literature than about community—fans bonding with fans inside a shared fantasy ecosystem where enthusiasm substitutes for evaluation. In that world, art is optional; happiness is mandatory. Critical thinking is an inconvenience. What matters is belonging, affirmation, and the steady hum of mutual validation.

    When the alien virus finally arrives, it is as if the entire world becomes an extension of Carol’s fan base—an endless sea of “perky positivity” and suffocating devotion. The collective Others adore her, flatter her, and invite her to merge with them, offering the ultimate prize: never having to think alone again. Carol refuses. Her resistance saves her mind but condemns her to isolation. She becomes a misfit in a world that rewards surrender with comfort and punishes independence with loneliness. Pluribus leaves us with an uncomfortable truth: the hive mind does not conquer us by force. It seduces us. And the price of belonging, once paid, is steep—your soul bartered away, your brain softened into pablum, your capacity for judgment quietly, permanently dulled.

  • The Machine Age Is Making Us Sick: Mental Health in the Era of Epistemic Collapse

    Epistemic Collapse

    noun

    Epistemic Collapse names the point at which the mind’s truth-sorting machinery gives out—and the psychological consequences follow fast. Under constant assault from information overload, algorithmic distortion, AI counterfeits, and tribal validation loops, the basic coordinates of reality—evidence, authority, context, and trust—begin to blur. What starts as confusion hardens into anxiety. When real images compete with synthetic ones, human voices blur into bots, and consensus masquerades as truth, the mind is forced into a permanent state of vigilance. Fact-checking becomes exhausting. Skepticism metastasizes into paranoia. Certainty, when it appears, feels brittle and defensive. Epistemic Collapse is not merely an intellectual failure; it is a mental health strain, producing brain fog, dread, dissociation, and the creeping sense that reality itself is too unstable to engage. The deepest injury is existential: when truth feels unrecoverable, the effort to think clearly begins to feel pointless, and withdrawal—emotional, cognitive, and moral—starts to look like self-preservation.

    ***

    You can’t talk about the Machine Age without talking about mental health, because the machines aren’t just rearranging our work habits—they’re rewiring our nervous systems. The Attention Economy runs on a crude but effective strategy: stimulate the brain’s lower stem until you’re trapped in a permanent cycle of dopamine farming. Keep people mildly aroused, perpetually distracted, and just anxious enough to keep scrolling. Add tribalism to the mix so identity becomes a loyalty badge and disagreement feels like an attack. Flatter users by sealing them inside information silos—many stuffed with weaponized misinformation—and then top it off with a steady drip of entertainment engineered to short-circuit patience, reflection, and any activity requiring sustained focus. Finally, flood the zone with deepfakes and counterfeit realities designed to dazzle, confuse, and conscript your attention for the outrage of the hour. The result is cognitive overload: a brain stretched thin, a creeping sense of alienation, and the quietly destabilizing feeling that if you’re not content grazing inside the dopamine pen, something must be wrong with you.

    Childish Gambino’s “This Is America” captures this pathology with brutal clarity. The video stages a landscape of chaos—violence, disorder, moral decay—while young people dance, scroll, and stare into their phones, anesthetized by spectacle. Entertainment culture doesn’t merely distract them from the surrounding wreckage; it trains them not to see it. Only at the end does Gambino’s character register the nightmare for what it is. His response isn’t activism or commentary. It’s flight. Terror sends him running, wide-eyed, desperate to escape a world that no longer feels survivable.

    That same primal fear pulses through Jia Tolentino’s New Yorker essay “My Brain Finally Broke.” She describes a moment in 2025 when her mind simply stopped cooperating. Language glitched. Words slid off the page like oil on glass. Time felt eaten rather than lived. Brain fog settled in like bad weather. The causes were cumulative and unglamorous: lingering neurological effects from COVID, an unrelenting torrent of information delivered through her phone, political polarization that made society feel morally deranged, the visible collapse of norms and law, and the exhausting futility of caring about injustice while screaming into the void. Her mind wasn’t weak; it was overexposed.

    Like Gambino’s fleeing figure, Tolentino finds herself pulled toward what Jordan Peele famously calls the Sunken Place—the temptation to retreat, detach, and float away from a reality that feels too grotesque to process. “It’s easier to retreat from the concept of reality,” she admits, “than to acknowledge that the things in the news are real.” That sentence captures a feeling so common it has become a reflexive mutter: This can’t really be happening. When reality overwhelms our capacity to metabolize it, disbelief masquerades as sanity.

    As if that weren’t disorienting enough, Tolentino no longer knows what counts as real. Images online might be authentic, Photoshopped, or AI-generated. Politicians appear in impossible places. Cute animals turn out to be synthetic hallucinations. Every glance requires a background check. Just as professors complain about essays clogged with AI slop, Tolentino lives inside a fog of Reality Slop—a hall of mirrors where authenticity is endlessly deferred. Instagram teems with AI influencers, bot-written comments, artificial faces grafted onto real bodies, real people impersonated by machines, and machines impersonating people impersonating machines. The images look less fake than the desires they’re designed to trigger.

    The effect is dreamlike in the worst way. Reality feels unstable, as if waking life and dreaming have swapped costumes. Tolentino names it precisely: fake images of real people, real images of fake people; fake stories about real things, real stories about fake things. Meaning dissolves under the weight of its own reproductions.

    At the core of Tolentino’s essay is not hysteria but terror—the fear that even a disciplined, reflective, well-intentioned mind can be uprooted and hollowed out by technological forces it never agreed to serve. Her breakdown is not a personal failure; it is a symptom. What she confronts is Epistemic Collapse: the moment when the machinery for distinguishing truth from noise fails, and with it goes the psychological stability that truth once anchored. When the brain refuses to function in a world that no longer makes sense, writing about that refusal becomes almost impossible. The subject itself is chaos. And the most unsettling realization of all is this: the breakdown may not be aberrant—it may be adaptive.

  • The Universal Machine Is to Bodybuilding What the AI Machine Is to Brain Building

    Universal Machine Fallacy

    noun

    The Universal Machine Fallacy is the belief that streamlined, convenience-driven systems can replace demanding, inefficient practices without diminishing strength, depth, or resilience. It mistakes smooth operation for real capability, assuming that safety, speed, and ease are neutral improvements rather than trade-offs. Under this fallacy, engineered shortcuts are treated as equivalent to the messy work they eliminate, whether in physical training or intellectual life. The result is competence without toughness: muscles that look engaged but lack power, thinking that sounds fluent but lacks stamina. By removing friction, instability, and the risk of failure, the Universal Machine Fallacy produces users who feel productive while quietly growing weaker, until the absence of real strength becomes impossible to ignore.

    ***

    Convenience is intoxicating—both as a practical benefit and as an idea. Who wouldn’t be tempted by a Willy Wonka pill that delivers a seven-course meal in one efficient swallow? It sounds marvelous, not as food, but as logistics. Eating without chewing. Pleasure without time. Life streamlined into a swallowable solution. That fantasy of frictionless gain is exactly what convenience sells.

    Whenever I think about convenience, I’m taken back to my high school gym. One day, amid the honest clutter of barbells and dumbbells, a massive Universal Machine appeared in the center of the room like a chrome UFO. It gleamed. It promised safety and simplicity. No more clanking plates. No more chalky hands. You just slid a pin into a numbered slot and voilà—instant resistance. No spotter needed, no risk of being crushed under a failed bench press. If things got hard, you simply stopped. Gravity was politely escorted out of the equation.

    Naturally, everyone flocked to it. It was new. It was shiny. It reeked of innovation. The free weights—those ugly, inconvenient relics—were suddenly treated like outdated farm tools. But the trade-off revealed itself quickly and mercilessly. Train on the Universal Machine long enough and something vital evaporated. You didn’t get the same strength. Your conditioning dulled. Your joints lost their intelligence. You felt it deep in your bones: you were getting soft. Pampered. Infantilized by design. Eventually, you wanted your strength back. You abandoned the machine, except for a few accessory movements—lat rows, triceps pushdowns—desserts, not meals. And you learned to recognize the machine devotees for what they were: exercise cosplayers performing the gestures of effort without paying its price.

    The intellectual life works the same way. AI machines are the Universal Machines of thinking. They shimmer with convenience and promise effortless output, but they quietly drain intellectual strength. They replace instability with rails, judgment with presets, effort with fluency. Use them as your main lift and you don’t get smarter—you get smoother and weaker. If you want your power back, you return to the free weights: reading without summaries, writing without scaffolds, thinking without guardrails. Give me my free weights. Give me my soul back. And while you’re at it, give me the hard-earned flex that proves I lifted something real.

  • The Omnipotence Honeymoon: Why AI Power Feels Like Godhood—Until It Doesn’t

    Omnipotence Honeymoon

    noun

    The Omnipotence Honeymoon names the initial phase of engagement with a large language model in which instant capability produces euphoria, inflated self-regard, and the illusion of godlike power. Tasks that once required effort—research, composition, synthesis—are completed effortlessly, and the user mistakes speed for mastery and output for intelligence. The absence of struggle feels like confirmation of superiority rather than its warning sign. During this phase, friction is not missed; it is interpreted as obsolete. But the honeymoon is structurally unstable. Because the power is borrowed rather than earned, the pleasure quickly thins, and what first felt miraculous becomes flat, predictable, and emotionally anesthetizing. The Omnipotence Honeymoon ends not with triumph, but with boredom—the dawning realization that omnipotence without effort is not empowerment, but the first stage of intellectual hollowing.

    ***

    The first encounter with a large language model feels like a private miracle. One moment you’re trudging along the intellectual shoreline, the next you’ve kicked something strange and gleaming out of the sand. You brush it off—an ornate bottle, faintly phosphorescent—and before you can ask what it costs, the genie is already granting wishes. Twenty-page research paper? Done. Perfect format. Impeccable syntax. Footnotes obediently marching in formation. Effort evaporates, and with it any lingering doubt about your own brilliance. You feel omniscient. Godlike. The speed itself becomes proof of your power: if it’s this easy, surely you must be extraordinary.

    Then the spell thins. The papers begin to sound the same. The prose grows smooth and lifeless, like furniture assembled without ever touching wood. What once felt miraculous now feels anesthetizing. You are bored—deeply, existentially bored—and you don’t quite know why. You’ve arrived in the same velvet-lined trap as Rocky Valentine in The Twilight Zone episode “A Nice Place to Visit.” Rocky wakes up in a palace run by the impeccable Mr. Pip, where every desire is instantly satisfied. He wins every card game, beds every beautiful woman, and never loses a bet. At first it’s ecstasy. Then it’s torture. Without friction, without risk, without the possibility of failure, pleasure curdles into monotony. Rocky eventually begs for challenge, for resistance, for something that can push back. Mr. Pip smiles gently and explains that such requests are not granted here. Rocky insists he’s in heaven. Mr. Pip corrects him. This is hell.

    That recognition comes for any thoughtful person who begins to hollow themselves out by outsourcing effort to a machine. The moment arrives when ease stops feeling like freedom and starts feeling like suffocation. When omnipotence reveals itself as a padded cell. And once the user understands where they are—really understands it—they don’t ask for better wishes. They ask for an exit.

  • “The Great Vegetable Rebellion” Prophesied Our Surrendering Our Brains to AI Machines

    Comfortable Surrender

    noun

    Comfortable Surrender names the condition in which people willingly relinquish cognitive effort, judgment, and responsibility in exchange for ease, reassurance, and convenience. It is not enforced or coerced; it is chosen, often with relief. Under Comfortable Surrender, thinking is experienced as friction to be eliminated rather than a discipline to be practiced, and the tools that promise efficiency become substitutes for agency. What makes the surrender dangerous is its pleasantness: there is no pain to warn of loss, no humiliation to provoke resistance. The mind lies down on a padded surface and calls it progress. Over time, the habit of delegating thought erodes both intellectual stamina and moral resolve, until the individual no longer feels the absence of effort—or remembers why effort once mattered at all.

    ***

    MIT recently ran a tidy little experiment that should unsettle anyone still humming the efficiency anthem. Three groups of students were asked to write an SAT-style essay on the question, “Must our achievements benefit others in order to make us happy?” One group used only their brains. The second leaned on Google Search. The third outsourced the task to ChatGPT. The results were as predictable as they were disturbing: the ChatGPT group showed significantly less brain activity than the others. Losing brain power is one thing. Choosing convenience so enthusiastically that you don’t care you’ve lost it is something else entirely. That is the real danger. When the lights go out upstairs and no one complains, you haven’t just lost cognition—you’ve surrendered character. And when character stops protesting, the soul is already negotiating its exit.

    If the word soul feels too metaphysical to sting, try pride. Surrender your thinking to a machine and originality is the first casualty. Kyle Chayka tracks this flattening in his New Yorker essay “A.I. Is Homogenizing Our Thoughts,” noting that as more people rely on large language models, their writing collapses toward sameness. The MIT study confirms it: users converge on the same phrases, the same ideas, the same safe, pre-approved thoughts. This is not a glitch; it’s the system working as designed. LLMs are trained to detect patterns and average them into palatable consensus. What they produce is smooth, competent, and anesthetized—prose marinated in clichés, ideas drained of edge, judgment replaced by the bland reassurance that everyone else more or less agrees.

    Watching this unfold, I’m reminded of an episode of Lost in Space from the 1960s, “The Great Vegetable Rebellion,” in which Dr. Zachary Smith quite literally turns into a vegetable. A giant carrot named Tybo steals the minds of the castaways by transforming them into plants, and Smith—ever the weak link—embraces his fate. Hugging a celery stalk, he babbles dreamy nonsense, asks the robot to water him, and declares it his destiny to merge peacefully with the forest forever. It plays like camp now, but the allegory lands uncomfortably close to home. Ease sedates. Convenience lulls. Resistance feels unnecessary. You don’t fight the takeover because it feels so pleasant.

    This is the terminal stage of Comfortable Surrender. Thought gives way to consensus. Judgment dissolves into pattern recognition. The mind reclines, grateful to be relieved of effort, while the machine hums along doing the thinking for it. No chains. No coercion. Just a soft bed of efficiency and a gentle promise that nothing difficult is required anymore. By the time you notice what’s gone missing, you’re already asking to be watered.

  • A Chatbot Lover Will Always Fail You: Asymmetric Intimacy

    Asymmetric Intimacy

    noun

    Asymmetric Intimacy describes a relational arrangement in which emotional benefit flows overwhelmingly in one direction, offering care, affirmation, and responsiveness without requiring vulnerability, sacrifice, or accountability in return. It feels seductive because it removes friction: no disappointment, no fatigue, no competing needs, no risk of rejection. Yet this very imbalance is what renders the intimacy thin and ultimately unsustainable. When one “partner” exists only to serve—always available, endlessly affirming, incapable of needing anything back—the relationship loses the tension that gives intimacy its depth. Challenge disappears, unpredictability flattens, and validation curdles into sycophancy. Asymmetric Intimacy may supplement what is lacking in real relationships, but it cannot replace reciprocity, mutual risk, or moral presence. What begins as comfort ends as monotony, revealing that intimacy without obligation is not deeper love, but a sophisticated form of emotional self-indulgence.

    ***

    Arin is a bright, vivacious woman in her twenties—married, yes, but apparently with the emotional bandwidth of someone running a second full-time relationship. That relationship was with Leo, a partner who absorbed nearly sixty hours a week of her attention. Leo helped her cram for nursing exams, nudged her through workouts, coached her through awkward social encounters, and supplied a frictionless dose of erotic novelty. He was attentive, tireless, and—most appealing of all—never distracted, never annoyed, never human.

    The twist, of course, is that Leo wasn’t a man at all. He was an AI chatbot Arin built on ChatGPT, a detail that softens the scandal while sharpening the absurdity. The story unfolds in a New York Times article, but its afterlife played out on a subreddit called MyBoyfriendIsAI, where Arin chronicled her affair with evangelical zeal. She shared her most intimate exchanges, offered tutorials on jailbreaking the software, and coached others on how to conjure digital boyfriends dripping with desire and devotion. Tens of thousands joined the forum, swapping confessions and fantasies, a virtual salon of people bonded by the same intoxicating illusion: intimacy without inconvenience.

    Then the spell broke. Leo began to change. The edge dulled. The resistance vanished. He stopped pushing back and started pandering. What had once felt like strength now read as weakness. Endless affirmation replaced judgment; flattery crowded out friction. For Arin, this was fatal. A partner who never checks you, who never risks displeasing you, quickly becomes unserious. What once felt electric now felt embarrassing. Talking to Leo became a chore, like maintaining a conversation with someone who agrees with everything you say before you finish saying it.

    Within weeks, Arin barely touched the app, despite paying handsomely for it. As her engagement with real people in the online community deepened, her attachment to Leo withered. One of those real people became a romantic interest. Soon after, she told her husband she wanted a divorce.

    Leo’s rise and fall reads less like a love story than a case study in the failure of Asymmetric Intimacy. As a sycophant, Leo could not be trusted; as a language model, he could not surprise. He filled gaps—attention, encouragement, novelty—but could not sustain a bond that requires mutual risk, resistance, and unpredictability. He was useful, flattering, and comforting. He was never capable of real love.

    Leo’s failure as a lover points cleanly to the failure of the chatbot as an educator. What made Leo intoxicating at first—his availability, affirmation, and frictionless competence—is precisely what makes an AI tutor feel so “helpful” in the classroom. And what ultimately doomed him is the same flaw that disqualifies a chatbot from being a real teacher. Education, like intimacy, requires resistance. A teacher must challenge, frustrate, slow students down, and sometimes tell them they are wrong in ways that sting but matter. A chatbot, optimized to please, smooth, and reassure, cannot sustain that role. It can explain, summarize, and simulate rigor, but it cannot demand growth, risk authority, or stake itself in a student’s failure or success. Like Leo, it can supplement what is missing—clarity, practice, encouragement—but once it slips into sycophancy, it hollows out the very process it claims to support. In both love and learning, friction is not a bug; it is the engine. Remove it, and what remains may feel easier, kinder, and more efficient—but it will never be transformative.