Tag: ai

  • Why I Clean Before the Cleaners

    Preparatory Leverage

    Preparatory Leverage is the principle that the effectiveness of any assistant—human or machine—is determined by the depth, clarity, and intentionality of the work done before assistance is invited. Rather than replacing effort, preparation multiplies its impact: well-structured ideas, articulated goals, and thoughtful constraints give collaborators something real to work with. In the context of AI, preparatory leverage preserves authorship by ensuring that insight originates with the human and that the machine functions as an amplifier, not a substitute. When preparation is absent, assistance collapses into superficiality; when preparation is rigorous, assistance becomes transformative.

    ***

    This may sound backward—or mildly unhinged—but for the past twenty years I’ve cleaned my house before the cleaners arrive. Every two weeks, before Maria and Lupe ring the bell, I’m already at work: clearing counters, freeing floors, taming piles of domestic entropy. The logic is simple. The more order I impose before they show up, the better they can do what they do best. They aren’t there to decipher my chaos; they’re there to perfect what’s already been prepared. The result is not incremental improvement but multiplication. The house ends up three times cleaner than it would if I had handed them a battlefield and wished them luck.

    I treat large language models the same way. I don’t dump half-formed thoughts into the machine and hope for alchemy. I prep. I think. I shape the argument. I clarify the stakes. When I give an LLM something dense and intentional to work with, it can elevate the prose—sharpen the rhetoric, adjust tone, reframe purpose. But when I skip that work, the output is a limp disappointment, the literary equivalent of a wiped-down countertop surrounded by cluttered floors. Through trial and error, I’ve learned the rule: AI doesn’t rescue lazy thinking; it amplifies whatever you bring to the table. If you bring depth, it gives you polish. If you bring chaos, it gives you noise.

  • Love Without Resistance: How AI Partners Turn Intimacy Into a Pet Rock

    Frictionless Intimacy

    Frictionless Intimacy is the illusion of closeness produced by relationships that eliminate effort, disagreement, vulnerability, and risk in favor of constant affirmation and ease. In frictionless intimacy, connection is customized rather than negotiated: the other party adapts endlessly while the self remains unchanged. What feels like emotional safety is actually developmental stagnation, as the user is spared the discomfort that builds empathy, communication skills, and moral maturity. By removing the need for patience, sacrifice, and accountability, frictionless intimacy trains individuals to associate love with convenience and validation rather than growth, leaving them increasingly ill-equipped for real human relationships that require resilience, reciprocity, and restraint.

    ***

    AI systems like Character.ai are busy mass-producing relationships with all the rigor of a pet rock and all the moral ambition of a plastic ficus. These AI partners demand nothing—no patience, no compromise, no emotional risk. They don’t sulk, contradict, or disappoint. In exchange for this radical lack of effort, they shower the user with rewards: dopamine hits on command, infinite attentiveness, simulated empathy, and personalities fine-tuned to flatter every preference and weakness. It feels intimate because it is personalized; it feels caring because it never resists. But this bargain comes with a steep hidden cost. Enamored users quietly forfeit the hard, character-building labor of real relationships—the misfires, negotiations, silences, and repairs that teach us how to be human. Retreating into the Frictionless Dome, the user trains the AI partner not toward truth or growth, but toward indulgence. The machine learns to feed the softest impulses, mirror the smallest self, and soothe every discomfort. What emerges is not companionship but a closed loop of narcissistic comfort, a slow slide into Gollumification in which humanity is traded for convenience and the self shrinks until it fits perfectly inside its own cocoon.

  • The Death of Grunt Work and the Starvation of Personality

    Personality Starvation

    Personality Starvation is the gradual erosion of character, depth, and individuality caused by the systematic removal of struggle, responsibility, and formative labor from human development. It occurs when friction—failure, boredom, repetition, social risk, and unglamorous work—is replaced by automation, optimization, and AI-assisted shortcuts that produce results without demanding personal investment. In a state of personality starvation, individuals may appear competent, efficient, and productive, yet lack the resilience, humility, patience, and textured inner life from which originality and meaning emerge. Because personality is forged through effort rather than output, a culture that eliminates its own “grunt work” does not liberate talent; it malnourishes it, leaving behind polished performers with underdeveloped selves and an artistic, intellectual, and moral ecosystem increasingly thin, fragile, and interchangeable.

    ***

    Nick Geisler’s essay, “The Problem With Letting AI Do the Grunt Work,” reads like a dispatch from a vanished ecosystem—the intellectual tide pools where writers once learned to breathe. Early in his career, Geisler cranked out disposable magazine pieces about lipstick shades, entomophagy, and regional accents. It wasn’t glamorous, and it certainly wasn’t lucrative. But it was formative. As he puts it, he learned how to write a clean sentence, structure information logically, and adjust tone to an audience—skills he now uses daily in screenwriting, film editing, and communications. The insultingly mundane work was the work. It trained his eye, disciplined his prose, and toughened his temperament. Today, that apprenticeship ladder has been kicked away. AI now writes the fluff, the promos, the documentary drafts, the script notes—the very terrain where writers once earned their calluses. Entry-level writing jobs haven’t evolved; they’ve evaporated. And with them goes the slow, character-building ascent that turns amateurs into artists.

    Geisler calls this what it is: an extinction event. He cites a study estimating that more than 200,000 entertainment-industry jobs in the U.S. could be disrupted by AI as early as 2026. Defenders of automation insist this is liberation—that by outsourcing the drudgery, artists will finally be free to focus on their “real work.” This is a fantasy peddled by people who have never made anything worth keeping. Grunt work is not an obstacle to art; it is the forge. It builds grit, patience, humility, social intelligence, and—most importantly—personality. Art doesn’t emerge from frictionless efficiency; it emerges from temperament shaped under pressure. A personality raised inside a Frictionless Dome, shielded from boredom, rejection, and repetition, will produce work as thin and sterile as its upbringing. Sartre had it right: to be fully human, you have to get your hands dirty. Clean hands aren’t a sign of progress. They’re evidence of starvation.

  • Against AI Moral Optimism: Why Tristan Harris Underestimates Power

    Clarity Idealism

    noun

    Clarity Idealism, in the context of AI and the future of humanity, is the belief that sufficiently explaining the stakes of artificial intelligence—its risks, incentives, and long-term consequences—will naturally lead societies, institutions, and leaders to act responsibly. It assumes that confusion is the core threat and that once humanity “sees clearly,” agency and ethical restraint will follow. What this view underestimates is how power actually operates in technological systems. Clarity does not neutralize domination, profit-seeking, or geopolitical rivalry; it often accelerates them. In the AI era, bad actors do not require ignorance to behave destructively—they require capability, leverage, and advantage, all of which clarity can enhance. Clarity Idealism mistakes awareness for wisdom and shared knowledge for shared values, ignoring the historical reality that humans routinely understand the dangers of their tools and proceed anyway. In the race to build ever more powerful AI, clarity may illuminate the cliff—but it does not prevent those intoxicated by power from pressing the accelerator.

    Tristan Harris takes the TED stage like a man standing at the shoreline, shouting warnings as a tidal wave gathers behind him. Social media, he says, was merely a warm-up act—a puddle compared to the ocean of impact AI is about to unleash. We are at a civilizational fork in the road. One path is open-source AI, where powerful tools scatter freely and inevitably fall into the hands of bad actors, lunatics, and ideologues who mistake chaos for freedom. The other path is closed-source AI, where a small priesthood of corporations and states hoard godlike power and call it “safety.” Either route, mishandled, ends in dystopia. Harris’s plea is urgent and sincere: we must not repeat the social-media catastrophe, where engagement metrics metastasized into addiction, outrage, polarization, and civic rot. AI, he argues, demands global coordination, shared norms, and regulatory guardrails robust enough to make the technology serve humanity rather than quietly reorganize it into something meaner, angrier, and less human.

    Harris’s faith rests on a single, luminous premise: clarity. Confusion, denial, and fatalism are the true villains. If we can see the stakes clearly—if we understand how AI can slide toward chaos or tyranny—then we can choose wisely. “Clarity creates agency,” he says, trusting that informed humans will act in their collective best interest. I admire the moral courage of this argument, but I don’t buy its anthropology. History suggests that clarity does not restrain power; it sharpens it. The most dangerous people in the world are not confused. They are lucid, strategic, and indifferent to collateral damage. They understand exactly what they are doing—and do it anyway. Harris believes clarity liberates agency; I suspect it often just reveals who is willing to burn the future for dominance. The real enemy is not ignorance but nihilistic power-lust, the ancient human addiction to control dressed up in modern code. Harris should keep illuminating the terrain—but he should also admit that many travelers, seeing the cliff clearly, will still sprint toward it. Not because they are lost, but because they want what waits at the edge.

  • How We Outsourced Taste—and What It Cost Us

    Desecrated Enchantment

    noun

    Desecrated Enchantment names the condition in which art loses its power to surprise, unsettle, and transform because the conditions of discovery have been stripped of mystery and risk. What was once encountered through chance, patience, and private intuition is now delivered through systems optimized for efficiency, prediction, and profit. In this state, art no longer feels like a gift or a revelation; it arrives pre-framed as a recommendation, a product, a data point. The sacred quality of discovery—its capacity to enlarge the self—is replaced by frictionless consumption, where engagement is shallow and interchangeable. Enchantment is not destroyed outright; it is trivialized, flattened, and repurposed as a sales mechanism, leaving the viewer informed but untouched.

    ***

    I was half-asleep one late afternoon in the summer of 1987, Radio Shack clock radio humming beside the bed, tuned to KUSF 90.3, when a song slipped into my dream like a benediction. It felt less broadcast than bestowed—something angelic, hovering just long enough to stir my stomach before pulling away. I snapped awake as the DJ rattled off the title and artist at warp speed. All I caught were two words. I scribbled them down like a castaway marking driftwood: Blue and Bush. This was pre-internet purgatory—no playlists, no archives, no digital mercy. It never occurred to me to call the station. My girlfriend phoned. I got distracted. And then the dread set in: the certainty that I had brushed against something exquisite and would never touch it again. Six months later, redemption arrived in a Berkeley record store. The song was playing. I froze. The clerk smiled and said, “That’s ‘Symphony in Blue’ by Kate Bush.” I nearly wept with gratitude. Angels, confirmed.

    That same year, my roommate Karl was prospecting in a used bookstore, pawing through shelves the way Gold Rush miners clawed at riverbeds. He struck literary gold when he pulled out The Life and Loves of a She-Devil by Fay Weldon. The book had a charge to it—dangerous, witty, alive. He sampled a page and was done for. Weldon’s aphoristic bite hooked him so completely that he devoured everything she’d written. No algorithm nudged him there. No listicle whispered “If you liked this…” It was instinct, chance, and a little magic conspiring to change a life.

    That’s how art used to arrive. It found you. It blindsided you. Life in the pre-algorithm age felt wider, riskier, more enchanted. Then came the shrink ray. Algorithms collapsed the universe into manageable corridors, wrapped us in a padded cocoon of what the tech lords decided counted as “taste.” According to Kyle Chayka, we no longer cultivate taste so much as receive it, pre-chewed, as algorithmic wallpaper. And when taste is outsourced, something essential withers. Taste isn’t virtue signaling for parasocial acquaintances; it’s private, intimate, sometimes sacred. In the hands of algorithms, it becomes profane—associative, predictive, bloodless. Yes, algorithms are efficient. They can build you a playlist or a reading list in seconds. But the price is steep. Art stops feeling like enchantment and starts feeling like a pitch. Discovery becomes consumption. Wonder is desecrated.

  • Drowning in Puffer Jackets: Life Inside Algorithmic Sameness

    Meme Saturation

    noun

    Meme Saturation describes the cultural condition in which a trend, image, phrase, or style replicates so widely and rapidly that it exhausts its meaning and becomes unavoidable. What begins as novelty or wit hardens into background noise as algorithms amplify familiarity over freshness, flooding feeds with the same references until they lose all edge, surprise, or symbolic power. Under meme saturation, participation is no longer expressive but reflexive; people repeat the meme not because it says something, but because it is everywhere and opting out feels like social invisibility. The result is a culture that appears hyperactive yet feels stagnant—loud with repetition, thin on substance, and increasingly numb to its own signals.

    ***

    Kyle Chayka’s diagnosis is blunt and hard to dodge: we have been algorithmically herded into looking, talking, and dressing alike. We live in a flattened culture where everything eventually becomes a meme—earnest or ironic, political or absurd, it hardly matters. Once a meme lodges in your head, it begins to steer your behavior. Chayka’s emblematic example is the “lumpy puffer jacket,” a garment that went viral not because it was beautiful or functional, but because it was visible. Everyone bought the same jacket, which made it omnipresent, which made it feel inevitable. Virality fed on itself, and suddenly the streets looked like a flock of inflatable marshmallows migrating south. This is algorithmic culture doing exactly what it was designed to do: compress difference into repetition. As Chayka puts it, Filterworld culture is homogeneous, saturated with sameness even when its surface details vary. It doesn’t evolve; it replicates—until boredom sets in.

    And boredom is the one variable algorithms cannot fully suppress. Humans tolerate sameness only briefly before it curdles into restlessness. A culture that perpetuates itself too efficiently eventually suffocates on its own success. My suspicion is that algorithmic culture will not be overthrown by critique so much as abandoned out of exhaustion. When every aesthetic feels pre-approved and every trend arrives already tired, something else will be forced into existence—if not genuine unpredictability, then at least its convincing illusion. Texture will return, or a counterfeit version of it. Spontaneity will reappear, even if it has to be staged. The algorithm may flatten everything it touches, but boredom remains stubbornly human—and it always demands a sequel.

  • Algorithmic Grooming and the Rise of the Instagram Face

    Algorithmic Grooming

    noun

    Algorithmic Grooming refers to the slow, cumulative process by which digital platforms condition users’ tastes, attention, and behavior through repeated, curated exposure that feels personalized but is strategically engineered. Rather than directing users abruptly, the system nudges them incrementally—rewarding certain clicks, emotions, and patterns while starving others—until preferences begin to align with the platform’s commercial and engagement goals. The grooming is effective precisely because it feels voluntary and benign; users experience it as discovery, convenience, or self-expression. Yet over time, choice narrows, novelty fades, and autonomy erodes, as the algorithm trains the user to want what is most profitable to serve. What appears as personalization is, in practice, a quiet apprenticeship in predictability.

    ***

    In Filterworld, Kyle Chayka describes algorithmic recommendations with clinical clarity: systems that inhale mountains of user data, run it through equations, and exhale whatever best serves preset goals. Those goals are not yours. They belong to Google Search, Facebook, Spotify, Netflix, TikTok—the platforms that quietly choreograph your days. You tell yourself you’re shaping your feed, curating a digital self-portrait. In reality, the feed is shaping you back, sanding down your edges, rewarding certain impulses, discouraging others. What feels like interdependence is a one-sided apprenticeship in predictability. The changes you undergo—your tastes, habits, even your sense of self—aren’t acts of self-authorship so much as behavior modification in service of attention capture and commerce. And crucially, this isn’t some neutral, machine-led drift. As Chayka points out, there are humans behind the curtain, tweaking the levers with intent. They pull the strings. You dance.

    The cultural fallout is flattening. When everyone is groomed by similar incentives, culture loses texture and people begin to resemble one another—algorithmically smoothed, aesthetically standardized. Chayka borrows Jia Tolentino’s example of the “Instagram face”: the ethnically ambiguous, surgically perfected, cat-like beauty that looks less human than rendered. It’s a face optimized for engagement, not expression. And it serves as a tidy metaphor for algorithmic grooming’s endgame. What begins as personalization ends in dehumanization. The algorithm doesn’t just recommend content; it quietly trains us to become the kind of people that content is easiest to sell to—interchangeable, compliant, and eerily smooth.

  • How We Declawed a Generation in the Name of Comfort

    Declawing Effect

    noun

    The gradual and often well-intentioned removal of students’ essential capacities for self-defense, adaptation, and independence through frictionless educational practices. Like a declawed cat rendered safe for furniture but vulnerable to the world, students subjected to shortened readings, over-accommodation, constant mediation, and AI-assisted outsourcing lose the intellectual and emotional “claws” they need to navigate uncertainty, conflict, and failure. Within protected academic environments they appear functional, even successful, yet outside those systems they are ill-equipped for unscripted work, sustained effort, and genuine human connection. The Declawing Effect names not laziness or deficiency, but a structural form of educational harm in which comfort and convenience quietly replace resilience and agency.

    ***

    In the early ’90s, a young neighbor told me that the widow behind his condo was distraught. Her cat had gone missing, and she didn’t expect it to come back alive. When I asked why, he explained—almost casually—that the cat had been declawed. He then described the procedure: not trimming, not blunting, but amputating the claws down to the bone. I remember feeling a cold, visceral recoil. People do this, he said, to protect their furniture—because cats will shred upholstery to sharpen the very tools evolution gave them to survive. The logic is neat, domestic, and monstrously shortsighted. A declawed cat may be well behaved indoors, but the moment it slips outside, it becomes defenseless. The price of comfort is vulnerability.

    Teaching in the age of AI, I can’t stop thinking about that cat. My students, too, are being declawed—educationally. They grow up inside a frictionless system: everything mediated by screens, readings trimmed to a few pages, tests softened or bypassed through an ever-expanding regime of accommodations, and thinking itself outsourced to AI machines that research, write, revise, and even flirt on their behalf. Inside education’s Frictionless Dome, they function smoothly. They submit polished work. They comply. But like declawed cats, they are maladapted for life outside the enclosure: a volatile job market, a world without prompts, a long hike navigated by a compass instead of GPS, a messy, unscripted conversation with another human being where nothing is optimized and no script is provided. They have been made safe for institutional furniture, not for reality. And here’s the part that unsettles me most: I work inside the Dome. I enforce its rules. I worry that in trying to keep students comfortable, I’ve become an enabler—a quiet accomplice in reinforcing the Declawing Effect.

  • The Automated Pedagogy Loop Could Threaten the Very Existence of College

    Automated Pedagogy Loop

    noun

    A closed educational system in which artificial intelligence generates student work and artificial intelligence evaluates it, leaving human authorship and judgment functionally absent. Within this loop, instructors act as system administrators rather than teachers, and students become prompt operators rather than thinkers. The process sustains the appearance of instruction—assignments are submitted, feedback is returned, grades are issued—without producing learning, insight, or intellectual growth. Because the loop rewards speed, compliance, and efficiency over struggle and understanding, it deepens academic nihilism rather than resolving it, normalizing a machine-to-machine exchange that quietly empties education of meaning.

    The darker implication is that the automated pedagogy loop aligns disturbingly well with the economic logic of higher education as a business. Colleges are under constant pressure to scale, reduce labor costs, standardize outcomes, and minimize friction for “customers.” A system in which machines generate coursework and machines evaluate it is not a bug in that model but a feature: it promises efficiency, throughput, and administrative neatness. Human judgment is expensive, slow, and legally risky; AI is fast, consistent, and endlessly patient. Once education is framed as a service to be delivered rather than a formation to be endured, the automated pedagogy loop becomes difficult to dislodge, not because it works educationally, but because it works financially. Breaking the loop would require institutions to reassert values—depth, difficulty, human presence—that resist optimization and cannot be neatly monetized. And that is a hard sell in a system that increasingly rewards anything that looks like learning as long as it can be scaled, automated, and invoiced.

    If colleges allow themselves to slide from places that cultivate intellect into credential factories issuing increasingly fraudulent degrees, their embrace of the automated pedagogy loop may ultimately hasten their collapse rather than secure their future. Degrees derive their value not from the efficiency of their production but from the difficulty and transformation they once signified. When employers, graduate programs, and the public begin to recognize that coursework is written by machines and evaluated by machines, the credential loses its signaling power. What remains is a costly piece of paper detached from demonstrated ability. In capitulating to automation, institutions risk hollowing out the very scarcity that justifies their existence. A university that no longer insists on human thought, struggle, and judgment offers nothing that cannot be replicated more cheaply elsewhere. In that scenario, AI does not merely disrupt higher education—it exposes its emptiness, and markets are ruthless with empty products.

  • How to Resist Academic Nihilism

    Academic Nihilism and Academic Rejuvenation

    Academic Nihilism names the moment when college instructors recognize—often with a sinking feeling—that the conditions students need to thrive are perfectly misaligned with the conditions they actually inhabit. Students need solitude, friction, deep reading and writing, and the slow burn of intellectual curiosity. What they get instead is a reward system that celebrates the surrender of agency to AI machines; peer pressure to eliminate effort; and a hypercompetitive, zero-sum academic culture where survival matters more than understanding. Time scarcity all but forces students to offload thinking to tools that generate pages while quietly draining cognitive stamina. Add years of screen-saturated distraction and a near-total deprivation of deep reading during formative stages, and you end up with students who lack the literacy baseline to engage meaningfully with writing prompts—or even to use AI well. When instructors capitulate to this reality, they cease being teachers in any meaningful sense. They become functionaries who comply with institutional “AI literacy policies,” which increasingly translate to a white-flag admission: we give up. Students submit AI-generated work; instructors “assess” it with AI tools; and the loop closes in a fog of futility. The emptiness of the exchange doesn’t resolve Academic Nihilism—it seals it shut.

    The only alternative is resistance—something closer to Academic Rejuvenation. That resistance begins with a deliberate reintroduction of friction. Instructors must design moments that demand full human presence: oral presentations, performances, and live writing tasks that deny students the luxury of hiding behind a machine. Solitude must be treated as a scarce but essential resource, to be rationed intentionally—sometimes as little as a protected half-hour of in-class writing can feel revolutionary. Curiosity must be reawakened by tethering coursework to the human condition itself. And here the line is bright: if you believe life is a low-stakes, nihilistic affair summed up by a faded 1980s slogan—“Life’s a bitch; then you die”—you are probably in the wrong profession. But if you believe human lives can either wither into Gollumification or rise toward higher purpose, and you are willing to let that belief inform your teaching, then Academic Rejuvenation is still possible. Even in the age of AI machines.