Tag: writing

  • What Cochinita Pibil Can Teach Us About Learning

    Academic Friction is the intentional reintroduction of difficulty, resistance, and human presence into the learning process as a corrective to academic nihilism. Academic friction rejects the premise that education should be frictionless, efficient, or fully mediated by machines, insisting instead that intellectual growth requires struggle, solitude, and sustained attention. It is created through practices that cannot be outsourced or automated—live writing, oral presentations, performance, slow reading, and protected time for thought—forcing students to confront ideas without the buffer of AI assistance. Far from being punitive, academic friction restores agency, rebuilds cognitive stamina, and reawakens curiosity by making learning consequential again. It treats difficulty not as an obstacle to be removed, but as the very medium through which thinking, meaning, and human development occur.

    Greatness is born from resistance. Depth is what happens when something pushes back. Friction is not an obstacle to meaning; it is the mechanism that creates it. Strip friction away and you don’t get excellence—you get efficiency, speed, and a thin satisfaction that evaporates on contact. This is as true in food as it is in thinking.

    Consider cochinita pibil, a dish that seems to exist for the sole purpose of proving that greatness takes time. Nothing about it is casual. Pork shoulder is marinated overnight in achiote paste, bitter orange juice, garlic, cumin, oregano—an aggressive, staining bath that announces its intentions early. The meat doesn’t just absorb flavor; it surrenders to it. Traditionally, it is wrapped in banana leaves, sealed like contraband, and buried underground in a pit oven. Heat rises slowly. Smoke seeps inward. Hours pass. The pork breaks down molecule by molecule, fibers loosening until resistance gives way to tenderness. This is not cooking as convenience; it is cooking as ordeal. The reward is depth—meat so saturated with flavor it feels ancient, ceremonial, earned.

    Now here’s the confession: as much as I love food, I love convenience more. And convenience is just another word for frictionless. I will eat oatmeal three times a day without hesitation. Not because oatmeal is great, but because it is obedient. It asks nothing of me. Pour, stir, microwave, done. Oatmeal does not resist. It does not demand patience, preparation, or attention. It delivers calories with monk-like efficiency. It is fuel masquerading as a meal, and I choose it precisely because it costs me nothing.

    The life of the intellect follows the same fork in the road. There is the path of cochinita pibil and the path of oatmeal. One requires slow reading, sustained writing, confusion, revision, and the willingness to sit with discomfort until something breaks open. The other offers summaries, shortcuts, prompts, and frictionless fluency—thought calories without intellectual nutrition. Both will keep you alive. Only one will change you.

    The tragedy of our moment is not that people prefer oatmeal. It’s that we’ve begun calling it cuisine. We’ve mistaken smoothness for insight and speed for intelligence. Real thinking, like real cooking, is messy, time-consuming, and occasionally exhausting. It stains the counter. It leaves you unsure whether it will be worth it until it is. But when it works, it produces something dense, resonant, and unforgettable.

    Cochinita pibil does not apologize for the effort it requires. Neither should serious thought. If we want depth, we have to accept friction. Otherwise, we’ll live well-fed on oatmeal—efficient, unchallenged, and never quite transformed.

  • The Fit Yoga Guy vs. the Hungry Bouncer

    Appetite–Identity Schism is the comic yet demoralizing rift between the person you believe you should be—lean, serene, lightly nourished by kombucha, nutritional yeast, and moral superiority—and the person your body stubbornly insists you are: ravenous, calorically ambitious, and constitutionally unsuited for dainty portions or lifestyle minimalism. In this schism, the mind dreams in yoga poses while the stomach dreams in baked goods; the aspirational self floats through the day fasting effortlessly, while the embodied self plans its next meal with the focus of a military campaign. The result is not merely frustration but a persistent identity crisis, in which self-improvement fantasies are repeatedly mugged by biology, and the gap between ideal and appetite becomes a source of chronic scowling, gallows humor, and reluctant acceptance that some bodies are built less for cucumber water and more for surviving winters.

    ***

    I love the idea of myself as a vegan: trim, luminous, gently smiling through yoga poses, fueled by virtue and trace minerals. I eat two, maybe three small meals a day—meals so tasteful and restrained they barely count as eating. I sip green tea. I flirt with cucumber water. I practice intermittent fasting with the smug serenity of someone who hasn’t felt hunger since 2009. I don’t need a cleanse because I always feel cleansed. A cleanse, for me, would be redundant—like washing a raindrop.

    Then reality clears its throat.

    Enter the gorilla in the room: my appetite. It is not mindful. It is not intermittent. It is an industrial operation. I dream in towers of molasses cookies. I wake up hungry. I snack the way fish breathe—constantly, instinctively, and without shame. Remove my appetite and I am the Fit Yoga Guy, floating through life in breathable linen. Restore it and I become a burly, bow-legged bouncer who looks like a retired football player with a herniated disc working the late shift at Honky Tonk Central. The kind of man who doesn’t sip beverages—he orders them.

    This misalignment between aspiration and anatomy makes me irritable. I wear a permanent scowl, as if I’ve just been personally betrayed by a salad. I stare wistfully at the possibility of a GLP-1 prescription, praying my insurance will deliver salvation, only to accept the grim truth: I will not die looking like Jake Gyllenhaal. I will die looking like Larry Csonka—solid, hungry, and built for a colder, harsher era.

  • How Cheating with AI Accidentally Taught You How to Write

    Accidental Literacy is what happens when you try to sneak past learning with a large language model and trip directly into it face-first. You fire up the machine hoping for a clean escape—no thinking, no struggling, no soul-searching—only to discover that the output is a beige avalanche of competence-adjacent prose that now requires you to evaluate it, fix it, tone it down, fact-check it, and coax it into sounding like it was written by a person with a pulse. Congratulations: in attempting to outsource your brain, you have activated it. System-gaming mutates into a surprise apprenticeship. Literacy arrives not as a noble quest but as a penalty box—earned through irritation, judgment calls, and the dawning realization that the machine cannot decide what matters, what sounds human, or what won’t embarrass you in front of an actual reader. Accidental literacy doesn’t absolve cheating; it mocks it by proving that even your shortcuts demand work.

    If you insist on using an LLM for speed, there is a smart way and a profoundly dumb way. The smart way is to write the first draft yourself—ugly, human, imperfect—and then let the machine edit, polish, and reorganize after the thinking is done. The dumb way is to dump a prompt into the algorithm and accept the resulting slurry of AI slop, then spend twice as long performing emergency surgery on sentences that have no spine. Editing machine sludge is far more exhausting than editing your own draft, because you’re not just fixing prose—you’re reverse-engineering intention. Either way, literacy sneaks in through the back door, but the human-first method is faster, cleaner, and far less humiliating. The machine can buff the car; it cannot build the engine. Anyone who believes otherwise is just outsourcing frustration at scale.

  • How Real Writing Survives in the Age of ChatGPT

    AI-Resistant Pedagogy is an instructional approach that accepts the existence of generative AI without surrendering the core work of learning to it. Rather than relying on bans, surveillance, or moral panic, it redesigns courses so that thinking must occur in places machines cannot fully inhabit: live classrooms, oral exchanges, process-based writing, personal reflection, and sustained human presence. This pedagogy emphasizes how ideas are formed—not just what is submitted—by foregrounding drafting, revision, discussion, and decision-making as observable acts. It is not AI-proof, nor does it pretend to be; instead, it makes indiscriminate outsourcing cognitively unrewarding and pedagogically hollow. In doing so, AI-resistant pedagogy treats technology as a background condition rather than the organizing principle of education, restoring friction, accountability, and intellectual agency as non-negotiable features of learning.

    ***

    Carlo Rotella, an English writing instructor at Boston College, refuses to go the way of the dinosaurs in the Age of AI Machines. In his essay “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” he explains that he doesn’t lecture much at all. Instead, he talks with his students—an endangered pedagogical practice—and discovers something that flatly contradicts the prevailing moral panic: his students are not freeloading intellectual mercenaries itching to outsource their brains to robot overlords. They are curious. They want to learn how to write. They want to understand how tools work and how thinking happens. This alone punctures the apocalyptic story line that today’s students will inevitably cheat their way through college with AI while instructors helplessly clutch their blue books like rosary beads.

    Rotella is not naïve. He admits that any instructor who continues teaching on autopilot is “sleepwalking in a minefield.” Faced with Big Tech’s frictionless temptations—and humanity’s reliable preference for shortcuts—he argues that teachers must adapt or become irrelevant. But adaptation doesn’t mean surrender. It means recommitting to purposeful reading and writing, dialing back technological dependence, and restoring face-to-face intellectual community. His key distinction is surgical and useful: good teaching isn’t AI-proof; it’s AI-resistant. Resistance comes from three old-school but surprisingly radical moves—pen-and-paper and oral exams, teaching the writing process rather than just collecting finished products, and placing real weight on what happens inside the classroom. In practice, that means in-class quizzes, short handwritten essays, scaffolded drafting, and collaborative discussion—students learning how to build arguments brick by brick instead of passively absorbing a two-hour lecture like academic soup.

    Personal narrative becomes another line of defense. As Mark Edmundson notes, even when students lean on AI, reflective writing forces them to feed the machine something dangerously human: their own experience. That act alone creates friction. In my own courses, students write a six-page research paper on whether online entertainment sharpens or corrodes critical thinking. The opening paragraph is a 300-word confession about a habitual screen indulgence—YouTube, TikTok, a favorite creator—and an honest reckoning with whether it educates or anesthetizes. The conclusion demands a final verdict about their own viewing habits: intellectual growth or cognitive decay? To further discourage lazy outsourcing, I show them AI-generated examples in all their hollow, bloodless glory—perfectly grammatical, utterly vacant. Call it AI-shaming if you like. I call it a public service. Nothing cures overreliance on machines faster than seeing what they produce when no human soul is involved.

  • Why I Chose Mary Ann Over Ginger

    Cosmetic Overfit describes the point at which beauty becomes so heavily engineered—through makeup, styling, filtering, or performative polish—that it tips from alluring into AI-like. At this stage, refinement overshoots realism: faces grow too symmetrical, textures too smooth, gestures too rehearsed. What remains is not ugliness but artificiality—the aesthetic equivalent of a model trained too hard on a narrow dataset. Cosmetic overfit strips beauty of warmth, contingency, and human variance, replacing them with a glossy sameness that reads as synthetic. The result is a subtle loss of desire: the subject is still visually impressive but emotionally distant, admired without being longed for.

    ***

    When I was in sixth grade, the most combustible argument on the playground wasn’t nuclear war or the morality of capitalism—it was Gilligan’s Island: Ginger or Mary Ann. Declaring your allegiance carried the same social risk as outing yourself politically today. Voices rose. Insults flew. Fists clenched. Friendships cracked. For the record, both women were flawless avatars of their type. Ginger was pure Hollywood excess—sequins, wigs, theatrical glamour, a walking studio backlot. Mary Ann was the counterspell: the sun-kissed farm girl with bare legs, natural hair, wide-eyed innocence, and a smile that suggested pie cooling on a windowsill. You couldn’t lose either way, but I gave my vote to Mary Ann. She wore less makeup, less artifice, one fewer strategically placed beauty mole. She looked touched by sunlight rather than a lighting rig. In retrospect, both women were almost too beautiful—beautiful enough to register as vaguely AI-like before AI existed. But Mary Ann was the less synthetic of the two, and that mattered. When beauty is over-engineered—buried under wigs, paint, and performance—it starts to feel algorithmic, glossy, emotionally inert. Mary Ann may have been cookie-cutter gorgeous, but she wasn’t laminated. And even back then, my pre-digital brain knew the rule: the less AI-like the beauty, the more irresistible it becomes.

  • The Seductive Assistant

    Auxiliary Cognition describes the deliberate use of artificial intelligence as a secondary cognitive system that absorbs routine mental labor—drafting, summarizing, organizing, rephrasing, and managing tone—so that the human mind can conserve energy for judgment, creativity, and higher-order thinking. In this arrangement, the machine does not replace thought but scaffolds it, functioning like an external assistant that carries cognitive weight without claiming authorship or authority. At its best, auxiliary cognition restores focus, reduces fatigue, and enables sustained intellectual work that might otherwise be avoided. At its worst, when used uncritically or excessively, it risks dulling the very capacities it is meant to protect, quietly shifting from support to substitution.

    ***

    Yale creative writing professor Meghan O’Rourke approaches ChatGPT the way a sober adult approaches a suspicious cocktail: curious, cautious, and alert to the hangover. In her essay “I Teach Creative Writing. This Is What A.I. Is Doing to Students,” she doesn’t offer a manifesto so much as a field report. Her conversations with the machine, she writes, revealed a “seductive cocktail of affirmation, perceptiveness, solicitousness, and duplicity”—a phrase that lands like a raised eyebrow. Sometimes the model hallucinated with confidence; sometimes it surprised her with competence. A few of its outputs were polished enough to pass as “strong undergraduate work,” which is both impressive and unsettling, depending on whether you’re grading or paying tuition.

    What truly startled O’Rourke, however, wasn’t the quality of the prose but the way the machine quietly lifted weight from her mind. Because she lives with the long-term effects of Lyme disease and Covid, her energy is a finite resource, and AI nudged her toward tasks she might otherwise have postponed. It conserved her strength for what actually mattered: judgment, creativity, and “higher-order thinking.” More than a glorified spell-checker, the system proved tireless and oddly soothing, a calm presence willing to draft, rephrase, and organize without complaint. When she described this relief to a colleague, he joked that she was having an affair with ChatGPT. The joke stuck because it carried a grain of truth. “Without intending it,” she admits, the machine became a partner in shouldering the invisible mental load that so many women professors and mothers carry. Freed from some of that drain, she found herself kinder, more patient, even gentler in her emails.

    What lingers after reading O’Rourke isn’t naïveté but honesty. In academia, we are flooded with essays cataloging AI’s classroom chaos, and rightly so—I live in that turbulence myself. But an exclusive fixation on disaster obscures a quieter fact she names without flinching: used carefully, AI can reduce cognitive load and return time and energy to the work and “higher-order thinking” that actually requires a human mind. The challenge ahead isn’t to banish the machine or worship it, but to put a bridle on it—to insist that it serve rather than steer. O’Rourke’s essay doesn’t promise salvation, but it does offer a shaft of light in a dim tunnel: a reminder that if we use these tools deliberately, we might reclaim something precious—attention, stamina, and the capacity to think deeply again.

  • Why I Clean Before the Cleaners

    Preparatory Leverage

    Preparatory Leverage is the principle that the effectiveness of any assistant—human or machine—is determined by the depth, clarity, and intentionality of the work done before assistance is invited. Rather than replacing effort, preparation multiplies its impact: well-structured ideas, articulated goals, and thoughtful constraints give collaborators something real to work with. In the context of AI, preparatory leverage preserves authorship by ensuring that insight originates with the human and that the machine functions as an amplifier, not a substitute. When preparation is absent, assistance collapses into superficiality; when preparation is rigorous, assistance becomes transformative.

    ***

    This may sound backward—or mildly unhinged—but for the past twenty years I’ve cleaned my house before the cleaners arrive. Every two weeks, before Maria and Lupe ring the bell, I’m already at work: clearing counters, freeing floors, taming piles of domestic entropy. The logic is simple. The more order I impose before they show up, the better they can do what they do best. They aren’t there to decipher my chaos; they’re there to perfect what’s already been prepared. The result is not incremental improvement but multiplication. The house ends up three times cleaner than it would be if I had handed them a battlefield and wished them luck.

    I treat large language models the same way. I don’t dump half-formed thoughts into the machine and hope for alchemy. I prep. I think. I shape the argument. I clarify the stakes. When I give an LLM something dense and intentional to work with, it can elevate the prose—sharpen the rhetoric, adjust tone, reframe purpose. But when I skip that work, the output is a limp disappointment, the literary equivalent of a wiped-down countertop surrounded by cluttered floors. Through trial and error, I’ve learned the rule: AI doesn’t rescue lazy thinking; it amplifies whatever you bring to the table. If you bring depth, it gives you polish. If you bring chaos, it gives you noise.

  • Listening Ourselves Smaller: The Optimization Trap of Always-On Content

    Productivity Substitution Fallacy

    noun

    Productivity Substitution Fallacy is the mistaken belief that consuming information is equivalent to producing value, insight, or growth. Under this fallacy, activities that feel efficient—listening to podcasts, skimming summaries, scrolling explanatory content—are treated as meaningful work simply because they occupy time and convey the sensation of being informed. The fallacy replaces depth with volume, reflection with intake, and judgment with accumulation. It confuses motion for progress and exposure for understanding, allowing individuals to feel industrious while avoiding the slower, more demanding labor of thinking, synthesizing, and creating.

    ***

    Thomas Chatterton Williams confesses, with a mix of embarrassment and clarity, that he has fallen into the podcast “productivity” trap—not because podcasts are great, but because they feel efficient. He admits in “The Podcast ‘Productivity’ Trap” that he fills his days with voices piping information into his ears even as he knows much of it is tepid, recycled, and algorithmically tailored to his existing habits. The podcasts don’t expand his mind; they pad it. Still, he keeps reaching for them because they flatter his sense of optimization. Music requires surrender. Silence requires thought. Podcasts, by contrast, offer the illusion of nourishment without demanding digestion. They are the informational equivalent of cracking open a lukewarm can of malt liquor instead of pouring a glass of champagne: cheaper, faster, and falsely fortifying. He listens not because the content is rich, but because it allows him to feel “informed” while moving through the day with maximum efficiency and minimum risk of reflection.

    Williams’s confession lands because it exposes a broader pathology of the Big Tech age. We are all under quiet pressure to convert every idle moment into output, every pause into intake. Productivity has become a moral performance, and optimization its theology. In that climate, mediocrity thrives—not because it is good, but because it is convenient. We mistake constant consumption for growth and busyness for substance. The result is a slow diminishment of the self: fewer surprises, thinner tastes, and a mind trained to skim rather than savor. We are not becoming more informed; we are becoming more managed, mistaking algorithmic drip-feeding for intellectual life.

  • The Death of Grunt Work and the Starvation of Personality

    The Death of Grunt Work and the Starvation of Personality

    Personality Starvation

    Personality Starvation is the gradual erosion of character, depth, and individuality caused by the systematic removal of struggle, responsibility, and formative labor from human development. It occurs when friction—failure, boredom, repetition, social risk, and unglamorous work—is replaced by automation, optimization, and AI-assisted shortcuts that produce results without demanding personal investment. In a state of personality starvation, individuals may appear competent, efficient, and productive, yet lack the resilience, humility, patience, and textured inner life from which originality and meaning emerge. Because personality is forged through effort rather than output, a culture that eliminates its own “grunt work” does not liberate talent; it malnourishes it, leaving behind polished performers with underdeveloped selves and an artistic, intellectual, and moral ecosystem that grows increasingly thin, fragile, and interchangeable.

    ***

    Nick Geisler’s essay, “The Problem With Letting AI Do the Grunt Work,” reads like a dispatch from a vanished ecosystem—the intellectual tide pools where writers once learned to breathe. Early in his career, Geisler cranked out disposable magazine pieces about lipstick shades, entomophagy, and regional accents. It wasn’t glamorous, and it certainly wasn’t lucrative. But it was formative. As he puts it, he learned how to write a clean sentence, structure information logically, and adjust tone to an audience—skills he now uses daily in screenwriting, film editing, and communications. The insultingly mundane work was the work. It trained his eye, disciplined his prose, and toughened his temperament. Today, that apprenticeship ladder has been kicked away. AI now writes the fluff, the promos, the documentary drafts, the script notes—the very terrain where writers once earned their calluses. Entry-level writing jobs haven’t evolved; they’ve evaporated. And with them goes the slow, character-building ascent that turns amateurs into artists.

    Geisler calls this what it is: an extinction event. He cites a study estimating that more than 200,000 entertainment-industry jobs in the U.S. could be disrupted by AI as early as 2026. Defenders of automation insist this is liberation—that by outsourcing the drudgery, artists will finally be free to focus on their “real work.” This is a fantasy peddled by people who have never made anything worth keeping. Grunt work is not an obstacle to art; it is the forge. It builds grit, patience, humility, social intelligence, and—most importantly—personality. Art doesn’t emerge from frictionless efficiency; it emerges from temperament shaped under pressure. A personality raised inside a Frictionless Dome, shielded from boredom, rejection, and repetition, will produce work as thin and sterile as its upbringing. Sartre had it right: to be fully human, you have to get your hands dirty. Clean hands aren’t a sign of progress. They’re evidence of starvation.

  • Against AI Moral Optimism: Why Tristan Harris Underestimates Power

    Clarity Idealism

    noun

    Clarity Idealism, in the context of AI and the future of humanity, is the belief that sufficiently explaining the stakes of artificial intelligence—its risks, incentives, and long-term consequences—will naturally lead societies, institutions, and leaders to act responsibly. It assumes that confusion is the core threat and that once humanity “sees clearly,” agency and ethical restraint will follow. What this view underestimates is how power actually operates in technological systems. Clarity does not neutralize domination, profit-seeking, or geopolitical rivalry; it often accelerates them. In the AI era, bad actors do not require ignorance to behave destructively—they require capability, leverage, and advantage, all of which clarity can enhance. Clarity Idealism mistakes awareness for wisdom and shared knowledge for shared values, ignoring the historical reality that humans routinely understand the dangers of their tools and proceed anyway. In the race to build ever more powerful AI, clarity may illuminate the cliff—but it does not prevent those intoxicated by power from pressing the accelerator.

    Tristan Harris takes the TED stage like a man standing at the shoreline, shouting warnings as a tidal wave gathers behind him. Social media, he says, was merely a warm-up act—a puddle compared to the ocean of impact AI is about to unleash. We are at a civilizational fork in the road. One path is open-source AI, where powerful tools scatter freely and inevitably fall into the hands of bad actors, lunatics, and ideologues who mistake chaos for freedom. The other path is closed-source AI, where a small priesthood of corporations and states hoard godlike power and call it “safety.” Either route, mishandled, ends in dystopia. Harris’s plea is urgent and sincere: we must not repeat the social-media catastrophe, where engagement metrics metastasized into addiction, outrage, polarization, and civic rot. AI, he argues, demands global coordination, shared norms, and regulatory guardrails robust enough to make the technology serve humanity rather than quietly reorganize it into something meaner, angrier, and less human.

    Harris’s faith rests on a single, luminous premise: clarity. Confusion, denial, and fatalism are the true villains. If we can see the stakes clearly—if we understand how AI can slide toward chaos or tyranny—then we can choose wisely. “Clarity creates agency,” he says, trusting that informed humans will act in their collective best interest. I admire the moral courage of this argument, but I don’t buy its anthropology. History suggests that clarity does not restrain power; it sharpens it. The most dangerous people in the world are not confused. They are lucid, strategic, and indifferent to collateral damage. They understand exactly what they are doing—and do it anyway. Harris believes clarity liberates agency; I suspect it often just reveals who is willing to burn the future for dominance. The real enemy is not ignorance but nihilistic power-lust, the ancient human addiction to control dressed up in modern code. Harris should keep illuminating the terrain—but he should also admit that many travelers, seeing the cliff clearly, will still sprint toward it. Not because they are lost, but because they want what waits at the edge.