Category: technology

  • I Trained an AI Named Rocky—and Still Got Fat

    Your life, in brief, is going to bed mildly furious at yourself because you once again ate more than you meant to. Maybe twice in your entire adult existence you lay there, hands folded, whispering, “Well done, child,” as if discipline were a rare celestial event. Then you notice your friends consulting their generative AI oracles for diet wisdom, and you think, Why not me? You christen your chatbot Rocky, because nothing says accountability like a fictional personal trainer who can’t see you. Rocky obediently spits out hundreds of menus—vegan-ish, Mediterranean-leaning, low-calorie, high-protein, morally upright. You spend hours refining them, debating legumes, adjusting macros, basking in Rocky’s algorithmic approval. Rocky is proud of you. You feel productive. You feel serious.

    And yet, night after night, the same verdict arrives: you ate more than you intended to. Only now it hurts worse. Not only did you overeat, you also squandered hundreds of hours in earnest conversation with a machine that never once made you get on the exercise bike. You weren’t training—you were planning to train. You weren’t changing—you were curating the conditions under which change might someday occur. Congratulations: you’ve fallen into Optimization Displacement, the elegant self-deception in which planning replaces action and refinement masquerades as effort. Under its spell, complexity feels virtuous, engagement feels like work, and productivity theater substitutes for sweat. Optimization displacement is soothing because it offers control without discomfort, mastery without risk—but it quietly steals the time, resolve, and momentum required to do the one thing that actually works: getting up and pedaling.

    Fed up with dieting and your Rocky chatbot, you give up on your health quest and begin writing a memoir tentatively titled I Trained an AI Named Rocky—and Still Got Fat.

  • The Hidden Price of Digital Purity

    Digital Asceticism is the deliberate, selective refusal of digital environments that inflame attention, distort judgment, and reward compulsive performance—while remaining just online enough to function at work or school. It is not technophobia or a monkish retreat to the woods. It is targeted abstinence. A disciplined no to platforms that mainline adrenaline, monetize approval-seeking, and encourage cognitive excess. Digital asceticism treats restraint as hygiene: a mental detox that restores proportion, quiets the nervous system, and makes sustained thought possible again. In theory, it is an act of self-preservation. In practice, it is a social provocation.

    ***

    At some point, digital abstinence becomes less a lifestyle choice than a medical necessity. You don’t vanish entirely—emails still get answered, documents still get submitted—but you excise the worst offenders. You leave the sites engineered to spike adrenaline. You step away from social platforms that convert loneliness into performance. You stop leaning on AI machines because you know your weakness: once you start, you overwrite. The prose swells, flexes, and bulges like a bodybuilder juiced beyond structural integrity. The result is a brief but genuine cleansing. Attention returns. Language slims down. The mind exhales.

    Then comes the price. Digital abstinence is never perceived as neutral. Like a vegan arriving at a barbecue clutching a frozen vegetable patty, your refusal radiates judgment whether you intend it or not. Your silence implies their noise. Your absence throws their habits into relief. You didn’t say they were living falsely—but your departure suggests it. Resentment follows. So does envy. While you were gone, people were quietly happy for you, even as they resented you. You had done what they could not: stepped away, purified, escaped.

    The real shock comes when you try to return. The welcome is chilly. People are offended that you left, because leaving forced a verdict on their behavior—and the verdict wasn’t flattering. Worse, your return depresses them. Watching you re-enter the platforms feels like watching a recovering alcoholic wander back into the liquor store. Your relapse reassures them, but it also wounds them. Digital asceticism, it turns out, is not just a personal discipline but a social rupture. Enter it carefully. Once you leave the loop, nothing about going back is simple.

  • Stir-Free Peanut Butter and the Slow Death of Self-Control

    Frictionless Consumption is the pattern by which ease replaces judgment and convenience overrides restraint. When effort is removed—no stirring, no waiting, no resistance—consumption accelerates beyond intention because nothing slows it down. What once required pause, preparation, or minor inconvenience now flows effortlessly, inviting repetition and excess. The danger is not the object itself but the vanished friction that once acted as a governor on behavior. Frictionless consumption feels like freedom in the moment, but over time it produces dependency, overuse, and decline, as appetite expands to fill the space where effort used to be. In eliminating difficulty, it quietly eliminates self-regulation, leaving users wondering how they arrived at excess when nothing ever felt like too much.

    ***

    For decades, I practiced the penitential ritual of mixing organic peanut butter. I wrapped a washcloth around a tablespoon for traction and churned as viscous globs of nut paste and brown sludge slithered up the sides of the jar. The stirring was never sufficient. No matter how heroic the effort, you always discovered fossilized peanut-butter boulders lurking at the bottom, surrounded by a moat of free-floating oil. The jar itself became slick, greasy, faintly accusatory. Still, I consoled myself with the smug glow of dietary righteousness. At least I’m natural, I thought, halo firmly in place.

    Then one day, my virtue collapsed. I sold my soul and bought Stir-Free. Its label bore the mark of the beast—additives, including the much-maligned demon, palm oil—but the first swipe across a bagel was a revelation. No stirring. No resistance. No penance. It spread effortlessly on toast, waffles, pancakes, anything foolish enough to cross its path. The only question that remained was not Is this evil? but Why did I waste decades of my life pretending the other way was better?

    The answer arrived quietly, in the form of my expanding waistline. Because peanut butter had become frictionless, I began consuming it with abandon. Spoonfuls multiplied. Servings lost their meaning. I blamed palm oil, of course—it had a face, a name, a moral odor—but the real culprit was ease. Stir-Free was not just a product; it was an invitation. When effort disappears, consumption accelerates. I didn’t gain weight because of additives. I gained weight because nothing stood between me and another effortless swipe.

    Large Language Models are Stir-Free peanut butter for the mind. They are smooth, stable, instantly gratifying, and always ready to spread. They remove the resistance from thinking, deliver fast results, and reward you with the illusion of productivity. Like Stir-Free, they invite overuse. And like Stir-Free, the cost is not immediately obvious. The more you rely on them, the more your intellectual core softens. Eventually, you’re left with a cognitive physique best described as a pencil-neck potato—bulky output, no supporting structure.

    The promise of a frictionless life is one of the great seductions of the modern age. It feels humane, efficient, enlightened. In reality, it is a trap. Friction was never the enemy; it was the brake. Remove it everywhere—food, thinking, effort, judgment—and you don’t get progress. You get collapse, neatly packaged and easy to spread.

  • Feedback Latency Intolerance

    Feedback Latency Intolerance is the conditioned inability to endure even brief gaps between action and response, produced by prolonged immersion in systems that reward instantaneous acknowledgment. Under its influence, ordinary delays—seconds rather than minutes—register as emotional disturbances, triggering agitation, self-doubt, or irritation disproportionate to the circumstance. The condition collapses temporal perspective, converting neutral waiting into perceived absence or rejection. What is lost is not efficiency but patience: the learned capacity to exist without immediate validation. Feedback latency intolerance reveals how algorithmic environments retrain emotional regulation, replacing mature tolerance for delay with a reflexive demand for constant confirmation.

    ***

    The extent of my deterioration revealed itself recently at a new pancake house. I took my daughter, asked the server what he actually liked on the menu, and obediently ordered the fried chicken biscuit sandwich. Then—already overplaying the moment—I texted my wife to announce my choice, as if this were actionable intelligence. I stared at my phone, waiting for the small red numeral to appear, the sacred 1 that would certify my existence. Forty seconds passed. Forty. I refreshed my screen like a lab rat pressing a lever, convinced something had gone wrong with the universe.

    In that absurd interval, it dawned on me: I had entered a state of pathological impatience, the natural byproduct of prolonged residence in the dopamine swamp of algorithmic life, where self-worth is measured by speed and volume of response. The sensation felt disturbingly familiar. My mind snapped back to stories my mother told about feeding me as a baby. The spoon, freshly loaded with mashed potatoes, would leave my mouth for a brief, necessary refill—and I would erupt in fury, unable to tolerate the unbearable injustice of the spoon’s absence. I screamed not from hunger, but from interruption. Sitting there in the pancake house, refreshing my phone, I realized I had simply upgraded the spoon. This is what too much time inside these machines does to a person: it doesn’t make you faster or smarter—it makes you an adult who can’t survive a forty-second gap between bites.

  • Stop Selling Books Like Vitamins: Reading as Pleasure, Not Duty

    Literary Vice names the framing of reading as a private, absorbing, and mildly antisocial pleasure rather than a civic duty or self-improvement exercise. It treats books the way earlier cultures treated forbidden novels or disreputable entertainments: as experiences that tempt, distract, and pull the reader out of alignment with respectable schedules, market rhythms, and digital expectations. Literary vice rejects the language of virtue—empathy-building, résumé enhancement, democratic hygiene—and instead emphasizes immersion, obsession, and pleasure for its own sake. As a countervailing force against technology-induced anhedonia, reading works precisely because it is slow, effortful, and resistant to optimization: it restores depth of attention, reawakens desire through sustained engagement, and reintroduces emotional risk in a landscape flattened by frictionless dopamine delivery. Where screens numb by over-stimulation, literary vice revives feeling by demanding patience, solitude, and surrender to a single, uncompromising narrative consciousness.

    ***

    Adam Kirsch’s essay “Reading Is a Vice” makes a claim that sounds perverse until you realize it is completely sane: readers are misaligned with the world. They miss its rhythms, ignore its incentives, fall out of step with its market logic—and that is precisely the point. To be poorly adapted to a cultural hellscape is not a bug; it is the feature. Reading makes you antisocial in the healthiest way possible. It pulls you off screens, out of optimization mode, and away from the endless hum of performance and productivity that passes for modern life. In a culture engineered to keep us efficient, stimulated, and vaguely numb, misalignment is a form of resistance.

    Kirsch notes, of course, that reading builds critical thinking, individual flourishing, and democratic capacity. All true. All useless as marketing slogans. Those are not selling points in a dopamine economy. No one scrolls TikTok thinking, “I wish I were more civically responsible.” If you want young people to read, Kirsch argues, stop pitching books as moral medicine and start advertising them as pleasure—private, absorbing, and maybe a little disreputable. Call reading what it once was: a vice. When literature was dangerous, people couldn’t stop reading it. Now that books have been domesticated into virtue objects—edifying, wholesome, improving—no one can be persuaded to pick one up.

    You don’t eat baklava because it’s good for you. You eat it because it is an indecent miracle of sugar, butter, and culture that makes the rest of the day briefly irrelevant. Books work the same way. There are baklava books. Yours might be Danielle Steel. Mine isn’t. Mine lives closer to Cormac McCarthy. When I was in sixth grade, my literary baklava was Herman Raucher’s Summer of ’42. That book short-circuited my brain. I was so consumed by the protagonist’s doomed crush on an older woman that I refused to leave my tent for two full days during a perfect Yosemite summer. While everyone else hiked through actual paradise, I lay immobilized by narrative obsession. I regret nothing. My body was in Yosemite; my mind was somewhere far more dangerous.

    This is why you don’t tell students to read the way you tell people to take cod liver oil or hit their protein macros. That pitch fails because it is joyless and dishonest. You tell students to read because finding the right book feels like dessert—baklava, banana splits, whatever ruins your self-control. And yes, you can also tell them what Kafka knew: that great writing is an ax that breaks the frozen sea inside us. Stay frozen long enough—numb, optimized, frictionless—and you don’t just stagnate. You risk not coming back at all.

  • What Cochinita Pibil Can Teach Us About Learning

    Academic Friction is the intentional reintroduction of difficulty, resistance, and human presence into the learning process as a corrective to academic nihilism. Academic friction rejects the premise that education should be frictionless, efficient, or fully mediated by machines, insisting instead that intellectual growth requires struggle, solitude, and sustained attention. It is created through practices that cannot be outsourced or automated—live writing, oral presentations, performance, slow reading, and protected time for thought—forcing students to confront ideas without the buffer of AI assistance. Far from being punitive, academic friction restores agency, rebuilds cognitive stamina, and reawakens curiosity by making learning consequential again. It treats difficulty not as an obstacle to be removed, but as the very medium through which thinking, meaning, and human development occur.

    ***

    Greatness is born from resistance. Depth is what happens when something pushes back. Friction is not an obstacle to meaning; it is the mechanism that creates it. Strip friction away and you don’t get excellence—you get efficiency, speed, and a thin satisfaction that evaporates on contact. This is as true in food as it is in thinking.

    Consider cochinita pibil, a dish that seems to exist for the sole purpose of proving that greatness takes time. Nothing about it is casual. Pork shoulder is marinated overnight in achiote paste, bitter orange juice, garlic, cumin, oregano—an aggressive, staining bath that announces its intentions early. The meat doesn’t just absorb flavor; it surrenders to it. Traditionally, it is wrapped in banana leaves, sealed like contraband, and buried underground in a pit oven. Heat rises slowly. Smoke seeps inward. Hours pass. The pork breaks down molecule by molecule, fibers loosening until resistance gives way to tenderness. This is not cooking as convenience; it is cooking as ordeal. The reward is depth—meat so saturated with flavor it feels ancient, ceremonial, earned.

    Now here’s the confession: as much as I love food, I love convenience more. And convenience is just another word for frictionless. I will eat oatmeal three times a day without hesitation. Not because oatmeal is great, but because it is obedient. It asks nothing of me. Pour, stir, microwave, done. Oatmeal does not resist. It does not demand patience, preparation, or attention. It delivers calories with monk-like efficiency. It is fuel masquerading as a meal, and I choose it precisely because it costs me nothing.

    The life of the intellect follows the same fork in the road. There is the path of cochinita pibil and the path of oatmeal. One requires slow reading, sustained writing, confusion, revision, and the willingness to sit with discomfort until something breaks open. The other offers summaries, shortcuts, prompts, and frictionless fluency—thought calories without intellectual nutrition. Both will keep you alive. Only one will change you.

    The tragedy of our moment is not that people prefer oatmeal. It’s that we’ve begun calling it cuisine. We’ve mistaken smoothness for insight and speed for intelligence. Real thinking, like real cooking, is messy, time-consuming, and occasionally exhausting. It stains the counter. It leaves you unsure whether it will be worth it until it is. But when it works, it produces something dense, resonant, and unforgettable.

    Cochinita pibil does not apologize for the effort it requires. Neither should serious thought. If we want depth, we have to accept friction. Otherwise, we’ll live well-fed on oatmeal—efficient, unchallenged, and never quite transformed.

  • The 5-Paragraph Essay Is Dead

    A 5-Paragraph Extinction Event is the moment when the five-paragraph essay ceases to function as a viable teaching tool due to a radical shift in the writing ecosystem—most notably the arrival of generative AI. What was once a crude but serviceable scaffold for novice writers becomes pedagogically obsolete, easily replicated by machines and incapable of cultivating argument, voice, or intellectual risk. Instructors who continue to assign it after this extinction event are not preserving a classic form; they are teaching a fossil, mistaking structural compliance for thinking and confusing familiarity with rigor.

    ***

    If you teach writing and are still assigning the five-paragraph essay four years into the age of generative AI, you’re the neighbor who never took down the Christmas decorations—by May. Not a stray wreath forgotten in the garage, but the full spectacle: twinkling lights still blinking in daylight, inflatable snowman wheezing on the lawn, Santa slumped sideways like he lost a bar fight. Leaving decorations up for two weeks after Christmas is a forgivable lag. Five months is a wellness check. The neighbors start whispering. The HOA sharpens its knives. Someone calls the police just to make sure no one has been quietly mummified inside. You’re not behind the curve. You’re not even resisting change. You’re clinically unresponsive. The world has moved on, the season has ended, and you’re still assigning an essay as formulaic as a frozen TV dinner with the instructions printed on the lid.

  • How Cheating with AI Accidentally Taught You How to Write

    Accidental Literacy is what happens when you try to sneak past learning with a large language model and trip directly into it face-first. You fire up the machine hoping for a clean escape—no thinking, no struggling, no soul-searching—only to discover that the output is a beige avalanche of competence-adjacent prose that now requires you to evaluate it, fix it, tone it down, fact-check it, and coax it into sounding like it was written by a person with a pulse. Congratulations: in attempting to outsource your brain, you have activated it. System-gaming mutates into a surprise apprenticeship. Literacy arrives not as a noble quest but as a penalty box—earned through irritation, judgment calls, and the dawning realization that the machine cannot decide what matters, what sounds human, or what won’t embarrass you in front of an actual reader. Accidental literacy doesn’t absolve cheating; it mocks it by proving that even your shortcuts demand work.

    ***

    If you insist on using an LLM for speed, there is a smart way and a profoundly dumb way. The smart way is to write the first draft yourself—ugly, human, imperfect—and then let the machine edit, polish, and reorganize after the thinking is done. The dumb way is to dump a prompt into the algorithm and accept the resulting slurry of AI slop, then spend twice as long performing emergency surgery on sentences that have no spine. Editing machine sludge is far more exhausting than editing your own draft, because you’re not just fixing prose—you’re reverse-engineering intention. Either way, literacy sneaks in through the back door, but the human-first method is faster, cleaner, and far less humiliating. The machine can buff the car; it cannot build the engine. Anyone who believes otherwise is just outsourcing frustration at scale.

  • How Real Writing Survives in the Age of ChatGPT

    AI-Resistant Pedagogy is an instructional approach that accepts the existence of generative AI without surrendering the core work of learning to it. Rather than relying on bans, surveillance, or moral panic, it redesigns courses so that thinking must occur in places machines cannot fully inhabit: live classrooms, oral exchanges, process-based writing, personal reflection, and sustained human presence. This pedagogy emphasizes how ideas are formed—not just what is submitted—by foregrounding drafting, revision, discussion, and decision-making as observable acts. It is not AI-proof, nor does it pretend to be; instead, it makes indiscriminate outsourcing cognitively unrewarding and pedagogically hollow. In doing so, AI-resistant pedagogy treats technology as a background condition rather than the organizing principle of education, restoring friction, accountability, and intellectual agency as non-negotiable features of learning.

    ***

    Carlo Rotella, an English writing instructor at Boston College, refuses to go the way of the dinosaurs in the Age of AI Machines. In his essay “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” he explains that he doesn’t lecture much at all. Instead, he talks with his students—an endangered pedagogical practice—and discovers something that flatly contradicts the prevailing moral panic: his students are not freeloading intellectual mercenaries itching to outsource their brains to robot overlords. They are curious. They want to learn how to write. They want to understand how tools work and how thinking happens. This alone punctures the apocalyptic story line that today’s students will inevitably cheat their way through college with AI while instructors helplessly clutch their blue books like rosary beads.

    Rotella is not naïve. He admits that any instructor who continues teaching on autopilot is “sleepwalking in a minefield.” Faced with Big Tech’s frictionless temptations—and humanity’s reliable preference for shortcuts—he argues that teachers must adapt or become irrelevant. But adaptation doesn’t mean surrender. It means recommitting to purposeful reading and writing, dialing back technological dependence, and restoring face-to-face intellectual community. His key distinction is surgical and useful: good teaching isn’t AI-proof; it’s AI-resistant. Resistance comes from three old-school but surprisingly radical moves—pen-and-paper and oral exams, teaching the writing process rather than just collecting finished products, and placing real weight on what happens inside the classroom. In practice, that means in-class quizzes, short handwritten essays, scaffolded drafting, and collaborative discussion—students learning how to build arguments brick by brick instead of passively absorbing a two-hour lecture like academic soup.

    Personal narrative becomes another line of defense. As Mark Edmundson notes, even when students lean on AI, reflective writing forces them to feed the machine something dangerously human: their own experience. That act alone creates friction. In my own courses, students write a six-page research paper on whether online entertainment sharpens or corrodes critical thinking. The opening paragraph is a 300-word confession about a habitual screen indulgence—YouTube, TikTok, a favorite creator—and an honest reckoning with whether it educates or anesthetizes. The conclusion demands a final verdict about their own viewing habits: intellectual growth or cognitive decay? To further discourage lazy outsourcing, I show them AI-generated examples in all their hollow, bloodless glory—perfectly grammatical, utterly vacant. Call it AI-shaming if you like. I call it a public service. Nothing cures overreliance on machines faster than seeing what they produce when no human soul is involved.

  • Why I Chose Mary Ann Over Ginger

    Cosmetic Overfit describes the point at which beauty becomes so heavily engineered—through makeup, styling, filtering, or performative polish—that it tips from alluring into AI-like. At this stage, refinement overshoots realism: faces grow too symmetrical, textures too smooth, gestures too rehearsed. What remains is not ugliness but artificiality—the aesthetic equivalent of a model trained too hard on a narrow dataset. Cosmetic overfit strips beauty of warmth, contingency, and human variance, replacing them with a glossy sameness that reads as synthetic. The result is a subtle loss of desire: the subject is still visually impressive but emotionally distant, admired without being longed for.

    ***

    When I was in sixth grade, the most combustible argument on the playground wasn’t nuclear war or the morality of capitalism—it was Gilligan’s Island: Ginger or Mary Ann. Declaring your allegiance carried the same social risk as outing yourself politically today. Voices rose. Insults flew. Fists clenched. Friendships cracked. For the record, both women were flawless avatars of their type. Ginger was pure Hollywood excess—sequins, wigs, theatrical glamour, a walking studio backlot. Mary Ann was the counterspell: the sun-kissed farm girl with bare legs, natural hair, wide-eyed innocence, and a smile that suggested pie cooling on a windowsill. You couldn’t lose either way, but I gave my vote to Mary Ann. She wore less makeup, less artifice, one fewer strategically placed beauty mole. She looked touched by sunlight rather than a lighting rig. In retrospect, both women were almost too beautiful—beautiful enough to register as vaguely AI-like before AI existed. But Mary Ann was the less synthetic of the two, and that mattered. When beauty is over-engineered—buried under wigs, paint, and performance—it starts to feel algorithmic, glossy, emotionally inert. Mary Ann may have been cookie-cutter gorgeous, but she wasn’t laminated. And even back then, my pre-digital brain knew the rule: the less AI-like the beauty, the more irresistible it becomes.
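    For readers who want the machine-learning half of the metaphor made literal, here is a minimal sketch of what "trained too hard on a narrow dataset" means. All names and numbers below are illustrative, not from any real system: a high-degree polynomial (the "glossy" model) is polished until it passes through every training point perfectly, while a modest straight line generalizes better to data it has never seen.

```python
# Illustrative sketch of overfitting: polish a model hard enough on a
# narrow dataset and it stops generalizing -- the statistical analogue
# of cosmetic overfit. Names ("modest", "glossy") are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# A narrow "dataset": 8 noisy samples of a simple linear truth, y = 2x.
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(0, 0.1, 8)

# Fresh data the models never saw, drawn from the same underlying line.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test

def errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train, test

modest_train, modest_test = errors(1)  # a plain line: some training error
glossy_train, glossy_test = errors(7)  # degree 7 on 8 points: interpolates
                                       # every training point exactly

# The glossy model is flawless on its own narrow world (train MSE ~ 0),
# yet wobbles between the points it memorized, so it does worse than the
# modest line on data it never saw.
```

The glossy fit is the Ginger of models: every training point perfectly matched, and emotionally inert everywhere in between.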