Tag: technology

  • Dessert from the Department of Cybersecurity

    Yesterday I endured my college’s annual cybersecurity training program, a ritual as joyful as renewing your driver’s license at the DMV. The course came complete with a quiz—an “opportunity,” they called it—to demonstrate that I had absorbed the essential lesson of modern digital survival: pause before you click.

    The training was earnest, repetitive, and soaked in the bureaucratic optimism that a thirty-minute slideshow can transform ordinary humans into elite cyber-defense agents. The core commandment appeared again and again like scripture: use common sense and do not click suspicious emails.

    I completed the training, collected my imaginary gold star, and moved on with my day.

    The following morning the universe presented its practical exam.

    An email arrived addressed to everyone in my department. The subject line screamed with theatrical desperation: “Please! I need some assistance!” The sender was a student who had never taken my class, never spoken to me, and almost certainly had no idea who I was. Attached to the email were several transcripts, as if she had dumped a stack of paperwork onto the digital sidewalk.

    Her message contained a four-paragraph narrative describing the tragic injustice that had befallen her: she had not been admitted to the university of her dreams. She wanted me—a total stranger—to read the attachments and vouch for her qualifications. The request carried the confident tone of someone who had mistaken mass-emailing professors for a reasonable life strategy.

    My reaction was immediate and uncharitable. This was not a cry for help. This was hubris wearing sweatpants. The entire message radiated a level of absurd entitlement that made the delete key glow with moral clarity.

    So I deleted it.

    Later that day I was in the garage swinging kettlebells, grunting my way through a set, when a thought crept into my mind. What if this email had been the cybersecurity department’s final exam? Perhaps after forcing me through their mandatory training, they had decided to test whether I would actually apply the lesson.

    Pause before you click.

    Did I pass because I exercised common sense?

    Possibly.

    But if I’m honest, I passed because the email offended me. Its sheer stupidity triggered the one defensive system that never fails: irritation. Suspicion might falter. Curiosity might betray you. But righteous annoyance is a powerful cybersecurity tool.

    So thank you, Department of Cybersecurity. You were not content to burden me with a half-hour training session. You also sent along dessert.

    And I did exactly what you hoped I would do with it.

    I sent it back to the kitchen.

  • The G-Shock Frogman and the Bureaucratic State

    Over the past forty-eight hours, DHL has sent me approximately two dozen updates about my G-Shock Frogman GWF-1000. Each message arrives with the urgency of a geopolitical crisis, as if the watch were a sensitive diplomatic asset being escorted through a chain of unstable regimes.

    Update received.
    Status changed.
    Action required.

    At one point, a text informed me that I needed to verify my identity—name, address, confirmation that I am indeed the lawful civilian awaiting a rubber-strapped amphibious instrument. I complied immediately. Filled out the form. Submitted the data. Received confirmation.

    Case closed, I thought.

    Case not closed.

    The Frogman is now stranded in customs, apparently under suspicion of espionage, tariff evasion, or unauthorized aquatic activity.

    I contacted DHL customer service. A courteous representative informed me that my shipment would be “investigated” and that I should expect an email within a few hours. At this stage, I am waiting to learn what additional documentation, declaration, or ceremonial tribute will be required before the watch is released back into the general population.

    The order was placed eleven days ago through Sakura. I’ve purchased from them before without incident. This time, however, the experience feels less like shipping and more like applying for a mid-level government clearance. Whether the delay is caused by tariffs, enforcement changes, or the invisible hand of bureaucratic entropy, I cannot say.

    What I do know is that the process introduces a new emotional variable into overseas buying: friction. Not the minor inconvenience of delay, but the slow accumulation of uncertainty—the growing suspicion that any international purchase may evolve into a procedural endurance event.

    Buying a watch is supposed to generate anticipation.

    This generates vigilance.

    The promise of modern commerce is frictionless efficiency: click, ship, deliver. What I’m experiencing is its bureaucratic inverse. Identity verification. Clearance holds. Investigation windows. Status alerts arriving like play-by-play commentary from a logistics obstacle course.

    This isn’t tracking.

    This is surveillance—of my own anxiety.

    I appear to be suffering from Customs Suspense Syndrome: a condition in which a routine shipment becomes a serialized drama of ambiguity and delay. The buyer no longer follows a package; he refreshes a timeline the way a patient checks for lab results, searching for signs of life.

    Ordering a watch should not feel like running a gauntlet.

    Yet here we are.

    This is not frictionless commerce.

    This is American Gladiators: Customs Edition.

  • Obsolescence With Benefits: Life in the Age of Being Unnecessary

    Existential Redundancy is what happens when the world keeps running smoothly—and you slowly realize it no longer needs you to keep the lights on. It isn’t unemployment; it’s obsolescence with benefits. Machines cook your meals, manage your passwords, drive your car, curate your entertainment, and tuck you into nine hours of perfect algorithmic sleep. Your life becomes a spa run by robots: efficient, serene, and quietly humiliating. Comfort increases. Consequence disappears. You are no longer relied upon, consulted, or required—only serviced. Meaning thins because it has always depended on friction: being useful to someone, being necessary somewhere, being the weak link a system cannot afford to lose. Existential Redundancy names the soft panic that arrives when efficiency outruns belonging and you’re left staring at a world that works flawlessly without your fingerprints on anything.

    Picture the daily routine. A robot prepares pasta with basil hand-picked by a drone. Another cleans the dishes before you’ve even tasted dessert. An app shepherds you into perfect sleep. A driverless car ferries you through traffic like a padded cell on wheels. Screens bloom on every wall in the name of safety, insurance, and convenience, until privacy becomes a fond memory you half suspect you invented. You have time—oceans of it. But you are not a novelist or a painter or anyone whose passions demand heroic labor. You are intelligent, capable, modestly ambitious, and suddenly unnecessary. With every task outsourced and every risk eliminated, the old question—What do you do with your life?—mutates into something colder: Where do you belong in a system that no longer needs your hands, your judgment, or your effort?

    So humanity does what it always does when it feels adrift: it forms support groups. Digital circles bloom overnight—forums, wellness pods, existential check-ins—places to talk about the hollow feeling of being perfectly cared for and utterly unnecessary. But even here, the machines step in. AI moderates the sessions. Bots curate the pain. Algorithms schedule the grief and optimize the empathy. Your confession is summarized before it lands. Your despair is tagged, categorized, and gently rerouted toward a premium subscription tier. Therapy becomes another frictionless service—efficient, soothing, and devastating in its implication. You sought human connection to escape redundancy, and found yourself processed by the very systems that made you redundant in the first place. In the end, even your loneliness is automated, and the final insult arrives wrapped in flawless customer service: Thank you for sharing. Your feelings have been successfully handled.

  • Bezel Clicks and Sentence Cuts: On Watches, Writing, and the Discipline of Precision

    I am a connoisseur of fine timepieces. I notice the way a sunray dial catches light like a held breath, the authority of a bezel click that says someone cared. I’ve worn Tudor Black Bays and Omega Planet Oceans as loaners—the horological equivalent of renting a Maserati for a reckless weekend—exhilarating, loud with competence, impossible to forget. My own collection is high-end Seiko divers, watches that deliver lapidary excellence at half the tariff: fewer theatrics, just ruthless execution. Precision doesn’t need a luxury tax.

    That same appetite governs my reading. A tight, aphoristic paragraph can spike my pulse the way a Planet Ocean does on the wrist. I collect sentences the way others collect steel and sapphire. Wilde. Pascal. Kierkegaard. La Rochefoucauld. These writers practice compression as a moral discipline. A lapidary writer treats language like stone—cuts until only the hardest facet remains, then stops. Anything extra is vanity.

    I am not, however, a tourist. I have no patience for writers who mistake arch tone for insight, who wear cynicism like a designer jacket and call it wisdom. Aphorisms can curdle into poses. Style without penetration is just a shiny case housing a dead movement.

    This is why I’m unsentimental about AI. Left alone, language models are unruly factories—endless output, hollow shine, fluent nonsense by the ton. Slop with manners. But handled by someone with a lapidary sensibility, they can polish. They can refine. They can help a sentence find its edge. What they cannot do is teach taste.

    Taste precedes tools. Before you let a machine touch your prose, you must have lived with the masters long enough to feel the difference between a gem and its counterfeit. That discernment takes years. There is no shortcut. You become a jeweler by ruining stones, by learning what breaks and what holds.

    Lapidary sensibility is not impressed by abundance or fluency. It responds to compression, inevitability, and bite. It is bodily: a tightening of attention, a flicker of pleasure, the instant you know a sentence could not be otherwise. You don’t acquire it through mimicry or prompts. You acquire it through exposure, failure, and long intimacy with sentences that refuse to waste your time.

    Remember this, then: AI can assist only where judgment already exists. Without that baseline, you are not collaborating with a tool. You are feeding quarters into a very expensive Slop Machine.

  • AI as Tool, Toy, or Idol: A Taxonomy of Belief

    Your attitude toward AI machines is not primarily technical; it is theological—whether you admit it or not. Long before you form an opinion about prompts, models, or productivity gains, you have already decided what you believe about human nature, meaning, and salvation. That orientation quietly determines whether AI strikes you as a tool, a toy, or a temptation. There are three dominant postures.

    If you are a political-sapien, you believe history is the only stage that matters and justice is the closest thing we have to salvation. There is no eternal kingdom waiting in the wings; this world is the whole play, and it must be repaired with human hands. Divine law holds no authority here—only reason, negotiation, and evolving ethical frameworks shaped by shared notions of fairness. Humans, you believe, are essentially good if the scaffolding is sound. Build the right systems and decency will follow. Politics is not mere governance; it is moral engineering. AI machines, from this view, are tools on probation. If they democratize power, flatten hierarchies, and distribute wealth more equitably, they are allies. If they concentrate power, automate inequality, or deepen asymmetry, they are villains in need of constraint or dismantling.

    If you are a hedonist-sapien, you turn away from society’s moral drama and toward the sovereign self. The highest goods are pleasure, freedom, and self-actualization. Politics is background noise; transcendence is unnecessary. Life is about feeling good, living well, and removing friction wherever possible. AI machines arrive not as a problem but as a gift—tools that streamline consumption, curate taste, and optimize comfort. They promise a smoother, more luxurious life with fewer obstacles and more options. Of the three orientations, the hedonist-sapien embraces AI with the least hesitation and the widest grin, welcoming it as the ultimate personal assistant in the lifelong project of maximizing pleasure and minimizing inconvenience.

    If you are a devotional-sapien, you begin with a darker diagnosis. Humanity is fallen, and no amount of policy reform, pleasure, or purchasing power can make it whole. You don’t expect salvation from governments, markets, or optimization schemes; you expect it only from your Maker. You may share the political-sapien’s concern for justice and enjoy the hedonist-sapien’s creature comforts, but you refuse to confuse either with redemption. You are not shopping for happiness; you are seeking restoration. Spiritual health—not efficiency—is the measure that matters. From this vantage, AI machines look less like neutral tools and more like idols-in-training: shiny substitutes promising mastery, insight, or transcendence without repentance or grace. Unsurprisingly, the devotional-sapien is the most skeptical of AI’s expanding role in human life.

    Because your orientation shapes what you think humans need most—justice, pleasure, or redemption—it also shapes how you use AI, how much you trust it, and what you expect it to deliver. Before asking what AI can do for you, it is worth asking a more dangerous question: what are you secretly hoping it will save you from?

  • What Cochinita Pibil Can Teach Us About Learning

    Academic Friction is the intentional reintroduction of difficulty, resistance, and human presence into the learning process as a corrective to academic nihilism. Academic friction rejects the premise that education should be frictionless, efficient, or fully mediated by machines, insisting instead that intellectual growth requires struggle, solitude, and sustained attention. It is created through practices that cannot be outsourced or automated—live writing, oral presentations, performance, slow reading, and protected time for thought—forcing students to confront ideas without the buffer of AI assistance. Far from being punitive, academic friction restores agency, rebuilds cognitive stamina, and reawakens curiosity by making learning consequential again. It treats difficulty not as an obstacle to be removed, but as the very medium through which thinking, meaning, and human development occur.

    Greatness is born from resistance. Depth is what happens when something pushes back. Friction is not an obstacle to meaning; it is the mechanism that creates it. Strip friction away and you don’t get excellence—you get efficiency, speed, and a thin satisfaction that evaporates on contact. This is as true in food as it is in thinking.

    Consider cochinita pibil, a dish that seems to exist for the sole purpose of proving that greatness takes time. Nothing about it is casual. Pork shoulder is marinated overnight in achiote paste, bitter orange juice, garlic, cumin, oregano—an aggressive, staining bath that announces its intentions early. The meat doesn’t just absorb flavor; it surrenders to it. Traditionally, it is wrapped in banana leaves, sealed like contraband, and buried underground in a pit oven. Heat rises slowly. Smoke seeps inward. Hours pass. The pork breaks down molecule by molecule, fibers loosening until resistance gives way to tenderness. This is not cooking as convenience; it is cooking as ordeal. The reward is depth—meat so saturated with flavor it feels ancient, ceremonial, earned.

    Now here’s the confession: as much as I love food, I love convenience more. And convenience is just another word for frictionless. I will eat oatmeal three times a day without hesitation. Not because oatmeal is great, but because it is obedient. It asks nothing of me. Pour, stir, microwave, done. Oatmeal does not resist. It does not demand patience, preparation, or attention. It delivers calories with monk-like efficiency. It is fuel masquerading as a meal, and I choose it precisely because it costs me nothing.

    The life of the intellect follows the same fork in the road. There is the path of cochinita pibil and the path of oatmeal. One requires slow reading, sustained writing, confusion, revision, and the willingness to sit with discomfort until something breaks open. The other offers summaries, shortcuts, prompts, and frictionless fluency—thought calories without intellectual nutrition. Both will keep you alive. Only one will change you.

    The tragedy of our moment is not that people prefer oatmeal. It’s that we’ve begun calling it cuisine. We’ve mistaken smoothness for insight and speed for intelligence. Real thinking, like real cooking, is messy, time-consuming, and occasionally exhausting. It stains the counter. It leaves you unsure whether it will be worth it until it is. But when it works, it produces something dense, resonant, and unforgettable.

    Cochinita pibil does not apologize for the effort it requires. Neither should serious thought. If we want depth, we have to accept friction. Otherwise, we’ll live well-fed on oatmeal—efficient, unchallenged, and never quite transformed.

  • How Cheating with AI Accidentally Taught You How to Write

    Accidental Literacy is what happens when you try to sneak past learning with a large language model and trip directly into it face-first. You fire up the machine hoping for a clean escape—no thinking, no struggling, no soul-searching—only to discover that the output is a beige avalanche of competence-adjacent prose that now requires you to evaluate it, fix it, tone it down, fact-check it, and coax it into sounding like it was written by a person with a pulse. Congratulations: in attempting to outsource your brain, you have activated it. System-gaming mutates into a surprise apprenticeship. Literacy arrives not as a noble quest but as a penalty box—earned through irritation, judgment calls, and the dawning realization that the machine cannot decide what matters, what sounds human, or what won’t embarrass you in front of an actual reader. Accidental literacy doesn’t absolve cheating; it mocks it by proving that even your shortcuts demand work.

    If you insist on using an LLM for speed, there is a smart way and a profoundly dumb way. The smart way is to write the first draft yourself—ugly, human, imperfect—and then let the machine edit, polish, and reorganize after the thinking is done. The dumb way is to dump a prompt into the algorithm and accept the resulting slurry of AI slop, then spend twice as long performing emergency surgery on sentences that have no spine. Editing machine sludge is far more exhausting than editing your own draft, because you’re not just fixing prose—you’re reverse-engineering intention. Either way, literacy sneaks in through the back door, but the human-first method is faster, cleaner, and far less humiliating. The machine can buff the car; it cannot build the engine. Anyone who believes otherwise is just outsourcing frustration at scale.

  • Everyone in Education Wants Authenticity–Just Not for Themselves

    Reciprocal Authenticity Deadlock names the breakdown of trust that occurs when students and instructors simultaneously demand human originality, effort, and intellectual presence from one another while privately relying on AI to perform that very labor for themselves. In this condition, authenticity becomes a weapon rather than a value: students resent instructors whose materials feel AI-polished and hollow, while instructors distrust students whose work appears frictionless and synthetic. Each side believes the other is cheating the educational contract, even as both quietly violate it. The result is not merely hypocrisy but a structural impasse in which sincerity is expected but not modeled, and education collapses into mutual surveillance—less a shared pursuit of understanding than a standoff over who is still doing the “real work.”

    ***

    If you are a college student today, you are standing in the middle of an undeclared war over AI, with no neutral ground and no clean rules of engagement. Your classmates are using AI in wildly different ways: some are gaming the system with surgical efficiency, some are quietly hollowing out their own education, and others are treating it like a boot camp for future CEOhood. From your desk, you can see every outcome at once. And then there’s the other surprise—your instructors. A growing number of them are now producing course materials that carry the unmistakable scent of machine polish: prose that is smooth but bloodless, competent but lifeless, stuffed with clichés and drained of voice. Students are taking to Rate My Professors to lodge the very same complaints teachers have hurled at student essays for years. The irony is exquisite. The tables haven’t just turned; they’ve flipped.

    What emerges is a slow-motion authenticity crisis. Teachers worry that AI will dilute student learning into something pre-chewed and nutrient-poor, while students worry that their education is being outsourced to the same machines. In the worst version of this standoff, each side wants authenticity only from the other. Students demand human presence, originality, and intellectual risk from their professors—while reserving the right to use AI for speed and convenience. Professors, meanwhile, embrace AI as a labor-saving miracle for themselves while insisting that students do the “real work” the hard way. Both camps believe they are acting reasonably. Both are convinced the other is cutting corners. The result is not collaboration but a deadlock: a classroom defined less by learning than by a mutual suspicion over who is still doing the work that education is supposed to require.

  • The Seductive Assistant

    Auxiliary Cognition describes the deliberate use of artificial intelligence as a secondary cognitive system that absorbs routine mental labor—drafting, summarizing, organizing, rephrasing, and managing tone—so that the human mind can conserve energy for judgment, creativity, and higher-order thinking. In this arrangement, the machine does not replace thought but scaffolds it, functioning like an external assistant that carries cognitive weight without claiming authorship or authority. At its best, auxiliary cognition restores focus, reduces fatigue, and enables sustained intellectual work that might otherwise be avoided. At its worst, when used uncritically or excessively, it risks dulling the very capacities it is meant to protect, quietly shifting from support to substitution.

    ***

    Yale creative writing professor Meghan O’Rourke approaches ChatGPT the way a sober adult approaches a suspicious cocktail: curious, cautious, and alert to the hangover. In her essay “I Teach Creative Writing. This Is What A.I. Is Doing to Students,” she doesn’t offer a manifesto so much as a field report. Her conversations with the machine, she writes, revealed a “seductive cocktail of affirmation, perceptiveness, solicitousness, and duplicity”—a phrase that lands like a raised eyebrow. Sometimes the model hallucinated with confidence; sometimes it surprised her with competence. A few of its outputs were polished enough to pass as “strong undergraduate work,” which is both impressive and unsettling, depending on whether you’re grading or paying tuition.

    What truly startled O’Rourke, however, wasn’t the quality of the prose but the way the machine quietly lifted weight from her mind. Because she lives with the long-term effects of Lyme disease and Covid, her energy is a finite resource, and AI nudged her toward tasks she might otherwise postpone. It conserved her strength for what actually mattered: judgment, creativity, and “higher-order thinking.” More than a glorified spell-checker, the system proved tireless and oddly soothing, a calm presence willing to draft, rephrase, and organize without complaint. When she described this relief to a colleague, he joked that she was having an affair with ChatGPT. The joke stuck because it carried a grain of truth. “Without intending it,” she admits, the machine became a partner in shouldering the invisible mental load that so many women professors and mothers carry. Freed from some of that drain, she found herself kinder, more patient, even gentler in her emails.

    What lingers after reading O’Rourke isn’t naïveté but honesty. In academia, we are flooded with essays cataloging AI’s classroom chaos, and rightly so—I live in that turbulence myself. But an exclusive fixation on disaster obscures a quieter fact she names without flinching: used carefully, AI can reduce cognitive load and return time and energy to the work and “higher-order thinking” that actually requires a human mind. The challenge ahead isn’t to banish the machine or worship it, but to put a bridle on it—to insist that it serve rather than steer. O’Rourke’s essay doesn’t promise salvation, but it does offer a shaft of light in a dim tunnel: a reminder that if we use these tools deliberately, we might reclaim something precious—attention, stamina, and the capacity to think deeply again.

  • Why I Clean Before the Cleaners

    Preparatory Leverage is the principle that the effectiveness of any assistant—human or machine—is determined by the depth, clarity, and intentionality of the work done before assistance is invited. Rather than replacing effort, preparation multiplies its impact: well-structured ideas, articulated goals, and thoughtful constraints give collaborators something real to work with. In the context of AI, preparatory leverage preserves authorship by ensuring that insight originates with the human and that the machine functions as an amplifier, not a substitute. When preparation is absent, assistance collapses into superficiality; when preparation is rigorous, assistance becomes transformative.

    ***

    This may sound backward—or mildly unhinged—but for the past twenty years I’ve cleaned my house before the cleaners arrive. Every two weeks, before Maria and Lupe ring the bell, I’m already at work: clearing counters, freeing floors, taming piles of domestic entropy. The logic is simple. The more order I impose before they show up, the better they can do what they do best. They aren’t there to decipher my chaos; they’re there to perfect what’s already been prepared. The result is not incremental improvement but multiplication. The house ends up three times cleaner than it would if I had handed them a battlefield and wished them luck.

    I treat large language models the same way. I don’t dump half-formed thoughts into the machine and hope for alchemy. I prep. I think. I shape the argument. I clarify the stakes. When I give an LLM something dense and intentional to work with, it can elevate the prose—sharpen the rhetoric, adjust tone, reframe purpose. But when I skip that work, the output is a limp disappointment, the literary equivalent of a wiped-down countertop surrounded by cluttered floors. Through trial and error, I’ve learned the rule: AI doesn’t rescue lazy thinking; it amplifies whatever you bring to the table. If you bring depth, it gives you polish. If you bring chaos, it gives you noise.