Category: technology

  • Bottom-Trawling and Other Sins That Ruin My Appetite

    Watching a David Attenborough documentary feels less like casual viewing and more like sliding into the pew for the Church of Planet Earth. The man’s diction alone could resurrect the dead—each syllable polished, each pause wielded like a scalpel—while he preaches an all-natural gospel: paradise isn’t some vaporous hereafter; it’s right here, pulsing under our sneakers. And we, the congregation of carbon footprints, are the sinners. We bulldoze forests, mainline fossil fuels, and still have the gall to call ourselves stewards. His sermons don’t merely entertain; they indict. Ten minutes in and I’m itching to mulch my own receipts and swear off cheeseburgers for life.

    I’ve basked in Attenborough’s velvet reprimands for decades, often drifting into a blissful half-sleep as he murmurs about the “delicate balance of nature” and the tender devotion of a mother panda—as soothing as chamomile tea and twice as guilt-inducing. His newest homily, Ocean on Hulu, finds the maestro wide-eyed as ever, a silver-haired Burl Ives guiding us through Rudolph’s wilderness—only this time the Abominable Snowman is industrial bottom trawling. Picture a gargantuan steel mouth dragging across the seabed, gulping everything in its path. Rays flutter, fish scatter, and then—slam—the net’s iron curtain drops. Most of the hapless catch is unceremoniously dumped, lifeless, back into the brine.

    The footage left me queasy, a queasiness only partly soothed by Attenborough’s grandfatherly timbre. I’ve already been flirting with a plant-forward diet; Ocean shoved me into a full-blown breakup with seafood. Good luck unseeing hundreds of doomed creatures funneled into a floating abattoir while an octogenarian sage explains—as gently as one can—that we’re devouring our own Eden.

    So yes, I’ll skip the shrimp cocktail, thanks. My conscience already has acid reflux.

  • Gods of Code: Tech Lords and the End of Free Will (College Essay Prompt)

    In the HBO Max film Mountainhead and the Black Mirror episode “Joan Is Awful,” viewers are plunged into unnerving dystopias shaped not by evil governments or alien invasions, but by tech corporations whose influence surpasses state power and whose tools penetrate the most intimate corners of human consciousness.

    Both works dramatize a chilling premise: that the very notion of an autonomous self is under siege. We are not simply consumers of technology but the raw material it digests, distorts, and reprocesses. In these narratives, the protagonists find their sense of self unraveled, their identities replicated, manipulated, and ultimately owned by forces they cannot control. Whether through digital doppelgängers, surveillance entertainment, or techno-induced psychosis, these stories illustrate the terrifying consequences of surrendering power to those who build technologies faster than they can understand or ethically manage them.

    In this essay, write a 1,700-word argumentative exposition responding to the following claim:

    In the age of runaway innovation, where the ambitions of tech elites override democratic values and psychological safeguards, the very concept of free will, informed consent, and the autonomous self is collapsing under the weight of its digital imitation.

    Use Mountainhead and “Joan Is Awful” as your core texts. Analyze how each story addresses the themes of free will, consent, identity, and power. You are encouraged to engage with outside sources—philosophical, journalistic, or theoretical—that help you interrogate these themes in a broader context.

    Consider addressing:

    • The illusion of choice and algorithmic determinism
    • The commodification of human identity
    • The satire of corporate terms of service and performative consent
    • The psychological toll of being digitally duplicated or manipulated
    • Whether technological “progress” is outpacing moral development

    Your argument should include a strong thesis, counterargument with rebuttal, and close textual analysis that connects narrative detail to broader social and philosophical stakes.


    Five Sample Thesis Statements with Mapping Components


    1. The Death of the Autonomous Self

    In Mountainhead and “Joan Is Awful,” the protagonists’ loss of agency illustrates how modern tech empires undermine the very concept of selfhood by reducing human experience to data, delegitimizing consent through obfuscation, and accelerating psychological collapse under the guise of innovation.

    Mapping:

    • Reduction of human identity to data
    • Meaningless or manipulated consent
    • Psychological consequences of tech-induced identity collapse

    2. Mock Consent in the Age of Surveillance Entertainment

    Both narratives expose how user agreements and passive digital participation mask deeply coercive systems, revealing that what tech companies call “consent” is actually a legalized form of manipulation, moral abdication, and commercial exploitation.

    Mapping:

    • Consent as coercion disguised in legal language
    • Moral abdication by tech designers and executives
    • Profiteering through exploitation of personal identity

    3. From Users to Subjects: Tech’s New Authoritarianism

    Mountainhead and “Joan Is Awful” warn that the unchecked ambitions of tech elites have birthed a new form of soft authoritarianism—where control is exerted not through force but through omnipresent surveillance, AI-driven personalization, and identity theft masquerading as entertainment.

    Mapping:

    • Tech ambition and loss of oversight
    • Surveillance and algorithmic control
    • Identity theft as entertainment and profit

    4. The Algorithm as God: Tech’s Unholy Ascendancy

    These works portray the tech elite as digital deities who reprogram reality without ethical limits, revealing a cultural shift where the algorithm—not the soul, society, or state—determines who we are, what we do, and what versions of ourselves are publicly consumed.

    Mapping:

    • Tech elites as godlike figures
    • Algorithmic reality creation
    • Destruction of authentic identity in favor of profitable versions

    5. Selfhood on Lease: How Tech Undermines Freedom and Flourishing

    The protagonists’ descent into confusion and submission in both Mountainhead and “Joan Is Awful” shows that freedom and personal flourishing are now contingent upon platforms and policies controlled by distant tech overlords, whose tools amplify harm faster than they can prevent it.

    Mapping:

    • Psychological dependency on digital platforms
    • Collapse of personal flourishing under tech influence
    • Lack of accountability from the tech elite

    Sample Outline


    I. Introduction

    • Hook: A vivid description of Joan discovering her life has become a streamable show, or the protagonist in Mountainhead questioning his own sanity.
    • Context: Rise of tech empires and their control over identity and consent.
    • Thesis: (Insert selected thesis statement)

    II. The Disintegration of the Self

    • Analyze how Joan and the Mountainhead protagonist experience a crisis of identity.
    • Discuss digital duplication, surveillance, and manipulated perception.
    • Use scenes to show how each story fractures the idea of an integrated, autonomous self.

    III. Consent as a Performance, Not a Principle

    • Explore how both stories critique the illusion of informed consent in the tech age.
    • Examine the use of user agreements, surveillance participation, and passive digital exposure.
    • Link to real-world examples (terms of service, data collection, facial recognition use).

    IV. Tech Elites as Unaccountable Gods

    • Compare the figures or systems in charge—Streamberry in “Joan Is Awful,” the nebulous forces in Mountainhead.
    • Analyze how the lack of ethical oversight allows systems to spiral toward harm.
    • Use real-world examples like social media algorithms and AI misuse.

    V. Counterargument and Rebuttal

    • Counterargument: Technology isn’t inherently evil—it’s how we use it.
    • Rebuttal: These works argue that the current infrastructure privileges power, speed, and profit over reflection, ethics, or restraint—and humans are no longer the ones in control.

    VI. Conclusion

    • Restate thesis with higher stakes.
    • Reflect on what these narratives ask us to consider about our current digital lives.
    • Pose an open-ended question: Can we build a future where tech enhances human agency instead of annihilating it?

  • Trapped in the AI Age’s Metaphysical Tug-of-War

    I’m typing this to the sound of Beethoven—1,868 MP3s of compressed genius streamed through the algorithmic convenience of a playlist. It’s a 41-hour-and-8-minute monument to compromise: a simulacrum of sonic excellence that can’t hold a candle to the warmth of an LP. But convenience wins. Always.

    I make Faustian bargains like this daily. Thirty-minute meals instead of slow-cooked transcendence. Athleisure instead of tailoring. A Honda instead of high horsepower. The good-enough over the sublime. Not because I’m lazy—because I’m functional. Efficient. Optimized.

    And now, writing.

    For a year, my students and I have been feeding prompts into ChatGPT like a pagan tribe tossing goats into the volcano—hoping for inspiration, maybe salvation. Sometimes it works. The AI outlines, brainstorms, even polishes. But the more we rely on it, the more I feel the need to write without it—just to remember what my own voice sounds like. Just as the vinyl snob craves the imperfections of real analog music or the home cook insists on peeling garlic by hand, I need to suffer through the process.

    We’re caught in a metaphysical tug-of-war. We crave convenience but revere authenticity. We binge AI-generated sludge by day, then go weep over a hand-made pie crust YouTube video at night. We want our lives frictionless, but our souls textured. It’s the new sacred vs. profane: What do we reserve for real, and what do we surrender to the machine?

    I can’t say where this goes. Maybe real food will be phased out, like Blockbuster or bookstores. Maybe we’ll subsist on GLP-1 drugs, AI-tailored nutrient paste, and the joyless certainty of perfect lab metrics.

    As for entertainment, I’m marginally more hopeful. Chris Rock, Sarah Silverman—these are voices, not products. AI can churn out sitcoms, but it can’t bleed. It can’t bomb. It can’t riff on childhood trauma with perfect timing. Humans know the difference between a story and a story-shaped thing.

    Still, writing is in trouble. Reading, too. AI erodes attention spans like waves on sandstone. Books? Optional. Original thought? Delegated. The more AI floods the language, the more we’ll acclimate to its sterile rhythm. And the more we acclimate, the less we’ll even remember what a real voice sounds like.

    Yes, there will always be the artisan holdouts—those who cook, write, read, and listen with intention. But they’ll be outliers. A boutique species. The rest of us will be lean, medicated, managed. Data-optimized units of productivity.

    And yet, there will be stories. There will always be stories. Because stories aren’t just culture—they’re our survival instinct dressed up as entertainment. When everything else is outsourced, commodified, and flattened, we’ll still need someone to stand up and tell us who we are.

  • The Death of Dinner: How AI Could Replace Pleasure Eating with Beige, Compliant Goo

    Savor that croissant while you still can—flaky, buttery, criminally indulgent. In a few decades, it’ll be contraband nostalgia, recounted in hushed tones by grandparents who once lived in a time when bread still had a soul and cheese wasn’t “shelf-stable.” Because AI is coming for your taste buds, and it’s not bringing hot sauce.

    We are entering the era of algorithm-approved alimentation—a techno-utopia where food isn’t eaten, it’s administered. Where meals are no longer social rituals or sensory joys but compliance events optimized for satiety curves and glucose response. Your plate is now a spreadsheet, and your fork is a biometric reporting device.

    Already, AI nutrition platforms like Noom, Lumen, and MyFitnessPal’s AI-diet overlords are serving up daily menus based on your gut flora’s mood and whether your insulin levels are feeling emotionally regulated. These platforms don’t ask what you’re craving—they tell you what your metrics will tolerate. Dinner is no longer about joy; it’s about hitting your macros and earning a dopamine pellet for obedience.

    Tech elites have already evacuated the dinner table. For them, food is just software for the stomach. Soylent, Huel, Ka’chava—these aren’t meals, they’re edible flowcharts. Designed not for delight but for efficiency, these drinkable spreadsheets are powdered proof that the future of food is just enough taste to make you swallow.

    And let’s not forget Ozempic and its GLP-1 cousins—the hormonal muzzle for hunger. Pair that with AI wearables whispering sweet nothings like “Time for your lentil paste” and you’ve got a whole generation learning that wanting flavor is a failure of character. Forget foie gras. It’s psy-ops via quinoa gel.

    Even your grocery cart is under surveillance. AI shopping assistants—already lurking in apps like Instacart—will gently steer you away from handmade pasta and toward fermented fiber bars and shelf-stable cheese-like products. Got a hankering for camembert? Sorry, your AI gut-coach has flagged it as non-compliant dairy-based frivolity. Enjoy your pea-protein puck, peasant.

    Soon, your lunch break won’t be lunch or a break. It’ll be a Pomodoro-synced ingestion window in which you sip an AI-formulated mushroom slurry while doom-scrolling synthetic influencers on GLP-1. Your food won’t comfort you—it will stabilize you, and that’s the most terrifying part. Three times a day, you’ll sip the same beige sludge of cricket protein, nootropic fibers, and psychoactive stabilizers, each meal a contract with the status quo: You will feel nothing, and you will comply.

    And if you’re lucky enough to live in an AI-UBI future, don’t expect dinner to be celebratory. Expect it to be regulated, subsidized, and flavor-neutral. Your government food credits won’t cover artisan cheddar or small-batch bread. Instead, your AI grocery budget assistant will chirp:

    “This selection exceeds your optimal cost-to-nutrient ratio. May I suggest oat crisps and processed cheese spread, at 50% less cost and 300% more compliance?”

    Even without work, you won’t have the freedom to indulge. Your wearable will monitor your blood sugar, cholesterol, and moral fiber. Have a rogue bite of truffle mac & cheese? That spike in glucose just docked you two points from your UBI wellness score:

    “Indulgent eating may affect eligibility for enhanced wellness bonuses. Consider lentil loaf next time, citizen.”

    Eventually, pleasure eating becomes a class marker, like opera tickets or handwritten letters. Rich eccentrics will dine on duck confit in secrecy while the rest of us drink our AI-approved nutrient slurry in 600-calorie increments at 13:05 sharp. Flavor becomes a crime of privilege.

    The final insult? Your children won’t even miss it. They’ll grow up thinking “food joy” is a myth—like cursive writing or butter. They’ll hear stories of crusty baguettes and sizzling fat the way Boomers talk about jazz clubs and cigarettes. Romantic, but reckless.

    In this optimized hellscape, eating is no longer an art. It’s a biometric negotiation between your body and a neural net that no longer trusts you to feed yourself responsibly.

    The future of food is functional. Beige. Pre-chewed by code. And flavor? That’s just a bug in the system.

  • How Headphones Made Me Emotionally Unavailable in High-Resolution Audio

    After flying to Miami recently, I finally understood the full appeal of noise-canceling headphones—not just for travel, but for the everyday, ambient escape act they offer my college students. Several claim, straight-faced, that they “hear the lecture better” while playing ASMR in their headphones because it soothes their anxiety and makes them better listeners. Is this neurological wizardry? Or performance art? I’m not sure. But apocryphal or not, the explanation has stuck with me.

    It made me see the modern, high-grade headphone as something far more than a listening device. It’s a sanctuary, or, to use the modern euphemism, an aural safe space in a chaotic world. You may not have millions to seal yourself in a hyperbaric oxygen pod inside a luxury doomsday bunker carved into the Montana granite during World War Z, but if you’ve got $500 and a credit score above sea level, you can disappear in style—into a pair of Sony XM6s or Audio-Technica ATH-R70x headphones.

    The headphone, in this context, is not just gear—it’s armor. Whether cocobolo wood or carbon fiber, it communicates something quietly radical: “I have opted out.”

    You’re not rejecting the world with malice—you’re simply letting it know that you’ve found something better. Something more reliable. Something calibrated to your nervous system. In fact, you’ve severed communication so politely that all they hear is the faint thump of curated escapism pulsing through your earpads.

    For my students, these headphones are not fashion statements—they’re boundary-drawing devices. The outside world is a cacophony of Canvas announcements, attention fatigue, and algorithmically optimized despair. Inside the headphones? Rain sounds. Lo-fi beats from a YouTube loop titled “study with me until the world ends.” Maybe even a softly muttering AI voice telling them they are enough.

    It doesn’t matter whether it’s true. It matters that it works.

    And here’s the deeper point: the headphone isn’t just a sanctuary. It’s a non-accountability device. You can’t be blamed for ghosting a group chat or zoning out during a team huddle when you’re visibly plugged into something more profound. You’re no longer rude—you’re occupied. Your silence is now technically sound.

    In a hyper-networked world that expects your every moment to be a node of productivity or empathy, the headphone is the last affordable luxury that buys you solitude without apology. You don’t need a manifesto. You just need active noise-canceling and a decent DAC.

    You’re not ignoring anyone. You’ve just entered your own monastery of midrange clarity, bass-forward detachment, and spatially engineered peace.

    And if someone wants your attention?

    Tell them to knock louder. You’re in sanctuary.

  • Siri at 30,000 Feet: Watch Reviews from the Android Abyss

    I’ve recently fallen into a strange corner of YouTube, where watch reviews by non-English speakers are automatically dubbed into English by an AI translator. The result? A surreal auditory hallucination that sounds like Siri moonlighting as a flight attendant. Every video becomes a low-budget dream sequence: a monotone voice calmly explaining bezel alignment while I mentally brace for instructions on how to locate the nearest flotation device.

    These AI-dubbed reviews don’t just kill the vibe—they exterminate it. What might have been a charming deep dive into dial texture or lug curvature turns into a bureaucratic fever dream. I’m not learning about watches. I’m trapped in a dystopian airline safety video, narrated by an android who sounds like he’s instructing me on what to do in the event of a sudden loss of cabin pressure.

    The silver lining? These videos are the perfect antidote to impulsive spending. No matter how alluring the lume or limited the edition, the second I hear that synthetic drone utter, in its flat robot voice, a strange new word like “sapphireklysteelcasebackwithantimagneticresistance,” my urge to buy evaporates. The watch becomes a prop in an uncanny AI daymare—and I, mercifully, return to reality with my wallet intact.

  • You don’t need Soma. You’ve got ChatNumb

    In Brave New World, Aldous Huxley introduced Soma—a state-sanctioned sedative that numbed the masses into docile contentment. It didn’t solve problems; it dissolved them. Conflict, anxiety, existential dread—gone in a puff of pharmaceutical fog. Soma didn’t spark joy; it scrubbed discomfort. It was emotional Febreze for a society allergic to depth. People weren’t living—they were coasting on a chemically induced flatline, their critical faculties dulled to the point of extinction. No questions. No friction. No soul.

    Well, good news. Soma’s here—but it’s not in a pill bottle. It’s in your search bar, your chatbot, your synthetic co-pilot. It’s called AI. And unlike Huxley’s version, this one doesn’t need to sedate you. You do it to yourself. Every time you delegate thought, judgment, or original insight to an algorithm, your self-reliance shrinks like a muscle in a cast. The fewer mental reps you do, the more comfortable you get in the warm bath of synthetic cognition. The mind adapts. It flattens. You feel “efficient,” “optimized,” “smart”—but the uncomfortable truth is, you’re just well-lubricated for obedience.

    You don’t need Soma. You’ve got ChatNumb: the condition that sets in after tens of thousands of reps with your favorite AI assistant. The symptoms? A faux sense of competence, lizard-eyed placidity, and a vague suspicion that you’ve stopped thinking altogether. It’s not that you’ve been silenced. You’ve been auto-filled.

    The Age of Soma isn’t coming. It’s here. And we welcomed it with open thumbs. God help us all.

  • The future, we’re told, is full of freedom—unless you’re the one still cleaning the mess.

    Last semester, in my college critical thinking class—a room full of bright minds and burnt-out spirits—we were dissecting what feels like a nationwide breakdown in mental health. Students tossed around possible suspects like a crime scene lineup: the psychological hangover of the pandemic, TikTok influencers glamorizing nervous breakdowns with pastel filters and soft piano music, the psychic toll of watching America split like a wishbone down party lines. All plausible. All depressing.

    Then a re-entry student—a nurse with twenty years in the trenches—raised her hand and calmly dropped a depth charge into the conversation. She said she sees more patients than ever staggering into hospitals not just sick, but shattered. Demoralized. Enraged. When I asked her what she thought was behind the surge in mental illness, she didn’t hesitate. “Money,” she said. “No one has any. They’re working themselves into the ground and still can’t cover rent, groceries, and medical bills. They’re burning out and breaking down.”

    And just like that, all our theories—algorithms, influencers, red-vs-blue blood feuds—melted under the furnace heat of economic despair. She was right. She sees the raw pain daily, the kind of pain tech billionaires will never upload into a TED Talk. While they spin futuristic fables about AI liberating humanity for leisure and creativity, my nurse watches the working class crawl into urgent care with nothing left but rage and debt. The promise of Universal Basic Income sounds charming if you’re already lounging in a beanbag chair at Singularity HQ, but out here in the world of late rent and grocery inflation, it’s a pipe dream sold by people who wouldn’t recognize a shift worker if one collapsed on their marble floors. The future, we’re told, is full of freedom—unless you’re the one still cleaning the mess.

  • WordPress: My Kettlebell Gym of the Mind

    I launched my WordPress blog on March 12, evicting myself from Typepad after it was sold to a company that treats blogs the way landlords treat rent-controlled tenants: with bored disdain. Typepad became a ghost town in a bad neighborhood, so I packed up and moved to the gated community of WordPress—cleaner streets, better lighting, and fewer trolls.

    For the past ten weeks, I’ve treated WordPress like a public journal—a digital sweat lodge where I sweat out my thoughts, confessions, and pedagogical war stories from the frontlines of college teaching. I like the routine, the scaffolding, and the habits of self-control. Blogging gives me something I never got from social media or committee meetings: a sense of order in a culture that’s spun off its axis.

    But let’s not kid ourselves. WordPress isn’t some utopian agora where meaningful discourse flourishes in the shade of civility. It’s still wired into the dopamine economy. The minute I start checking likes, follows, and view counts, I’m no longer a writer—I’m a lab rat pressing the pellet button. Metrics are the new morality. And brother, I’m not immune.

    Case in point: I can craft a thoughtful post, click “Publish,” and watch it sink into the abyss like a message in a bottle tossed into a septic tank. One view. Maybe. Post the same thing on Reddit, and suddenly I’m performing for an arena full of dopamine-addled gladiators. They’ll upvote, sure—but only after the professional insulters have had their turn at bat. Reddit is where clever sociopaths go to sharpen their knives and call it discourse.

    WordPress, by contrast, is a chill café with decent lighting and no one live-tweeting your every existential sigh. It’s a refuge from the snarling hordes of hot-take hustlers and ideological bloodsport. A place where I can escape not only digital toxicity, but the wider derangement of our post-shame, post-truth society—where influencers and elected officials are often the same con artist in two different blazers.

    Instead of doomscrolling or screaming into the algorithmic void, I’ve taken to reading biographies—public intellectuals, athletes who aged with dignity, tech pioneers who are obsessed with taking over the world. Or I’ll go spelunking into gadget rabbit holes to distract myself from the spiritual hangover that comes from living in a country where charisma triumphs over character and truth is whatever sells ad space.

    In therapy-speak, my job on WordPress is to “use the tools,” as Phil Stutz says: to strengthen my relationship with myself, with others, and with the crumbling world around me. It’s a discipline, not a dopamine drip. Writing here won’t make me famous, won’t make me rich, and sure as hell won’t turn me into some cardigan-clad oracle for the digital age.

    What it will do is give me structure. WordPress is where I wrestle with my thoughts the way I wrestle kettlebells in my garage: imperfectly, regularly, and with just enough sweat to keep the madness at bay.

  • 5 Ways We Can Get Addicted to AI Writing Platforms

    I’ve tried to stay current with the way technology is affecting my college writing classes. I dipped into the pool of AI writing platforms like ChatGPT, and after 16 months or so, I can say the program has gotten the best of me on many occasions and forced me to step back and look at its power to trap us. These platforms are addictive for five reasons.

    One. AI polishes and strengthens your prose in flattering ways that can give you false confidence, even as it turns wordy and obscures the clarity of your original draft. I call this false confidence “writer’s dysmorphia,” the idea that AI gives your prose a “muscle-flex” you can’t muster without it.

    Two. Another cause of addiction is the way we anthropomorphize AI, giving it a pet name and developing a fake relationship with it, a relationship that exists only in our heads yet can suffocate us as AI insidiously creeps into our brains.

    Three. As we develop this “relationship” with AI and become grateful for its services, we feel like we owe it our attention. In this regard, it becomes the abusive spouse who wants to be addressed and to remain relevant in our lives. 

    Four. Our addiction grows as we lose confidence in our non-AI writing and, in turn, our non-AI self. We constantly want to adorn ourselves with AI’s ability to razzle-dazzle.

    Five. Over time, as we outsource more and more work to AI, we grow lazier and suffer Brain Atrophy Creep, losing our brainpower slowly but surely.

    For these reasons, I’m doing more non-AI writing, such as this piece, and learning to find confidence on my own.