Category: technology

  • The future, we’re told, is full of freedom—unless you’re the one still cleaning the mess.

    Last semester, in my college critical thinking class—a room full of bright minds and burnt-out spirits—we were dissecting what feels like a nationwide breakdown in mental health. Students tossed around possible suspects like a crime scene lineup: the psychological hangover of the pandemic, TikTok influencers glamorizing nervous breakdowns with pastel filters and soft piano music, the psychic toll of watching America split like a wishbone down party lines. All plausible. All depressing.

    Then a re-entry student—a nurse with twenty years in the trenches—raised her hand and calmly dropped a depth charge into the conversation. She said she sees more patients than ever staggering into hospitals not just sick, but shattered. Demoralized. Enraged. When I asked her what she thought was behind the surge in mental illness, she didn’t hesitate. “Money,” she said. “No one has any. They’re working themselves into the ground and still can’t cover rent, groceries, and medical bills. They’re burning out and breaking down.”

    And just like that, all our theories—algorithms, influencers, red-vs-blue blood feuds—melted under the furnace heat of economic despair. She was right. She sees the raw pain daily, the kind of pain tech billionaires will never upload into a TED Talk. While they spin futuristic fables about AI liberating humanity for leisure and creativity, my nurse watches the working class crawl into urgent care with nothing left but rage and debt. The promise of Universal Basic Income sounds charming if you’re already lounging in a beanbag chair at Singularity HQ, but out here in the world of late rent and grocery inflation, it’s a pipe dream sold by people who wouldn’t recognize a shift worker if one collapsed on their marble floors. The future, we’re told, is full of freedom—unless you’re the one still cleaning the mess.

  • WordPress: My Kettlebell Gym of the Mind

    I launched my WordPress blog on March 12, evicting myself from Typepad after it was sold to a company that treats blogs the way landlords treat rent-controlled tenants: with bored disdain. Typepad became a ghost town in a bad neighborhood, so I packed up and moved to the gated community of WordPress—cleaner streets, better lighting, and fewer trolls.

    For the past ten weeks, I’ve treated WordPress like a public journal—a digital sweat lodge where I sweat out my thoughts, confessions, and pedagogical war stories from the frontlines of college teaching. I like the routine, the scaffolding, and the habits of self-control. Blogging gives me something I never got from social media or committee meetings: a sense of order in a culture that’s spun off its axis.

    But let’s not kid ourselves. WordPress isn’t some utopian agora where meaningful discourse flourishes in the shade of civility. It’s still wired into the dopamine economy. The minute I start checking likes, follows, and view counts, I’m no longer a writer—I’m a lab rat pressing the pellet button. Metrics are the new morality. And brother, I’m not immune.

    Case in point: I can craft a thoughtful post, click “Publish,” and watch it sink into the abyss like a message in a bottle tossed into a septic tank. One view. Maybe. Post the same thing on Reddit, and suddenly I’m performing for an arena full of dopamine-addled gladiators. They’ll upvote, sure—but only after the professional insulters have had their turn at bat. Reddit is where clever sociopaths go to sharpen their knives and call it discourse.

    WordPress, by contrast, is a chill café with decent lighting and no one live-tweeting your every existential sigh. It’s a refuge from the snarling hordes of hot-take hustlers and ideological bloodsport. A place where I can escape not only digital toxicity, but the wider derangement of our post-shame, post-truth society—where influencers and elected officials are often the same con artist in two different blazers.

    Instead of doomscrolling or screaming into the algorithmic void, I’ve taken to reading biographies—public intellectuals, athletes who aged with dignity, tech pioneers who are obsessed with taking over the world. Or I’ll go spelunking into gadget rabbit holes to distract myself from the spiritual hangover that comes from living in a country where charisma triumphs over character and truth is whatever sells ad space.

    In therapy-speak, my job on WordPress is to “use the tools,” as Phil Stutz says: to strengthen my relationship with myself, with others, and with the crumbling world around me. It’s a discipline, not a dopamine drip. Writing here won’t make me famous, won’t make me rich, and sure as hell won’t turn me into some cardigan-clad oracle for the digital age.

    What it will do is give me structure. WordPress is where I wrestle with my thoughts the way I wrestle kettlebells in my garage: imperfectly, regularly, and with just enough sweat to keep the madness at bay.

  • 5 Ways We Can Get Addicted to AI Writing Platforms

    I’ve tried to stay current with the way technology is affecting my college writing classes. I dipped into the pool of AI writing platforms like ChatGPT, and after sixteen months or so, I can say the program has gotten the best of me on many occasions, forcing me to step back and reckon with its power to trap us. These platforms are addictive for five reasons.

    One. AI polishes and strengthens your prose in flattering ways that can give you false confidence, even as it pads your sentences and obscures the clarity of your original draft. I call this false confidence “writer’s dysmorphia,” the sense that AI gives your prose a “muscle-flex” you can’t muster without it.

    Two. We anthropomorphize AI, giving it a pet name and developing a fake relationship with it. This relationship exists only in our heads, and it can suffocate us as AI insidiously creeps into our brains.

    Three. As we develop this “relationship” with AI and become grateful for its services, we feel like we owe it our attention. In this regard, it becomes the abusive spouse who wants to be addressed and to remain relevant in our lives. 

    Four. Our addiction grows as we lose confidence in our non-AI writing and, in turn, our non-AI selves. We constantly want to adorn ourselves with AI’s ability to razzle-dazzle.

    Five. Over time, as we outsource more and more work to AI, we become lazier and lazier and suffer Brain Atrophy Creep, losing our brainpower slowly but surely.

    For these reasons, I’m doing more non-AI writing, such as this piece, and learning to find confidence on my own. 

  • Beware of the ChatGPT Strut

    Yesterday my critical thinking students and I talked about the ways we could revise our original content with ChatGPT by giving it instructions and training this AI tool to go beyond its bland, surface-level writing style. I showed my students specific prompts that would train it to write in a persona (a rough scripted version of the same idea appears after the list):

    “Rewrite the passage with acid wit.”

    “Rewrite the passage with lucid, assured prose.”

    “Rewrite the passage with mild academic language.”

    “Rewrite the passage with overdone academic language.”
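
    For anyone who wants to try these persona prompts outside the chat window, here is a minimal sketch using the OpenAI Python SDK. It is an illustration, not my classroom workflow: the model name, the system message, and the helper function are assumptions, and any chat interface that accepts the prompts above works just as well.

    # A rough sketch, not a prescribed workflow: feed one draft passage through
    # each persona prompt and collect the rewrites for side-by-side comparison.
    # Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in
    # the environment; the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    PERSONA_PROMPTS = [
        "Rewrite the passage with acid wit.",
        "Rewrite the passage with lucid, assured prose.",
        "Rewrite the passage with mild academic language.",
        "Rewrite the passage with overdone academic language.",
    ]

    def rewrite_in_personas(passage: str, model: str = "gpt-4o") -> dict:
        """Return one rewrite of the passage for each persona prompt."""
        rewrites = {}
        for prompt in PERSONA_PROMPTS:
            response = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": "You rewrite passages in the requested persona."},
                    {"role": "user", "content": f"{prompt}\n\nPassage:\n{passage}"},
                ],
            )
            rewrites[prompt] = response.choices[0].message.content
        return rewrites

    if __name__ == "__main__":
        draft = "Paste an original paragraph here to compare it with the persona rewrites."
        for prompt, version in rewrite_in_personas(draft).items():
            print(f"--- {prompt} ---\n{version}\n")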

    I showed the students my original paragraphs alongside ChatGPT’s versions of my sample arguments agreeing and disagreeing with Gustavo Arellano’s defense of cultural appropriation. I told them that the ChatGPT rewrites contained linguistic constructions wittier, more dramatic, and more inventive than anything I could produce, and that to post those passages as my own would make me look good, but they wouldn’t be me. I would be misrepresenting myself, even though much of the world will be enhancing its writing like this in the near future.

    I compared writing without ChatGPT to being a natural bodybuilder. Your muscles may not be as massive and dramatic as those of the guy on PEDs, but what you see is what you get. You’re the real you. In contrast, when you write with ChatGPT, you are a bodybuilder on PEDs. Your muscle-flex is eye-popping. You start doing the ChatGPT strut.

    I gave the class this warning: if you use ChatGPT a lot, as I have over the last year while trying to figure out how I’m supposed to use it in my teaching, you can develop writer’s dysmorphia, the sense that your natural, non-ChatGPT writing is inadequate compared to the razzle-dazzle of ChatGPT’s steroid-like prose.

    One student at this point disagreed with my awe of ChatGPT and my relatively low opinion of my own “natural” writing. She said, “Your original is better than the ChatGPT versions. Yours makes more sense to me, isn’t so hidden behind all the stylistic fluff, and contains an important sentence that ChatGPT omitted.”

    I looked at the original, and I realized she was right. My prose wasn’t as fancy as ChatGPT’s, but my passage about Gustavo Arellano’s essay defending cultural appropriation was clearer than the AI versions.

    At this point, I shifted metaphors in describing ChatGPT. Whereas I began the class by saying that AI revisions are like giving steroids to a bodybuilder with body dysmorphia, now I was warning that ChatGPT can be like an abusive boyfriend or girlfriend. It wants to hijack our brains because the main objective of any technology is to dominate our lives. In the case of ChatGPT, this domination is sycophantic: It gives us false flattery, insinuates itself into our lives, and gradually suffocates us. 

    As an example, I told the students that I was getting burned out on ChatGPT, that I was excited to write non-ChatGPT posts on my blog, and that I wanted to live in a space where my mind could breathe fresh air apart from ChatGPT’s presence.

    I wanted to see how ChatGPT would react to my plan to write non-ChatGPT posts, and ChatGPT seemed to get scared. It started giving me all of these suggestions to help me implement my non-ChatGPT plan. I said back to ChatGPT, “I can’t use your suggestions or plans or anything because the whole point is to live in the non-ChatGPT Zone.” I then closed my ChatGPT tab. 

    I concluded by telling my students that we need to reach a point where ChatGPT is just a tool, like Windows or Google Docs; as soon as we become addicted to it, it becomes an abusive platform. At that point, we need to exercise some agency and distance ourselves from it.

  • If Used Wisely, AI Can Push Your Writing to Greater Heights, But It Can Also Create Writer’s Dysmorphia

    No ChatGPT or AI of any kind was used in the following:

    For close to two years, I’ve been collaborating with ChatGPT as an editor on my personal and professional writing. I teach my college writing students how to engage with it, giving it instructions to override its default setting of bland, anodyne prose and teaching it how to adopt various writing personas.

    For my own writing, ChatGPT has boosted my prose and imagery, making my writing more stunning, dramatic, and vivid.

    Because I have been a bodybuilder since 1974, I will use a bodybuilding analogy: writing with ChatGPT is like bodybuilding on PEDs. I get addicted to the boost, the extra pump, and the extra muscle. Just as a bodybuilder on PEDs can develop body dysmorphia, a writer on ChatGPT can develop a sort of writer’s dysmorphia.

    But after posting a few articles on Reddit recently, where a few readers were put off by what they saw as “fake writing,” I stopped in my tracks to question my use of ChatGPT. Part of me thinks the hunger for authenticity is such that I should be writing content more like the natural bodybuilder’s, the guy who ventures forth with no PEDs. What you see is what you get: all human, no steroids, no AI.

    While I like the way ChatGPT pushes me in new directions that I would not explore on my own and makes the writing process engaging in new ways, I acknowledge that AI-fueled writer’s dysmorphia is real. We can get addicted to the juiced-up prose and the razzle-dazzle.

    Second, we can outsource too much thinking to AI and get lazy rather than do the work ourselves. In the process, our critical thinking skills begin to atrophy.

    Third, I think we can fill our heads with too much ChatGPT and live inside a hazy AI fever swamp. I recall that in middle school, on the outskirts of campus, you could see the “burnouts,” pot-addicted kids staring into the distance with their lizard eyes. One afternoon a friend joked, “They’re high so often, not being high must be a trip for them.” What if we become like those lizard-eyed burnouts and wander this world on a constant ChatGPT high, one so debilitating that we need to sober up in the natural world, only to discover that non-AI existence is its own form of healthy pleasure? In other words, we should be careful not to let ChatGPT live rent-free in our brains.

    Finally, people hunger for authentic, all-human writing, so moving forward on this blog, I want to continue to push myself with some ChatGPT-edited writing, but I also want to present all-natural, all-human writing, as is the case with this post. 

  • The ChatGPT-Book: My Dream Machine in a World of Wearable Nonsense

    I loathe smartphones. They’re tiny, slippery surveillance rectangles masquerading as tools of liberation. Typing on one feels like threading a needle while wearing oven mitts. My fingers bungle every attempt at precision, the autocorrect becomes a co-author I never hired, and the screen is so small I have to squint like I’m decoding Morse code through a peephole. Tablets aren’t much better—just larger slabs of compromise.

    Give me a mechanical keyboard, a desktop tower that hums with purpose, and twin 27-inch monitors beaming side by side like architectural blueprints of clarity. That’s how I commune with ChatGPT. I need real estate. I want to see the thinking unfold, not peer at it like a medieval monk examining a parchment shard.

    So when one of my students whipped out her phone, opened the ChatGPT app, and began speaking to it like it was her digital therapist, I nodded politely. But inside, I was muttering, “Not for me.” I’ve lived long enough to know that I don’t acclimate well to anything that fits in a jeans pocket.

    That’s why Matteo Wong’s article, “OpenAI’s Ambitions Just Became Crystal Clear,” caught my eye. Apparently, Sam Altman has teamed up with Jony Ive—the high priest of sleekness and the ghost behind Apple’s glory days—to sink $5 billion into building a “family of devices” for ChatGPT. Presumably, these will be as smooth, sexy, and addictive as the iPhone once was before it became a dopamine drip and digital leash.

    Honestly? It makes sense. In the last year, my ChatGPT use has skyrocketed, while my interaction with other platforms has withered. I now use it to write, research, plan, edit, make weight-management meal plans, and occasionally psychoanalyze myself. If there were a single device designed to serve as a “mother hub”—a central console for creativity, productivity, and digital errands—I’d buy it. But not if it’s shaped like a lapel pin. Not if it whispers in my ear like some clingy AI sprite. I don’t want a neural appendage or a mind tickler. I want a screen.

    What I’m hoping for is a ChatGPT-Book: something like a Chromebook, but with real writing DNA. A device with its own operating system that consolidates browser tabs, writing apps, and research tools. A no-nonsense, 14-inch-and-up display where I can visualize my creative process, not swipe through it.

    We all learn and create differently in this carnival of overstimulation we call the Information Age. I imagine Altman and Ive know that—and will deliver a suite of devices for different brains and temperaments. Mine just happens to want clarity, not minimalism masquerading as genius.

    Wong’s piece doesn’t surprise or shock me. It’s just the same old Silicon Valley gospel: dominate or be buried. Apple ate BlackBerry. Facebook devoured MySpace. And MySpace? It’s now a dusty relic in the basement of internet history—huddled next to beta tapes, 8-tracks, and other nostalgia-laced tech corpses.

    If ChatGPT gets its own device and redefines how we interact with the web, well… chalk it up to evolution. But for the love of all that’s analog—give me a keyboard, a screen, and some elbow room.

  • The Coldplay Apocalypse: Notes from a Smoothie-Drinking Future

    Welcome to the future—where the algorithm reigns, identity is a curated filter pack, and dystopia arrives not with a boot to the face but a wellness app and a matching pair of $900 headphones that murmur Coldplay into your skull at just the right serotonin-laced frequency.

    We will all look like vaguely reprocessed versions of Salma Hayek or Brad Pitt—digitally airbrushed to remove all imperfections but retain just enough “authenticity” to keep our neuroses in play. Our playlists will be algorithmically optimized to sound like Coldplay mated with spa music and decided never to take risks again.

    We’ll wear identical headphones—sleek, matte, noise-canceling monuments to our collective disinterest in one another. Not to be rude. Just too evolved to engage. Every journal entry we write will be AI-assisted, reading like the bastard child of Brené Brown and ChatGPT: reflective, sincere, and soul-crushingly uniform.

    Our influencers? They’ll all look the same too—gender-fluid, lightly medicated, with just enough charisma to sell you an oat milk subscription while quoting Kierkegaard. Politics, entertainment, mental health, and skincare will be served up on the same TikTok platter, narrated by someone who once dated a crypto founder and now podcasts about trauma.

    Three times a day, we’ll sip our civilization smoothie: a beige sludge of cricket protein, creatine, nootropic fibers, and a lightly psychoactive GLP-1 variant that keeps hunger, sadness, and ambition at bay. It’s not a meal; it’s a contract with the status quo. We’ll all wear identical sweat-wicking athleisure in soothing desert neutrals, paired with orthopedic sneakers in punchy tech-startup orange.

    We’ll all “take breaks from social media” at the same approved hour—between 5 and 6 p.m.—so we can “reconnect with the analog world” by staring at a sunset long enough to photograph it and post our profound revelations online at 6:01.

    Nobody will want children, because who wants to drag a baby into a climate-controlled apartment where the rent is half your nervous system? Marriage? A relic of a time when humans still believed in eye contact. Romances will be managed by chatbots programmed to simulate caring without requiring reciprocation. You’ll tell the app your love language, it’ll write your messages, and your partner’s app will do the same. Everyone’s emotionally satisfied, no one’s truly known.

    And vacations? Pure fiction. Deepfakes will show us in Bali, Tuscany, or the moon—beaming with digital joy, sipping pixelated espresso. Real travel is for the ultra-rich and the deluded.

    As for existential despair? Doesn’t exist anymore. Our moods will be finely tuned by micro-dosed pharmacology and AI-generated affirmations. No more late-night crises or 3 a.m. sobbing into a pillow. Just an endless, gentle hum of stabilized contentment—forever.

  • Deepfakes and Detentions: My Career as an Unwilling Digital Cop

    Yesterday, in the fluorescent glow of my classroom, I broke the fourth wall with my college students. We weren’t talking about comma splices or rhetorical appeals—we were talking about AI and cheating, which is to say, the slow erosion of trust in education, digitized and streamed in real time.

    I told them, point blank: every time I design an assignment that I believe is AI-resistant, some clever student will run it through an AI backchannel and produce a counterfeit good polished enough to win a Pulitzer.

    Take my latest noble attempt at authenticity: an interview-based paragraph. I assign them seven thoughtful questions. They’re supposed to talk to someone they know who struggles with weight management—an honest, human exchange that becomes the basis for their introduction. A few will do it properly, bless their analog souls. But others? They’ll summon a fictional character from the ChatGPT multiverse, conduct a fake interview, and then outsource the writing to the very bot that cooked up their imaginary source.

    At this point, I could put on my authoritarian costume—Digital Police cap, badge, mirrored shades—and demand proof: “Upload an audio or video clip of your interview to Canvas.” I imagine myself pounding my chest like a TSA agent catching a contraband shampoo bottle. Academic integrity: enforced!

    Wrong.

    They’ll serve me a deepfake. A synthetic voice, a synthetic face, synthetic sincerity. I’ll counter with new tech armor, and they’ll leapfrog it with another trick, and on and on it goes—an infinite arms race in the valley of uncanny computation.

    So I told them: “This isn’t why I became a teacher. I’m not here to play narc in a dystopian techno-thriller. I’ll make this class as compelling as I can. I’ll appeal to your intellect, your curiosity, your hunger to be more than a prompt-fed husk. But I’m not going to turn into a surveillance drone just to catch you cheating.”

    They stared back at me—quiet, still, alert. Not scrolling. Not glazed over. I had them. Because when we talk about AI, the room gets cold. They sense it. That creeping thing, coming not just for grades but for jobs, relationships, dreams—for the very idea of effort. And in that moment, we were on the same sinking ship, looking out at the rising tide.

  • The Futility of Being Ready

    In December of 2019, my wife and I, both lifelong members of the National Society of Worrywarts, stumbled upon reports of a deadly virus brewing in China. Most people shrugged. We did not. I jumped on eBay and ordered a bulk box of masks the size of a hotel mini-fridge. It felt ridiculous at the time—a paranoid lark, like filling a doomsday bunker because you heard thunder on a Tuesday. But three months later, on March 13, 2020, the world shut down, and that cardboard box of N95s felt less like overreaction and more like prophecy.

    These days, I teach college in what I call the ChatGPT Era—a time when my students and I sit around analyzing how artificial intelligence is rewiring our habits, our thinking, and possibly the scaffolding of our humanity. I don’t dread AI the way I dreaded COVID. It doesn’t make me stock canned beans or disinfect door handles. But it does give me that same uneasy tremor in the gut—the sense that something vast is shifting beneath us, and that whatever emerges will make the present feel quaint and maybe a little foolish.

    It’s like standing on a beach after the earthquake and watching the water disappear from the shore. You can back up your files, rewrite your syllabus, and pretend to adapt, but you know deep down you’re stuck in Prepacolypse Mode—that desperate, irrational phase where you try to outmaneuver the future with your label maker. You prepare for the unpreparable, perform rituals of control that offer all the protection of a paper shield.

    And through it all comes that strange, electric sensation—Dreadrenaline. It’s not just fear. It’s a kind of alertness, a humming, high-voltage awareness that your life is about to be edited at the molecular level. You’re not just anticipating change—you’re bracing for a version of yourself that will be unrecognizable on the other side. You’re watching history draft your name onto the roster and realizing, too late, that you’re not a spectator anymore. You’re in the game.

  • Why I’m Sure the $450 Sony WH-1000XM6 Headphones Would Make Me Miserable

    At $89, my Sony WH-CH720N headphones are like a charming B-movie that knows what it is—solid, dependable, and blessedly low on expectations. I’m content, maybe even grateful. But shelling out $450 for the Sony WH-1000XM6? That’s not just buying headphones—that’s enrolling them in the Ivy League of Audio. For that kind of money, I expect sonic transcendence, noise-cancellation that erases my student debt, and bass so rich it pays taxes. 

    This is the curse of Pricefectionism—a condition where the higher the sticker, the more unreasonable your expectations become. At four hundred and fifty bucks, I don’t want headphones. I want a personal sound butler whispering hi-res lullabies directly into my cerebral cortex.