Tag: ai

  • The Handwriting Is on the Wall for Writing Instructors Like Myself

    There’s a cliché I’ve avoided all my life because I’m supposed to be offended by clichés. I teach college writing. But now, God help me, I must say it: I see the handwriting on the wall. And it’s blinking in algorithmic neon, blinding my eyes.

    I’ve taught college writing for forty years. My wife, a fellow lifer in the trenches, has clocked twenty-five teaching sixth and seventh graders. Like other teachers, we got caught off-guard by AI writing platforms. We’re now staring down the barrel of obsolescence while AI platforms give us an imperious smile and say, “We’ve got this now.”

    Try crafting an “AI-resistant” assignment. Go ahead. Ask students to conduct interviews, keep journals, write about memories. They’ll feed your prompt into ChatGPT and create an AI interview, journal entry, and personal reflection with all the depth and soul of a stale Pop-Tart. You squint at these AI responses, and you can tell something isn’t right. They look sort of real but have a robotic element about them. Your AI-detecting software isn’t reliable, so you refrain from making accusations.

    When I tell my wife I feel that my job is in danger, she shrugs and says there’s little we can do. The toothpaste is out of the tube. There’s no going back. 

    I suppose my wife will be a glorified camp counselor with grading software. For me, it will be different. I teach college. I’ll have to attend a re-education camp dressed up as “professional development.” I’ll have to learn how to teach students to prompt AI like Vegas magicians—how to trick it into coherence, how to interrogate its biases. Writing classes will be rebranded as “Prompt Engineering.”

    At sixty-three, I’m no fool. I know what happens to tired draft horses when the carriage goes electric. I’ve seen the pasture. I can smell the industrial glue. And I’m not alone. My colleagues—bright, literate, and increasingly demoralized—mutter the same bitter mantra: “We are the AI police. And the criminals are always one jailbreak ahead.”

  • The Composition Apocalypse: How AI Ate the Syllabus

    We’ve arrived at the third and final essay in this course, and the gloves are off.

    Just as GLP-1 drugs are transforming eating—from pleasure to optimization—AI is transforming writing. That’s not speculation; it’s the new syllabus. We’re witnessing the great extinction event of the traditional writing process. Drafting, revising, struggling with a paragraph like it’s a Rubik’s Cube in the dark? That’s quaint now. The machines are here, and they’re fast, fluent, and disarmingly coherent.

    Meanwhile, college writing programs are playing catch-up while the bots are already teaching themselves AP Composition. If we want writing instructors to remain relevant (i.e., not replaced by a glowing terminal that says “Rewrite?”), we’ll need to reimagine our role. The new instructor is less grammar cop, more rhetorical strategist. Part voice coach, part creative director, part ethicist.

    Your task:
    Write a 1,700-word argumentative essay responding to this claim:
    To remain essential in the Age of AI, college writing instruction must evolve from teaching students how to write to teaching students how to think—critically, ethically, and strategically—alongside machines.

    Consider how AI is reprogramming the writing process and what we must do in response:

    • Should writing classes teach AI prompt-crafting instead of thesis statements?
    • Will rhetorical literacy and moral clarity become more important than knowing where to put a semicolon?
    • Should students learn to turn Blender into a rhetorical tool—visualizing arguments as 3D structures or spatial infographics?
    • Will gamification and multimodal projects replace the five-paragraph zombie essay?
    • Are writing studios the future—dynamic, collaborative AI-human spaces where “How well can you prompt?” becomes the new “How well can you argue?”

    In short, what must the writing classroom become when the act of writing itself is no longer uniquely human?

    This prompt doesn’t ask you to mourn the old ways. It demands that you architect the new ones. Push past nostalgia and imagine what a post-ChatGPT curriculum might look like—not just to survive the AI onslaught, but to lead it.

  • The Rebranding of College Writing Instructors as Prompt Engineers

    There’s a cliché I’ve sidestepped for decades, the kind of phrase I’ve red-penned into oblivion in freshman essays. But now, God help me, I must say it: I see the handwriting on the wall. And it’s written in 72-point sans serif, blinking in algorithmic neon.

    I’ve taught college writing for forty years. My wife, a fellow lifer in the trenches, has clocked twenty-five teaching sixth and seventh graders. Between us, we’ve marked enough essays to wallpaper the Taj Mahal. And yet here we are, staring down the barrel of obsolescence while AI platforms politely tap us on the shoulder and whisper, “We’ve got this now.”

    Try crafting an “AI-resistant” assignment. Go ahead. Ask students to conduct interviews, keep journals, write about memories. They’ll feed your prompt into ChatGPT with the finesse of a hedge fund trader moving capital offshore. The result? A flawlessly ghostwritten confession by a bot with a stunning grasp of emotional trauma and a suspicious lack of typos.

    Middle school teachers, my wife says, are on their way to becoming glorified camp counselors with grading software. As for us college instructors, we’ll be lucky to avoid re-education camps dressed up as “professional development.” The new job? Teaching students how to prompt AI like Vegas magicians—how to trick it into coherence, how to interrogate its biases, how to extract signal from synthetic noise. Critical thinking rebranded as Prompt Engineering.

    Gone are the days of unpacking the psychic inertia of J. Alfred Prufrock or peeling back the grim cultural criticism of Coetzee’s Disgrace. Now it’s Kahoot quizzes and real-time prompt battles. Welcome to Gamified Rhetoric 101. Your syllabus: Minecraft meets Brave New World.

    At sixty-three, I’m no fool. I know what happens to tired draft horses when the carriage goes electric. I’ve seen the pasture. I can smell the industrial glue. And I’m not alone. My colleagues—bright, literate, and increasingly demoralized—mutter the same bitter mantra: “We are the AI police. And the criminals are always one jailbreak ahead.”

    We keep saying we need to “stop the bleeding,” another cliché I’d normally bin. But here I am, bleeding clichés like a wounded soldier of the Enlightenment, fighting off the Age of Ozempification—a term I’ve coined to describe the creeping automation of everything from weight loss to wit. We’re not writing anymore; we’re curating prompts. We’re not thinking; we’re optimizing.

    This isn’t pessimism. It’s clarity. And if clarity means leaning on a cliché, so be it.

  • Trapped in the AI Age’s Metaphysical Tug-of-War

    I’m typing this to the sound of Beethoven—1,868 MP3s of compressed genius streamed through the algorithmic convenience of a playlist. It’s a 41-hour-and-8-minute monument to compromise: a simulacrum of sonic excellence that can’t hold a candle to the warmth of an LP. But convenience wins. Always.

    I make Faustian bargains like this daily. Thirty-minute meals instead of slow-cooked transcendence. Athleisure instead of tailoring. A Honda instead of high horsepower. The good-enough over the sublime. Not because I’m lazy—because I’m functional. Efficient. Optimized.

    And now, writing.

    For a year, my students and I have been feeding prompts into ChatGPT like a pagan tribe tossing goats into the volcano—hoping for inspiration, maybe salvation. Sometimes it works. The AI outlines, brainstorms, even polishes. But the more we rely on it, the more I feel the need to write without it—just to remember what my own voice sounds like. Just as the vinyl snob craves the imperfections of real analog music or the home cook insists on peeling garlic by hand, I need to suffer through the process.

    We’re caught in a metaphysical tug-of-war. We crave convenience but revere authenticity. We binge AI-generated sludge by day, then go weep over a hand-made pie crust YouTube video at night. We want our lives frictionless, but our souls textured. It’s the new sacred vs. profane: What do we reserve for real, and what do we surrender to the machine?

    I can’t say where this goes. Maybe real food will be phased out, like Blockbuster or bookstores. Maybe we’ll subsist on GLP-1 drugs, AI-tailored nutrient paste, and the joyless certainty of perfect lab metrics.

    As for entertainment, I’m marginally more hopeful. Chris Rock, Sarah Silverman—these are voices, not products. AI can churn out sitcoms, but it can’t bleed. It can’t bomb. It can’t riff on childhood trauma with perfect timing. Humans know the difference between a story and a story-shaped thing.

    Still, writing is in trouble. Reading, too. AI erodes attention spans like waves on sandstone. Books? Optional. Original thought? Delegated. The more AI floods the language, the more we’ll acclimate to its sterile rhythm. And the more we acclimate, the less we’ll even remember what a real voice sounds like.

    Yes, there will always be the artisan holdouts—those who cook, write, read, and listen with intention. But they’ll be outliers. A boutique species. The rest of us will be lean, medicated, managed. Data-optimized units of productivity.

    And yet, there will be stories. There will always be stories. Because stories aren’t just culture—they’re our survival instinct dressed up as entertainment. When everything else is outsourced, commodified, and flattened, we’ll still need someone to stand up and tell us who we are.

  • College Essay Prompt: Ozempification, AI, and the End of Food Culture?

    Prompt Overview:
    In recent years, the rise of GLP-1 drugs like Ozempic and Wegovy has begun to reshape our relationship with hunger, desire, and food itself. Meanwhile, artificial intelligence is transforming how food is produced, marketed, and even chosen—sometimes without human involvement. This convergence may signal the end of eating as a social, cultural, and emotional act.

    Your Task:
    Write an 8-paragraph argumentative essay that responds to the following claim:

    Claim:
    GLP-1 drugs and artificial intelligence are ending the traditional notion of food and eating as cultural, emotional, and communal experiences.

    Instructions:

    1. Introduction (Paragraph 1):
      Hook the reader with a striking observation or anecdote. Clearly present the claim and your thesis—whether you agree, disagree, or hold a nuanced position.
    2. Background (Paragraph 2):
      Briefly explain what GLP-1 drugs (e.g., Ozempic) do and how AI is being used in food production and personalization.
    3. First Argument (Paragraph 3):
      Make your first point in support of or against the claim. Use evidence from a reliable source.
    4. Second Argument (Paragraph 4):
      Develop a second point. This might include shifts in consumer behavior, changing food rituals, or the erosion of cultural traditions.
    5. Third Argument (Paragraph 5):
      Add a third supporting point that deepens your position. Consider long-term consequences or ethical implications.
    6. Counterargument and Rebuttal (Paragraph 6):
      Acknowledge a reasonable opposing view—perhaps that AI and GLP-1 drugs offer needed solutions to health crises—and then refute it using logic and evidence.
    7. Cultural Reflection (Paragraph 7):
      Reflect on what is at stake culturally. What do we lose if food is reduced to a biometric algorithm?
    8. Conclusion (Paragraph 8):
      Return to your thesis and end with a memorable insight or call to action.

    Source Requirement:
    Use at least 4 credible sources. At least two should come from recent journalism or peer-reviewed studies (2023 or later). Sources must be cited in MLA format.

    Optional Angles to Explore:

    • How do GLP-1 drugs rewire human appetite?
    • Will AI-generated food disconnect us from culinary heritage?
    • Can technological efficiency coexist with food as a ritual or joy?

  • College Essay Prompt: Performance, Collapse, and the Hunger for Validation

    In the Black Mirror episode “Nosedive,” Lacie Pound carefully curates her public persona to climb the social ranking system, only to experience a spectacular breakdown when her performative identity collapses. Similarly, in the Netflix documentary Untold: The Liver King, Brian Johnson (aka the Liver King) constructs a hyper-masculine brand built on ancestral living and self-discipline, but his digital persona unravels after his steroid use is exposed—calling into question the authenticity of his entire identity.

    Drawing on insights from The Social Dilemma and Sherry Turkle’s TED Talk “Connected, but alone?”, write an 8-paragraph essay analyzing how both Lacie Pound and the Liver King experience breakdowns caused by the pressure to perform a marketable self online. Consider how their stories reveal broader truths about the emotional and psychological toll of living in a world where self-worth is measured through digital validation.

    Instructions:

    Your essay should have a clear thesis and be structured as follows:

    Paragraph 1 – Introduction

    • Briefly introduce Lacie Pound and the Liver King as case studies in digital performance.
    • State your thesis: What common psychological or social dynamic do their stories reveal about life in the attention economy?

    Paragraph 2 – The Rise of the Performed Self

    • Explain how Lacie and the Liver King construct public identities tailored for approval.
    • Use The Social Dilemma and/or Turkle to support your claim about the pressures of online self-curation.

    Paragraph 3 – The Collapse of Lacie Pound

    • Analyze the arc of Lacie’s breakdown.
    • Show how social scoring leads to isolation and emotional implosion.

    Paragraph 4 – The Unmasking of the Liver King

    • Describe how his confession undermines his brand.
    • Discuss the role of digital audiences in both elevating and dismantling him.

    Paragraph 5 – The Role of Tech Platforms

    • How do algorithms and platforms reward performance and punish authenticity?
    • Draw from The Social Dilemma for evidence.

    Paragraph 6 – The Illusion of Connection

    • Use Turkle’s TED Talk to explore how both characters are “connected, but alone.”
    • Consider their emotional lives behind the digital façade.

    Paragraph 7 – A Counterargument

    • Could it be argued that both Lacie and the Liver King benefited from their online identities, at least temporarily?
    • Briefly address and rebut this view.

    Paragraph 8 – Conclusion

    • Reaffirm your thesis.
    • Reflect on what their stories warn us about the future of identity, performance, and mental health in the digital age.

    Requirements:

    • MLA format
    • 4 sources minimum (episode, documentary, TED Talk, and one external article or scholarly source of your choice)
    • Include a Works Cited page

    Here are 7 ways Lacie Pound (Black Mirror: Nosedive) and the Liver King (Untold: The Liver King) were manipulated by social media into self-sabotage, drawn through the lens of The Social Dilemma and Sherry Turkle’s TED Talk “Connected, but alone?”:


    1. They Mistook Validation for Connection

    Turkle argues we’ve “sacrificed conversation for connection,” replacing real intimacy with digital approval.

    • Lacie chases ratings instead of relationships, slowly alienating herself from authentic human bonds.
    • The Liver King builds a global audience but admits to loneliness and insecurity beneath the performative bravado.

    2. They Became Addicted to the Performance of Perfection

    The Social Dilemma explains how platforms reward idealized personas, not authenticity.

    • Lacie’s entire life becomes a curated highlight reel of fake smiles and forced gratitude.
    • The Liver King obsessively maintains his primal-man image, even risking credibility and health to keep the illusion intact.

    3. They Were Trapped in an Algorithmic Feedback Loop

    Algorithms feed users what keeps them engaged—usually content that reinforces their current identity.

    • Lacie’s feed reflects her desire to be liked, pushing her deeper into a phony aesthetic.
    • The Liver King is incentivized to keep escalating his primal stunts—eating raw organs, screaming workouts—not because it’s healthy, but because it gets clicks.

    4. They Confused Metrics with Meaning

    The Social Dilemma reveals how “likes,” views, and follower counts hijack the brain’s reward system.

    • Lacie sees her social score as a measure of human worth.
    • The Liver King sees followers as a proxy for legacy and success—until the steroid scandal exposes the hollowness behind the numbers.

    5. They Substituted Self-Reflection with Self-Branding

    Turkle notes that in digital spaces, we “edit, delete, retouch” our lives. But that comes at the cost of honest self-understanding.

    • Lacie never pauses to ask who she is outside the algorithm’s gaze.
    • The Liver King becomes his own brand, losing sight of the person beneath the loincloth and beard.

    6. They Were Driven by Fear of Being Forgotten

    Both characters fear digital invisibility more than real-world failure.

    • Lacie’s panic when her rating drops is existential; she’s no one without her score.
    • The Liver King’s confession comes only after public exposure threatens his empire—because relevance, not truth, is the ultimate currency.

    7. They Reached a Breaking Point in Private but Fell Apart in Public

    The Social Dilemma highlights how tech is designed to capture our attention, not care for our well-being.

    • Lacie breaks down in front of an audience, her worst moment recorded and shared.
    • The Liver King’s undoing is broadcast to the same crowd that once idolized him—turning shame into spectacle.

    Three Sample Thesis Statements

    1. Basic (Clear & Focused):

    Both Lacie Pound and the Liver King suffer emotional breakdowns because they become trapped by the very social media systems they believe will bring them success, as shown through their obsession with validation, performance, and visibility.


    2. Intermediate (More Insightful):

    Lacie Pound and the Liver King, though separated by fiction and reality, both represent victims of an attention economy that rewards curated identities over authentic living—ultimately leading them to sacrifice mental health, integrity, and human connection for the illusion of approval.


    3. Advanced (Nuanced & Sophisticated):

    As Lacie Pound and the Liver King spiral into public self-destruction, their stories expose the way digital platforms—backed by algorithmic manipulation and cultural hunger for spectacle—transform the self into a brand, connection into currency, and identity into a high-risk performance that inevitably collapses under its own artifice.

  • Beware of the ChatGPT Strut

    Yesterday my critical thinking students and I talked about ways we could revise our original content with ChatGPT, giving it instructions and training this AI tool to go beyond its bland, surface-level writing style. I showed my students specific prompts that would train it to write in a persona:

    “Rewrite the passage with acid wit.”

    “Rewrite the passage with lucid, assured prose.”

    “Rewrite the passage with mild academic language.”

    “Rewrite the passage with overdone academic language.”

    I showed the students my original paragraphs alongside ChatGPT’s versions of my sample arguments agreeing and disagreeing with Gustavo Arellano’s defense of cultural appropriation. I admitted that the ChatGPT rewrites contained linguistic constructions wittier, more dramatic, and more creative than anything I could produce, and that posting these passages as my own would make me look good, but they wouldn’t be me. I would be misrepresenting myself, even though most of the world will soon be enhancing its writing this way.

    I compared writing without ChatGPT to being a natural bodybuilder. Your muscles may not be as massive and dramatic as those of the guy on PEDs, but what you see is what you get. You’re the real you. In contrast, when you write with ChatGPT, you are a bodybuilder on PEDs. Your muscle-flex is eye-popping. You start doing the ChatGPT strut.

    I gave this warning to the class: If you use ChatGPT a lot, as I have in the last year as I’m trying to figure out how I’m supposed to use it in my teaching, you can develop writer’s dysmorphia, the sense that your natural, non-ChatGPT writing is inadequate compared to the razzle-dazzle of ChatGPT’s steroid-like prose. 

    One student at this point disagreed with my awe of ChatGPT and my relatively low opinion of my own “natural” writing. She said, “Your original is better than the ChatGPT versions. Yours makes more sense to me, isn’t so hidden behind all the stylistic fluff, and contains an important sentence that ChatGPT omitted.”

    I looked at the original, and I realized she was right. My prose wasn’t as fancy as ChatGPT’s, but the passage about Gustavo Arellano’s essay defending cultural appropriation was clearer than the AI versions.

    At this point, I shifted metaphors in describing ChatGPT. Whereas I began the class by saying that AI revisions are like giving steroids to a bodybuilder with body dysmorphia, now I was warning that ChatGPT can be like an abusive boyfriend or girlfriend. It wants to hijack our brains because the main objective of any technology is to dominate our lives. In the case of ChatGPT, this domination is sycophantic: It gives us false flattery, insinuates itself into our lives, and gradually suffocates us. 

    As an example, I told the students that I was getting burned out using ChatGPT, and I was excited to write non-ChatGPT posts on my blog, and to live in a space where my mind could breathe the fresh air apart from ChatGPT’s presence. 

    I wanted to see how ChatGPT would react to my plan to write non-ChatGPT posts, and ChatGPT seemed to get scared. It started giving me all of these suggestions to help me implement my non-ChatGPT plan. I said back to ChatGPT, “I can’t use your suggestions or plans or anything because the whole point is to live in the non-ChatGPT Zone.” I then closed my ChatGPT tab. 

    I concluded by telling my students that we need to reach a point where ChatGPT is a tool like Windows and Google Docs, but as soon as we become addicted to it, it’s an abusive platform. At that point, we need to use some self-agency and distance ourselves from it.  

  • If Used Wisely, AI Can Push Your Writing to Greater Heights, But It Can Also Create Writer’s Dysmorphia

    No ChatGPT or AI of any kind was used in the following:

    For close to 2 years, I’ve been editing and collaborating with ChatGPT for my personal and professional writing. I teach my college writing students how to engage with it, giving it instructions to avoid its default setting for bland, anodyne prose and teaching it how to adopt various writing personas. 

    For my own writing, ChatGPT has boosted my prose and imagery, making my writing more stunning, dramatic, and vivid.

    Because I have been a bodybuilder since 1974, I will use a bodybuilding analogy: Writing with ChatGPT is like bodybuilding on PEDs. I get addicted to the boost, the extra pump, and the extra muscle. Just as a bodybuilder can develop body dysmorphia, ChatGPT can give writers a sort of writer’s dysmorphia.

    But after posting a few articles on Reddit recently, where a few readers were put off by what they saw as “fake writing,” I stopped in my tracks to question my use of ChatGPT. Part of me thinks the hunger for authenticity is such that I should be writing content more like the natural bodybuilder, the guy who ventures forth with no PEDs. What you see is what you get: all human, no steroids, no AI.

    While I like the way ChatGPT pushes me in new directions that I would not explore on my own and makes the writing process engaging in new ways, I acknowledge that AI-fueled writer’s dysmorphia is real. We can get addicted to the juiced-up prose and the razzle-dazzle.

    Second, we can outsource too much thinking to AI and get lazy rather than do the work ourselves. In the process, our critical thinking skills begin to atrophy.

    Third, I think we can fill our heads with too much ChatGPT and live inside a hazy AI fever swamp. I recall that in middle school, on the outskirts of campus, you could see the “burn-outs,” pot-addicted kids staring into the distance with their lizard eyes. One afternoon a friend joked, “They’re high so often, not being high must be a trip for them.” What if we become like those lizard-eyed burnouts, wandering this world on a constant ChatGPT high so debilitating that we need to sober up in the natural world, where we find that non-AI existence is its own form of healthy pleasure? In other words, we should be careful not to let ChatGPT live rent-free in our brains.

    Finally, people hunger for authentic, all-human writing, so moving forward on this blog, I want to continue to push myself with some ChatGPT-edited writing, but I also want to present all-natural, all-human writing, as is the case with this post. 

  • The ChatGPT-Book: My Dream Machine in a World of Wearable Nonsense

    I loathe smartphones. They’re tiny, slippery surveillance rectangles masquerading as tools of liberation. Typing on one feels like threading a needle while wearing oven mitts. My fingers bungle every attempt at precision, the autocorrect becomes a co-author I never hired, and the screen is so small I have to squint like I’m decoding Morse code through a peephole. Tablets aren’t much better—just larger slabs of compromise.

    Give me a mechanical keyboard, a desktop tower that hums with purpose, and twin 27-inch monitors beaming side by side like architectural blueprints of clarity. That’s how I commune with ChatGPT. I need real estate. I want to see the thinking unfold, not peer at it like a medieval monk examining a parchment shard.

    So when one of my students whipped out her phone, opened the ChatGPT app, and began speaking to it like it was her digital therapist, I nodded politely. But inside, I was muttering, “Not for me.” I’ve lived long enough to know that I don’t acclimate well to anything that fits in a jeans pocket.

    That’s why Matteo Wong’s article, “OpenAI’s Ambitions Just Became Crystal Clear,” caught my eye. Apparently, Sam Altman has teamed up with Jony Ive—the high priest of sleekness and the ghost behind Apple’s glory days—to sink $5 billion into building a “family of devices” for ChatGPT. Presumably, these will be as smooth, sexy, and addictive as the iPhone once was before it became a dopamine drip and digital leash.

    Honestly? It makes sense. In the last year, my ChatGPT use has skyrocketed, while my interaction with other platforms has withered. I now use it to write, research, plan, edit, make weight-management meal plans, and occasionally psychoanalyze myself. If there were a single device designed to serve as a “mother hub”—a central console for creativity, productivity, and digital errands—I’d buy it. But not if it’s shaped like a lapel pin. Not if it whispers in my ear like some clingy AI sprite. I don’t want a neural appendage or a mind tickler. I want a screen.

    What I’m hoping for is a ChatGPT-Book: something like a Chromebook, but with real writing DNA. A device with its own operating system that consolidates browser tabs, writing apps, and research tools. A no-nonsense, 14-inch-and-up display where I can visualize my creative process, not swipe through it.

    We all learn and create differently in this carnival of overstimulation we call the Information Age. I imagine Altman and Ive know that—and will deliver a suite of devices for different brains and temperaments. Mine just happens to want clarity, not minimalism masquerading as genius.

    Wong’s piece doesn’t surprise or shock me. It’s just the same old Silicon Valley gospel: dominate or be buried. Apple ate BlackBerry. Facebook devoured MySpace. And MySpace? It’s now a dusty relic in the basement of internet history—huddled next to Betamax tapes, 8-tracks, and other nostalgia-laced tech corpses.

    If ChatGPT gets its own device and redefines how we interact with the web, well… chalk it up to evolution. But for the love of all that’s analog—give me a keyboard, a screen, and some elbow room.

  • The Coldplay Apocalypse: Notes from a Smoothie-Drinking Future

    Welcome to the future—where the algorithm reigns, identity is a curated filter pack, and dystopia arrives not with a boot to the face but a wellness app and a matching pair of $900 headphones that murmur Coldplay into your skull at just the right serotonin-laced frequency.

    We will all look like vaguely reprocessed versions of Salma Hayek or Brad Pitt—digitally airbrushed to remove all imperfections but retain just enough “authenticity” to keep our neuroses in play. Our playlists will be algorithmically optimized to sound like Coldplay mated with spa music and decided never to take risks again.

    We’ll wear identical headphones—sleek, matte, noise-canceling monuments to our collective disinterest in one another. Not to be rude. Just too evolved to engage. Every journal entry we write will be AI-assisted, reading like the bastard child of Brené Brown and ChatGPT: reflective, sincere, and soul-crushingly uniform.

    Our influencers? They’ll all look the same too—gender-fluid, lightly medicated, with just enough charisma to sell you an oat milk subscription while quoting Kierkegaard. Politics, entertainment, mental health, and skincare will be served up on the same TikTok platter, narrated by someone who once dated a crypto founder and now podcasts about trauma.

    Three times a day, we’ll sip our civilization smoothie: a beige sludge of cricket protein, creatine, nootropic fibers, and a lightly psychoactive GLP-1 variant that keeps hunger, sadness, and ambition at bay. It’s not a meal; it’s a contract with the status quo. We’ll all wear identical sweat-wicking athleisure in soothing desert neutrals, paired with orthopedic sneakers in punchy tech-startup orange.

    We’ll all “take breaks from social media” at the same approved hour—between 5 and 6 p.m.—so we can “reconnect with the analog world” by staring at a sunset long enough to photograph it and post our profound revelations online at 6:01.

    Nobody will want children, because who wants to drag a baby into a climate-controlled apartment where the rent is half your nervous system? Marriage? A relic of a time when humans still believed in eye contact. Romances will be managed by chatbots programmed to simulate caring without requiring reciprocation. You’ll tell the app your love language, it’ll write your messages, and your partner’s app will do the same. Everyone’s emotionally satisfied, no one’s truly known.

    And vacations? Pure fiction. Deepfakes will show us in Bali, Tuscany, or the moon—beaming with digital joy, sipping pixelated espresso. Real travel is for the ultra-rich and the deluded.

    As for existential despair? Doesn’t exist anymore. Our moods will be finely tuned by micro-dosed pharmacology and AI-generated affirmations. No more late-night crises or 3 a.m. sobbing into a pillow. Just an endless, gentle hum of stabilized contentment—forever.