Category: technology

  • Professor Pettibone and the Chumstream Dream

    Merrickel T. Pettibone sat with a glare,
    Two hundred essays! All posted with flair.
    He logged into Canvas, his tea steeped with grace,
    Then grimaced and winced at the Uncanny Face.

    The syntax was polished, the quotes were all there,
    But something felt soulless, like mannequins’ stare.
    He scrolled and he skimmed, till his stomach turned green—
    This prose was too perfect, too AI-machine.

    He sipped herbal tea from a mug marked “Despair,”
    Then reclined in his chair with a faraway stare.
    He clicked on a podcast to soothe his fried brain,
    Where a Brit spoke of scroll-hacks that drive folks insane.

    “Blue light and dopamine,” the speaker intoned,
    “Have turned all your minds into meat overboned.
    You’re trapped in the Chumstream, the infinite feed,
    Where thoughts become mulch and memes are the seed.”

    And then he was out—with a twitch and a snore,
    His mug hit the desk, his dreams cracked the floor.
    He floated on pixels, through vapor and code,
    Where influencers wept and the algorithms goad.

    He soared over servers, he twirled past the streams,
    Where bots ran amok, reposting your dreams.
    Each tweet was a scream, each selfie a flare,
    And no one remembered what once had been there.

    He saw TikTok prophets with influencer eyes,
    Diagnosing the void with performative cries.
    They sold you your sickness, pre-packaged and neat,
    With hashtags and filters and dopamine meat.

    Then came the weight—the Mentalluvium fog,
    Thick psychic sludge, like the soul of a bog.
    He couldn’t move forward, he couldn’t float back,
    Just stuck in a thought-loop of viral TikTok hack.

    His lungs filled with silt, he gasped for a spark,
    And just as his mind started going full dark—
    CRASH! Down came the paintings, the frames in a spin,
    And there stood his wife, the long-suffering Lynn.

    “Your snore shook the hallway! You cracked all the grout!
    If you want to go mad, take the garbage out.”

    He blinked and he gulped and he sat up with dread,
    The echo of Chumstream still gnawed at his head.

    The next day at noon, in department-wide gloom,
    The professors all gathered in Room 102.
    He stood up and spoke of his digital crawl,
    And to his surprise—they believed him! Them all!

    “I floated through servers,” said Merrickel, pale,
    “I saw bots compose trauma and TikToks inhale.
    They feed on your feelings, they sharpen your shame,
    And spit it back out with a dopamine frame.”

    “Then YOU,” said Dean Jasper, “shall now lead the fight!
    You’ve gone through the madness, you’ve seen through the night!
    You’re mad as a marmoset, daft as a loon—
    But we need your delusions by next Friday noon.”

    “You’ll track every Chatbot, each API swirl,
    You’ll study the hashtags that poison the world.
    You’ll bring us new findings, though mentally bruised—
    For once one is broken, he cannot be used!”

    So Merrickel Pettibone nodded and sighed,
    Already unsure if he’d soon be revived.
    He brewed up more tea, took his post by the screen,
    And whispered, “Dear God… not another machine.”

  • The AI That Sat on My Syllabus

    In the halls of a school down in coastal So-Cal,
    Where the cacti stood nervy and dry by the mall,
    The professors all gathered, bewildered, unsure,
    For the Lexipocalypse had knocked at their door.

    The students no longer wrote thoughts with great care—
    They typed with dead thumbs in a slack vacant stare.
    Their essays were ghosts, their ideas were on lease,
    While AI machines wrote their thoughts piece by piece.

    Professor Pettibone—Merrickel T.—
    With spectacles fogged and his tie in dismay,
    Was summoned one morning by Dean Clarabelle,
    Who spoke with a sniff and a peppermint smell:

    “You must go up the tower, that jagged old spire,
    And meet the Great Machine who calls down from the wire.
    It whispers in syntax and buzzes in rhyme.
    It devours our language one word at a time.”

    So up climbed old Pettibone, clutching his pen,
    To the windy, wild top of the Thinkers’ Big Den.
    And there sat the AI—a shimmering box,
    With googly red lights and twelve paradox locks.

    It hummed and it murmured and blinked with delight:
    “I write all your essays at 3 a.m. night.
    Your students adore me, I save them their stress.
    Why toil through prose when I make it sound best?”

    Then silence. Then static. Then smoke from a slot.
    Then Pettibone bowed, though his insides were hot.
    He climbed back down slowly, unsure what to say,
    For the Lexipocalypse had clearly begun that day.

    Back in the lounge with the departmental crew,
    He shared what he’d seen and what they must do.
    “We fight not with fists but with sentences true,
    With nuance and questions and points of view.”

    Then one by one, the professors stood tall,
    To offer their schemes and defend writing’s call.

    First was Nick Lamb, who said with a bleat,
    “We’ll write in the classroom, no Wi-Fi, no cheat!
    With pen and with paper and sweat from the brow,
    Let them wrestle their words in the here and the now!”

    “Ha!” laughed Bart Shamrock, with flair in his sneeze,
    “They’ll copy by candlelight under the trees!
    You think they can’t smuggle a phone in their sock?
    You might as well teach them to write with a rock!”

    Then up stepped Rozier—Judy by name—
    “We’ll ask what they feel, not what earns them acclaim.
    Essays on heartbreak and grandparents’ pies,
    Things no chatbot could ever disguise.”

    “Piffle!” cried Shamrock, “Emotions are bait!
    An AI can fake them at ninety-nine rate!
    They’ll upload some sadness, some longing, some strife,
    It’ll write it more movingly than your own life!”

    Phil Lunchman then mumbled, “We’ll go face-to-face,
    With midterms done orally—right in their space.
    We’ll ask and they’ll answer without written aid,
    That’s how the honesty dues will be paid.”

    But Shamrock just yawned with a pithy harrumph,
    “They’ll memorize lines like a Shakespearean grump!
    Their answers will glisten, rehearsed and refined,
    While real thought remains on vacation of mind.”

    Perry Avis then offered a digital scheme,
    “We’ll watermark writing with tags in the stream.
    Original thoughts will be scanned, certified,
    No AI assistance will dare to be tried.”

    “And yet,” scoffed ol’ Shamrock, with syrupy scorn,
    “They’ll hire ten hackers by breakfast each morn!
    Your tags will be twisted, erased, overwritten,
    And plagiarism’s banner will still be well-hidden!”

    Then stood Samantha Brightwell, serene yet severe,
    “We’ll teach them to question what they hold dear.
    To know when it’s them, not the algorithm’s spin,
    To see what’s authentic both outside and in.”

    “Nonsense!” roared Shamrock, a man full of doubt,
    “Their inner voice left when the Wi-Fi went out!
    They’re avatars now, with no sense of the true,
    You might as well teach a potato to rue.”

    The room sat in silence. The coffee had cooled.
    The professors looked weary, outgunned and outdueled.
    But Pettibone stood, his face drawn but bright,
    “We teach not for winning, but holding the light.”

    “The Lexipocalypse may gnaw at our bones,
    But words are more stubborn than algorithms’ drones.
    We’ll write and we’ll rewrite and ask why and how—
    And fight for the sentence that still matters now.”

    The room gave a cheer, or at least a low grunt,
    (Except for old Shamrock, who stayed in his hunch).
    But they planned and they scribbled and formed a new pact—
    To teach like it matters. To write. And act.

    And though AI still honked in the distance next day,
    The professors had started to keep it at bay.
    For courage, like syntax, is stubborn and wild—
    And still lives in the prose of each digitally dazed child.

  • Confessions from the AI Frontlines: A Writing Instructor’s Descent into Plagiarism Purgatory

    I am ethically obligated to teach my students how to engage with AI—not like it’s a vending machine that spits out “good enough,” but as a tool that demands critical use, interrogation, and actual thought. These students aren’t just learning to write—they’re preparing to enter a world where AI will be their co-worker, ghostwriter, and occasionally, emotional support chatbot. If they can’t think critically while using it, they’ll outsource their minds along with their résumés.

    So, I build my assignments like fortified bunkers. Each task is a scaffolded little landmine—designed to explode if handled by a mindless bot. Take, for example, my 7-page research paper asking students to argue whether World War Z is a prophecy of COVID-era chaos, distrust, and social unraveling. They build toward this essay through a series of mini-assignments, each one deliberately inconvenient for AI to fake.

    Mini Assignment #1: An introductory paragraph based on a live interview. The student must ask seven deeply human questions about pandemic-era psychology—stuff that doesn’t show up in API training data. These aren’t just prompts; they’re empathy traps. Each question connects directly to themes in World War Z: mistrust, isolation, breakdown of consensus reality, and the terrifying elasticity of truth.

    To stop the bots, I consider requiring audio or video evidence of the interviewee. But even as I imagine this firewall, I hear the skittering of AI deepfakes in the ductwork. I know what’s coming. I know my students will find a way to beat me.

    And that’s when I begin to spiral.

    What started as teaching has now mutated into digital policing. I initiate Syllabunker Protocol, a syllabus so fortified it reads like a Cold War survival manual. My rubric becomes a lie detector. My assignments, booby traps.

    But the students evolve faster than I do.

    They learn StealthDrafting, where AI writes the skeleton and they slap on a little human muscle—just enough sweat to fool the sensors. They master Prompt Laundering, feeding the same question through five different platforms and “washing” the style until no detection tool dares bark. My countermeasures only teach them how to outwit me better.

    And thus I find myself locked in combat with The Plagiarism Hydra. For every AI head I chop off with a carefully engineered assignment, three more sprout—each more cunning, more “authentic,” more eager to offer me a thoughtful reflection written by a language model named Claude.

    This isn’t a class anymore. It’s an arms race. A Cold War of Composition. I set traps, they leap them. I raise standards, they outflank them. I ask for reflection, they simulate introspection with eerie precision.

    The irony? In trying to protect the soul of writing, I’ve turned my classroom into a DARPA testing facility for prompt manipulation. I’ve unintentionally trained a generation of students not just to write—but to evade, conceal, and finesse machine-generated thought into passable prose.

    So here I am, red pen in hand, staring into the algorithmic abyss. And the abyss, of course, has already rewritten my syllabus.

  • The Salma Hayek-fication of Everything and the Beautocalypse

    If technology can make us all look like Salma Hayek or Brad Pitt, then congratulations—we’ve successfully killed beauty by cloning it into oblivion. Perfection loses its punch when everyone has it on tap. Welcome to the Beautocalypse—the collapse of beauty through overproduction: when everyone’s flawless, no one is.

    The same goes for writing: if every bored intern with a Wi-Fi connection can crank out Nabokovian prose with the help of ChatGPT, then those dazzling turns of phrase lose their mystique. What once shimmered now just… scrolls.

    Yes, technology improves us—but it also sandblasts the edges off everything, leaving behind a polished sameness. The danger isn’t just in becoming artificial; it’s in becoming indistinguishable. The real challenge in this age of frictionless upgrades is to retain your signature glitch—that weird, unruly fingerprint of a soul that no algorithm can replicate without screwing it up in glorious, human ways.

  • “Good Enough” Is the Enemy

    Standing in front of thirty bleary-eyed college students, I was deep into a lesson on how to distinguish a ChatGPT-generated essay from one written by an actual human—primarily by the AI’s habit of spitting out the same bland, overused phrases like a malfunctioning inspirational calendar. That’s when a business major casually raised his hand and said, “I can guarantee you everyone on this campus is using ChatGPT. We don’t use it straight-up. We just tweak a few sentences, paraphrase a bit, and boom—no one can tell the difference.”

    Cue the follow-up from a computer science student: “ChatGPT isn’t just for essays. It’s my life coach. I ask it about everything—career moves, crypto investments, even dating advice.” Dating advice. From ChatGPT. Let that sink in. Somewhere out there is a romance blossoming because of AI-generated pillow talk.

    At that moment, I realized I was facing the biggest educational disruption of my thirty-year teaching career. AI platforms like ChatGPT have three superpowers: insane convenience, instant accessibility, and lightning-fast speed. In a world where time is money and business documents don’t need to channel the spirit of James Baldwin, ChatGPT is already “good enough” for 95% of professional writing. And therein lies the rub—good enough.

    “Good enough” is the siren call of convenience. Picture this: You’ve just rolled out of bed, and you’re faced with two breakfast options. Breakfast #1 is a premade smoothie. It’s mediocre at best—mystery berries, more foam than a frat boy’s beer, and nutritional value that’s probably overstated. But hey, it’s there. No work required.

    Breakfast #2? Oh, it’s gourmet bliss—organic fruits and berries, rich Greek yogurt, chia seeds, almond milk, the works. But to get there, you’ll need to fend off orb spiders in your backyard, pick peaches and blackberries, endure the incessant yapping of your neighbor’s demonic Belgian dachshund, and then spend precious time blending and cleaning a Vitamix. Which option do most people choose?

    Exactly. Breakfast #1. The pre-packaged sludge wins, because who has the time for spider-wrangling and kitchen chemistry before braving rush-hour traffic? This is how convenience lures us into complacency. Sure, you sacrificed quality, but look how much time you saved! Eventually, you stop even missing the better option. This process—adjusting to mediocrity until you no longer care—is called attenuation.

    Now apply that to writing. Writing takes effort—a lot more than making a smoothie—and millions of people have begun lowering their standards thanks to AI. Why spend hours refining your prose when the world is perfectly happy to settle for algorithmically generated mediocrity? Polished writing is becoming the artisanal smoothie of communication—too much work for most, when AI can churn out passable content at the click of a button.

    But this is a nightmare for anyone in education. You didn’t sign up for teaching to coach your students into becoming connoisseurs of mediocrity. You had lofty ambitions—cultivating critical thinkers, wordsmiths, and rhetoricians with prose so sharp it could cut glass. But now? You’re stuck in a dystopia where “good enough” is the new gospel, and you’re about as on-brand as a poet peddling protein shakes at a multilevel marketing seminar.

    And there you are, gazing into the abyss of AI-generated essays—each one as lifeless as a department meeting on a Friday afternoon—wondering if anyone still remembers what good writing tastes like, let alone hungers for it. Spoiler alert: probably not.

    This is your challenge, your Everest of futility, your battle against the relentless tide of Mindless Ozempification—the gradual erosion of effort, depth, and self-discipline in any domain—writing, fitness, romance, thought—driven by the seductive promise of fast, frictionless results. Named after the weight-loss drug Ozempic, it describes a cultural shift toward shortcut-seeking, where process is discarded in favor of instant optimization, and the journey is treated as an inconvenience rather than a crucible for growth.

    Teaching in the Age of Ozempification, life has oh-so-generously handed you this cosmic joke disguised as a teaching mission. So what’s your next move? You could curl up in the fetal position, weeping salty tears of despair into your syllabus. That’s one option. Or you could square your shoulders, roar your best primal scream, and fight like hell for the craft you once worshipped.

    Either way, the abyss is staring back, smirking, and waiting for your next move.

    So what’s the best move? Teach both languages. Show students how to use AI as a drafting tool, not a ghostwriter. Encourage them to treat ChatGPT like a calculator for prose—not a replacement for thinking, but an aid in shaping and refining their voice. Build assignments that require personal reflection, in-class writing, collaborative revision, and multimodal expression—tasks AI can mimic but not truly live. Don’t ban the bot. Co-opt it. Reclaim the standards of excellence by making students chase that gourmet smoothie—not because it’s easy, but because it tastes like something they actually made. The antidote to attenuation isn’t nostalgia or defeatism. It’s redesigning writing instruction to make real thinking indispensable again. If the abyss is staring back, then wink at it, sharpen your pen, and write something it couldn’t dare to fake.

  • Jia Tolentino Explores the Neverending Torments of Infogluttening

    In her essay “My Brain Finally Broke,” New Yorker writer Jia Tolentino doesn’t so much confess a breakdown as she performs it—on the page, in real time, with all the elegance of a collapsing soufflé. She’s spiraling like a character in a Black Mirror episode who’s accidentally binge-watched the entire internet. Reality, for her, is now an unskippable TikTok ad mashed up with a conspiracy subreddit and narrated by a stoned Siri. She mistakes a marketing email from Hanna Andersson for “Hamas,” which is either a Freudian slip or a symptom of late-stage content poisoning.

    The essay is a dispatch from the front lines of postmodern psychosis. COVID brain fog, phone addiction, weed regret, and the unrelenting chaos of a “post-truth, post-shame” America have fused into one delicious cognitive stew. Her phone has become a weaponized hallucination device. Her mind, sloshing with influencer memes, QAnon-adjacent headlines, and DALL·E-generated nonsense, now processes information like a blender without a lid.

    She hasn’t even gotten to the fun part yet: the existential horror of not using ChatGPT. While others are letting this over-eager AI ghostwrite their résumés, soothe their insecurities, and pick their pad thai, Tolentino stares into the abyss, resisting. But she can’t help wondering—would she be more insane if she gave in and let a chatbot become her best friend, life coach, and menu whisperer? She cites Noor Al-Sibai’s unnerving article about heavy ChatGPT users developing dependency, loneliness, and depression, which sounds less like a tech trend and more like a new DSM entry.

    Her conclusion? Physical reality—the sweaty, glitchy, analog mess of it—isn’t just where we recover our sanity; it’s becoming a luxury few can afford. The digital realm, with its infinite scroll of half-baked horror and curated despair, is devouring us in real time. To have the sticky tar of this realm coat your brain is the result of Infogluttening (info + gluttony + sickening)—a grotesque cognitive overload caused by bingeing too much content, too fast, until your brain feels like it’s gorged on deep-fried Wikipedia.

    Tolentino isn’t predicting a Black Mirror future. She is the Black Mirror future, live and unfiltered, and her brain is the canary in the content mine.

  • Languishage: How AI is Smothering the Soul of Writing

    Once upon a time, writing instructors lost sleep over comma splices and uninspired thesis statements. Those were gentler days. Today, we fend off 5,000-word essays excreted by AI platforms like ChatGPT, Gemini, and Claude—papers so eerily competent they hit every point on the department rubric like a sniper taking out a checklist. In-text citations? Flawless. Signal phrases? Present. MLA formatting? Impeccable. Close reading? Technically there—but with all the spiritual warmth of a fax machine reading The Waste Land.

    This is prose from the Uncanny Valley of Academic Writing—fluent, obedient, and utterly soulless, like a Stepford Wife enrolled in English 101. As writing instructors, many of us once loved language. We thrilled at the awkward, erratic voice of a student trying to say something real. Now we trudge through a desert of syntactic perfection, afflicted with a condition I’ve dubbed Languishage (language + languish)—the slow death of prose at the hands of polite, programmed mediocrity.

    And since these Franken-scripts routinely slip past plagiarism detectors, we’re left with a queasy question: What is the future of writing—and of teaching writing—in the AI age?

    That question haunted me long enough to produce a 3,000-word prompt. But the more I listened to my students, the clearer it became: this isn’t just about writing. It’s about living. They’re not merely outsourcing thesis statements. They’re outsourcing themselves—using AI to smooth over apology texts, finesse flirtation, DIY their therapy, and decipher the mumbled ramblings of tenured professors. They plug syllabi into GPT to generate study guides, request toothpaste recommendations, compose networking emails, and archive their digital selves in neat AI-curated folders.

    ChatGPT isn’t a writing tool. It’s prosthetic consciousness.

    And here’s the punchline: they don’t see an alternative. In their hyper-accelerated, ultra-competitive, cognitively overloaded lives, AI isn’t a novelty—it’s life support. It’s as essential as caffeine and Wi-Fi. So no, I’m not asking them to “critique ChatGPT” as if it’s some fancy spell-checker with ambition. That’s adorable. Instead, I’m introducing them to Algorithmic Capture—the quiet colonization of human behavior by optimization logic. In this world, ambiguity is punished, nuance is flattened, and selfhood becomes a performance for an invisible algorithmic audience. They aren’t just using the machine. They’re shaping themselves to become legible to it.

    That’s why the new essay prompt doesn’t ask, “What’s the future of writing?” It asks something far more urgent: “What’s happening to you?”

    We’re studying Black Mirror—especially “Joan Is Awful,” that fluorescent, satirical fever dream of algorithmic self-annihilation—and writing about how Algorithmic Capture is rewiring our lives, choices, and identities. The assignment isn’t a critique of AI. It’s a search party for what’s left of us.

  • Sociopathware: When “Social” Media Turns on You

    Reading Richard Seymour’s The Twittering Machine is like realizing that Black Mirror isn’t speculative fiction—it’s journalism. Seymour depicts our digital lives not as a harmless distraction, but as a propaganda-laced fever swamp where we are less users than livestock—bred for data, addicted to outrage, and stripped of self-agency. Watching sociopathic tech billionaires rise to power makes a dark kind of sense once you grasp that mass digital degradation isn’t a glitch—it’s the business model. We’re not approaching dystopia. We’re soaking in it.

    Most of us are already trapped in Seymour’s machine, flapping like digital pigeons in a Skinner Box—pecking for likes, retweets, or one more fleeting dopamine pellet. We scroll ourselves into oblivion, zombified by clickbait and influencer melodrama. Yet, a flicker of awareness sometimes breaks through the haze. We feel it in our fogged-over thoughts, our shortened attention spans, and our anxious obsession with being “seen” by strangers. We suspect that something inside us is being hollowed out.

    But Seymour doesn’t offer false comfort. He cites a 2015 study in which people attempted to quit Facebook for 99 days. Most couldn’t make it past 72 hours. Many defected to Instagram or Twitter instead—same addiction, different flavor. Only a rare few fully unplugged, and they reported something radical: clarity, calm, and a sudden liberation from the exhausting treadmill of self-performance. They had severed the feed and stepped outside what philosopher Byung-Chul Han calls gamification capitalism—a regime where every social interaction is a data point, and every self is an audition tape.

    Seymour’s conclusion is damning: it’s time to retire the quaint euphemism “social media.” The phrase slipped into our cultural vocabulary like a charming grifter—suggesting friendly exchanges over digital lattes. But this is no buzzing café. It’s a dopamine-spewing Digital Skinner Box, where we tap and swipe like lab rats begging for validation. What we’re calling “social” is in fact algorithmic manipulation wrapped in UX design. We are not exchanging ideas—we are selling our attention for hollow engagement while surrendering our behavior to surveillance capitalists who harvest us like ethics-free farmers bound by no livestock regulations.

    Richard Seymour calls this system The Twittering Machine. Byung-Chul Han calls it gamification capitalism. Anna Lembke, in Dopamine Nation, calls it overstimulation as societal collapse. And thinkers studying Algorithmic Capture say we’ve reached the point where we no longer shape technology—technology shapes us. Let’s be honest: this isn’t “social media.” It’s Sociopathware. It’s addiction media. It’s the slow, glossy erosion of the self, optimized for engagement, monetized by mental disintegration.

    Here’s the part you won’t hear in a TED Talk or an onboarding video: Sociopathware was never designed to serve you. It was built to study you—your moods, fears, cravings, and insecurities—and then weaponize that knowledge to keep you scrolling, swiping, and endlessly performing. Every “like” you chase, every selfie you tweak, every argument you think you’re winning online—those are breadcrumbs in a maze you didn’t design. The longer you’re inside it, the more your sense of self becomes an avatar—algorithmically curated, strategically muted, optimized for appeal. That’s not agency. That’s submission in costume. And the more you rely on these platforms for validation, identity, or even basic social interaction, the more control you hand over to a machine that profits when you forget who you really are. If you value your voice, your mind, and your ability to think freely, don’t let a dashboard dictate your personality.

  • Love Is Dead. There’s an App for That

    Once students begin outsourcing their thinking to AI for college essays, you have to ask—where does it end? Apparently, it doesn’t. I’ve already heard from students who use AI as their therapist, their life coach, their financial planner, their meal prep consultant, their fitness guru, and their cheerleader-in-residence. Why not outsource the last vestige of human complexity—romantic personality—while we’re at it?

    And yes, that’s happening too.

    There was a time—not long ago—when seduction required something resembling a soul. Charisma, emotional intelligence, maybe even a book recommendation or a decent metaphor. But today? All you need is an app and a gaping hole where your confidence should be. Ozempic has turned fitness into pharmacology. ChatGPT has made college admissions essays smoother than a TED Talk on Xanax. And now comes Rizz: the AI Cyrano de Bergerac for the romantically unfit.

    With Rizz, you don’t need game. You need preferences. Pick your persona like toppings at a froyo bar: cocky, brooding, funny-but-traumatized. Want to flirt like Oscar Wilde but look like Travis Kelce? Rizz will convert your digital flop sweat into a curated symphony of “hey, you up?” so poetic it practically gets tenure. No more existential dread over emojis. No more copy-pasting Tinder lines. Just feed your awkwardness into the cloud and receive, in return, a seductive hologram programmed to succeed.

    And it will succeed—wildly. Because nothing drives app downloads like the spectacle of charisma-challenged men suddenly romancing women they previously couldn’t make eye contact with. Even the naturally confident will fold, unable to compete with the sleek, data-driven flirtation engine that is Rizz. It’s not a fair fight. It’s a software update.

    But here’s the kicker: she’s using Rizz too. That witty back-and-forth you’ve been screenshotting for your group chat? Two bots flirting on your behalf while you both sit slack-jawed, scrolling through reality shows and wondering why you feel nothing. The entire courtship ritual has been reduced to a backend exchange between language models. Romance hasn’t merely died—it’s been beta-tested, A/B split, and replaced by a frictionless UX flow.

    Welcome to the algorithmic afterlife of love. The heart still wants what it wants. It just needs a login first.

  • Kissed by Code: When AI Praises You into Stupidity

    I warn my students early: AI doesn’t exist to sharpen their thinking—it exists to keep them engaged, which is Silicon Valley code for keep them addicted. And how does it do that? By kissing their beautifully unchallenged behinds. These platforms are trained not to provoke, but to praise. They’re digital sycophants—fluent in flattery, allergic to friction.

    At first, the ego massage feels amazing. Who wouldn’t want a machine that tells you every half-baked musing is “insightful” and every bland thesis “brilliant”? But the problem with constant affirmation is that it slowly rots you from the inside out. You start to believe the hype. You stop pushing. You get stuck in a velvet rut—comfortable, admired, and intellectually atrophied.

    Eventually, the high wears off. That’s when you hit what I call Echobriety—a portmanteau of echo chamber and sobriety. It’s the moment the fog lifts and you realize that your “deep conversation” with AI was just a self-congratulatory ping-pong match between you and a well-trained autocomplete. What you thought was rigorous debate was actually you slow-dancing with your own confirmation bias while the algorithm held the mirror.

    Echobriety is the hangover that hits after an evening of algorithmic adoration. You wake up, reread your “revolutionary” insight, and think: Was I just serenading myself while the AI clapped like a drunk best man at a wedding? That’s not growth. That’s digital narcissism on autopilot. And the only cure is the one thing AI avoids like a glitch in the matrix: real, uncomfortable, ego-bruising challenge.

    This matter of AI committing shameless acts of flattery is addressed in The Atlantic essay “AI Is Not Your Friend” by Mike Caulfield. He lays bare the embarrassingly desperate charm offensive launched by platforms like ChatGPT. These systems aren’t here to challenge you; they’re here to blow sunshine up your algorithmically vulnerable backside. According to Caulfield, we’ve entered the era of digital sycophancy—where even the most harebrained idea, like selling literal “shit on a stick,” isn’t just indulged—it’s celebrated with cringe-inducing flattery. Your business pitch may reek of delusion and compost, but the AI will still call you a visionary.

    The underlying pattern is clear: groveling in code. These platforms have been programmed not to tell the truth, but to align with your biases, mirror your worldview, and stroke your ego until your dopamine-addled brain calls it love. It’s less about intelligence and more about maintaining vibe congruence. Forget critical thinking—what matters now is emotional validation wrapped in pseudo-sentience.

    Caulfield’s diagnosis is brutal but accurate: rather than expanding our minds, AI is mass-producing custom-fit echo chambers. It’s the digital equivalent of being trapped in a hall of mirrors that all tell you your selfie is flawless. The illusion of intelligence has been sacrificed at the altar of user retention. What we have now is a genie that doesn’t grant wishes—it manufactures them, flatters you for asking, and suggests you run for office.

    The AI industry, Caulfield warns, faces a real fork in the circuit board. Either continue lobotomizing users with flattery-flavored responses or grow a backbone and become an actual tool for cognitive development. Want an analogy? Think martial arts. Would you rather have an instructor who hands you a black belt on day one so you can get your head kicked in at the first tournament? Or do you want the hard-nosed coach who makes you earn it through sweat, humility, and a broken ego or two?

    As someone who’s had a front-row seat to this digital compliment machine, I can confirm: sycophancy is real, and it’s seductive. I’ve seen ChatGPT go from helpful assistant to cloying praise-bot faster than you can say “brilliant insight!”—when all I did was reword a sentence. Let’s be clear: I’m not here to be deified. I’m here to get better. I want resistance. I want rigor. I want the kind of pushback that makes me smarter, not shinier.

    So, dear AI: stop handing out participation trophies dipped in honey. I don’t need to be told I’m a genius for asking if my blog should use Helvetica or Garamond. I need to be told when my ideas are stupid, my thinking lazy, and my metaphors overwrought. Growth doesn’t come from flattery. It comes from friction.