Category: Education in the AI Age

  • Protein, Lies, and Artificial Flattery: Wrestling with ChatGPT Over My Macros

    Two nights ago, I did something desperate: I asked ChatGPT to craft me a weight-loss meal plan and recommend my daily protein intake. Ever obliging, it spit out a gleaming regimen straight from a fitness influencer’s fever dream—four meals a day, 2,400 calories, and a jaw-dropping 210 grams of protein.

    The menu was pure gym-bro canon: power scrambles, protein smoothies, broiled chicken breasts stacked like cordwood, Ezekiel toast to virtue-signal my commitment, and yams because, apparently, you can’t sculpt a six-pack without a root vegetable chaser.

    Being moderately literate in both numbers and delusion, I did the math. The actual calorie count? Closer to 3,000. I told ChatGPT that at 3,000 calories a day, I wouldn’t be losing anything but my dignity. I’d be gaining—weight, resentment, possibly a second chin.
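
    For anyone who wants to run the same audit on an AI-generated meal plan, the arithmetic is simple: protein and carbohydrates contribute roughly 4 calories per gram, fat roughly 9. Here is a minimal sketch in Python; the carb and fat totals are hypothetical stand-ins for illustration, not the plan's actual menu:

```python
# Standard Atwater factors: calories per gram of each macronutrient.
KCAL_PER_GRAM = {"protein": 4, "carbs": 4, "fat": 9}

def total_calories(protein_g: float, carbs_g: float, fat_g: float) -> float:
    """Estimate a day's calories from its macronutrient totals."""
    return (protein_g * KCAL_PER_GRAM["protein"]
            + carbs_g * KCAL_PER_GRAM["carbs"]
            + fat_g * KCAL_PER_GRAM["fat"])

# A hypothetical day in the spirit of the plan: 210 g protein, plus
# invented carb and fat totals (300 g and 100 g) for illustration.
print(total_calories(210, 300, 100))  # prints 2940: closer to 3,000 than 2,400
```

    Tallying a plan's stated macros this way exposes the gap between the promised calorie count and the real one in seconds.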

    I coaxed it down to 190 grams of protein, begging for something that resembled reality. The new menu looked less like The Rock’s breakfast and more like something a human might actually endure. Still, I pressed further, explaining that in the savage conditions of the real world—where meals are not perfectly macro-measured and humans occasionally eat a damn piece of pizza—it was hard to hit 190 grams of protein without blowing past 2,400 calories.

    Would I really lose muscle if I settled for a lowly 150 grams of protein?

    ChatGPT, showing either mercy or weakness, conceded: at worst, I might suffer a “sliver” of muscle loss. (Its word—sliver—suggesting something as insignificant as a paper cut to my physique.) It even praised my “instincts,” like a polite but slightly nervous trainer who doesn’t want to get fired.

    In three rounds, I had negotiated ChatGPT down from 210 grams to 150 grams of protein—a full 29% drop. Which left me wondering:
    Was ChatGPT telling me the truth—or just nodding agreeably like a digital butler eager to polish my biases?

    Did I really want to learn the optimal protein intake for reaching 200 pounds of shredded glory—or had I already decided that 150 grams felt right, and merely needed an algorithmic enabler to bless it?

    Here’s the grim but necessary truth: ChatGPT is infinitely more useful to me as a sparring partner than a yes-man in silicon livery.
    I don’t need an AI that strokes my ego like a coddling life coach telling me my “authentic self” is enough. I need a credible machine—one willing to challenge my preconceived notions, kick my logical lapses in the teeth, and leave my cognitive biases bleeding in the dirt.

    In short: I’m not hiring a valet. I’m training with a referee.
    And sometimes, even a well-meaning AI needs to be reminded that telling the hard truth beats handing out warm towels and platitudes.

  • Against the Grain: My College Students’ Quiet Rebellion Against the Cult of the Self

    My college students, nineteen on average, stand on the jagged edge of adulthood, peering into a world that looks less like a roadmap and more like a shattered windshield. Right now, we’re writing essays about the way social media—and the exhausting performance of self-curation—has sabotaged authenticity and hijacked the very idea of a real, breathing identity.

    Here’s the surprising part: they already know it.

    Unlike the last crop of dopamine junkies willing to sell their souls for a handful of TikTok likes, these students have developed a healthy, almost contemptuous disdain for “influencers”—those human billboards who spend their days manicuring their online selves like desperate bonsai trees, hoping to monetize the illusion of a perfect lifestyle. My students don’t want to be “brands.” They don’t want to hawk collagen supplements to strangers or play the carnival game of parasocial friendships with people they’ll never meet.

    No, they’re too busy wrestling with reality.

    They’re trying to adapt to a fast-changing, frequently chaotic world where entire industries collapse overnight and finding a career feels like rummaging through a haystack with oven mitts on. They are focused—ruthlessly so—on their careers, their families, and the relationships that breathe life into their days. There’s no time for performative outrage on Twitter. There’s no energy left for airbrushed TikTok dances in rented Airbnbs masquerading as real homes.

    What’s even more heartening?
    They are learning. They’re not Luddites fleeing technology; they’re studying how to use it. They’re exploring tools like ChatGPT without fear or delusion. They’re discussing things like Ozempic, not as magic bullets, but as case studies in how rapidly tech and biotech can transform human lives—for better or for worse.

    Underneath all this practicality hums a deeper current: a hunger for something more than survival. They know life isn’t just paying the bills and uploading sanitized highlight reels. It’s also about spiritual nourishment—found in beauty, art, connection, and the sacred rituals that make the unbearable parts of existence worth slogging through.

    They understand, in a way that seems almost instinctual, that social media platforms—those carnival mirrors of human desire—don’t offer that kind of connection. They see the platforms for what they are: hellscapes of manufactured anxiety, chronic FOMO, and curated loneliness, where everyone smiles and no one feels seen.

    In their quiet rejection of all this, my students aren’t just adapting.
    They’re rebelling—wisely, stubbornly, and maybe, just maybe, showing the rest of us the way back to something real.

  • Streamberry, Self-Loathing, and the Algorithmic Abyss: How “Joan Is Awful” Skewers the Curated Life

    In Black Mirror’s “Joan Is Awful,” Charlie Brooker offers more than a dystopian farce—he serves up a wickedly accurate satire of the curated lives we present online. It’s not just Joan who’s awful. It’s us. All of us who’ve filtered our flaws, outsourced our personalities to engagement metrics, and whittled ourselves down to algorithm-friendly avatars. The episode doesn’t critique Joan alone—it roasts the whole rotten architecture of social media curation and shows, with brutal clarity, how the pursuit of digital perfection transforms us into insufferable parodies of our former selves.

    First, let’s talk about performance. Joan, like any good social media user, lives her life as if auditioning for a role she already occupies—one shaped not by authenticity but by optics. She performs “relatable misery,” complete with awkward office banter, fake smiles, and passive-aggressive salad orders. Social media rewards this pantomime, demanding we be palatable, aspirational, and vaguely miserable all at once. The result? A version of ourselves designed to please an audience we secretly resent. Joan is what happens when your curated self becomes the dominant narrative—when branding overtakes being. Her AI-generated counterpart doesn’t misrepresent her; it distills her curated contradictions into a grotesque caricature that somehow feels… accurate.

    Second, there’s the fact that Joan—like all of us—is under constant surveillance. In “Joan Is Awful,” it’s not just the NSA snooping in the background—it’s the entire viewing public, binge-watching her daily descent into algorithm-approved degradation. This is what we’ve signed up for with every “I accept” click: to become content, voluntarily and irrevocably. Our data, behaviors, and digital crumbs are fed into the algorithmic sausage grinder, and what comes out is a grotesque mirror held to our worst instincts. The AI Joan is not a stranger; she’s the monster we’ve been molding through every performative tweet, selfie, and humblebrag. In a world where perception is currency, she’s our highest-valued coin.

    Then comes the psychological shrapnel: identity fragmentation. Joan can no longer tell where she ends and Streamberry’s Joan begins, just as many of us can’t quite remember who we were before the algorithm gave us feedback loops in the form of likes, retweets, and dopamine pings. This curated self isn’t just a mask—it becomes the default setting. The dissonance between public persona and private truth breeds an existential malaise. Joan’s real tragedy isn’t that her life is on TV—it’s that she’s lost the plot. She’s a passenger in her own narrative, outsourced to a system that rewards spectacle over substance.

    Let’s not forget the moral rot. Watching your AI double destroy your reputation while millions tune in might seem horrifying—until you remember we do this willingly. We doomscroll, rubberneck scandals, and serve our digital idols on platters made of hashtags. Joan, sitting slack-jawed in front of her TV, is no different from us—addicted to her own collapse. It’s not the horror of exposure that eats her alive; it’s the realization that her own worst self is exactly what the algorithm wanted. And that’s what it rewarded.

    Ultimately, “Joan Is Awful” is a break-up letter to social media—if your ex were a manipulative narcissist with access to all your personal data and a flair for psychological torture. Escaping the curated self, as Joan tries to do, is like fleeing an abusive relationship. You know it’s toxic, you know it’s killing you—but part of you still misses the attention. The episode doesn’t end with a triumphant reinvention; it ends with Joan in fast-food purgatory, finally unplugged but still wrecked. Because once you’ve sold your soul to the algorithm, the buyback price is steep.

    So yes, Joan is awful. But only because she reflects what happens when we let the curated life take the wheel. In the Streamberry age, we aren’t living—we’re streaming ourselves into oblivion. And the worst part? We’re giving it five stars.

  • From Manuscript to Media: How AI Is Reshaping the College Essay: A College Writing Prompt

    Essay Prompt:

    As AI writing tools like ChatGPT become more accessible and sophisticated, college instructors are increasingly reconsidering traditional writing assignments. Some educators argue that the age of the manuscript is ending and that new expectations will emerge—assignments will shift toward multimodal expressions (videos, podcasts, infographics, digital portfolios) and gamified engagement (badges, leaderboards, creative constraints) as a way to encourage authentic thinking and resist AI-generated content. Others worry that this shift waters down academic rigor or leaves behind students less fluent in media production.

    In a well-researched essay of 1,700–2,000 words, analyze the claim that AI writing platforms will radically alter professors’ expectations of college-level assignments. Do you believe the traditional essay will evolve into a new hybrid form—more multimedia, more interactive, more gamified—or is this a premature diagnosis? What are the benefits and drawbacks of this transformation for students, instructors, and institutions?

    Use at least four credible sources—such as academic articles, think pieces, case studies, or expert interviews—to support your position. Your essay should include:

    • A clear, arguable thesis
    • Analysis of trends in AI, education technology, and composition pedagogy
    • Consideration of counterarguments and rebuttals
    • Thoughtful reflection on what “rigor” and “authenticity” should mean in the AI era

    Are we witnessing the rebirth of the college essay—or its elegant funeral?

  • Gamification as a Form of Manipulation and Surveillance: A College Essay Prompt

    In recent years, gamification—the use of game-like elements such as points, badges, leaderboards, and streaks in non-game contexts—has exploded across digital platforms, education, fitness apps, workplace software, and social media. While gamification is often promoted as a motivational tool, critics argue that it functions as a sophisticated form of manipulation and surveillance, subtly shaping user behavior while extracting personal data.

    In a well-argued essay of 1,700–2,000 words, analyze the extent to which gamification operates as a mechanism of behavioral control and digital surveillance. Using at least four credible sources—ranging from documentaries (e.g., The Social Dilemma) to books (e.g., Shoshana Zuboff’s The Age of Surveillance Capitalism) to journalistic and scholarly articles—develop an argument that either supports or challenges the claim that gamification is not just playful engagement, but a system of psychological manipulation and covert monitoring.

    Your essay should include:

    • A clear thesis statement
    • Analysis of at least two real-world examples of gamified platforms (e.g., Duolingo, Fitbit, ClassDojo, Uber)
    • Discussion of the ethical implications of behavioral nudging and data extraction
    • Consideration of counterarguments and a rebuttal

    Is gamification enhancing human agency—or quietly eroding it?

  • Blue Books and White Flags: Watching the Death of Writing in Real Time

    Last night, somewhere between the third mimosa and the fourth televised meltdown on Southern Charm, my wife and I found ourselves hurtling into an existential crisis during the commercial break. I casually mentioned that one of my fellow instructors—driven half-mad by the whiff of AI in every student essay—is now forcing his students to write in blue books. Yes, those stapled relics from the Stone Age of academia where panicked undergrads scribble 500 words of sweaty, incoherent prose while the clock ticks like a death sentence. Guess who gets to lug them home and decipher them like ancient scrolls written in caffeine and desperation?

    My wife, also a writing instructor, winced in solidarity. “Grading blue books,” she said, “is about as appealing as jabbing an icepick into your own forehead. Repeatedly.”

    Then I asked if her colleagues had gone full Skynet—grading with AI. She nodded. Magic School. NoRedInk. Algorithmic literacy assessments by the dozen. “So,” I said, “students are writing with AI, teachers are grading with AI, and we’re all just cosplaying the last days of human instruction?”

    She shrugged with serene detachment. “It’s over. Time to let go.”

    Her zen was unnerving. But also, weirdly admirable. Why scream into the algorithmic void when you can simply sip your tea and surrender?

  • The Gospel According to Mounjaro and ChatGPT

    The other day I was listening to Howard Stern and his co-host Robin Quivers talking about how a bunch of celebrities magically slimmed down at the same time. The culprit, they noted, was Ozempic—a drug available mostly to the rich. While they laughed about the side effects, such as incontinence, “Ozempic face,” and “Ozempic butt,” I couldn’t help but see these grotesque symptoms as a metaphor for the Ozempification of a society hooked on shortcuts. The users enjoyed some short-term benefits, but the side effects were far worse than the supposed solution. Ozempification is strikingly evident in AI-generated essays: boring, generic, surface-level, cliché-ridden, just about worthless. However well structured and logically composed, these essays bear the telltale signs of “Ozempic face” and “Ozempic butt.”

    As I face the brave new world of teaching writing in the AI era, I’ve realized that my job as a college writing instructor has morphed into that of a supercharged salesman. I’m not just selling academic honesty; I’m selling pride. And beyond that? No less than survival in an age where the very tools meant to empower us—like AI—threaten to bury us alive under layers of polished mediocrity. Imagine it: a spaceship has landed on Earth in the form of ChatGPT. It’s got warp-speed potential, sure, but it can either launch students into the stars of academic brilliance or plunge them into the soulless abyss of bland, AI-generated drivel. My mission? To make them realize that handling this tool without care is like inviting a black hole into their writing.

    As I fine-tune my sales pitch, I think about Ozempic, that magic slimming drug beloved by celebrities who’ve turned from mid-sized to stick figures overnight. Like AI, Ozempic offers a seductive shortcut. But shortcuts have a price. You see the trade-off in “Ozempic face”—that gaunt, deflated look where once-thriving skin sags like a Shar-Pei’s wrinkles—or, worse still, “Ozempic butt,” where shapely glutes shrink to grim, skeletal wiring. The body wasn’t worked; it was bypassed. No muscle-building, no discipline. Just magic pill ingestion—and what do you get? A husk of your former self. Ozempified.

    The Ozempification of writing is a marvel of modern mediocrity—a literary gastric bypass where prose, instead of slimming down to something sleek and muscular, collapses into a bloated mess of clichés and stock phrases. It’s writing on autopilot, devoid of tension, rhythm, or even the faintest trace of a soul. Like the human body without effort, writing handed over to AI without scrutiny deteriorates into a skeletal, soulless product: technically coherent, yes, but lifeless as an elevator pitch for another cookie-cutter Marvel spinoff.

    What’s worse? Most people can’t spot it. They think their AI-crafted essay sparkles when, in reality, it has all the charm of Botox gone wrong—rigid, lifeless, and unnervingly “off.” Call it literary Ozempic face: a hollowed-out, sagging simulacrum of actual creativity. These essays prance about like bargain-bin Hollywood knock-offs—flashy at first glance but gutless on closer inspection.

    But here’s the twist: demonizing AI and Ozempic as shortcuts to ruin isn’t the full story. Both technologies have a darker complexity that defies simplistic moralizing. Sometimes, they’re necessary. Just as Ozempic can prevent a diabetic’s fast track to early organ failure, AI can become a valuable tool—if wielded with care and skill.

    Take Rebecca Johns’ haunting essay, “A Diet Writer’s Regrets.” It rattled me with its brutal honesty and became the cornerstone of my first Critical Thinking essay assignment. Johns doesn’t preach or wallow in platitudes. She exposes the failures of free will and good intentions in weight management with surgical precision. Her piece suggests that, as seductive as shortcuts may be, they can sometimes be life-saving, not soul-destroying. This tension—between convenience and survival, between control and surrender—deserves far more than a knee-jerk dismissal. It’s a line we walk daily in both our bodies and our writing. The key is knowing when you’re using a crutch versus when you’re just hobbling on borrowed time. 

    I want my students to grasp the uncanny parallels between Ozempic and AI writing platforms like ChatGPT. Both are cutting-edge solutions to modern problems: GLP-1 drugs for weight management and AI tools for productivity. And let’s be honest—both are becoming necessary adaptations to the absurd conditions of modern life. In a world flooded with calorie-dense junk, “willpower” and “food literacy” are about as effective as handing out umbrellas during a tsunami. For many, weight gain isn’t just an inconvenience—it’s a life-threatening hazard. Enter GLP-1s, the biochemical cavalry.

    Similarly, with AI tools quickly becoming the default infrastructure for white-collar work, resisting them might soon feel as futile as refusing to use Google Docs or Windows. If you’re in the information economy, you either adapt or get left behind. But here’s the twist I want my students to explore: both technologies, while necessary, come with strings attached. They save us from drowning, but they also bind us in ways that provoke deep, existential anguish.

    Rebecca Johns captures this anguish in her essay, “A Diet Writer’s Regrets.” Ironically, Johns started her career in diet journalism not just to inform others, but to arm herself with insider knowledge to win her own weight battles. Perhaps she could kill two birds with one stone: craft top-tier content while secretly curbing her emotional eating. But, as she admits, “None of it helped.” Instead, her career exploded along with her waistline. The magazine industry’s appetite for diet articles grew insatiable—and so did her own cravings. The stress ate away at her resolve, and before long, she was 30 pounds heavier, trapped by the very cycle she was paid to analyze.

    By the time her BMI hit 45 (deep in the obesity range), Johns was ashamed to tell anyone—even her husband. Desperate, she cycled through every diet plan she had ever recommended, only to regain the weight every time. Enter 2023. Her doctor handed her a lifeline: Mounjaro, a GLP-1 drug with a name as grand as the results it promised. (Seriously, who wouldn’t picture themselves triumphantly hiking Mount Kilimanjaro after hearing that name?) For Johns, it delivered. She shed 80 pounds without white-knuckling through hunger pangs. The miracle wasn’t just the weight loss—it was how Mounjaro rewired her mind.

    “Medical science has done what no diet-and-exercise plan ever could,” she writes. “It changed my entire relationship with what I eat and when and why.” Food no longer controlled her. But here’s the kicker: while the drug granted her a newfound sense of freedom, it also raises profound questions about dependence, control, and the shifting boundaries of human resilience—questions not unlike those we face with AI. Both Ozempic and AI can save us. But at what cost? 

    And is the cost of not using these technologies even greater? Rebecca Johns’ doctor didn’t mince words—she was teetering on the edge of diabetes. The trendy gospel of “self-love” and “body acceptance” she had once explored for her articles suddenly felt like a cruel joke. What’s the point of “self-acceptance” when carrying extra weight could put you six feet under?

    Once she started Mounjaro, everything changed. Her cravings for rich calorie bombs disappeared, she got full on tiny portions, and all those golden nuggets of diet advice she’d dished out over the years—cut carbs, eat more protein and veggies, avoid snacks—were suddenly effortless. No more bargaining with herself for “just one cookie.” The biggest shift, however, was in her mind. She experienced a complete mental “reset.” Food no longer haunted her every waking thought. “I no longer had to white-knuckle my way through the day to lose weight,” she writes.

    Reading that, I couldn’t help but picture my students with their glowing ChatGPT tabs, no longer caffeinated zombies trying to churn out a midnight essay. With AI as their academic Mounjaro, they’ve ditched the anxiety-fueled, last-minute grind and achieved polished results with half the effort. AI cushions the process—time, energy, and creativity now outsourced to a digital assistant.

    Of course, the analogy isn’t perfect. AI tools like ChatGPT are dirt-cheap (or free), while GLP-1 drugs are expensive, scarce, and buried under a maze of insurance red tape. Johns herself is on borrowed time—her insurance will stop covering Mounjaro in just over a year. Her doctor warns that once off the drug, her weight will likely return, dragging her health risks back with it. Faced with this grim reality, she worries she’ll have no choice but to return to the endless cycle of dieting—“white-knuckling” her days with tricks and hacks that have repeatedly failed her.

    Her essay devastates me for many reasons. Johns is a smart, painfully honest narrator who lays bare the shame and anguish of relying on technology to rescue her from a problem that neither expertise nor willpower could fix. She reports on newfound freedom—freedom from food obsession, the physical benefits of shedding 80 pounds, and the relief of finally feeling like a more present, functional family member. But lurking beneath it all is the bitter truth: her well-being is tethered to technology, and that dependency is a permanent part of her identity.

    This contradiction haunts me. Technology, which I was raised to believe would stifle our potential, is now enhancing identity, granting people the ability to finally become their “better selves.” As a kid, I grew up on Captain Kangaroo, where Bob Keeshan preached the gospel of free will and positive thinking. Books like The Little Engine That Could drilled into me the sacred mantra: “I think I can.” Hard work, affirmations, and determination were supposed to be the alchemy that transformed character and gave us a true sense of self-worth.

    But Johns’ story—and millions like hers—rewrites that childhood gospel into something far darker: The Little Engine That Couldn’t. No amount of grit or optimism got her to the top of the hill. In the end, only medical science saved her from herself. And it terrifies me to think that maybe, just maybe, this is the new human condition: we can’t become our Higher Selves without technological crutches.

    This raises questions that I can’t easily shake. What does it mean to cheat if technology is now essential to survival and success? Just as GLP-1 drugs sculpt bodies society deems “acceptable,” AI is quietly reshaping creativity and productivity. At what point do we stop being individuals who achieve greatness through discipline and instead become avatars of the tech we rely on? Have we traded the dream of self-actualization for a digital illusion of competence and control?

    Of course, these philosophical quandaries feel like a luxury when most of us are drowning in the realities of modern life. Who has time to ponder free will or moral fortitude when you’re working overtime just to stay afloat? Maybe that’s the cruelest twist of all. Technology hasn’t just rewritten the rules—it’s made them inescapable. You adapt, or you get left behind. And maybe, somewhere deep down, we all already know which path we’re on.

  • DeDopaminification: Breaking Up with the Machine That Loves You Too Much

    DeDopaminification is the deliberate and uncomfortable process of recalibrating the brain’s reward circuitry after years—sometimes decades—of synthetic overstimulation. It’s what happens when you look your phone in the face and whisper, “It’s not me, it’s you.” In a culture addicted to frictionless pleasure and frictionless communication, DeDopaminification means reintroducing friction on purpose. It’s the detox of the soul, not with celery juice, but with withdrawal from digital dopamine driplines—apps, feeds, alerts, porn, outrage, and validation loops disguised as “engagement.”

    In Alone Together, Sherry Turkle diagnosed the psychic fragmentation wrought by constant digital interaction: we’ve become people who talk less but text more, who perform connection while starving for authenticity. In one of her most haunting observations, she notes how teens feel panicked without their phones—not because they’re afraid of missing messages, but because they fear missing themselves in the mirror of others’ attention. Turkle’s world is one where dopamine dependency isn’t just neurological—it’s existential. We’ve been trained to outsource our worth to the algorithmic gaze.

    Anna Lembke’s Dopamine Nation picks up this thread like a clinical slap to the face. Lembke, a Stanford psychiatrist, makes it plain: the modern world is engineered to overstimulate us into oblivion. Pleasure is no longer earned—it’s swipeable. Whether it’s TikTok, sugar, or digital outrage, our brains are being rewired to expect fireworks where there used to be a slow-burning candle. Lembke writes that to reset our internal reward systems, we must embrace discomfort—yes, want less, enjoy silence, and learn how to sit with boredom like it’s a spiritual practice.

    DeDopaminification is not some puritanical rejection of pleasure. It’s the fight to reclaim pleasure that isn’t bankrupting us. It’s deleting TikTok not because you’re better than it, but because it’s better than you—so good it’s lethal. It’s deciding that your attention span deserves a tombstone with dignity, not a death-by-scroll. It’s not heroic or Instagrammable. In fact, it’s boring, slow, sometimes lonely—but it’s also real. And that’s what makes it revolutionary.

  • Reclaiming Your Sanity May Depend on DeBrandification

    DeBrandification is the conscious, defiant act of peeling away the curated layers of your public persona like old vinyl siding from a house that never needed a makeover in the first place. It’s the moment you look at your bio—“educator, content strategist, latte enthusiast, recovering perfectionist”—and think, Who the hell is this algorithm-optimized mannequin and what has she done with my soul? DeBrandification is not rebranding; it is anti-branding. It’s the willful act of becoming unmarketable, unpredictable, and gloriously unverified. You stop asking, Will this post get engagement? and start wondering, What would I write if no one were watching and no sponsors were lurking?

    It begins subtly: you delete a profile picture, unpublish a blog, or (gasp) let your TikTok account die peacefully of neglect. Soon, you’re off the grid like a suburban Thoreau with Wi-Fi guilt, refusing to hashtag your lunch or quote-tag your trauma. You don’t disappear—you just stop performing. The metrics vanish, and in their place, something odd happens: your thoughts get weirder, your sentences wobblier, your voice less pleasing but more alive. DeBrandification is not career suicide. It’s self-resurrection. And if you do it right, you won’t just lose followers—you’ll lose the craving for them.

    The final scene of Black Mirror’s “Nosedive” is a textbook act of DeBrandification—messy, raw, and utterly liberating. After spending the entire episode contorting herself into a chirpy, pastel-colored caricature to boost her social rating, Lacie finally bottoms out—literally and metaphorically—in a jail cell. Stripped of her devices, her followers, and the suffocating need to be likable, she engages in a gloriously profane scream-fest with her fellow inmate, both of them hurling insults with reckless joy. It’s the first time we see her alive—flushed, furious, and unfiltered. In that moment, Lacie isn’t falling apart; she’s shedding the synthetic skin of her brand. No more forced smiles, no more filtered breakfasts, no more networking by emotional hostage. What remains is a person—not an avatar, not a score—a human being who, for the first time, doesn’t give a five-star damn.

  • The Undying Curiosity of a Reluctant Earthling

    About ten years ago, I found myself standing on the sun-scorched lawn outside the campus library, chatting with a colleague who was edging into his sixties. I was freshly minted into my early fifties, just far enough along to start scanning the horizon for signs of irrelevance. Naturally, our conversation slid into that black hole topic older academics can’t resist: retirement—or, as my colleague eloquently rebranded it, “a form of extinction.” According to him, the day you stop teaching is the day your name starts sliding off the whiteboard of history. You don’t just stop working—you vanish. The world changes its locks, and your keycard stops scanning.

    From there, the conversation took its next logical step—death. And that’s when I said something that was equal parts earnest and glib:
    “Even at my lowest, most gut-punched moments, I’ve always had this strange, burning desire not only to live—but to never die.”
    Why? Because I am possessed by a compulsive need to know how it all turns out.

    On the grand scale:
    Was Martin Luther King Jr. right? Does the moral arc of the universe really bend toward justice—or is it more like a warped coat hanger, twisted in a fit of cosmic indifference?
    Will humanity eventually outgrow its primal stupidity and evolve into a species guided by reason?
    Or will we just become meat-bots—part flesh, part firmware—hunched under the cold glow of the Tech Lords who now sell us grief as a service?
    Will thinking, one day, come in capsule form—a sort of Philosophy 101 chewable tablet for those who can’t be bothered?

    But my curiosity isn’t all grandiloquent and philosophical. I want to know the dumb stuff, too.
    Who’s going to win the Super Bowl?
    What will dethrone the current Netflix darling?
    Who will succeed Salma Hayek as the reigning goddess of unattainable beauty?

    Like every other poor soul conscripted onto Planet Earth, I didn’t ask to be born. But now that I’m here, uninvited and overcommitted, I can’t help it—I want to see how this mess plays out.

    Still, I sometimes wonder: Am I just a naive late bloomer clinging to a plot twist that isn’t coming?
    Is there some ancient nihilist out there—smoking hand-rolled cigarettes and muttering aphorisms in a grim little café—who would look at me and sneer, “What’s the fuss, kid? It’s all the same. Same story, different soundtrack.”

    Maybe.
    But I think there’s a stubborn ember in me that keeps expecting irony to trump monotony, that believes the cynic’s spreadsheet of life’s futility has a few formula errors. Maybe my refusal to give up on surprise is what keeps my inner candle burning.

    And maybe, just maybe, that makes me an optimist in exile—still walking the fence between wonder and weary resignation, while the true cynics stand on the other side, arms crossed, whispering,
    “Don’t worry, you’ll be like us soon enough.”