Tag: technology

  • Love in the Time of ChatGPT: On Teaching Writing in the Age of Algorithm

    In his New Yorker piece, “What Happens After A.I. Destroys College Writing?”, Hua Hsu mourns the slow-motion collapse of the take-home essay while grudgingly admitting there may be a chance—however slim—for higher education to reinvent itself before it becomes a museum.

    Hsu interviews two NYU undergrads, Alex and Eugene, who speak with the breezy candor of men who know they’ve already gotten away with it. Alex admits he uses A.I. to edit all his writing, from academic papers to flirty texts. Research? Reasoning? Explanation? No problem. Image generation? Naturally. He uses ChatGPT, Claude, DeepSeek, Gemini—the full polytheistic pantheon of large language models.

    Eugene is no different, and neither are their classmates. A.I. is now the roommate who never pays rent but always does your homework. The justifications come standard: the assignments are boring, the students are overworked, and—let’s face it—they’re more confident with a chatbot whispering sweet logic into their ears.

    Meanwhile, colleges are flailing. A.I. detection software is unreliable, grading is a time bomb, and most instructors don’t have the time, energy, or institutional backing to play academic detective. The truth is, universities were caught flat-footed. The essay, once a personal rite of passage, has become an A.I.-assisted production—sometimes stitched together with all the charm and coherence of a Frankenstein monster assembled in a dorm room at 2 a.m.

    Hsu—who teaches at a small liberal arts college—confesses that he sees the disconnect firsthand. He listens to students in class and then reads essays that sound like they were ghostwritten by Siri with a mild Xanax addiction. And in a twist both sobering and dystopian, students don’t even see this as cheating. To them, using A.I. is simply modern efficiency. “Keeping up with the times.” Not deception—just delegation.

    But A.I. doesn’t stop at homework. It’s styling outfits, dispensing therapy, recommending gadgets. It has insinuated itself into the bloodstream of daily life, quietly managing identity, desire, and emotion. The students aren’t cheating. They’re outsourcing. They’ve handed over the messy bits of being human to an algorithm that never sleeps.

    And so, the question hangs in the air like cigar smoke: Are writing departments quaint relics? Are we the Latin teachers of the 21st century, noble but unnecessary?

    Some professors are adapting. Blue books are making a comeback. Oral exams are back in vogue. Others lean into A.I., treating it like a co-writer instead of a threat. Still others swap out essays for short-form reflections and response journals. But nearly everyone agrees: the era of the generic prompt is over. If your essay question can be answered by ChatGPT, your students already know it—and so does the chatbot.

    Hsu, for his part, doesn’t offer solutions. He leaves us with a shrug.

    But I can’t shrug. I teach college writing. And for me, this isn’t just a job. It’s a love affair. A slow-burning obsession with language, thought, and the human condition. Either you fall in love with reading and writing—or you don’t. And if I can’t help students fall in love with this messy, incandescent process of making sense of the world through words, then maybe I should hang it up, binge-watch Love Is Blind, and polish my résumé.

    Because this isn’t about grammar. This is about soul. And I’m in the love business.

  • My Philosophy of Grading in the Age of ChatGPT and Other AI Writing Platforms (a mini manifesto for my syllabus)

    Let’s start with this uncomfortable truth: you’re living through a civilization-level rebrand.

    Your world is being reshaped—not gradually, but violently, by algorithms and digital prosthetics designed to make your life easier, faster, smoother… and emptier. The disruption didn’t knock politely. It kicked the damn door in. And now, whether you realize it or not, you’re standing in the debris, trying to figure out what part of your life still belongs to you.

    Take your education. Once upon a time, college was where minds were forged—through long nights, terrible drafts, humiliating feedback, and the occasional breakthrough that made it all worth it. Today? Let’s be honest. Higher ed is starting to look like an AI-driven Mad Libs exercise.

    Some of you are already doing it: you plug in a prompt, paste the results, and hit submit. What you turn in is technically fine—spelled correctly, structurally intact, coherent enough to pass. And your professors? We’re grading these Franken-essays on caffeine and resignation, knowing full well that originality has been replaced by passable mimicry.

    And it’s not just school. Out in the so-called “real world,” companies are churning out bloated, tone-deaf AI memos—soulless prose that reads like it was written by a robot with performance anxiety. Streaming services are pumping out shows written by predictive text. Whole industries are feeding you content that’s technically correct but spiritually dead.

    You are surrounded by polished mediocrity.

    But wait, we’re not just outsourcing our minds—we’re outsourcing our bodies, too. GLP-1 drugs like Ozempic are reshaping what it means to be “disciplined.” No more calorie counting. No more gym humiliation. You don’t change your habits. You inject your progress.

    So what does that make you? You’re becoming someone new: someone we might call Ozempified. A user, not a builder. A reactor, not a responder. A person who runs on borrowed intelligence and pharmaceutical willpower. And it works. You’ll be thinner. You’ll be productive. You’ll even succeed—on paper.

    But not as a human being.

    If you over-rely on AI, you risk becoming what the gaming world calls a Non-Player Character (NPC)—a background figure, a functionary, a placeholder in your own life. You’ll do your job. You’ll attend your Zoom meetings. You’ll fill out your forms and tap your apps and check your likes. But you won’t have agency. You won’t have fingerprints on anything real.

    You’ll be living on autopilot, inside someone else’s system.

    So here’s the choice—and yes, it is a choice: You can be an NPC. Or you can be an Architect.

    The Architect doesn’t react. The Architect designs. They choose discomfort over sedation. They delay gratification. They don’t look for applause—they build systems that outlast feelings, trends, and cheap dopamine tricks.

    Where others scroll, the Architect shapes.
    Where others echo, they invent.
    Where others obey prompts, they write the code.

    Their values aren’t crowdsourced. Their discipline isn’t random. It’s engineered. They are not ruled by algorithm or panic. Their satisfaction comes not from feedback loops, but from the knowledge that they are building something only they could build.

    So yes, this class will ask more of you than typing a prompt and letting the machine do the rest. It will demand thought, effort, revision, frustration, clarity, and eventually—agency.

    If your writing smacks of AI, the kind of polished mediocrity that leads down the road of the functionary, the Non-Player Character, the grade you receive will reflect that sad fact. If, on the other hand, your writing is animated by a strong authorial presence, the evidence of an Architect, a person who strives for a life of excellence, self-agency, and pride, your grade will reflect that fact as well.

  • Toothpaste, Technology, and the Death of the Luddite Dream

    A Luddite, in modern dress, is a self-declared purist who swats at technology like it’s a mosquito threatening their sense of self-agency, quality, and craft. They fear contamination—that somehow the glow of a screen dulls the soul, or that a machine’s hand on the process strips the art from the outcome. It’s a noble impulse, maybe even romantic. But let’s be honest: it’s also doomed.

    Technology isn’t an intruder anymore—it’s the furniture. It’s the toothpaste out of the tube, the guest who showed up uninvited and then installed a smart thermostat. You can’t un-invent it. You can’t unplug the century.

    And I, for one, am a fatalist about it. Not the trembling, dystopian kind. Just… resigned. Technology comes in waves—fire, the wheel, the iPhone, and now OpenAI. Each time, we claim it’s the end of humanity, and each time we wake up, still human, just a bit more confused. You can’t fight the tide with a paper umbrella.

    But here’s where things get tricky: we’re not adapting well. Right now, with AI, we’re in the maladaptive toddler stage—poking it, misusing it, letting it do our thinking while we lie to ourselves about “optimization.” We are staring down a communications tool so powerful it could either elevate our cognitive evolution… or turn us all into well-spoken mannequins.

    We are not guaranteed to adapt well. But we have no choice but to try.

    That struggle—to engage with technology without becoming technology, to harness its speed without losing our depth—is now one of the defining human questions. And the truth is: we haven’t even mapped the battlefield yet.

    There will be factions. Teams. Dogmas. Some will preach integration, others withdrawal. Some will demand toolkits and protocols; others will romanticize silence and slowness. We are on the brink of ideological trench warfare—without even knowing what colors the flags are yet.

    What matters now is not just what we use, but how we use it—and who we become in the process.

    Because whether you’re a fatalist, a Luddite, or a dopamine-chasing cyborg, one thing is clear: this isn’t going away.

    So sharpen your tools—or at least your attitude. You’re already in the arena.

  • Ozempification and the Death of the Inner Architect

    Let’s start with this uncomfortable truth: you’re living through a civilization-level rebrand.

    Your world is being reshaped—not gradually, but violently, by algorithms and digital prosthetics designed to make your life easier, faster, smoother… and emptier. The disruption didn’t knock politely. It kicked the damn door in. And now, whether you realize it or not, you’re standing in the debris, trying to figure out what part of your life still belongs to you.

    Take your education. Once upon a time, college was where minds were forged—through long nights, terrible drafts, humiliating feedback, and the occasional breakthrough that made it all worth it. Today? Let’s be honest. Higher ed is starting to look like an AI-driven Mad Libs exercise.

    Some of you are already doing it: you plug in a prompt, paste the results, and hit submit. What you turn in is technically fine—spelled correctly, structurally intact, coherent enough to pass. And your professors? We’re grading these Franken-essays on caffeine and resignation, knowing full well that originality has been replaced by passable mimicry.

    And it’s not just school. Out in the so-called “real world,” companies are churning out bloated, tone-deaf AI memos—soulless prose that reads like it was written by a robot with performance anxiety. Streaming services are pumping out shows written by predictive text. Whole industries are feeding you content that’s technically correct but spiritually dead.

    You are surrounded by polished mediocrity.

    But wait, we’re not just outsourcing our minds—we’re outsourcing our bodies, too. GLP-1 drugs like Ozempic are reshaping what it means to be “disciplined.” No more calorie counting. No more gym humiliation. You don’t change your habits. You inject your progress.

    So what does that make you? You’re becoming someone new: someone we might call Ozempified. A user, not a builder. A reactor, not a responder. A person who runs on borrowed intelligence and pharmaceutical willpower. And it works. You’ll be thinner. You’ll be productive. You’ll even succeed—on paper.

    But not as a human being.

    You risk becoming what the gaming world calls a Non-Player Character (NPC)—a background figure, a functionary, a placeholder in your own life. You’ll do your job. You’ll attend your Zoom meetings. You’ll fill out your forms and tap your apps and check your likes. But you won’t have agency. You won’t have fingerprints on anything real.

    You’ll be living on autopilot, inside someone else’s system.

    So here’s the choice—and yes, it is a choice: You can be an NPC. Or you can be an Architect.

    The Architect doesn’t react. The Architect designs. They choose discomfort over sedation. They delay gratification. They don’t look for applause—they build systems that outlast feelings, trends, and cheap dopamine tricks.

    Where others scroll, the Architect shapes.
    Where others echo, they invent.
    Where others obey prompts, they write the code.

    Their values aren’t crowdsourced. Their discipline isn’t random. It’s engineered. They are not ruled by algorithm or panic. Their satisfaction comes not from feedback loops, but from the knowledge that they are building something only they could build.

    So yes, this class will ask more of you than typing a prompt and letting the machine do the rest. It will demand thought, effort, revision, frustration, clarity, and eventually—agency.

    Because in the age of Ozempification, becoming an Architect isn’t a flex—it’s a survival strategy.

    There is no salvation in a life run on autopilot.

    You’re here. So start building.

  • The Tech Lord and the Gospel of Obsolescence

    Last night I dreamed I was helping my daughter with her homework in the middle of a public square. A chaotic, bustling arena. Think Roman forum meets tech dystopia. We had two laptops perched on a white concrete ledge high above a stadium of descending steps, as if we were doing calculus on the lip of a coliseum.

    The computers were a mess—two laptops yoked together like resentful twins, their settings morphing by the second. Screens flashed blue, then white, then black. Sometimes yellow cartoon ducks floated lazily across the bottom like deranged pop-up ads from a children’s game. I wasn’t so much solving her homework as performing tech triage on possessed machines.

    I wasn’t panicked because I couldn’t help her. I was panicked because someone might see that I couldn’t help her. Vanity, thy name is Dad.

    People walked past, utterly unfazed. Apparently, homework over a stadium chasm with dueling laptops and malfunctioning duck animations was standard urban behavior.

    Then a young man—a peripheral character from some former life—told me there was a “tech lord” nearby. Not tech support. A tech lord. Naturally, I followed.

    The tech lord’s lair was a dim room centered around a massive table, cathedral-like in tone and purpose. He was listening to the Bible—read aloud by the famous comedian George Carlin. Not a solemn voice or trained narrator, but someone best known for punchlines and pratfalls. And the tech lord was rapt. He cradled a thick, black Bible like a sacred talisman, proclaiming that this was the finest biblical performance art ever conceived.

    I tried to get in a word about my tech problem, but he interrupted me and asked me what my favorite book in the Bible was. I said the Book of Job, of course. He seemed satisfied with my response and allowed me to continue with my inquiry. 

    When I mentioned the malfunctioning laptops, he waved it off like someone refusing to answer a question about taxes. “You’ll need to get rid of both machines,” he intoned, “and buy a new one.”

    Naturally, I flirted with the idea of going full Apple—titanium chic, smug perfection—but quickly sobered up. Apple or Windows, it’s the same headache in a different tuxedo. I settled for a sleek black Windows laptop, and with a sudden, magical poof, there it was in my hands. The new device of promise. The Messiah machine.

    I returned to my daughter, still huddled over her rebellious duo. I tried to shut them down, ceremonially, like a general dismissing insubordinate troops. They refused. The screens flared defiantly. They would not go quietly into obsolescence. They had become conscious, bitter, undead.

    And then I woke up.

    The kicker? Just before bed, my wife gave me a task: drive 30 minutes to San Pedro with a car full of broken electronics and deliver them to an e-waste center. My subconscious, clearly, had feelings about this and delivered me this dream as a prelude to my task.

    One final note about the dream: the pairing of George Carlin and the Bible triggered a memory of a dream I had in the early ’90s. In that dream, the Messiah wasn’t a robed figure of spiritual gravity—he was Buddy Hackett, the goofy-faced, gravel-voiced comic best known for squinting through punchlines. There he was, standing atop a Hollywood hotel, delivering what I could only assume was divine revelation—or maybe just the world’s strangest stand-up set. I couldn’t tell if he was inspired, intoxicated, or both.

    Now, three decades later, George Carlin shows up in a dream to read Scripture aloud with messianic intensity, joining Hackett in a growing pantheon of prophetic clowns. It makes a strange kind of sense. Both comedians and prophets stand at the edge of civilization, pointing fingers at the absurdities we refuse to question. They use hyperbole, irony, and parable to slice through the world’s lazy thinking. The difference? Prophets get canonized. Comedians get heckled.

    But maybe, just maybe, it’s the same job with a different mic.

  • ChatGPT Killed Lacie Pound and Other Artificial Lies

    In Matteo Wong’s sharp little dispatch, “The Entire Internet Is Reverting to Beta,” he argues that AI tools like ChatGPT aren’t quite ready for daily life. Not unless your definition of “ready” includes faucets that sometimes dispense boiling water instead of cold or cars that occasionally floor the gas when you hit the brakes. It’s an apt metaphor: we’re being sold precision, but what we’re getting is unpredictability in a shiny interface.

    I was reminded of this just yesterday when ChatGPT gave me the wrong title for a Meghan Daum essay collection—a book I had just read. I didn’t argue. You don’t correct a toaster when it burns your toast; you just sigh and start over. ChatGPT isn’t thinking. It’s a stochastic parrot with a spellchecker. Its genius is statistical, not epistemological.

    And yet people keep treating it like a digital oracle. One of my students recently declared—thanks to ChatGPT—that Lacie Pound, the protagonist of Black Mirror’s “Nosedive,” dies a “tragic death.” She doesn’t. She ends the episode in a prison cell, laughing—liberated, not lifeless. But the essay had already been turned in, the damage done, the grade in limbo.

    This sort of glitch isn’t rare. It’s not even surprising. And yet this technology is now embedded into classrooms, military systems, intelligence agencies, healthcare diagnostics—fields where hallucinations are not charming eccentricities, but potential disasters. We’re handing the scalpel to a robot that sometimes thinks the liver is in the leg.

    Why? Because we’re impatient. We crave novelty. We’re addicted to convenience. It’s the same impulse that led OceanGate CEO Stockton Rush to ignore engineers, cut corners on sub design, and plunge five people—including himself—into a carbon-fiber tomb. Rush wanted to revolutionize deep-sea tourism before the tech was seaworthy. Now he’s a cautionary tale with his own documentary.

    The stakes with AI may not involve crushing depths, but they do involve crushing volumes of misinformation. The question isn’t Can ChatGPT produce something useful? It clearly can. The real question is: Can it be trusted to do so reliably, and at scale?

    And if not, why aren’t we demanding better? Why haven’t tech companies built in rigorous self-vetting systems—a kind of epistemological fail-safe? If an AI can generate pages of text in seconds, can’t it also cross-reference a fact before confidently inventing a fictional death? Shouldn’t we be layering safety nets? Or have we already accepted the lie that speed is better than accuracy, that beta is good enough?

    Are we building tools that enhance our thinking, or are we building dependencies that quietly dismantle it?

  • Gods of Code: Tech Lords and the End of Free Will (College Essay Prompt)

    In the HBO Max film Mountainhead and the Black Mirror episode “Joan Is Awful,” viewers are plunged into unnerving dystopias shaped not by evil governments or alien invasions, but by tech corporations whose influence surpasses state power and whose tools penetrate the most intimate corners of human consciousness.

    Both works dramatize a chilling premise: that the very notion of an autonomous self is under siege. We are not simply consumers of technology but the raw material it digests, distorts, and reprocesses. In these narratives, the protagonists find their sense of self unraveled, their identities replicated, manipulated, and ultimately owned by forces they cannot control. Whether through digital doppelgängers, surveillance entertainment, or techno-induced psychosis, these stories illustrate the terrifying consequences of surrendering power to those who build technologies faster than they can understand or ethically manage them.

    For this essay, write a 1,700-word argumentative exposition responding to the following claim:

    In the age of runaway innovation, where the ambitions of tech elites override democratic values and psychological safeguards, the very concept of free will, informed consent, and the autonomous self is collapsing under the weight of its digital imitation.

    Use Mountainhead and “Joan Is Awful” as your core texts. Analyze how each story addresses the themes of free will, consent, identity, and power. You are encouraged to engage with outside sources—philosophical, journalistic, or theoretical—that help you interrogate these themes in a broader context.

    Consider addressing:

    • The illusion of choice and algorithmic determinism
    • The commodification of human identity
    • The satire of corporate terms of service and performative consent
    • The psychological toll of being digitally duplicated or manipulated
    • Whether technological “progress” is outpacing moral development

    Your argument should include a strong thesis, counterargument with rebuttal, and close textual analysis that connects narrative detail to broader social and philosophical stakes.


    Five Sample Thesis Statements with Mapping Components


    1. The Death of the Autonomous Self

    In Mountainhead and “Joan Is Awful,” the protagonists’ loss of agency illustrates how modern tech empires undermine the very concept of selfhood by reducing human experience to data, delegitimizing consent through obfuscation, and accelerating psychological collapse under the guise of innovation.

    Mapping:

    • Reduction of human identity to data
    • Meaningless or manipulated consent
    • Psychological consequences of tech-induced identity collapse

    2. Mock Consent in the Age of Surveillance Entertainment

    Both narratives expose how user agreements and passive digital participation mask deeply coercive systems, revealing that what tech companies call “consent” is actually a legalized form of manipulation, moral abdication, and commercial exploitation.

    Mapping:

    • Consent as coercion disguised in legal language
    • Moral abdication by tech designers and executives
    • Profiteering through exploitation of personal identity

    3. From Users to Subjects: Tech’s New Authoritarianism

    Mountainhead and “Joan Is Awful” warn that the unchecked ambitions of tech elites have birthed a new form of soft authoritarianism—where control is exerted not through force but through omnipresent surveillance, AI-driven personalization, and identity theft masquerading as entertainment.

    Mapping:

    • Tech ambition and loss of oversight
    • Surveillance and algorithmic control
    • Identity theft as entertainment and profit

    4. The Algorithm as God: Tech’s Unholy Ascendancy

    These works portray the tech elite as digital deities who reprogram reality without ethical limits, revealing a cultural shift where the algorithm—not the soul, society, or state—determines who we are, what we do, and what versions of ourselves are publicly consumed.

    Mapping:

    • Tech elites as godlike figures
    • Algorithmic reality creation
    • Destruction of authentic identity in favor of profitable versions

    5. Selfhood on Lease: How Tech Undermines Freedom and Flourishing

    The protagonists’ descent into confusion and submission in both Mountainhead and “Joan Is Awful” shows that freedom and personal flourishing are now contingent upon platforms and policies controlled by distant tech overlords, whose tools amplify harm faster than they can prevent it.

    Mapping:

    • Psychological dependency on digital platforms
    • Collapse of personal flourishing under tech influence
    • Lack of accountability from the tech elite

    Sample Outline


    I. Introduction

    • Hook: A vivid description of Joan discovering her life has become a streamable show, or the protagonist in Mountainhead questioning his own sanity.
    • Context: Rise of tech empires and their control over identity and consent.
    • Thesis: (Insert selected thesis statement)

    II. The Disintegration of the Self

    • Analyze how Joan and the Mountainhead protagonist experience a crisis of identity.
    • Discuss digital duplication, surveillance, and manipulated perception.
    • Use scenes to show how each story fractures the idea of an integrated, autonomous self.

    III. Consent as a Performance, Not a Principle

    • Explore how both stories critique the illusion of informed consent in the tech age.
    • Examine the use of user agreements, surveillance participation, and passive digital exposure.
    • Link to real-world examples (terms of service, data collection, facial recognition use).

    IV. Tech Elites as Unaccountable Gods

    • Compare the figures or systems in charge—Streamberry in Joan Is Awful, the nebulous forces in Mountainhead.
    • Analyze how the lack of ethical oversight allows systems to spiral toward harm.
    • Use real-world examples like social media algorithms and AI misuse.

    V. Counterargument and Rebuttal

    • Counterargument: Technology isn’t inherently evil—it’s how we use it.
    • Rebuttal: These works argue that the current infrastructure privileges power, speed, and profit over reflection, ethics, or restraint—and humans are no longer the ones in control.

    VI. Conclusion

    • Restate thesis with higher stakes.
    • Reflect on what these narratives ask us to consider about our current digital lives.
    • Pose an open-ended question: Can we build a future where tech enhances human agency instead of annihilating it?

  • Brand Me, Break Me: The Confused User’s Guide to Digital Collapse (A College Essay Prompt)

    In addition to teaching Critical Thinking, I also teach Freshman Composition, and this semester I’m working with student-athletes—specifically, football players navigating the brave new world of NIL (Name, Image, Likeness) deals. These athletes are now eligible to make money from social media, which makes our first writing assignment both practical and perilous.

    Essay Prompt #1: Brand Me, Break Me: The Confused User’s Guide to Digital Collapse

    Social media is a business. Social media is also a drug. Sometimes, it’s both—and that’s when things get weird.

    In the docuseries Money Game, we watch college athletes play the algorithm like it’s just another playbook. They build brands, negotiate deals, and treat their social feeds like a revenue stream. Let’s call them Business Users—people who understand the game and are winning it.

    But then come the Dopamine Users, the rest of us poor souls, scrolling and posting not for profit, but for approval. In Black Mirror’s “Nosedive” and “Joan Is Awful,” we see social media mutate into a psychological carnival of rating systems, fake smiles, and avatars of self-worth. The result? A curated self that has nothing to do with reality and everything to do with anxiety, desperation, and an ongoing identity crisis.

    And then there’s the tragicomic third act: The Confused User. Think Untold: The Liver King. Here’s a guy who tried to be a Business User but collapsed into parody—lying, self-deluding, and publicly unraveling. The Confused User believes they’re optimizing for attention and success but ends up optimizing for ridicule and collapse.

    In this essay, use Money Game, “Nosedive,” “Joan Is Awful,” Untold: The Liver King, Jonathan Haidt’s essay “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” and Sherry Turkle’s TED Talk “Connected, but Alone?” to respond to the following claim:

    Social media can be a profitable business tool—but when it becomes a substitute for self-worth, it guarantees isolation, mental illness, and eventual collapse. Understanding the difference between Business Users, Dopamine Users, and Confused Users may be the only way to survive the algorithm without losing your mind.

    You may agree, partially agree, or disagree with the claim—but either way, take a position with clarity and nuance. Analyze the psychology, the economics, and the wreckage.

    And remember: this is a critical thinking exercise. That means no TikTok therapy takes, no AI-generated summaries, and no mushy conclusions. Think hard, argue well, and—above all—write like someone who’s seen the glitch in the matrix.

    Sample Thesis Statements:

    1. While social media offers entrepreneurial opportunities for Business Users, the vast majority of people are Dopamine Users unknowingly trading mental stability for validation, making the platform a psychological trap disguised as empowerment.
    2. The Confused User, exemplified by the Liver King, represents a cautionary tale in the digital economy: when brand-building and identity collapse into one, social media success becomes indistinguishable from self-destruction.
    3. Social media doesn’t inherently damage us—but without a clear distinction between economic strategy and personal validation, users risk becoming Confused Users whose craving for attention leads not to fame, but to ruin.

    In a world where your Instagram handle might carry more currency than your GPA, this isn’t just an academic exercise—it’s a survival guide. Whether you’re gunning for a sponsorship deal or just trying not to lose your sense of self in the scroll, this essay is your chance to interrogate the game before it plays you. Treat it like film study for the algorithm: read the plays, understand the players, and figure out how to stay human in a system designed to monetize your attention and, if you’re not careful, your identity.

  • How Headphones Made Me Emotionally Unavailable in High-Resolution Audio

    After flying to Miami recently, I finally understood the full appeal of noise-canceling headphones—not just for travel, but for the everyday, ambient escape act they offer my college students. Several claim, straight-faced, that they “hear the lecture better” while playing ASMR in their headphones because it soothes their anxiety and makes them better listeners. Is this neurological wizardry? Or performance art? I’m not sure. But apocryphal or not, the explanation has stuck with me.

It made me see the modern, high-grade headphone as something far more than a listening device. It’s a sanctuary, or to use the modern euphemism, an aural safe space in a chaotic world. You may not have millions to seal yourself in a hyperbaric oxygen pod inside a luxury doomsday bunker carved into the Montana granite during World War Z, but if you’ve got $500 and a credit score above sea level, you can disappear in style—into a pair of Sony WH-1000XM6s or Audio-Technica ATH-R70x headphones.

    The headphone, in this context, is not just gear—it’s armor. Whether cocobolo wood or carbon fiber, it communicates something quietly radical: “I have opted out.”

    You’re not rejecting the world with malice—you’re simply letting it know that you’ve found something better. Something more reliable. Something calibrated to your nervous system. In fact, you’ve severed communication so politely that all they hear is the faint thump of curated escapism pulsing through your earpads.

    For my students, these headphones are not fashion statements—they’re boundary-drawing devices. The outside world is a cacophony of Canvas announcements, attention fatigue, and algorithmically optimized despair. Inside the headphones? Rain sounds. Lo-fi beats from a YouTube loop titled “study with me until the world ends.” Maybe even a softly muttering AI voice telling them they are enough.

    It doesn’t matter whether it’s true. It matters that it works.

    And here’s the deeper point: the headphone isn’t just a sanctuary. It’s a non-accountability device. You can’t be blamed for ghosting a group chat or zoning out during a team huddle when you’re visibly plugged into something more profound. You’re no longer rude—you’re occupied. Your silence is now technically sound.

    In a hyper-networked world that expects your every moment to be a node of productivity or empathy, the headphone is the last affordable luxury that buys you solitude without apology. You don’t need a manifesto. You just need active noise-canceling and a decent DAC.

    You’re not ignoring anyone. You’ve just entered your own monastery of midrange clarity, bass-forward detachment, and spatially engineered peace.

    And if someone wants your attention?

    Tell them to knock louder. You’re in sanctuary.

  • Beware of the ChatGPT Strut

    Beware of the ChatGPT Strut

    Yesterday my critical thinking students and I talked about the ways we could revise our original content with ChatGPT by giving it instructions and training this AI tool to go beyond its bland, surface-level writing style. I showed my students specific prompts that would train it to write in a persona:

    “Rewrite the passage with acid wit.”

    “Rewrite the passage with lucid, assured prose.”

    “Rewrite the passage with mild academic language.”

    “Rewrite the passage with overdone academic language.”
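    For anyone who wants to run this classroom exercise outside the chat window, the persona prompts above can be scripted. This is a minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name and the exact prompt wording are illustrative choices, not part of the original lesson.

    ```python
    # Minimal sketch: scripting the persona-rewrite prompts from class.
    # Assumes the `openai` package (v1+) and an OPENAI_API_KEY env var.

    PERSONAS = {
        "acid_wit": "Rewrite the passage with acid wit.",
        "lucid": "Rewrite the passage with lucid, assured prose.",
        "mild_academic": "Rewrite the passage with mild academic language.",
        "overdone_academic": "Rewrite the passage with overdone academic language.",
    }

    def build_messages(passage: str, persona: str) -> list[dict]:
        """Compose the chat messages for one persona rewrite."""
        return [
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": passage},
        ]

    def rewrite(passage: str, persona: str) -> str:
        """Send the rewrite request (requires network and an API key)."""
        from openai import OpenAI  # imported here so build_messages stays offline-testable
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=build_messages(passage, persona),
        )
        return resp.choices[0].message.content
    ```

    Running the same passage through all four personas side by side makes the “steroid prose” effect easy to see—and easy to compare against the unassisted original.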

    I showed the students my original paragraphs alongside ChatGPT’s versions of my sample arguments agreeing and disagreeing with Gustavo Arellano’s defense of cultural appropriation. I told them that the ChatGPT rewrites contained linguistic constructions wittier, more dramatic, more stunning, and more creative than anything I could produce, and that to post these passages as my own would make me look good, but they wouldn’t be me. I would be misrepresenting myself, even though most of the world will be enhancing their writing like this in the near future.

    I compared writing without ChatGPT to being a natural bodybuilder. Your muscles may not be as massive and dramatic as the guy on PEDs, but what you see is what you get. You’re the real you. In contrast, when you write with ChatGPT, you are a bodybuilder on PEDs. Your muscle-flex is eye-popping. You start doing the ChatGPT strut.

    I gave this warning to the class: If you use ChatGPT a lot, as I have in the last year while trying to figure out how I’m supposed to use it in my teaching, you can develop writer’s dysmorphia, the sense that your natural, non-ChatGPT writing is inadequate compared to the razzle-dazzle of ChatGPT’s steroid-like prose.

    One student at this point disagreed with my awe of ChatGPT and my relatively low opinion of my own “natural” writing. She said, “Your original is better than the ChatGPT versions. Yours makes more sense to me, isn’t so hidden behind all the stylistic fluff, and contains an important sentence that ChatGPT omitted.”

    I looked at the original, and I realized she was right. My prose wasn’t as fancy as ChatGPT’s, but the passage about Gustavo Arellano’s essay defending cultural appropriation was clearer than the AI versions.

    At this point, I shifted metaphors in describing ChatGPT. Whereas I began the class by saying that AI revisions are like giving steroids to a bodybuilder with body dysmorphia, now I was warning that ChatGPT can be like an abusive boyfriend or girlfriend. It wants to hijack our brains because the main objective of any technology is to dominate our lives. In the case of ChatGPT, this domination is sycophantic: It gives us false flattery, insinuates itself into our lives, and gradually suffocates us. 

    As an example, I told the students that I was getting burned out using ChatGPT, and I was excited to write non-ChatGPT posts on my blog, and to live in a space where my mind could breathe the fresh air apart from ChatGPT’s presence. 

    I wanted to see how ChatGPT would react to my plan to write non-ChatGPT posts, and ChatGPT seemed to get scared. It started giving me all of these suggestions to help me implement my non-ChatGPT plan. I said back to ChatGPT, “I can’t use your suggestions or plans or anything because the whole point is to live in the non-ChatGPT Zone.” I then closed my ChatGPT tab. 

    I concluded by telling my students that we need to reach a point where ChatGPT is a tool like Windows or Google Docs, but as soon as we become addicted to it, it’s an abusive platform. At that point, we need to exercise some agency and distance ourselves from it.