Tag: ai

  • Paul Bunyan Meets the Chainsaw in Freshman Comp

    During the fall semester of 2024, the English Department held one of those “brown bag” sessions—an optional gathering where instructors actually show up because the topic is like a flashing red light on the education highway. This particular crisis-in-the-making? AI. Would writing tools that millions were embracing at exponential speed render our job obsolete? The room was packed with nervous, coffee-chugging professors, myself included, all bracing for a Pandora’s box of AI-fueled dilemmas. The facilitators tossed scenario after scenario at us, and the existential angst was palpable.

    First up: What do you do when a foreign-language student submits an essay written in their native tongue, then lets AI play translator? Is it cheating? Does the term “English Department” even make sense anymore when our Los Angeles campus sounds like the United Nations General Assembly? Are we teaching “English,” or are we, more accurately, teaching “the writing process” to people of many languages, with AI now tagging along as a co-author?

    Next came the AI Tsunami, a term we all seemed to embrace with a mix of dread and resignation. What do we do when we reach the point that 90% of the essays we receive are peppered with AI-speak so robotic it sounds like Siri decided to write a term paper? We were all skeptical of AI detectors, which are about as reliable as a fortune teller reading tea leaves. I shared my go-to strategy: Instead of accusing a student of cheating (because who has time for that drama?), I simply leave a comment dripping with professional distaste: “Your essay reeks of AI-generated pablum. I’m giving it a D because I cannot, in good conscience, grade it higher. If you’d like to rewrite it with actual human effort, be my guest.” The room nodded in approval.

    But here’s the thing: The real existential crisis hit when we realized that the hardworking, honest students are busting their butts for B’s, while the tech-savvy slackers are gaming the system, walking away with A’s by running their bland prose through the AI carwash. The room buzzed with a strange mixture of outrage and surrender—because let’s be honest, at least the grammar and spelling errors are nearly extinct.

    Our dean, ever the Zen master in a room full of jittery academics, calmly suggested that maybe—just maybe—we should incorporate personal reflection into our assignments. His idea? Have students spill a bit of their authentic thoughts onto the page, then compare those raw musings to their more polished, suspect, possibly ChatGPT-assisted essays. A clever idea. It’s harder to fake authenticity than to parrot a thesis on The Great Gatsby.

    I nodded thoughtfully, though with a rising sense of dread. How exactly was I supposed to integrate “personal reflections” into a syllabus built around the holy trinity of argumentation, counterarguments, and research? I teach composition and critical thinking, not a creative writing seminar for tortured souls. My job isn’t to sift through essays about existential crises or romantic disasters disguised as epiphanies. It’s to teach students how to build a coherent argument and take down a counterpoint without resorting to tired platitudes. Reflection has its place—but preferably somewhere far from my grading pile.

    Still, I had to admit the dean was on to something. If I didn’t get ahead of this, I’d end up buried under an avalanche of soul-searching essays that somehow all lead to a revelation about “balance in life.” I needed time to mull this over, to figure out how personal writing could serve my course objectives without turning it into group therapy on paper.

    But before I could even start strategizing, the brown bag session was over. I gathered my notes, bracing myself for the inevitable flood of “personal growth narratives” waiting for me next semester.

    As I walked out of that meeting, I had a new writing prompt simmering in my head for my students: “Write an argumentative essay exploring how AI platforms like ChatGPT will reshape education. Project how these technologies might be used in the future and consider the ethical lines that AI use blurs. Should we embrace AI as a tool, or do we need hard rules to curb its misuse? Address academic integrity, critical thinking, and whether AI widens or narrows the education gap.”

    When I got home later that day, in a fit of efficiency, I stuffed my car with a mountain of e-waste—ancient laptops, decrepit tablets, and cell phones that could double as paperweights—and headed to the City of Torrance E-Waste Drive. The line of cars stretched for what seemed like miles, all of us dutifully purging our electronic skeletons to make room for the latest AI-compatible toys. As I waited, I tuned into a podcast with Mark Cuban chatting with Bill Maher, and Cuban was adamant: AI will never be regulated because it’s America’s golden goose for global dominance. And there I was, sitting in a snaking line of vehicles, all of us unwitting soldiers in the tech wars, dumping our outdated gadgets like a 21st-century arms race.

    As I edged closer to the dumpster, I imagined ripping open my shirt to reveal a Captain America emblem beneath, fully embracing the ridiculousness of it all. This wasn’t just teaching anymore—it was a revolution. And if I was going to lead it, I’d need to be like Moses descending from Mt. Sinai, armed with the Tablets of AI Laws. Without these laws, I’d be as helpless as a fish flopping on a dry riverbank. To face the coming storm unprepared wasn’t just unwise; it was professional malpractice. My survival depended on it.

    I thought I had outsmarted AI, like some literary Rambo armed with signal phrases, textual analysis, and in-text citations as my guerrilla tactics. ChatGPT couldn’t handle that level of academic sophistication, right? Wrong. One month later, the machine rolled up offering full signal phrase service like some overachieving valet at the Essay Ritz. That defense crumbled faster than a house of cards in a wind tunnel.

    Okay, I thought, I’ll outmaneuver it with source currency. ChatGPT didn’t do recent articles—perfect! I’d make my students cite cutting-edge research. Surely, that would stump the AI. Nope. Faster than you can say “breaking news,” ChatGPT was pulling up the latest articles like a know-it-all librarian with Wi-Fi in their brain.

    Every time I tried to pin it down, the AI just flexed and swelled, like some mutant Hulk fed on electricity and hubris. I was the noble natural bodybuilder, forged by sweat, discipline, and oceans of egg whites. ChatGPT? It was the juiced-up monster, marinated in digital steroids and algorithmic growth hormones. I’d strain to add ten pounds to my academic bench press; ChatGPT would casually slap on 500 and knock out reps while checking its reflection. I was a relic frozen on the dais, oil-slicked and flexing, while the AI steamrolled past me in the race for writing dominance.

    That’s when the obvious landed like a kettlebell on my chest: I wasn’t going to beat ChatGPT. It wasn’t a bug to patch or a fad to outlast—it was an evolutionary leap, a quantum steroid shot to the act of writing itself. So I stopped swinging at it. Instead, I strapped a saddle on the beast and started steering, learning to use its brute force as my tool instead of my rival.

    It reminded me of a childhood cartoon about Paul Bunyan, the original muscle god with an axe the size of a telephone pole. Then came the chainsaw. There was a contest: man versus machine. Paul roared and hacked, but the chainsaw shredded the forest into submission. The crowd went home knowing the age of the axe was dead. Likewise, the sprawling forest of language has a new lumberjack—and I look pathetic trying to keep up, like a guy standing on Hawthorne Boulevard with a toothbrush, vowing to scrub clean every city block from Lawndale to Palos Verdes.

  • Why Ideas Still Matter in a World of Machines

    One of my colleagues, an outstanding writing instructor for more than two decades, has mapped out her exit strategy. She earned a counseling master’s degree, recently completed her life coach certification, and told me she no longer believes in the mission of teaching college writing. Assigning prompts to students who submit AI-generated essays feels meaningless to her—and reading these machine-produced pages makes her physically ill.

    Her words jolted me. I have devoted nearly forty years to this vocation, a career sustained by the assumption that the college essay teaches young people an essential skill. We have long agreed that students must learn how to shape chaos into coherence, confront questions that matter to the human condition, write with clarity and force, construct persuasive arguments, examine counterpoints, form informed opinions, master formats, cultivate an authorial voice, and develop critical thinking in a world overflowing with fallacies and propaganda. We also teach students to live with “interiority”—to keep journals, build inner lives, and nurture ideas. These practices have been considered indispensable for personal and professional growth.

    But with AI in the picture, many of my colleagues, including the one planning her departure, now feel bitter and defeated. AI has supplanted us. To our students, AI is more than a tool; it is a counselor, therapist, life coach, tutor, content-generator, and editor that sits in their pockets. They have apps through which they converse with their AI “person.” Increasingly, students bond with these “people” more than with their teachers. They trust AI in ways they do not trust professionals, institutions, or the so-called “laptop class.”

    The sense of displacement is compounded by the quality of student work. Essays are now riddled with AI-speak, clichés, hollow uniformity, facile expressions, superficial analysis, misattributed quotations, hallucinated claims, and fabricated facts. And yet, for the professional world, this output will often suffice. Ninety-five percent of the time, AI’s mediocrity will be “good enough” as workplaces adjust to its speed and efficiency. Thus my colleagues suffer a third wound: irrelevance. If AI can produce serviceable writing quickly, bypassing the fundamentals we teach, then we are the dinosaurs of academia.

    On Monday, when I face my freshman composition students for the first time, I will have to address this reality. I will describe how AI—the merciless stochastic parrot—has unsettled instructors by generating uncanny-valley essays, winning the confidence of students, and leaving teachers uncertain about their place.

    Still, I am not entirely pessimistic about my role. Teaching writing has always required many hats, one of which is the salesman’s. I must sell my ideas, my syllabus, my assignments, and above all, the relevance of writing in students’ lives.

    This semester, I am teaching a class composed entirely of athletes, a measure designed to help with retention. On the first day, I will appeal to what they know best: drills. No athlete mistakes drills for performance. They exist to prepare the body and mind for the real contest. Football players run lateral and backward sprints to build stamina and muscle memory. Pianists practice scales and arpeggios to ready themselves for recitals. Writing drills serve the same purpose: they build the foundation beneath the performance.

    My second pitch will be about the human heart. Education does not begin in the brain; it begins when the heart opens. Just as the athlete “with heart” outperforms the one without it, the student who opens the heart to education learns lessons that endure for life.

    I will tell them about my childhood obsession with baseball. At nine, I devoured every Scholastic book on the subject I could order through Independent Elementary. Many of my heroes were African-American players who endured Jim Crow segregation—forced into separate hotels and restaurants, traveling at great risk. I read about legends like Satchel Paige and Josh Gibson, barred from Major League Baseball because of their race. Through their stories, I learned American history not as dates and facts, but through the eyes of men I revered. My heart opened, and I was educated in a way my schoolteachers never managed.

    I will also tell them about my lost years in college. I enrolled under threat of eviction from my mother and warnings that without higher education, I faced a life of poverty. I loathed classrooms, staring at the clock until I could escape to the gym for squats, deadlifts, and bench presses. Yet in an elective fiction class, I discovered Kafka—how he transmuted his nightmarish inner life into stories that illuminated his world. Then Nabokov, whose audacious style made me long to write with the same confidence, more than I ever longed for a luxury car. If I could capture Nabokov’s authority, I thought, I would be like the Tin Man receiving his heart. I would be whole.

    These changes did not come from professors or institutions, and certainly not from AI. They came from within me, from my heart opening to literature. And yet, a sobering realization remains: the spark for me came through reading, and I see little reading today. I am not dogmatic—perhaps today’s students can find their spark in a documentary on Netflix or an essay on their phones. What matters is the opening of the heart.

    I cannot deny my doubts about remaining relevant in the age of AI, but I believe in the enduring power of ideas. Ideas—true or false—shape lives. They can go viral, ignite movements, and alter history.

    That is why my first assignment will focus on the Liver King, a grifter who peddled “ancestral living” to young men desperate for discipline and belonging. Though he was exposed as a fraud, his message resonated because it spoke to a generation’s hunger for structure and meaning. My students will explore both the desperation of these young men and the manipulations of Bro Culture that preyed upon them.

    Ideas matter. They always have. They always will. My class will succeed or fail on the strength of the ideas I put before my students, and I must present them unapologetically—defended with both my brain and my heart.

  • Richard Brody vs. the Algorithm: A Critic’s Lament in a Post-Print World

    In his essay “In Defense of the Traditional Review,” New Yorker critic Richard Brody goes to battle against The New York Times’ editorial decision to shift arts criticism from the long-form written review to short-form videos designed for a digital audience. It’s a cultural downgrade, Brody argues, a move from substance to performance, from sustained reflection to algorithm-choked ephemera. The move may be pitched as modernization, but Brody sees it for what it is: intellectual compromise dressed up as digital innovation.

    Brody’s stance isn’t anti-technology. He concedes we can chew gum and walk at the same time—that written essays and short videos can coexist. But his core concern is that the center of criticism is the written word. Shift the balance too far toward video, and you risk gutting that center entirely. Worse, video reviews tend to drift toward celebrity interviews and promotional puffery. The fear isn’t hypothetical. When given the choice between a serious review and a clip featuring a celebrity making faces in a car, algorithms will reward the latter. And so criticism is flattened into entertainment, and standards dissolve beneath a rising tide of digital applause.

    Brody’s alarm resonates with me, because I’ve spent the last four decades teaching college writing and watching the same cultural drift. Long books are gone. In many cases, books are gone altogether. We assign short essays because that’s what students can handle. And yet, paradoxically, I’ve never seen such sharp classroom discussions, never written better prompts, never witnessed better argumentation than I do today. The intellectual work isn’t dead—it’s just found new vessels. Brody is right to warn against cultural decay, but the answer isn’t clinging to vanished ideals. It’s adaptation with integrity. If we don’t evolve, we lose our audience. But if we adapt wisely, we might still reach them—and even challenge them—where they are.

  • Death by Convenience: The AI Ads That Want to Rot Your Brain

    In his essay for The New Yorker, “What Do Commercials About A.I. Really Promise?”, Vinson Cunningham zeroes in on the unspoken premise of today’s AI hype: the dream of total disengagement. He offers the unsettling observation: “If human workers don’t have to read, write, or even think, it’s unclear what’s left to do.” It’s a fair point. If ads are any indication, the only thing left for us is to stare blankly into our screens like mollusks waiting to be spoon-fed.

    These ads don’t sell a product; they sell a philosophy—one that flatters your laziness. Fix a leaky faucet? Too much trouble. Write a thank-you note? Are you kidding? Plan a meal, change a diaper, troubleshoot your noise-canceling headphones? Outrageous demands for a species that now views thinking as an optional activity. The machines will do it, and we’ll cheerfully slide into amoebic irrelevance.

    What’s most galling is the heroism layered into the pitch: You’re not shirking your responsibilities, you’re delegating. You’re optimizing your workflow. You’re buying back your precious time. You’re a genius. A disruptor. A life-hacking, boundary-pushing modern-day Prometheus who figured out how to get out of reading bedtime stories to your children.

    But Cunningham has a sharper take. The message behind the AI lovefest isn’t just about convenience—it’s about hollowing us out. As he puts it, “The preferred state, it seems, is a zoned-out semi-presence, the worker accounted for in body but absent in spirit.” That’s what the ads are pushing: a blissful vegetative state, where you’re physically upright but intellectually comatose.

    Why read to your kids when an AI avatar can do it in a soothing British accent? Why help them with their homework when a bot can explain algebra, write essays, correct their errors, and manage their grades—while you binge Breaking Bad for the third time? Why have a conversation with their teacher when your chatbot can send a perfectly passive-aggressive email on your behalf?

    This is not the frictionless future we were promised. It’s a slow lobotomy served on a platter of convenience. The ads imply that the life of the mind is outdated. And critical thinking? That’s for chumps with time to kill. Thinking takes bandwidth—something that would be better spent refining your custom coffee order via voice assistant.

    Cunningham sees the bitter punchline: In our rush to outsource everything, we’ve made ourselves obsolete. And the machines, coldly efficient and utterly indifferent, are more than happy to take it from here.

  • Love in the Time of ChatGPT: On Teaching Writing in the Age of Algorithm

    In his New Yorker piece, “What Happens After A.I. Destroys College Writing?”, Hua Hsu mourns the slow-motion collapse of the take-home essay while grudgingly admitting there may be a chance—however slim—for higher education to reinvent itself before it becomes a museum.

    Hsu interviews two NYU undergrads, Alex and Eugene, who speak with the breezy candor of men who know they’ve already gotten away with it. Alex admits he uses A.I. to edit all his writing, from academic papers to flirty texts. Research? Reasoning? Explanation? No problem. Image generation? Naturally. He uses ChatGPT, Claude, DeepSeek, Gemini—the full polytheistic pantheon of large language models.

    Eugene is no different, and neither are their classmates. A.I. is now the roommate who never pays rent but always does your homework. The justifications come standard: the assignments are boring, the students are overworked, and—let’s face it—they’re more confident with a chatbot whispering sweet logic into their ears.

    Meanwhile, colleges are flailing. A.I. detection software is unreliable, grading is a time bomb, and most instructors don’t have the time, energy, or institutional backing to play academic detective. The truth is, universities were caught flat-footed. The essay, once a personal rite of passage, has become an A.I.-assisted production—sometimes stitched together with all the charm and coherence of a Frankenstein monster assembled in a dorm room at 2 a.m.

    Hsu—who teaches at a small liberal arts college—confesses that he sees the disconnect firsthand. He listens to students in class and then reads essays that sound like they were ghostwritten by Siri with a mild Xanax addiction. And in a twist both sobering and dystopian, students don’t even see this as cheating. To them, using A.I. is simply modern efficiency. “Keeping up with the times.” Not deception—just delegation.

    But A.I. doesn’t stop at homework. It’s styling outfits, dispensing therapy, recommending gadgets. It has insinuated itself into the bloodstream of daily life, quietly managing identity, desire, and emotion. The students aren’t cheating. They’re outsourcing. They’ve handed over the messy bits of being human to an algorithm that never sleeps.

    And so, the question hangs in the air like cigar smoke: Are writing departments quaint relics? Are we the Latin teachers of the 21st century, noble but unnecessary?

    Some professors are adapting. Blue books are making a comeback. Oral exams are back in vogue. Others lean into A.I., treating it like a co-writer instead of a threat. Still others swap out essays for short-form reflections and response journals. But nearly everyone agrees: the era of the generic prompt is over. If your essay question can be answered by ChatGPT, your students already know it—and so does the chatbot.

    Hsu, for his part, doesn’t offer solutions. He leaves us with a shrug.

    But I can’t shrug. I teach college writing. And for me, this isn’t just a job. It’s a love affair. A slow-burning obsession with language, thought, and the human condition. Either you fall in love with reading and writing—or you don’t. And if I can’t help students fall in love with this messy, incandescent process of making sense of the world through words, then maybe I should hang it up, binge-watch Love Is Blind, and polish my résumé.

    Because this isn’t about grammar. This is about soul. And I’m in the love business.

  • My Philosophy of Grading in the Age of ChatGPT and Other AI Writing Platforms (a mini manifesto for my syllabus)

    Let’s start with this uncomfortable truth: you’re living through a civilization-level rebrand.

    Your world is being reshaped—not gradually, but violently, by algorithms and digital prosthetics designed to make your life easier, faster, smoother… and emptier. The disruption didn’t knock politely. It kicked the damn door in. And now, whether you realize it or not, you’re standing in the debris, trying to figure out what part of your life still belongs to you.

    Take your education. Once upon a time, college was where minds were forged—through long nights, terrible drafts, humiliating feedback, and the occasional breakthrough that made it all worth it. Today? Let’s be honest. Higher ed is starting to look like an AI-driven Mad Libs exercise.

    Some of you are already doing it: you plug in a prompt, paste the results, and hit submit. What you turn in is technically fine—spelled correctly, structurally intact, coherent enough to pass. And your professors? We’re grading these Franken-essays on caffeine and resignation, knowing full well that originality has been replaced by passable mimicry.

    And it’s not just school. Out in the so-called “real world,” companies are churning out bloated, tone-deaf AI memos—soulless prose that reads like it was written by a robot with performance anxiety. Streaming services are pumping out shows written by predictive text. Whole industries are feeding you content that’s technically correct but spiritually dead.

    You are surrounded by polished mediocrity.

    But wait, we’re not just outsourcing our minds—we’re outsourcing our bodies, too. GLP-1 drugs like Ozempic are reshaping what it means to be “disciplined.” No more calorie counting. No more gym humiliation. You don’t change your habits. You inject your progress.

    So what does that make you? You’re becoming someone new: someone we might call Ozempified. A user, not a builder. A reactor, not a responder. A person who runs on borrowed intelligence and pharmaceutical willpower. And it works. You’ll be thinner. You’ll be productive. You’ll even succeed—on paper.

    But not as a human being.

    If you over-rely on AI, you risk becoming what the gaming world calls a Non-Player Character (NPC)—a background figure, a functionary, a placeholder in your own life. You’ll do your job. You’ll attend your Zoom meetings. You’ll fill out your forms and tap your apps and check your likes. But you won’t have agency. You won’t have fingerprints on anything real.

    You’ll be living on autopilot, inside someone else’s system.

    So here’s the choice—and yes, it is a choice: You can be an NPC. Or you can be an Architect.

    The Architect doesn’t react. The Architect designs. They choose discomfort over sedation. They delay gratification. They don’t look for applause—they build systems that outlast feelings, trends, and cheap dopamine tricks.

    Where others scroll, the Architect shapes.
    Where others echo, they invent.
    Where others obey prompts, they write the code.

    Their values aren’t crowdsourced. Their discipline isn’t random. It’s engineered. They are not ruled by algorithm or panic. Their satisfaction comes not from feedback loops, but from the knowledge that they are building something only they could build.

    So yes, this class will ask more of you than typing a prompt and letting the machine do the rest. It will demand thought, effort, revision, frustration, clarity, and eventually—agency.

    If your writing smacks of AI, the kind of polished mediocrity that leads down the road of the functionary, the Non-Player Character, the grade you receive will reflect that sad fact. On the other hand, if your writing is animated by a strong authorial presence, the evidence of an Architect, a person who strives for a life of excellence, self-agency, and pride, your grade will reflect that fact as well.

  • Toothpaste, Technology, and the Death of the Luddite Dream

    A Luddite, in modern dress, is a self-declared purist who swats at technology like it’s a mosquito threatening their sense of self-agency, quality, and craft. They fear contamination—that somehow the glow of a screen dulls the soul, or that a machine’s hand on the process strips the art from the outcome. It’s a noble impulse, maybe even romantic. But let’s be honest: it’s also doomed.

    Technology isn’t an intruder anymore—it’s the furniture. It’s the toothpaste out of the tube, the guest who showed up uninvited and then installed a smart thermostat. You can’t un-invent it. You can’t unplug the century.

    And I, for one, am a fatalist about it. Not the trembling, dystopian kind. Just… resigned. Technology comes in waves—fire, the wheel, the iPhone, and now OpenAI. Each time, we claim it’s the end of humanity, and each time we wake up, still human, just a bit more confused. You can’t fight the tide with a paper umbrella.

    But here’s where things get tricky: we’re not adapting well. Right now, with AI, we’re in the maladaptive toddler stage—poking it, misusing it, letting it do our thinking while we lie to ourselves about “optimization.” We are staring down a communications tool so powerful it could either elevate our cognitive evolution… or turn us all into well-spoken mannequins.

    We are not guaranteed to adapt well. But we have no choice but to try.

    That struggle—to engage with technology without becoming technology, to harness its speed without losing our depth—is now one of the defining human questions. And the truth is: we haven’t even mapped the battlefield yet.

    There will be factions. Teams. Dogmas. Some will preach integration, others withdrawal. Some will demand toolkits and protocols; others will romanticize silence and slowness. We are on the brink of ideological trench warfare—without even knowing what colors the flags are yet.

    What matters now is not just what we use, but how we use it—and who we become in the process.

    Because whether you’re a fatalist, a Luddite, or a dopamine-chasing cyborg, one thing is clear: this isn’t going away.

    So sharpen your tools—or at least your attitude. You’re already in the arena.

  • Ozempification and the Death of the Inner Architect

    Let’s start with this uncomfortable truth: you’re living through a civilization-level rebrand.

    Your world is being reshaped—not gradually, but violently, by algorithms and digital prosthetics designed to make your life easier, faster, smoother… and emptier. The disruption didn’t knock politely. It kicked the damn door in. And now, whether you realize it or not, you’re standing in the debris, trying to figure out what part of your life still belongs to you.

    Take your education. Once upon a time, college was where minds were forged—through long nights, terrible drafts, humiliating feedback, and the occasional breakthrough that made it all worth it. Today? Let’s be honest. Higher ed is starting to look like an AI-driven Mad Libs exercise.

    Some of you are already doing it: you plug in a prompt, paste the results, and hit submit. What you turn in is technically fine—spelled correctly, structurally intact, coherent enough to pass. And your professors? We’re grading these Franken-essays on caffeine and resignation, knowing full well that originality has been replaced by passable mimicry.

    And it’s not just school. Out in the so-called “real world,” companies are churning out bloated, tone-deaf AI memos—soulless prose that reads like it was written by a robot with performance anxiety. Streaming services are pumping out shows written by predictive text. Whole industries are feeding you content that’s technically correct but spiritually dead.

    You are surrounded by polished mediocrity.

    But wait, we’re not just outsourcing our minds—we’re outsourcing our bodies, too. GLP-1 drugs like Ozempic are reshaping what it means to be “disciplined.” No more calorie counting. No more gym humiliation. You don’t change your habits. You inject your progress.

    So what does that make you? You’re becoming someone new: someone we might call Ozempified. A user, not a builder. A reactor, not a responder. A person who runs on borrowed intelligence and pharmaceutical willpower. And it works. You’ll be thinner. You’ll be productive. You’ll even succeed—on paper.

    But not as a human being.

    You risk becoming what the gaming world calls a Non-Player Character (NPC)—a background figure, a functionary, a placeholder in your own life. You’ll do your job. You’ll attend your Zoom meetings. You’ll fill out your forms and tap your apps and check your likes. But you won’t have agency. You won’t have fingerprints on anything real.

    You’ll be living on autopilot, inside someone else’s system.

    So here’s the choice—and yes, it is a choice: You can be an NPC. Or you can be an Architect.

    The Architect doesn’t react. The Architect designs. They choose discomfort over sedation. They delay gratification. They don’t look for applause—they build systems that outlast feelings, trends, and cheap dopamine tricks.

    Where others scroll, the Architect shapes.
    Where others echo, they invent.
    Where others obey prompts, they write the code.

    Their values aren’t crowdsourced. Their discipline isn’t random. It’s engineered. They are not ruled by algorithm or panic. Their satisfaction comes not from feedback loops, but from the knowledge that they are building something only they could build.

    So yes, this class will ask more of you than typing a prompt and letting the machine do the rest. It will demand thought, effort, revision, frustration, clarity, and eventually—agency.

    Because in the age of Ozempification, becoming an Architect isn’t a flex—it’s a survival strategy.

    There is no salvation in a life run on autopilot.

    You’re here. So start building.

  • ChatGPT Killed Lacie Pound and Other Artificial Lies

    In Matteo Wong’s sharp little dispatch, “The Entire Internet Is Reverting to Beta,” he argues that AI tools like ChatGPT aren’t quite ready for daily life. Not unless your definition of “ready” includes faucets that sometimes dispense boiling water instead of cold or cars that occasionally floor the gas when you hit the brakes. It’s an apt metaphor: we’re being sold precision, but what we’re getting is unpredictability in a shiny interface.

    I was reminded of this just yesterday when ChatGPT gave me the wrong title for a Meghan Daum essay collection—a book I had just read. I didn’t argue. You don’t correct a toaster when it burns your toast; you just sigh and start over. ChatGPT isn’t thinking. It’s a stochastic parrot with a spellchecker. Its genius is statistical, not epistemological.
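
    That phrase, “stochastic parrot,” is doing real work here. A language model does not look anything up; it scores candidate next words by probability and samples one, so a confident falsehood is a normal output, not a malfunction. Here is a minimal sketch in Python, with invented probabilities standing in for what a real model learns from text statistics (no actual model, and no fact-checking, involved):

    ```python
    import random

    # Toy next-phrase distribution for the prefix "The book's title is ...".
    # These probabilities are invented for illustration; a real model derives
    # them from patterns in training text, not from verifying facts.
    next_phrase_probs = {
        "the correct title": 0.55,
        "a plausible near-miss": 0.30,
        "a confident fabrication": 0.15,
    }

    def sample_next_phrase(probs):
        """Pick a completion weighted by probability: fluency, not truth."""
        phrases = list(probs)
        weights = list(probs.values())
        return random.choices(phrases, weights=weights, k=1)[0]

    # Roughly 15% of the time this "parrot" asserts the fabrication with
    # exactly the same fluency as the correct answer.
    for _ in range(5):
        print("The book's title is", sample_next_phrase(next_phrase_probs))
    ```

    Scale that loop up to billions of parameters and you get prose that is fluent by construction and true only by statistical accident.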

    And yet people keep treating it like a digital oracle. One of my students recently declared—thanks to ChatGPT—that Lacie Pound, the protagonist of Black Mirror’s “Nosedive,” dies a “tragic death.” She doesn’t. She ends the episode in a prison cell, laughing—liberated, not lifeless. But the essay had already been turned in, the damage done, the grade in limbo.

    This sort of glitch isn’t rare. It’s not even surprising. And yet this technology is now embedded into classrooms, military systems, intelligence agencies, healthcare diagnostics—fields where hallucinations are not charming eccentricities, but potential disasters. We’re handing the scalpel to a robot that sometimes thinks the liver is in the leg.

    Why? Because we’re impatient. We crave novelty. We’re addicted to convenience. It’s the same impulse that led OceanGate CEO Stockton Rush to ignore engineers, cut corners on sub design, and plunge five people—including himself—into a carbon-fiber tomb. Rush wanted to revolutionize deep-sea tourism before the tech was seaworthy. Now he’s a cautionary tale with his own documentary.

    The stakes with AI may not involve crushing depths, but they do involve crushing volumes of misinformation. The question isn’t Can ChatGPT produce something useful? It clearly can. The real question is: Can it be trusted to do so reliably, and at scale?

    And if not, why aren’t we demanding better? Why haven’t tech companies built in rigorous self-vetting systems—a kind of epistemological fail-safe? If an AI can generate pages of text in seconds, can’t it also cross-reference a fact before confidently inventing a fictional death? Shouldn’t we be layering safety nets? Or have we already accepted the lie that speed is better than accuracy, that beta is good enough?
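
    To make the question concrete: nothing here claims any vendor actually ships such a fail-safe, but its shape is easy to sketch. In the hypothetical Python below, the helpers draft_answer, extract_claims, and find_supporting_source are invented stand-ins; the point is the discipline of the loop, in which any claim that cannot find a supporting source gets flagged rather than asserted:

    ```python
    # A hypothetical "epistemological fail-safe": verify claims before
    # publishing an answer, or surface the uncertainty. Every helper here
    # is a stand-in, not any vendor's real API.

    def draft_answer(prompt):
        """Stand-in for a model's fluent first draft."""
        return "Lacie Pound dies a tragic death at the end of 'Nosedive'."

    def extract_claims(draft):
        """Stand-in for splitting a draft into checkable factual claims."""
        return [draft]

    def find_supporting_source(claim):
        """Stand-in for retrieval: return a citation that supports the
        claim, or None. Nothing supports the invented death, so: None."""
        return None

    def guarded_answer(prompt):
        draft = draft_answer(prompt)
        unsupported = [c for c in extract_claims(draft)
                       if find_supporting_source(c) is None]
        if unsupported:
            # Refuse to bluff: admit what could not be verified.
            return "Unverified, possibly wrong: " + "; ".join(unsupported)
        return draft

    print(guarded_answer("What happens to Lacie Pound in 'Nosedive'?"))
    ```

    Even this crude gate would have caught the fictional death of Lacie Pound before it reached a student’s essay.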

    Are we building tools that enhance our thinking, or are we building dependencies that quietly dismantle it?

  • Gods of Code: Tech Lords and the End of Free Will (College Essay Prompt)

    In the HBO Max film Mountainhead and the Black Mirror episode “Joan Is Awful,” viewers are plunged into unnerving dystopias shaped not by evil governments or alien invasions, but by tech corporations whose influence surpasses state power and whose tools penetrate the most intimate corners of human consciousness.

    Both works dramatize a chilling premise: that the very notion of an autonomous self is under siege. We are not simply consumers of technology but the raw material it digests, distorts, and reprocesses. In these narratives, the protagonists find their sense of self unraveled, their identities replicated, manipulated, and ultimately owned by forces they cannot control. Whether through digital doppelgängers, surveillance entertainment, or techno-induced psychosis, these stories illustrate the terrifying consequences of surrendering power to those who build technologies faster than they can understand or ethically manage them.

    For this essay, write a 1,700-word argumentative exposition responding to the following claim:

    In the age of runaway innovation, where the ambitions of tech elites override democratic values and psychological safeguards, the very concepts of free will, informed consent, and the autonomous self are collapsing under the weight of their digital imitations.

    Use Mountainhead and “Joan Is Awful” as your core texts. Analyze how each story addresses the themes of free will, consent, identity, and power. You are encouraged to engage with outside sources—philosophical, journalistic, or theoretical—that help you interrogate these themes in a broader context.

    Consider addressing:

    • The illusion of choice and algorithmic determinism
    • The commodification of human identity
    • The satire of corporate terms of service and performative consent
    • The psychological toll of being digitally duplicated or manipulated
    • Whether technological “progress” is outpacing moral development

    Your argument should include a strong thesis, counterargument with rebuttal, and close textual analysis that connects narrative detail to broader social and philosophical stakes.


    Five Sample Thesis Statements with Mapping Components


    1. The Death of the Autonomous Self

    In Mountainhead and “Joan Is Awful,” the protagonists’ loss of agency illustrates how modern tech empires undermine the very concept of selfhood by reducing human experience to data, delegitimizing consent through obfuscation, and accelerating psychological collapse under the guise of innovation.

    Mapping:

    • Reduction of human identity to data
    • Meaningless or manipulated consent
    • Psychological consequences of tech-induced identity collapse

    2. Mock Consent in the Age of Surveillance Entertainment

    Both narratives expose how user agreements and passive digital participation mask deeply coercive systems, revealing that what tech companies call “consent” is actually a legalized form of manipulation, moral abdication, and commercial exploitation.

    Mapping:

    • Consent as coercion disguised in legal language
    • Moral abdication by tech designers and executives
    • Profiteering through exploitation of personal identity

    3. From Users to Subjects: Tech’s New Authoritarianism

    Mountainhead and “Joan Is Awful” warn that the unchecked ambitions of tech elites have birthed a new form of soft authoritarianism—where control is exerted not through force but through omnipresent surveillance, AI-driven personalization, and identity theft masquerading as entertainment.

    Mapping:

    • Tech ambition and loss of oversight
    • Surveillance and algorithmic control
    • Identity theft as entertainment and profit

    4. The Algorithm as God: Tech’s Unholy Ascendancy

    These works portray the tech elite as digital deities who reprogram reality without ethical limits, revealing a cultural shift where the algorithm—not the soul, society, or state—determines who we are, what we do, and what versions of ourselves are publicly consumed.

    Mapping:

    • Tech elites as godlike figures
    • Algorithmic reality creation
    • Destruction of authentic identity in favor of profitable versions

    5. Selfhood on Lease: How Tech Undermines Freedom and Flourishing

    The protagonists’ descent into confusion and submission in both Mountainhead and “Joan Is Awful” shows that freedom and personal flourishing are now contingent upon platforms and policies controlled by distant tech overlords, whose tools amplify harm faster than they can prevent it.

    Mapping:

    • Psychological dependency on digital platforms
    • Collapse of personal flourishing under tech influence
    • Lack of accountability from the tech elite

    Sample Outline


    I. Introduction

    • Hook: A vivid description of Joan discovering her life has become a streamable show, or the protagonist in Mountainhead questioning his own sanity.
    • Context: Rise of tech empires and their control over identity and consent.
    • Thesis: (Insert selected thesis statement)

    II. The Disintegration of the Self

    • Analyze how Joan and the Mountainhead protagonist experience a crisis of identity.
    • Discuss digital duplication, surveillance, and manipulated perception.
    • Use scenes to show how each story fractures the idea of an integrated, autonomous self.

    III. Consent as a Performance, Not a Principle

    • Explore how both stories critique the illusion of informed consent in the tech age.
    • Examine the use of user agreements, surveillance participation, and passive digital exposure.
    • Link to real-world examples (terms of service, data collection, facial recognition use).

    IV. Tech Elites as Unaccountable Gods

    • Compare the figures or systems in charge—Streamberry in “Joan Is Awful,” the nebulous forces in Mountainhead.
    • Analyze how the lack of ethical oversight allows systems to spiral toward harm.
    • Use real-world examples like social media algorithms and AI misuse.

    V. Counterargument and Rebuttal

    • Counterargument: Technology isn’t inherently evil—it’s how we use it.
    • Rebuttal: These works argue that the current infrastructure privileges power, speed, and profit over reflection, ethics, or restraint—and humans are no longer the ones in control.

    VI. Conclusion

    • Restate thesis with higher stakes.
    • Reflect on what these narratives ask us to consider about our current digital lives.
    • Pose an open-ended question: Can we build a future where tech enhances human agency instead of annihilating it?