Tag: ai

  • Toothpaste, Technology, and the Death of the Luddite Dream

    Toothpaste, Technology, and the Death of the Luddite Dream

    A Luddite, in modern dress, is a self-declared purist who swats at technology like it’s a mosquito threatening their sense of self-agency, quality, and craft. They fear contamination—that somehow the glow of a screen dulls the soul, or that a machine’s hand on the process strips the art from the outcome. It’s a noble impulse, maybe even romantic. But let’s be honest: it’s also doomed.

    Technology isn’t an intruder anymore—it’s the furniture. It’s the toothpaste out of the tube, the guest who showed up uninvited and then installed a smart thermostat. You can’t un-invent it. You can’t unplug the century.

    And I, for one, am a fatalist about it. Not the trembling, dystopian kind. Just… resigned. Technology comes in waves—fire, the wheel, the iPhone, and now OpenAI. Each time, we claim it’s the end of humanity, and each time we wake up, still human, just a bit more confused. You can’t fight the tide with a paper umbrella.

    But here’s where things get tricky: we’re not adapting well. Right now, with AI, we’re in the maladaptive toddler stage—poking it, misusing it, letting it do our thinking while we lie to ourselves about “optimization.” We are staring down a communications tool so powerful it could either elevate our cognitive evolution… or turn us all into well-spoken mannequins.

    We are not guaranteed to adapt well. But we have no choice but to try.

    That struggle—to engage with technology without becoming technology, to harness its speed without losing our depth—is now one of the defining human questions. And the truth is: we haven’t even mapped the battlefield yet.

    There will be factions. Teams. Dogmas. Some will preach integration, others withdrawal. Some will demand toolkits and protocols; others will romanticize silence and slowness. We are on the brink of ideological trench warfare—without even knowing what colors the flags are yet.

    What matters now is not just what we use, but how we use it—and who we become in the process.

    Because whether you’re a fatalist, a Luddite, or a dopamine-chasing cyborg, one thing is clear: this isn’t going away.

    So sharpen your tools—or at least your attitude. You’re already in the arena.

  • Ozempification and the Death of the Inner Architect

    Ozempification and the Death of the Inner Architect

    Let’s start with this uncomfortable truth: you’re living through a civilization-level rebrand.

    Your world is being reshaped—not gradually, but violently, by algorithms and digital prosthetics designed to make your life easier, faster, smoother… and emptier. The disruption didn’t knock politely. It kicked the damn door in. And now, whether you realize it or not, you’re standing in the debris, trying to figure out what part of your life still belongs to you.

    Take your education. Once upon a time, college was where minds were forged—through long nights, terrible drafts, humiliating feedback, and the occasional breakthrough that made it all worth it. Today? Let’s be honest. Higher ed is starting to look like an AI-driven Mad Libs exercise.

    Some of you are already doing it: you plug in a prompt, paste the results, and hit submit. What you turn in is technically fine—spelled correctly, structurally intact, coherent enough to pass. And your professors? We’re grading these Franken-essays on caffeine and resignation, knowing full well that originality has been replaced by passable mimicry.

    And it’s not just school. Out in the so-called “real world,” companies are churning out bloated, tone-deaf AI memos—soulless prose that reads like it was written by a robot with performance anxiety. Streaming services are pumping out shows written by predictive text. Whole industries are feeding you content that’s technically correct but spiritually dead.

    You are surrounded by polished mediocrity.

    But wait, we’re not just outsourcing our minds—we’re outsourcing our bodies, too. GLP-1 drugs like Ozempic are reshaping what it means to be “disciplined.” No more calorie counting. No more gym humiliation. You don’t change your habits. You inject your progress.

    So what does that make you? You’re becoming someone new: someone we might call Ozempified. A user, not a builder. A reactor, not a responder. A person who runs on borrowed intelligence and pharmaceutical willpower. And it works. You’ll be thinner. You’ll be productive. You’ll even succeed—on paper.

    But not as a human being.

    You risk becoming what the gaming world calls a Non-Player Character (NPC)—a background figure, a functionary, a placeholder in your own life. You’ll do your job. You’ll attend your Zoom meetings. You’ll fill out your forms and tap your apps and check your likes. But you won’t have agency. You won’t have fingerprints on anything real.

    You’ll be living on autopilot, inside someone else’s system.

    So here’s the choice—and yes, it is a choice: You can be an NPC. Or you can be an Architect.

    The Architect doesn’t react. The Architect designs. They choose discomfort over sedation. They delay gratification. They don’t look for applause—they build systems that outlast feelings, trends, and cheap dopamine tricks.

    Where others scroll, the Architect shapes.
    Where others echo, they invent.
    Where others obey prompts, they write the code.

    Their values aren’t crowdsourced. Their discipline isn’t random. It’s engineered. They are not ruled by algorithm or panic. Their satisfaction comes not from feedback loops, but from the knowledge that they are building something only they could build.

    So yes, this class will ask more of you than typing a prompt and letting the machine do the rest. It will demand thought, effort, revision, frustration, clarity, and eventually—agency.

    Because in the age of Ozempification, becoming an Architect isn’t a flex—it’s a survival strategy.

    There is no salvation in a life run on autopilot.

    You’re here. So start building.

  • ChatGPT Killed Lacie Pound and Other Artificial Lies

    ChatGPT Killed Lacie Pound and Other Artificial Lies

    In Matteo Wong’s sharp little dispatch, “The Entire Internet Is Reverting to Beta,” he argues that AI tools like ChatGPT aren’t quite ready for daily life. Not unless your definition of “ready” includes faucets that sometimes dispense boiling water instead of cold or cars that occasionally floor the gas when you hit the brakes. It’s an apt metaphor: we’re being sold precision, but what we’re getting is unpredictability in a shiny interface.

    I was reminded of this just yesterday when ChatGPT gave me the wrong title for a Meghan Daum essay collection—a collection I had just read. I didn’t argue. You don’t correct a toaster when it burns your toast; you just sigh and start over. ChatGPT isn’t thinking. It’s a stochastic parrot with a spellchecker. Its genius is statistical, not epistemological.

    And yet people keep treating it like a digital oracle. One of my students recently declared—thanks to ChatGPT—that Lacie Pound, the protagonist of Black Mirror’s “Nosedive,” dies a “tragic death.” She doesn’t. She ends the episode in a prison cell, laughing—liberated, not lifeless. But the essay had already been turned in, the damage done, the grade in limbo.

    This sort of glitch isn’t rare. It’s not even surprising. And yet this technology is now embedded into classrooms, military systems, intelligence agencies, healthcare diagnostics—fields where hallucinations are not charming eccentricities, but potential disasters. We’re handing the scalpel to a robot that sometimes thinks the liver is in the leg.

    Why? Because we’re impatient. We crave novelty. We’re addicted to convenience. It’s the same impulse that led OceanGate CEO Stockton Rush to ignore engineers, cut corners on sub design, and plunge five people—including himself—into a carbon-fiber tomb. Rush wanted to revolutionize deep-sea tourism before the tech was seaworthy. Now he’s a cautionary tale with his own documentary.

    The stakes with AI may not involve crushing depths, but they do involve crushing volumes of misinformation. The question isn’t Can ChatGPT produce something useful? It clearly can. The real question is: Can it be trusted to do so reliably, and at scale?

    And if not, why aren’t we demanding better? Why haven’t tech companies built in rigorous self-vetting systems—a kind of epistemological fail-safe? If an AI can generate pages of text in seconds, can’t it also cross-reference a fact before confidently inventing a fictional death? Shouldn’t we be layering safety nets? Or have we already accepted the lie that speed is better than accuracy, that beta is good enough?

    Are we building tools that enhance our thinking, or are we building dependencies that quietly dismantle it?

  • Gods of Code: Tech Lords and the End of Free Will (College Essay Prompt)

    Gods of Code: Tech Lords and the End of Free Will (College Essay Prompt)

    In the HBO Max film Mountainhead and the Black Mirror episode “Joan Is Awful,” viewers are plunged into unnerving dystopias shaped not by evil governments or alien invasions, but by tech corporations whose influence surpasses state power and whose tools penetrate the most intimate corners of human consciousness.

    Both works dramatize a chilling premise: that the very notion of an autonomous self is under siege. We are not simply consumers of technology but the raw material it digests, distorts, and reprocesses. In these narratives, the protagonists find their sense of self unraveled, their identities replicated, manipulated, and ultimately owned by forces they cannot control. Whether through digital doppelgängers, surveillance entertainment, or techno-induced psychosis, these stories illustrate the terrifying consequences of surrendering power to those who build technologies faster than they can understand or ethically manage them.

    For this essay, write a 1,700-word argumentative exposition responding to the following claim:

    In the age of runaway innovation, where the ambitions of tech elites override democratic values and psychological safeguards, the very concept of free will, informed consent, and the autonomous self is collapsing under the weight of its digital imitation.

    Use Mountainhead and “Joan Is Awful” as your core texts. Analyze how each story addresses the themes of free will, consent, identity, and power. You are encouraged to engage with outside sources—philosophical, journalistic, or theoretical—that help you interrogate these themes in a broader context.

    Consider addressing:

    • The illusion of choice and algorithmic determinism
    • The commodification of human identity
    • The satire of corporate terms of service and performative consent
    • The psychological toll of being digitally duplicated or manipulated
    • Whether technological “progress” is outpacing moral development

    Your argument should include a strong thesis, counterargument with rebuttal, and close textual analysis that connects narrative detail to broader social and philosophical stakes.


    Five Sample Thesis Statements with Mapping Components


    1. The Death of the Autonomous Self

    In Mountainhead and “Joan Is Awful,” the protagonists’ loss of agency illustrates how modern tech empires undermine the very concept of selfhood by reducing human experience to data, delegitimizing consent through obfuscation, and accelerating psychological collapse under the guise of innovation.

    Mapping:

    • Reduction of human identity to data
    • Meaningless or manipulated consent
    • Psychological consequences of tech-induced identity collapse

    2. Mock Consent in the Age of Surveillance Entertainment

    Both narratives expose how user agreements and passive digital participation mask deeply coercive systems, revealing that what tech companies call “consent” is actually a legalized form of manipulation, moral abdication, and commercial exploitation.

    Mapping:

    • Consent as coercion disguised in legal language
    • Moral abdication by tech designers and executives
    • Profiteering through exploitation of personal identity

    3. From Users to Subjects: Tech’s New Authoritarianism

    Mountainhead and “Joan Is Awful” warn that the unchecked ambitions of tech elites have birthed a new form of soft authoritarianism—where control is exerted not through force but through omnipresent surveillance, AI-driven personalization, and identity theft masquerading as entertainment.

    Mapping:

    • Tech ambition and loss of oversight
    • Surveillance and algorithmic control
    • Identity theft as entertainment and profit

    4. The Algorithm as God: Tech’s Unholy Ascendancy

    These works portray the tech elite as digital deities who reprogram reality without ethical limits, revealing a cultural shift where the algorithm—not the soul, society, or state—determines who we are, what we do, and what versions of ourselves are publicly consumed.

    Mapping:

    • Tech elites as godlike figures
    • Algorithmic reality creation
    • Destruction of authentic identity in favor of profitable versions

    5. Selfhood on Lease: How Tech Undermines Freedom and Flourishing

    The protagonists’ descent into confusion and submission in both Mountainhead and “Joan Is Awful” shows that freedom and personal flourishing are now contingent upon platforms and policies controlled by distant tech overlords, whose tools amplify harm faster than they can prevent it.

    Mapping:

    • Psychological dependency on digital platforms
    • Collapse of personal flourishing under tech influence
    • Lack of accountability from the tech elite

    Sample Outline


    I. Introduction

    • Hook: A vivid description of Joan discovering her life has become a streamable show, or the protagonist in Mountainhead questioning his own sanity.
    • Context: Rise of tech empires and their control over identity and consent.
    • Thesis: (Insert selected thesis statement)

    II. The Disintegration of the Self

    • Analyze how Joan and the Mountainhead protagonist experience a crisis of identity.
    • Discuss digital duplication, surveillance, and manipulated perception.
    • Use scenes to show how each story fractures the idea of an integrated, autonomous self.

    III. Consent as a Performance, Not a Principle

    • Explore how both stories critique the illusion of informed consent in the tech age.
    • Examine the use of user agreements, surveillance participation, and passive digital exposure.
    • Link to real-world examples (terms of service, data collection, facial recognition use).

    IV. Tech Elites as Unaccountable Gods

    • Compare the figures or systems in charge—Streamberry in Joan Is Awful, the nebulous forces in Mountainhead.
    • Analyze how the lack of ethical oversight allows systems to spiral toward harm.
    • Use real-world examples like social media algorithms and AI misuse.

    V. Counterargument and Rebuttal

    • Counterargument: Technology isn’t inherently evil—it’s how we use it.
    • Rebuttal: These works argue that the current infrastructure privileges power, speed, and profit over reflection, ethics, or restraint—and humans are no longer the ones in control.

    VI. Conclusion

    • Restate thesis with higher stakes.
    • Reflect on what these narratives ask us to consider about our current digital lives.
    • Pose an open-ended question: Can we build a future where tech enhances human agency instead of annihilating it?

  • The Handwriting Is on the Wall for Writing Instructors Like Myself

    The Handwriting Is on the Wall for Writing Instructors Like Myself

    There’s a cliché I’ve avoided all my life because, as someone who teaches college writing, I’m supposed to be offended by clichés. But now, God help me, I must say it: I see the handwriting on the wall. And it’s blinking in algorithmic neon, blinding my eyes.

    I’ve taught college writing for forty years. My wife, a fellow lifer in the trenches, has clocked twenty-five teaching sixth and seventh graders. Like other teachers, we were caught off guard by AI writing platforms. We’re now staring down the barrel of obsolescence while those platforms give us an imperious smile and say, “We’ve got this now.”

    Try crafting an “AI-resistant” assignment. Go ahead. Ask students to conduct interviews, keep journals, write about memories. They’ll feed your prompt into ChatGPT and produce an AI interview, journal entry, and personal reflection with all the depth and soul of a stale Pop-Tart. You squint at these AI responses, and you can tell something isn’t right. They look almost real, but there’s a robotic quality about them. Your AI-detection software isn’t reliable, so you refrain from making accusations.

    When I tell my wife I feel that my job is in danger, she shrugs and says there’s little we can do. The toothpaste is out of the tube. There’s no going back.

    I suppose my wife will become a glorified camp counselor with grading software. For me, it will be different. I teach college. I’ll have to attend a re-education camp dressed up as “professional development.” I’ll have to learn how to teach students to prompt AI like Vegas magicians—how to trick it into coherence, how to interrogate its biases. Writing classes will be rebranded as Prompt Engineering.

    At sixty-three, I’m no fool. I know what happens to tired draft horses when the carriage goes electric. I’ve seen the pasture. I can smell the industrial glue. And I’m not alone. My colleagues—bright, literate, and increasingly demoralized—mutter the same bitter mantra: “We are the AI police. And the criminals are always one jailbreak ahead.”

  • The Composition Apocalypse: How AI Ate the Syllabus

    The Composition Apocalypse: How AI Ate the Syllabus

    We’ve arrived at the third and final essay in this course, and the gloves are off.

    Just as GLP-1 drugs are transforming eating—from pleasure to optimization—AI is transforming writing. That’s not speculation; it’s the new syllabus. We’re witnessing the great extinction event of the traditional writing process. Drafting, revising, struggling with a paragraph like it’s a Rubik’s Cube in the dark? That’s quaint now. The machines are here, and they’re fast, fluent, and disarmingly coherent.

    Meanwhile, college writing programs are playing catch-up while the bots are already teaching themselves AP Composition. If we want writing instructors to remain relevant (i.e., not replaced by a glowing terminal that says “Rewrite?”), we’ll need to reimagine our role. The new instructor is less grammar cop, more rhetorical strategist. Part voice coach, part creative director, part ethicist.

    Your task:
    Write a 1,700-word argumentative essay responding to this claim:
    To remain essential in the Age of AI, college writing instruction must evolve from teaching students how to write to teaching students how to think—critically, ethically, and strategically—alongside machines.

    Consider how AI is reprogramming the writing process and what we must do in response:

    • Should writing classes teach AI prompt-crafting instead of thesis statements?
    • Will rhetorical literacy and moral clarity become more important than knowing where to put a semicolon?
    • Should students learn to turn Blender into a rhetorical tool—visualizing arguments as 3D structures or spatial infographics?
    • Will gamification and multimodal projects replace the five-paragraph zombie essay?
    • Are writing studios the future—dynamic, collaborative AI-human spaces where “How well can you prompt?” becomes the new “How well can you argue?”

    In short, what must the writing classroom become when the act of writing itself is no longer uniquely human?

    This prompt doesn’t ask you to mourn the old ways. It demands that you architect the new ones. Push past nostalgia and imagine what a post-ChatGPT curriculum might look like—not just to survive the AI onslaught, but to lead it.

  • The Rebranding of College Writing Instructors as Prompt Engineers

    The Rebranding of College Writing Instructors as Prompt Engineers

    There’s a cliché I’ve sidestepped for decades, the kind of phrase I’ve red-penned into oblivion in freshman essays. But now, God help me, I must say it: I see the handwriting on the wall. And it’s written in 72-point sans serif, blinking in algorithmic neon.

    I’ve taught college writing for forty years. My wife, a fellow lifer in the trenches, has clocked twenty-five teaching sixth and seventh graders. Between us, we’ve marked enough essays to wallpaper the Taj Mahal. And yet here we are, staring down the barrel of obsolescence while AI platforms politely tap us on the shoulder and whisper, “We’ve got this now.”

    Try crafting an “AI-resistant” assignment. Go ahead. Ask students to conduct interviews, keep journals, write about memories. They’ll feed your prompt into ChatGPT with the finesse of a hedge fund trader moving capital offshore. The result? A flawlessly ghostwritten confession by a bot with a stunning grasp of emotional trauma and a suspicious lack of typos.

    Middle school teachers, my wife says, are on their way to becoming glorified camp counselors with grading software. As for us college instructors, we’ll be lucky to avoid re-education camps dressed up as “professional development.” The new job? Teaching students how to prompt AI like Vegas magicians—how to trick it into coherence, how to interrogate its biases, how to extract signal from synthetic noise. Critical thinking rebranded as Prompt Engineering.

    Gone are the days of unpacking the psychic inertia of J. Alfred Prufrock or peeling back the grim cultural criticism of Coetzee’s Disgrace. Now it’s Kahoot quizzes and real-time prompt battles. Welcome to Gamified Rhetoric 101. Your syllabus: Minecraft meets Brave New World.

    At sixty-three, I’m no fool. I know what happens to tired draft horses when the carriage goes electric. I’ve seen the pasture. I can smell the industrial glue. And I’m not alone. My colleagues—bright, literate, and increasingly demoralized—mutter the same bitter mantra: “We are the AI police. And the criminals are always one jailbreak ahead.”

    We keep saying we need to “stop the bleeding,” another cliché I’d normally bin. But here I am, bleeding clichés like a wounded soldier of the Enlightenment, fighting off the Age of Ozempification—a term I’ve coined to describe the creeping automation of everything from weight loss to wit. We’re not writing anymore; we’re curating prompts. We’re not thinking; we’re optimizing.

    This isn’t pessimism. It’s clarity. And if clarity means leaning on a cliché, so be it.

  • Trapped in the AI Age’s Metaphysical Tug-of-War

    Trapped in the AI Age’s Metaphysical Tug-of-War

    I’m typing this to the sound of Beethoven—1,868 MP3s of compressed genius streamed through the algorithmic convenience of a playlist. It’s a 41-hour-and-8-minute monument to compromise: a simulacrum of sonic excellence that can’t hold a candle to the warmth of an LP. But convenience wins. Always.

    I make Faustian bargains like this daily. Thirty-minute meals instead of slow-cooked transcendence. Athleisure instead of tailoring. A Honda instead of high horsepower. The good-enough over the sublime. Not because I’m lazy—because I’m functional. Efficient. Optimized.

    And now, writing.

    For a year, my students and I have been feeding prompts into ChatGPT like a pagan tribe tossing goats into the volcano—hoping for inspiration, maybe salvation. Sometimes it works. The AI outlines, brainstorms, even polishes. But the more we rely on it, the more I feel the need to write without it—just to remember what my own voice sounds like. Just as the vinyl snob craves the imperfections of real analog music or the home cook insists on peeling garlic by hand, I need to suffer through the process.

    We’re caught in a metaphysical tug-of-war. We crave convenience but revere authenticity. We binge AI-generated sludge by day, then go weep over a hand-made pie crust YouTube video at night. We want our lives frictionless, but our souls textured. It’s the new sacred vs. profane: What do we reserve for real, and what do we surrender to the machine?

    I can’t say where this goes. Maybe real food will be phased out, like Blockbuster or bookstores. Maybe we’ll subsist on GLP-1 drugs, AI-tailored nutrient paste, and the joyless certainty of perfect lab metrics.

    As for entertainment, I’m marginally more hopeful. Chris Rock, Sarah Silverman—these are voices, not products. AI can churn out sitcoms, but it can’t bleed. It can’t bomb. It can’t riff on childhood trauma with perfect timing. Humans know the difference between a story and a story-shaped thing.

    Still, writing is in trouble. Reading, too. AI erodes attention spans like waves on sandstone. Books? Optional. Original thought? Delegated. The more AI floods the language, the more we’ll acclimate to its sterile rhythm. And the more we acclimate, the less we’ll even remember what a real voice sounds like.

    Yes, there will always be the artisan holdouts—those who cook, write, read, and listen with intention. But they’ll be outliers. A boutique species. The rest of us will be lean, medicated, managed. Data-optimized units of productivity.

    And yet, there will be stories. There will always be stories. Because stories aren’t just culture—they’re our survival instinct dressed up as entertainment. When everything else is outsourced, commodified, and flattened, we’ll still need someone to stand up and tell us who we are.

  • College Essay Prompt: Ozempification, AI, and the End of Food Culture?

    College Essay Prompt: Ozempification, AI, and the End of Food Culture?

    Prompt Overview:
    In recent years, the rise of GLP-1 drugs like Ozempic and Wegovy has begun to reshape our relationship with hunger, desire, and food itself. Meanwhile, artificial intelligence is transforming how food is produced, marketed, and even chosen—sometimes without human involvement. This convergence may signal the end of eating as a social, cultural, and emotional act.

    Your Task:
    Write an 8-paragraph argumentative essay that responds to the following claim:

    Claim:
    GLP-1 drugs and artificial intelligence are ending the traditional notion of food and eating as cultural, emotional, and communal experiences.

    Instructions:

    1. Introduction (Paragraph 1):
      Hook the reader with a striking observation or anecdote. Clearly present the claim and your thesis—whether you agree, disagree, or hold a nuanced position.
    2. Background (Paragraph 2):
      Briefly explain what GLP-1 drugs (e.g., Ozempic) do and how AI is being used in food production and personalization.
    3. First Argument (Paragraph 3):
      Make your first point in support of or against the claim. Use evidence from a reliable source.
    4. Second Argument (Paragraph 4):
      Develop a second point. This might include shifts in consumer behavior, changing food rituals, or the erosion of cultural traditions.
    5. Third Argument (Paragraph 5):
      Add a third supporting point that deepens your position. Consider long-term consequences or ethical implications.
    6. Counterargument and Rebuttal (Paragraph 6):
      Acknowledge a reasonable opposing view—perhaps that AI and GLP-1 drugs offer needed solutions to health crises—and then refute it using logic and evidence.
    7. Cultural Reflection (Paragraph 7):
      Reflect on what is at stake culturally. What do we lose if food is reduced to a biometric algorithm?
    8. Conclusion (Paragraph 8):
      Return to your thesis and end with a memorable insight or call to action.

    Source Requirement:
    Use at least 4 credible sources. At least two should come from recent journalism or peer-reviewed studies (2023 or later). Sources must be cited in MLA format.

    Optional Angles to Explore:

    • How do GLP-1 drugs rewire human appetite?
    • Will AI-generated food disconnect us from culinary heritage?
    • Can technological efficiency coexist with food as a ritual or joy?

  • College Essay Prompt: Performance, Collapse, and the Hunger for Validation

    College Essay Prompt: Performance, Collapse, and the Hunger for Validation

    In the Black Mirror episode “Nosedive,” Lacie Pound carefully curates her public persona to climb the social ranking system, only to experience a spectacular breakdown when her performative identity collapses. Similarly, in the Netflix documentary Untold: The Liver King, Brian Johnson (aka the Liver King) constructs a hyper-masculine brand built on ancestral living and self-discipline, but his digital persona unravels after his steroid use is exposed—calling into question the authenticity of his entire identity.

    Drawing on insights from The Social Dilemma and Sherry Turkle’s TED Talk “Connected, but alone?”, write an 8-paragraph essay analyzing how both Lacie Pound and the Liver King experience breakdowns caused by the pressure to perform a marketable self online. Consider how their stories reveal broader truths about the emotional and psychological toll of living in a world where self-worth is measured through digital validation.

    Instructions:

    Your essay should have a clear thesis and be structured as follows:

    Paragraph 1 – Introduction

    • Briefly introduce Lacie Pound and the Liver King as case studies in digital performance.
    • State your thesis: What common psychological or social dynamic do their stories reveal about life in the attention economy?

    Paragraph 2 – The Rise of the Performed Self

    • Explain how Lacie and the Liver King construct public identities tailored for approval.
    • Use The Social Dilemma and/or Turkle to support your claim about the pressures of online self-curation.

    Paragraph 3 – The Collapse of Lacie Pound

    • Analyze the arc of Lacie’s breakdown.
    • Show how social scoring leads to isolation and emotional implosion.

    Paragraph 4 – The Unmasking of the Liver King

    • Describe how his confession undermines his brand.
    • Discuss the role of digital audiences in both elevating and dismantling him.

    Paragraph 5 – The Role of Tech Platforms

    • How do algorithms and platforms reward performance and punish authenticity?
    • Draw from The Social Dilemma for evidence.

    Paragraph 6 – The Illusion of Connection

    • Use Turkle’s TED Talk to explore how both characters are “connected, but alone.”
    • Consider their emotional lives behind the digital façade.

    Paragraph 7 – A Counterargument

    • Could it be argued that both Lacie and the Liver King benefited from their online identities, at least temporarily?
    • Briefly address and rebut this view.

    Paragraph 8 – Conclusion

    • Reaffirm your thesis.
    • Reflect on what their stories warn us about the future of identity, performance, and mental health in the digital age.

    Requirements:

    • MLA format
    • 4 sources minimum (episode, documentary, TED Talk, and one external article or scholarly source of your choice)
    • Include a Works Cited page

    Here are 7 ways Lacie Pound (Black Mirror: Nosedive) and the Liver King (Untold: The Liver King) were manipulated by social media into self-sabotage, drawn through the lens of The Social Dilemma and Sherry Turkle’s TED Talk “Connected, but alone?”:


    1. They Mistook Validation for Connection

    Turkle argues we’ve “sacrificed conversation for connection,” replacing real intimacy with digital approval.

    • Lacie chases ratings instead of relationships, slowly alienating herself from authentic human bonds.
    • The Liver King builds a global audience but admits to loneliness and insecurity beneath the performative bravado.

    2. They Became Addicted to the Performance of Perfection

    The Social Dilemma explains how platforms reward idealized personas, not authenticity.

    • Lacie’s entire life becomes a curated highlight reel of fake smiles and forced gratitude.
    • The Liver King obsessively maintains his primal-man image, even risking credibility and health to keep the illusion intact.

    3. They Were Trapped in an Algorithmic Feedback Loop

    Algorithms feed users what keeps them engaged—usually content that reinforces their current identity.

    • Lacie’s feed reflects her desire to be liked, pushing her deeper into a phony aesthetic.
    • The Liver King is incentivized to keep escalating his primal stunts—eating raw organs, screaming workouts—not because it’s healthy, but because it gets clicks.

    4. They Confused Metrics with Meaning

    The Social Dilemma reveals how “likes,” views, and follower counts hijack the brain’s reward system.

    • Lacie sees her social score as a measure of human worth.
    • The Liver King sees followers as a proxy for legacy and success—until the steroid scandal exposes the hollowness behind the numbers.

    5. They Substituted Self-Reflection with Self-Branding

    Turkle notes that in digital spaces, we “edit, delete, retouch” our lives. But that comes at the cost of honest self-understanding.

    • Lacie never pauses to ask who she is outside the algorithm’s gaze.
    • The Liver King becomes his own brand, losing sight of the person beneath the loincloth and beard.

    6. They Were Driven by Fear of Being Forgotten

    Both characters fear digital invisibility more than real-world failure.

    • Lacie’s panic when her rating drops is existential; she’s no one without her score.
    • The Liver King’s confession comes only after public exposure threatens his empire—because relevance, not truth, is the ultimate currency.

    7. They Reached a Breaking Point in Private but Fell Apart in Public

    The Social Dilemma highlights how tech is designed to capture our attention, not care for our well-being.

    • Lacie breaks down in front of an audience, her worst moment recorded and shared.
    • The Liver King’s undoing is broadcast to the same crowd that once idolized him—turning shame into spectacle.

    Three Sample Thesis Statements

    1. Basic (Clear & Focused):

    Both Lacie Pound and the Liver King suffer emotional breakdowns because they become trapped by the very social media systems they believe will bring them success, as shown through their obsession with validation, performance, and visibility.


    2. Intermediate (More Insightful):

    Lacie Pound and the Liver King, though separated by fiction and reality, both represent victims of an attention economy that rewards curated identities over authentic living—ultimately leading them to sacrifice mental health, integrity, and human connection for the illusion of approval.


    3. Advanced (Nuanced & Sophisticated):

    As Lacie Pound and the Liver King spiral into public self-destruction, their stories expose the way digital platforms—backed by algorithmic manipulation and cultural hunger for spectacle—transform the self into a brand, connection into currency, and identity into a high-risk performance that inevitably collapses under its own artifice.