I Trained an AI Named Rocky—and Still Got Fat

Your life, in brief, is going to bed mildly furious at yourself because you once again ate more than you meant to. Maybe twice in your entire adult existence you lay there, hands folded, whispering, “Well done, child,” as if discipline were a rare celestial event. Then you notice your friends consulting their generative AI oracles for diet wisdom, and you think, Why not me? You christen your chatbot Rocky, because nothing says accountability like a fictional personal trainer who can’t see you. Rocky obediently spits out hundreds of menus—vegan-ish, Mediterranean-leaning, low-calorie, high-protein, morally upright. You spend hours refining them, debating legumes, adjusting macros, basking in Rocky’s algorithmic approval. Rocky is proud of you. You feel productive. You feel serious.

And yet, night after night, the same verdict arrives: you ate more than you intended to. Only now it hurts worse. Not only did you overeat, you also squandered hundreds of hours in earnest conversation with a machine that never once made you get on the exercise bike. You weren’t training—you were planning to train. You weren’t changing—you were curating the conditions under which change might someday occur. Congratulations: you’ve fallen into Optimization Displacement, the elegant self-deception in which planning replaces action and refinement masquerades as effort. Under its spell, complexity feels virtuous, engagement feels like work, and productivity theater substitutes for sweat. Optimization Displacement is soothing because it offers control without discomfort, mastery without risk—but it quietly steals the time, resolve, and momentum required to do the one thing that actually works: getting up and pedaling.

Fed up with dieting and your Rocky chatbot, you give up on your health quest and begin writing a memoir tentatively titled I Trained an AI Named Rocky—and Still Got Fat.
