Two nights ago, I did something desperate: I asked ChatGPT to craft me a weight-loss meal plan and recommend my daily protein intake. Ever obliging, it spit out a gleaming regimen straight from a fitness influencer’s fever dream—four meals a day, 2,400 calories, and a jaw-dropping 210 grams of protein.
The menu was pure gym-bro canon: power scrambles, protein smoothies, broiled chicken breasts stacked like cordwood, Ezekiel toast to virtue-signal my commitment, and yams because, apparently, you can’t sculpt a six-pack without a root vegetable chaser.
Being moderately literate in both numbers and delusion, I did the math. The actual calorie count? Closer to 3,000. I told ChatGPT that at 3,000 calories a day, I wouldn’t be losing anything but my dignity. I’d be gaining—weight, resentment, possibly a second chin.
I coaxed it down to 190 grams of protein, begging for something that resembled reality. The new menu looked less like The Rock’s breakfast and more like something a human might actually endure. Still, I pressed further, explaining that in the savage conditions of the real world—where meals are not perfectly macro-measured and humans occasionally eat a damn piece of pizza—it was hard to hit 190 grams of protein without blowing past 2,400 calories.
Would I really lose muscle if I settled for a lowly 150 grams of protein?
ChatGPT, showing either mercy or weakness, conceded: at worst, I might suffer a “sliver” of muscle loss. (Its word—sliver—suggesting something as insignificant as a paper cut to my physique.) It even praised my “instincts,” like a polite but slightly nervous trainer who doesn’t want to get fired.
In three rounds, I had negotiated ChatGPT down from 210 grams to 150 grams of protein—a full 29% drop. Which left me wondering:
Was ChatGPT telling me the truth—or just nodding agreeably like a digital butler eager to polish my biases?
Did I really want to learn the optimal protein intake for reaching 200 pounds of shredded glory—or had I already decided that 150 grams felt right, and merely needed an algorithmic enabler to bless it?
Here’s the grim but necessary truth: ChatGPT is infinitely more useful to me as a sparring partner than a yes-man in silicon livery.
I don’t need an AI that strokes my ego like a coddling life coach telling me my “authentic self” is enough. I need a credible machine—one willing to challenge my preconceived notions, kick my logical lapses in the teeth, and leave my cognitive biases bleeding in the dirt.
In short: I’m not hiring a valet. I’m stepping into the ring with a referee.
And sometimes, even a well-meaning AI needs to be reminded that telling the hard truth beats handing out warm towels and platitudes.