In his bracing essay “Colleges Are Preparing to Self-Lobotomize,” Michael Clune accuses higher education of handling AI with the institutional equivalent of a drunk wielding a chainsaw. The subtitle gives away the indictment: “The skills that students will need in an age of automation are precisely those that are eroded by inserting AI into the educational process.” Colleges, Clune argues, spent the first three years of generative AI staring at the floor. Now they’re overcorrecting—embedding AI everywhere as if saturation were the same thing as competence. It isn’t. It’s panic dressed up as innovation.
The prevailing fantasy is that if AI is everywhere, mastery will seep into students by osmosis. But the opposite is happening. Colleges are training students to rely on frictionless services while quietly abandoning the capacities that make AI usable in any serious way: judgment, learning agility, and flexible analysis. The tools are getting smarter. The users are getting thinner.
That thinning has a name. Cognitive Thinning is the gradual erosion of critical thinking that occurs when sustained mental effort is replaced by convenience. It sets in when institutions assume that constant exposure to powerful tools will produce competence, even as they dismantle the practices that build it. As AI grows more capable, students are asked to do less thinking, tolerate less uncertainty, and carry less intellectual weight. The result is a widening imbalance: smarter systems paired with slimmer minds—efficient, polished, and increasingly unable to move beyond the surface of what machines provide.
Clune wants students to avoid this fate, but he faces a rhetorical problem. He keeps insisting on abstractions—critical thinking, intellectual flexibility, judgment—in a culture trained to distrust anything abstract. Telling a screen-saturated society to imagine thinking outside screens is like telling a fish to imagine life outside water. The first task isn’t instruction. It’s translation.
The fish analogy holds. A fish is aquatic; water isn’t a preference—it’s a prison. A young person raised entirely on screens, prompts, and optimization tools treats that ecosystem as reality itself. Like the fish, they know only one environment. We can name this condition precisely. They are cognitively outsourced, trained to delegate thought as if it were healthy. They are algovorous, endlessly stimulated by systems that quietly erode attention and resilience. They are digitally obligate, unable to function without mediation. By definition, these orientations crowd out critical thinking. They produce people who function smoothly inside digital systems and falter everywhere else.
Drop such a person into a college that recklessly embeds AI into every course in the name of being “future-proof,” and you don’t produce adaptability—you produce fragility. In some fields, this fragility is fatal. Clune cites a telling statistic: history majors now have roughly half the unemployment rate of recent computer science graduates. The implication is blunt. Liberal education builds range. Narrow technical training builds specialists who snap when the environment shifts. As the New York Times put it in a headline Clune references: “Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle.” AI is replacing coders. Life inside a tiny digital ecosystem does not prepare you for a world that mutates.
Is AI the cause of this dysfunction? No. The damage predates ChatGPT. I use AI constantly—and enjoy it. It sharpens my curiosity. It helps me test ideas. It makes me smarter because I am not trapped inside it. I have a life beyond screens. I’ve read thousands of books. I can zoom in and out—trees and forest—without panic. I have language for my inner life, which means I can catch myself when I become maudlin, entropic, dissolute, misanthropic, lugubrious, or vainglorious. I have history, philosophy, and religion as reference points. We call this bundle “critical thinking,” but what it really amounts to is being fully human.
Someone who has outsourced thought and imagination since childhood cannot suddenly use AI well. They aren’t liberated. They’re brittle—dependent, narrow, and easily replaced.
Because I’m a lifelong weightlifter, let me be concrete. AI is a massive, state-of-the-art gym: barbells, dumbbells, Smith machines, hack squats, leg presses, lat pulldowns, pec decks, cable rows—the works. Now imagine you’ve never trained. You’re twenty-eight, inspired by Instagram physiques, vaguely determined to “get in shape.” You walk into this cathedral of iron with no plan, no understanding of recovery, nutrition, progressive overload, or discipline. You’re surrounded by equipment—and completely lost. Within a month, you quit. You join the annual migration of January optimists who vanish by February, leaving the gym to the regulars.
AI is that gym. It doesn’t eject users out of malice. It ejects them because it demands capacities they never built. Some people learn isolated tricks—prompting here, automating there—but only the way someone learns to push a toaster lever. When those tricks define a person, the result is a Non-Player Character (NPC): reactive, scripted, interchangeable.
Students already understand what an NPC is. That’s why they fear becoming one.
If colleges embed AI everywhere without building the human capacities required to use it, they aren’t educating thinkers. They’re manufacturing NPCs—and they deserve to be called out for it.
Don’t wait for your institution to save you. Approach education the way you’d approach a gym. Learn how bodies actually grow before touching the weights. Know the muscle groups. Respect recovery. Understand volume, fatigue, and nutrition. Do the homework so the gym doesn’t spit you out.
The same rule applies to AI. To use it well, you need a specific kind of mental strength: Cognitive Load-Bearing Capacity. This is the ability to use AI without surrendering your thinking. You can see it in ordinary behaviors: reading before summarizing, drafting before prompting, distrusting answers that sound too smooth, and revising because an idea is weak—not because a machine suggested a synonym. It’s the capacity to sit with confusion, compare sources, and arrive at judgment rather than outsource it.
This capacity isn’t innate, and it isn’t fast. It’s built through resistance: sustained reading, outlining by hand, struggling with unfamiliar ideas, revising after failure. Students with cognitive load-bearing capacity use AI to pressure-test their thinking. Students without it use AI to replace thinking. One group grows stronger and more adaptable. The other becomes dependent—and replaceable.
Think of AI like a piano. You can sit down and bang out notes immediately, but you won’t produce music. Beautiful playing requires trained fingers, disciplined ears, and years of wrong notes. AI works the same way. Without cognitive load-bearing capacity, you get noise—technically correct, emotionally dead. With it, the tool becomes expressive. The difference isn’t the instrument. It’s the musician.
If you want to build this capacity, forget grand reforms. Choose consistent resistance. Read an hour a day with no tabs open. Write before prompting. Ask AI to attack your argument instead of finishing it. Keep a notebook where you explain ideas in your own words, badly at first. Sit with difficulty instead of dodging it. These habits feel inefficient—and that’s the point. They’re the mental equivalent of scales and drills. Over time, they give you the strength to use powerful tools without being used by them.