Against AI Moral Optimism: Why Tristan Harris Underestimates Power

Clarity Idealism

noun

Clarity Idealism, in the context of AI and the future of humanity, is the belief that sufficiently explaining the stakes of artificial intelligence—its risks, incentives, and long-term consequences—will naturally lead societies, institutions, and leaders to act responsibly. It assumes that confusion is the core threat and that once humanity “sees clearly,” agency and ethical restraint will follow. What this view underestimates is how power actually operates in technological systems. Clarity does not neutralize domination, profit-seeking, or geopolitical rivalry; it often accelerates them. In the AI era, bad actors do not require ignorance to behave destructively—they require capability, leverage, and advantage, all of which clarity can enhance. Clarity Idealism mistakes awareness for wisdom and shared knowledge for shared values, ignoring the historical reality that humans routinely understand the dangers of their tools and proceed anyway; the physicists who built nuclear weapons were not confused about what they were making. In the race to build ever more powerful AI, clarity may illuminate the cliff—but it does not prevent those intoxicated by power from pressing the accelerator.

Tristan Harris takes the TED stage like a man standing at the shoreline, shouting warnings as a tidal wave gathers behind him. Social media, he says, was merely a warm-up act—a puddle compared to the ocean of impact AI is about to unleash. We are at a civilizational fork in the road. One path is open-source AI, where powerful tools scatter freely and inevitably fall into the hands of bad actors, lunatics, and ideologues who mistake chaos for freedom. The other path is closed-source AI, where a small priesthood of corporations and states hoards godlike power and calls it “safety.” Either route, mishandled, ends in dystopia. Harris’s plea is urgent and sincere: we must not repeat the social-media catastrophe, where engagement metrics metastasized into addiction, outrage, polarization, and civic rot. AI, he argues, demands global coordination, shared norms, and regulatory guardrails robust enough to make the technology serve humanity rather than quietly reorganize humanity into something meaner, angrier, and less human.

Harris’s faith rests on a single, luminous premise: clarity. Confusion, denial, and fatalism are the true villains. If we can see the stakes clearly—if we understand how AI can slide toward chaos or tyranny—then we can choose wisely. “Clarity creates agency,” he says, trusting that informed humans will act in their collective best interest. I admire the moral courage of this argument, but I don’t buy its anthropology. History suggests that clarity does not restrain power; it sharpens it. The most dangerous people in the world are not confused. They are lucid, strategic, and indifferent to collateral damage. They understand exactly what they are doing—and do it anyway. Harris believes clarity liberates agency; I suspect it often just reveals who is willing to burn the future for dominance. The real enemy is not ignorance but nihilistic power-lust, the ancient human addiction to control dressed up in modern code. Harris should keep illuminating the terrain—but he should also admit that many travelers, seeing the cliff clearly, will still sprint toward it. Not because they are lost, but because they want what waits at the edge.
